Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 110–62.1k) | code_prompt (stringlengths 37–152k)
---|---|---|
6,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification with Support Vector Machines
by Soeren Sonnenburg | Saurabh Mahindre - <a href=\"https
Step1: Liblinear, a library for large-scale linear learning focusing on SVM, is used to do the classification. It supports different solver types.
Step2: We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The methods get_w() and get_bias() are used to get the necessary values.
Step3: The classifier is now applied on a X-Y grid of points to get predictions.
Step4: SVMs using kernels
If the data set is not linearly separable, a non-linear mapping $\Phi
Step5: Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the gaussian and polynomial kernel and block-wise for the linear one. The gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
Step6: Prediction using kernel based SVM
Now we train an SVM with a Gaussian Kernel. We use LibSVM but we could use any of the other SVM from Shogun. They all utilize the same kernel framework and so are drop-in replacements.
Step7: We could now check a number of properties, such as the value of the objective function returned by the particular SVM learning algorithm, or the explicitly computed primal and dual objective functions,
Step8: and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of convergence quality of the SVM training algorithm. In theory it is 0 at the optimum, and in practice at least close to 0.
Step9: Let's now apply on the X-Y grid data and plot the results.
Step10: Probabilistic Outputs
Calibrated probabilities can be generated in addition to class predictions using the scores_to_probabilities() method of BinaryLabels, which implements the method described in [3]. This should only be used in conjunction with SVM. A parametric form of a sigmoid function $$\frac{1}{1+\exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, and $a$ and $b$ are parameters of the sigmoid. This gives us the posterior probabilities $p(y=1|f(x))$.
Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
Step11: Soft margins and slack variables
If there is no clear classification possible using a hyperplane, we need to classify the data as nicely as possible while incorporating the misclassified samples. To do this a concept of soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$.
$$
y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
Introducing a linear penalty function leads to
$$\arg\min_{\mathbf{w},\mathbf{\xi}, b } ({\frac{1}{2} \|\mathbf{w}\|^2 +C \sum_{i=1}^n \xi_i) }$$
This, in its dual form, leads to the slightly modified equation $(2)$:
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && 0\leq\alpha_i\leq C\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \end{eqnarray}
The result is that a soft-margin SVM may choose a decision boundary that has non-zero training error even if the dataset is linearly separable, but it is less likely to overfit.
Here's an example using LibSVM on the above used data set. Highlighted points show support vectors. This should visually show the impact of C and how the amount of outliers on the wrong side of hyperplane is controlled using it.
Step12: You can see that a lower value of C causes the classifier to sacrifice linear separability in order to gain stability, in the sense that the influence of any single datapoint is now bounded by C. For a hard-margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to a hard-margin SVM, and you can see that the highlighted points are the ones that touch the margin. In high dimensions this might lead to overfitting. For a soft-margin SVM, with a lower value of C, it is easier to explain the support vectors in terms of the dual (equation $(2)$) variables: they are the datapoints from the training set which are included in the predictor, i.e., the ones with a non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
Binary classification using different kernels
Two-dimensional Gaussians are generated as data for this section.
$x_-\sim{\cal N_2}(0,1)-d$
$x_+\sim{\cal N_2}(0,1)+d$
and corresponding positive and negative labels. We create traindata and testdata, with num samples of each class labelled negatively and positively, stored in traindata, trainlab and testdata, testlab. For that we utilize Shogun's Gaussian Mixture Model class (GMM), from which we sample the data points and plot them.
Step13: Now lets plot the contour output on a $-5...+5$ grid for
The Support Vector Machines decision function $\mbox{sign}(f(x))$
The Support Vector Machines raw output $f(x)$
The Original Gaussian Mixture Model Distribution
Step14: And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function that maximizes the margin, the SVM output still roughly resembles the original distribution of the Gaussian mixture model.
Let us visualise the output using different kernels.
Step15: Kernel Normalizers
Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel Normalization is not strictly-speaking a form of preprocessing since it is not applied directly on the input vectors but can be seen as a kernel interpretation of the preprocessing. The CKernelNormalizer class provides tools for kernel normalization. Some of the kernel normalizers in Shogun
Step16: Multiclass classification
Multiclass classification can be done using SVM by reducing the problem to binary classification. More on multiclass reductions in this notebook. CGMNPSVM class provides a built in one vs rest multiclass classification using GMNPlib. Let us see classification using it on four classes. CGMM class is used to sample the data.
Step17: Let us try the multiclass classification for different kernels. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
import matplotlib.patches as patches
#To import all shogun classes
import shogun as sg
import numpy as np
#Generate some random data
X = 2 * np.random.randn(10,2)
traindata=np.r_[X + 3, X + 7].T
feats_train=sg.features(traindata)
trainlab=np.concatenate((np.ones(10),-np.ones(10)))
labels=sg.BinaryLabels(trainlab)
# Plot the training data
plt.figure(figsize=(6,6))
plt.gray()
_=plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.title("Training Data")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
p1 = patches.Rectangle((0, 0), 1, 1, fc="k")
p2 = patches.Rectangle((0, 0), 1, 1, fc="w")
plt.legend((p1, p2), ["Class 1", "Class 2"], loc=2)
plt.gray()
Explanation: Classification with Support Vector Machines
by Soeren Sonnenburg | Saurabh Mahindre - <a href="https://github.com/Saurabh7">github.com/Saurabh7</a> as a part of <a href="http://www.google-melange.com/gsoc/project/details/google/gsoc2014/saurabh7/5750085036015616">Google Summer of Code 2014 project</a> mentored by - Heiko Strathmann - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de/">herrstrathmann.de</a>
This notebook illustrates how to train a <a href="http://en.wikipedia.org/wiki/Support_vector_machine">Support Vector Machine</a> (SVM) <a href="http://en.wikipedia.org/wiki/Statistical_classification">classifier</a> using Shogun. The <a href="http://www.shogun-toolbox.org/doc/en/3.0.0/classshogun_1_1CLibSVM.html">CLibSVM</a> class of Shogun is used to do binary classification. Multiclass classification is also demonstrated using CGMNPSVM.
Introduction
Linear Support Vector Machines
Prediction using Linear SVM
SVMs using kernels
Kernels in Shogun
Prediction using kernel based SVM
Probabilistic Outputs using SVM
Soft margins and slack variables
Binary classification using different kernels
Kernel Normalizers
Multiclass classification using SVM
Introduction
Support Vector Machines (SVMs) are a learning method used for binary classification. The basic idea is to find a hyperplane which separates the data into its two classes. However, since example data is often not linearly separable, SVMs operate in a kernel-induced feature space, i.e., data is embedded into a higher dimensional space where it is linearly separable.
Linear Support Vector Machines
In a supervised learning problem, we are given a labeled set of input-output pairs $\mathcal{D}=(x_i,y_i)^N_{i=1}\subseteq \mathcal{X} \times \mathcal{Y}$ where $x\in\mathcal{X}$ and $y\in{-1,+1}$. SVM is a binary classifier that tries to separate objects of different classes by finding a (hyper-)plane such that the margin between the two classes is maximized. A hyperplane in $\mathcal{R}^D$ can be parameterized by a vector $\bf{w}$ and a constant $\text b$ expressed in the equation:$${\bf w}\cdot{\bf x} + \text{b} = 0$$
Given such a hyperplane ($\bf w$,b) that separates the data, the discriminating function is: $$f(x) = \text {sign} ({\bf w}\cdot{\bf x} + {\text b})$$
If the training data are linearly separable, we can select two hyperplanes in a way that they separate the data and there are no points between them, and then try to maximize their distance. The region bounded by them is called "the margin". These hyperplanes can be described by the equations
$$({\bf w}\cdot{\bf x} + {\text b}) = 1$$
$$({\bf w}\cdot{\bf x} + {\text b}) = -1$$
the distance between these two hyperplanes is $\frac{2}{\|\mathbf{w}\|}$, so we want to minimize $\|\mathbf{w}\|$.
$$
\arg\min_{(\mathbf{w},b)}\frac{1}{2}\|\mathbf{w}\|^2 \qquad\qquad(1)$$
This gives us a hyperplane that maximizes the geometric distance to the closest data points.
As we also have to prevent data points from falling into the margin, we add the following constraint: for each ${i}$ either
$$({\bf w}\cdot{x}_i + {\text b}) \geq 1$$ or
$$({\bf w}\cdot{x}_i + {\text b}) \leq -1$$
which is similar to
$${y_i}({\bf w}\cdot{x}_i + {\text b}) \geq 1 \forall i$$
Lagrange multipliers are used to modify equation $(1)$ and the corresponding dual of the problem can be shown to be:
\begin{eqnarray}
\max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j {\bf x_i} \cdot {\bf x_j}\\
\mbox{s.t.} && \alpha_i\geq 0\\
&& \sum_{i=1}^{N} \alpha_i y_i=0
\end{eqnarray}
From the derivation of these equations, it was seen that the optimal hyperplane can be written as:
$$\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i. $$
here most $\alpha_i$ turn out to be zero, which means that the solution is a sparse linear combination of the training data.
Prediction using Linear SVM
Now let us see how one can train a linear Support Vector Machine with Shogun. Two dimensional data (having 2 attributes say: attribute1 and attribute2) is now sampled to demonstrate the classification.
End of explanation
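As a minimal numpy-only sketch of the discriminating function and margin width above (the vector ${\bf w}$, bias $b$ and sample $x$ below are made-up illustrative values, independent of the Shogun code that follows):
import numpy as np
w_demo = np.array([1.0, -2.0])                     # hypothetical weight vector
b_demo = 0.5                                       # hypothetical bias
x_demo = np.array([3.0, 1.0])                      # hypothetical sample
print(np.sign(np.dot(w_demo, x_demo) + b_demo))    # f(x) = sign(w.x + b)
print(2.0/np.linalg.norm(w_demo))                  # margin width 2/||w||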
#parameters of the svm
#parameter C is described in a later section.
C=1
epsilon=1e-3
svm=sg.machine('LibLinear', C1=C, C2=C, liblinear_solver_type='L2R_L2LOSS_SVC', epsilon=epsilon)
#train
svm.put('labels', labels)
svm.train(feats_train)
w=svm.get('w')
b=svm.get('bias')
Explanation: Liblinear, a library for large-scale linear learning focusing on SVM, is used to do the classification. It supports different solver types.
End of explanation
#solve for w.x+b=0
x1=np.linspace(-1.0, 11.0, 100)
def solve (x1):
return -( ( (w[0])*x1 + b )/w[1] )
x2=list(map(solve, x1))
#plot
plt.figure(figsize=(6,6))
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.plot(x1,x2, linewidth=2)
plt.title("Separating hyperplane")
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Explanation: We solve ${\bf w}\cdot{\bf x} + \text{b} = 0$ to visualise the separating hyperplane. The methods get_w() and get_bias() are used to get the necessary values.
End of explanation
size=100
x1_=np.linspace(-5, 15, size)
x2_=np.linspace(-5, 15, size)
x, y=np.meshgrid(x1_, x2_)
#Generate X-Y grid test data
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
#apply on test grid
predictions = svm.apply(grid)
#Distance from hyperplane
z=predictions.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
#Class predictions
z=predictions.get('labels').reshape((size, size))
#plot
plt.subplot(122)
plt.title("Separating hyperplane")
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Explanation: The classifier is now applied on a X-Y grid of points to get predictions.
End of explanation
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(100))
#Polynomial kernel of degree 2
poly_kernel=sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[linear_kernel, poly_kernel, gaussian_kernel]
Explanation: SVMs using kernels
If the data set is not linearly separable, a non-linear mapping $\Phi:{\bf x} \rightarrow \Phi({\bf x}) \in \mathcal{F} $ is used. This maps the data into a higher dimensional space where it is linearly separable. Our equation requires only the inner dot products ${\bf x_i}\cdot{\bf x_j}$. The equation can be defined in terms of inner products $\Phi({\bf x_i}) \cdot \Phi({\bf x_j})$ instead. Since $\Phi({\bf x_i})$ occurs only in dot products with $ \Phi({\bf x_j})$ it is sufficient to know the formula (kernel function): $$K({\bf x_i, x_j} ) = \Phi({\bf x_i}) \cdot \Phi({\bf x_j})$$ without dealing with the mapping directly. The transformed optimisation problem is:
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && \alpha_i\geq 0\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \qquad\qquad(2) \end{eqnarray}
Kernels in Shogun
Shogun provides many options for the above mentioned kernel functions. CKernel is the base class for kernels. Some commonly used kernels :
Gaussian kernel : Popular Gaussian kernel computed as $k({\bf x},{\bf x'})= exp(-\frac{||{\bf x}-{\bf x'}||^2}{\tau})$
Linear kernel : Computes $k({\bf x},{\bf x'})= {\bf x}\cdot {\bf x'}$
Polynomial kernel : Polynomial kernel computed as $k({\bf x},{\bf x'})= ({\bf x}\cdot {\bf x'}+c)^d$
Sigmoid Kernel : Computes $k({\bf x},{\bf x'})=\mbox{tanh}(\gamma {\bf x}\cdot{\bf x'}+c)$
Some of these kernels are initialised below.
End of explanation
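To make the Gaussian kernel formula above concrete, here is a small numpy-only sketch that evaluates $k({\bf x},{\bf x'})= exp(-\frac{||{\bf x}-{\bf x'}||^2}{\tau})$ on a few hand-picked points; the points and the width $\tau$ are arbitrary illustrative values, not a replacement for Shogun's GaussianKernel:
import numpy as np
X_demo = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.0]])   # three illustrative 2-d points (rows)
tau = 2.0                                                  # arbitrary kernel width
sq_dists = ((X_demo[:, None, :] - X_demo[None, :, :])**2).sum(axis=-1)
K_demo = np.exp(-sq_dists/tau)                             # 3x3 Gaussian kernel matrix
print(K_demo)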
plt.jet()
def display_km(kernels, svm):
plt.figure(figsize=(20,6))
plt.suptitle('Kernel matrices for different kernels', fontsize=12)
for i, kernel in enumerate(kernels):
kernel.init(feats_train,feats_train)
plt.subplot(1, len(kernels), i+1)
plt.title(kernel.get_name())
km=kernel.get_kernel_matrix()
plt.imshow(km, interpolation="nearest")
plt.colorbar()
display_km(kernels, svm)
Explanation: Just for fun we compute the kernel matrix and display it. There are clusters visible that are smooth for the gaussian and polynomial kernel and block-wise for the linear one. The gaussian one also smoothly decays from some cluster centre while the polynomial one oscillates within the clusters.
End of explanation
C=1
epsilon=1e-3
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train()
Explanation: Prediction using kernel based SVM
Now we train an SVM with a Gaussian Kernel. We use LibSVM but we could use any of the other SVM from Shogun. They all utilize the same kernel framework and so are drop-in replacements.
End of explanation
libsvm_obj = svm.get('objective')
primal_obj, dual_obj = sg.as_svm(svm).compute_svm_primal_objective(), sg.as_svm(svm).compute_svm_dual_objective()
print(libsvm_obj, primal_obj, dual_obj)
Explanation: We could now check a number of properties, such as the value of the objective function returned by the particular SVM learning algorithm, or the explicitly computed primal and dual objective functions,
End of explanation
print("duality_gap", dual_obj-primal_obj)
Explanation: and based on the objectives we can compute the duality gap (have a look at reference [2]), a measure of convergence quality of the SVM training algorithm. In theory it is 0 at the optimum, and in practice at least close to 0.
End of explanation
out=svm.apply(grid)
z=out.get_values().reshape((size, size))
#plot
plt.jet()
plt.figure(figsize=(16,6))
plt.subplot(121)
plt.title("Classification")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.jet()
z=out.get('labels').reshape((size, size))
plt.subplot(122)
plt.title("Decision boundary")
c=plt.pcolor(x1_, x2_, z)
plt.contour(x1_ , x2_, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=50)
plt.xlabel('attribute1')
plt.ylabel('attribute2')
plt.gray()
Explanation: Let's now apply on the X-Y grid data and plot the results.
End of explanation
n=10
x1t_=np.linspace(-5, 15, n)
x2t_=np.linspace(-5, 15, n)
xt, yt=np.meshgrid(x1t_, x2t_)
#Generate X-Y grid test data
test_grid=sg.features(np.array((np.ravel(xt), np.ravel(yt))))
labels_out=svm.apply(test_grid)
#Get values (Distance from hyperplane)
values=labels_out.get('current_values')
#Get probabilities
labels_out.scores_to_probabilities()
prob=labels_out.get('current_values')
#plot
plt.gray()
plt.figure(figsize=(10,6))
p1=plt.scatter(values, prob)
plt.title('Probabilistic outputs')
plt.xlabel('Distance from hyperplane')
plt.ylabel('Probability')
plt.legend([p1], ["Test samples"], loc=2)
Explanation: Probabilistic Outputs
Calibrated probabilities can be generated in addition to class predictions using the scores_to_probabilities() method of BinaryLabels, which implements the method described in [3]. This should only be used in conjunction with SVM. A parametric form of a sigmoid function $$\frac{1}{1+\exp(af(x) + b)}$$ is used to fit the outputs. Here $f(x)$ is the signed distance of a sample from the hyperplane, and $a$ and $b$ are parameters of the sigmoid. This gives us the posterior probabilities $p(y=1|f(x))$.
Let's try this out on the above example. The familiar "S" shape of the sigmoid should be visible.
End of explanation
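The shape of the fitted sigmoid can be previewed with plain numpy; the sketch below simply evaluates $\frac{1}{1+\exp(af(x) + b)}$ for made-up parameters $a$ and $b$ (Shogun fits these internally inside scores_to_probabilities()):
import numpy as np
f_scores = np.linspace(-3, 3, 7)       # signed distances from the hyperplane
a_demo, b_demo = -2.0, 0.0             # hypothetical sigmoid parameters
print(1.0/(1.0 + np.exp(a_demo*f_scores + b_demo)))   # posterior p(y=1|f(x)) for each score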
def plot_sv(C_values):
plt.figure(figsize=(20,6))
plt.suptitle('Soft and hard margins with varying C', fontsize=12)
for i in range(len(C_values)):
plt.subplot(1, len(C_values), i+1)
linear_kernel=sg.LinearKernel(feats_train, feats_train)
svm1 = sg.machine('LibSVM', C1=C_values[i], C2=C_values[i], kernel=linear_kernel, labels=labels)
svm1 = sg.as_svm(svm1)
svm1.train()
vec1=svm1.get_support_vectors()
X_=[]
Y_=[]
new_labels=[]
for j in vec1:
X_.append(traindata[0][j])
Y_.append(traindata[1][j])
new_labels.append(trainlab[j])
out1=svm1.apply(grid)
z1=out1.get_labels().reshape((size, size))
plt.jet()
c=plt.pcolor(x1_, x2_, z1)
plt.contour(x1_ , x2_, z1, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.gray()
plt.scatter(X_, Y_, c=new_labels, s=150)
        plt.scatter(traindata[0, :], traindata[1,:], c=trainlab, s=20)
plt.title('Support vectors for C=%.2f'%C_values[i])
plt.xlabel('attribute1')
plt.ylabel('attribute2')
C_values=[0.1, 1000]
plot_sv(C_values)
Explanation: Soft margins and slack variables
If there is no clear classification possible using a hyperplane, we need to classify the data as nicely as possible while incorporating the misclassified samples. To do this a concept of soft margin is used. The method introduces non-negative slack variables, $\xi_i$, which measure the degree of misclassification of the data $x_i$.
$$
y_i(\mathbf{w}\cdot\mathbf{x_i} + b) \ge 1 - \xi_i \quad 1 \le i \le N $$
Introducing a linear penalty function leads to
$$\arg\min_{\mathbf{w},\mathbf{\xi}, b } ({\frac{1}{2} \|\mathbf{w}\|^2 +C \sum_{i=1}^n \xi_i) }$$
This, in its dual form, leads to the slightly modified equation $(2)$:
\begin{eqnarray} \max_{\bf \alpha} && \sum_{i=1}^{N} \alpha_i - \sum_{i=1}^{N}\sum_{j=1}^{N} \alpha_i y_i \alpha_j y_j k({\bf x_i}, {\bf x_j})\\ \mbox{s.t.} && 0\leq\alpha_i\leq C\\ && \sum_{i=1}^{N} \alpha_i y_i=0 \end{eqnarray}
The result is that a soft-margin SVM may choose a decision boundary that has non-zero training error even if the dataset is linearly separable, but it is less likely to overfit.
Here's an example using LibSVM on the above used data set. Highlighted points show support vectors. This should visually show the impact of C and how the amount of outliers on the wrong side of hyperplane is controlled using it.
End of explanation
num=50;
dist=1.0;
gmm=sg.GMM(2)
gmm.set_nth_mean(np.array([-dist,-dist]),0)
gmm.set_nth_mean(np.array([dist,dist]),1)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),0)
gmm.set_nth_cov(np.array([[1.0,0.0],[0.0,1.0]]),1)
gmm.put('m_coefficients', np.array([1.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
gmm.set_coef(np.array([0.0,1.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
traindata=np.concatenate((xntr,xptr), axis=1)
trainlab=np.concatenate((-np.ones(num), np.ones(num)))
#shogun format features
feats_train=sg.features(traindata)
labels=sg.BinaryLabels(trainlab)
gaussian_kernel = sg.kernel("GaussianKernel", log_width=np.log(10))
#Polynomial kernel of degree 2
poly_kernel = sg.kernel('PolyKernel', degree=2, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel = sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
#train machine
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=labels)
_=svm.train(feats_train)
Explanation: You can see that a lower value of C causes the classifier to sacrifice linear separability in order to gain stability, in the sense that the influence of any single datapoint is now bounded by C. For a hard-margin SVM, support vectors are the points which are "on the margin". In the picture above, C=1000 is pretty close to a hard-margin SVM, and you can see that the highlighted points are the ones that touch the margin. In high dimensions this might lead to overfitting. For a soft-margin SVM, with a lower value of C, it is easier to explain the support vectors in terms of the dual (equation $(2)$) variables: they are the datapoints from the training set which are included in the predictor, i.e., the ones with a non-zero $\alpha_i$ parameter. This includes margin errors and points on the margin of the hyperplane.
Binary classification using different kernels
Two-dimensional Gaussians are generated as data for this section.
$x_-\sim{\cal N_2}(0,1)-d$
$x_+\sim{\cal N_2}(0,1)+d$
and corresponding positive and negative labels. We create traindata and testdata, with num samples of each class labelled negatively and positively, stored in traindata, trainlab and testdata, testlab. For that we utilize Shogun's Gaussian Mixture Model class (GMM), from which we sample the data points and plot them.
End of explanation
size=100
x1=np.linspace(-5, 5, size)
x2=np.linspace(-5, 5, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
grid_out=svm.apply(grid)
z=grid_out.get('labels').reshape((size, size))
plt.jet()
plt.figure(figsize=(16,5))
z=grid_out.get_values().reshape((size, size))
plt.subplot(121)
plt.title('Classification')
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.subplot(122)
plt.title('Original distribution')
gmm.put('m_coefficients', np.array([1.0,0.0]))
gmm.set_features(grid)
grid_out=gmm.get_likelihood_for_all_examples()
zn=grid_out.reshape((size, size))
gmm.set_coef(np.array([0.0,1.0]))
grid_out=gmm.get_likelihood_for_all_examples()
zp=grid_out.reshape((size, size))
z=zp-zn
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
Explanation: Now lets plot the contour output on a $-5...+5$ grid for
The Support Vector Machines decision function $\mbox{sign}(f(x))$
The Support Vector Machines raw output $f(x)$
The Original Gaussian Mixture Model Distribution
End of explanation
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Binary Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.put('kernel', kernels[i])
svm.train()
grid_out=svm.apply(grid)
z=grid_out.get_values().reshape((size, size))
c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
Explanation: And voila! The SVM decision rule reasonably distinguishes the red from the blue points. Despite being optimized for learning the discriminative function that maximizes the margin, the SVM output still roughly resembles the original distribution of the Gaussian mixture model.
Let us visualise the output using different kernels.
End of explanation
f = open(os.path.join(SHOGUN_DATA_DIR, 'uci/ionosphere/ionosphere.data'))
mat = []
labels = []
# read data from file
for line in f:
words = line.rstrip().split(',')
mat.append([float(i) for i in words[0:-1]])
if str(words[-1])=='g':
labels.append(1)
else:
labels.append(-1)
f.close()
mat_train=mat[:30]
mat_test=mat[30:110]
lab_train=sg.BinaryLabels(np.array(labels[:30]).reshape((30,)))
lab_test=sg.BinaryLabels(np.array(labels[30:110]).reshape((len(labels[30:110]),)))
feats_train = sg.features(np.array(mat_train).T)
feats_test = sg.features(np.array(mat_test).T)
#without normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
gaussian_kernel.init(feats_train, feats_train)
C=1
svm=sg.machine('LibSVM', C1=C, C2=C, kernel=gaussian_kernel, labels=lab_train)
_=svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error:', error)
#set normalization
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(0.1))
# TODO: currently there is a bug that makes it impossible to use Gaussian kernels and kernel normalisers
# See github issue #3504
#gaussian_kernel.set_normalizer(sg.SqrtDiagKernelNormalizer())
gaussian_kernel.init(feats_train, feats_train)
svm.put('kernel', gaussian_kernel)
svm.train()
output=svm.apply(feats_test)
Err=sg.ErrorRateMeasure()
error=Err.evaluate(output, lab_test)
print('Error with normalization:', error)
Explanation: Kernel Normalizers
Kernel normalizers post-process kernel values by carrying out normalization in feature space. Since kernel based SVMs use a non-linear mapping, in most cases any normalization in input space is lost in feature space. Kernel normalizers are a possible solution to this. Kernel Normalization is not strictly-speaking a form of preprocessing since it is not applied directly on the input vectors but can be seen as a kernel interpretation of the preprocessing. The CKernelNormalizer class provides tools for kernel normalization. Some of the kernel normalizers in Shogun:
SqrtDiagKernelNormalizer : This normalization in the feature space amounts to defining a new kernel $k'({\bf x},{\bf x'}) = \frac{k({\bf x},{\bf x'})}{\sqrt{k({\bf x},{\bf x})k({\bf x'},{\bf x'})}}$
AvgDiagKernelNormalizer : Scaling with a constant $k({\bf x},{\bf x'})= \frac{1}{c}\cdot k({\bf x},{\bf x'})$
ZeroMeanCenterKernelNormalizer : Centers the kernel in feature space and ensures each feature must have zero mean after centering.
The set_normalizer() method of CKernel is used to add a normalizer.
Let us try it out on the ionosphere dataset where we use a small training set of 30 samples to train our SVM. Gaussian kernel with and without normalization is used. See reference [1] for details.
End of explanation
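The effect of SqrtDiagKernelNormalizer can be mimicked directly on a kernel matrix with numpy: every entry is divided by $\sqrt{k({\bf x},{\bf x})k({\bf x'},{\bf x'})}$ so the diagonal becomes 1. The toy matrix below is only an illustration of the formula, not the Shogun implementation:
import numpy as np
K_toy = np.array([[4.0, 2.0], [2.0, 9.0]])   # a small positive-definite kernel matrix
d = np.sqrt(np.diag(K_toy))
print(K_toy/np.outer(d, d))                  # k'(x,x') = k(x,x')/sqrt(k(x,x)k(x',x'))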
num=30;
num_components=4
means=np.zeros((num_components, 2))
means[0]=[-1.5,1.5]
means[1]=[1.5,-1.5]
means[2]=[-1.5,-1.5]
means[3]=[1.5,1.5]
covs=np.array([[1.0,0.0],[0.0,1.0]])
gmm=sg.GMM(num_components)
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
[gmm.set_nth_cov(covs,i) for i in range(num_components)]
gmm.put('m_coefficients', np.array([1.0,0.0,0.0,0.0]))
xntr=np.array([gmm.sample() for i in range(num)]).T
xnte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,1.0,0.0,0.0]))
xntr1=np.array([gmm.sample() for i in range(num)]).T
xnte1=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,1.0,0.0]))
xptr=np.array([gmm.sample() for i in range(num)]).T
xpte=np.array([gmm.sample() for i in range(5000)]).T
gmm.put('m_coefficients', np.array([0.0,0.0,0.0,1.0]))
xptr1=np.array([gmm.sample() for i in range(num)]).T
xpte1=np.array([gmm.sample() for i in range(5000)]).T
traindata=np.concatenate((xntr,xntr1,xptr,xptr1), axis=1)
testdata=np.concatenate((xnte,xnte1,xpte,xpte1), axis=1)
l0 = np.array([0.0 for i in range(num)])
l1 = np.array([1.0 for i in range(num)])
l2 = np.array([2.0 for i in range(num)])
l3 = np.array([3.0 for i in range(num)])
trainlab=np.concatenate((l0,l1,l2,l3))
testlab=np.concatenate((l0,l1,l2,l3))
plt.title('Toy data for multiclass classification')
plt.jet()
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=75)
feats_train=sg.features(traindata)
labels=sg.MulticlassLabels(trainlab)
Explanation: Multiclass classification
Multiclass classification can be done using SVM by reducing the problem to binary classification. More on multiclass reductions in this notebook. CGMNPSVM class provides a built in one vs rest multiclass classification using GMNPlib. Let us see classification using it on four classes. CGMM class is used to sample the data.
End of explanation
gaussian_kernel=sg.kernel("GaussianKernel", log_width=np.log(2))
poly_kernel=sg.kernel('PolyKernel', degree=4, c=1.0)
poly_kernel.init(feats_train, feats_train)
linear_kernel=sg.kernel('LinearKernel')
linear_kernel.init(feats_train, feats_train)
kernels=[gaussian_kernel, poly_kernel, linear_kernel]
svm=sg.GMNPSVM(1, gaussian_kernel, labels)
_=svm.train(feats_train)
size=100
x1=np.linspace(-6, 6, size)
x2=np.linspace(-6, 6, size)
x, y=np.meshgrid(x1, x2)
grid=sg.features(np.array((np.ravel(x), np.ravel(y))))
def plot_outputs(kernels):
plt.figure(figsize=(20,5))
plt.suptitle('Multiclass Classification using different kernels', fontsize=12)
for i in range(len(kernels)):
plt.subplot(1,len(kernels),i+1)
plt.title(kernels[i].get_name())
svm.set_kernel(kernels[i])
svm.train(feats_train)
grid_out=svm.apply(grid)
z=grid_out.get_labels().reshape((size, size))
        c=plt.pcolor(x, y, z)
plt.contour(x, y, z, linewidths=1, colors='black', hold=True)
plt.colorbar(c)
plt.scatter(traindata[0,:], traindata[1,:], c=trainlab, s=35)
plot_outputs(kernels)
Explanation: Let us try the multiclass classification for different kernels.
End of explanation |
6,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interacting with Twitter
Introduction
In this notebook we are going to create a Twitter bot (a program that interacts semi-automatically with Twitter) that shows certain linguistic capabilities.
Below we will go over some snippets or code examples showing the minimal instructions a bot can include in order to be able to publish messages. I will also try to show how to access other, somewhat more complex, functionality.
Developer account on Twitter
To be able to create a bot and, in general, any application that interacts with Twitter, you need an account with developer permissions. Follow these steps
Step1: You have to be careful when sending automated messages. If we want to publish more than one message and control the time that elapses between one message and the next, we can use the time library as follows
Step2: Another typical feature of Twitter bots is sending messages to other users automatically. If we want our application to search through all messages and reply to the author, we can do the following
Step3: If we want to reply to mentions, that is, if we want the bot to automatically reply to messages addressed to it, we can use the following code
Step4: If we want to retweet messages, we can run the following | Python Code:
import tweepy
# add your twitter app credentials as strings
CONSUMER_KEY = 'CAMBIA ESTO'
CONSUMER_SECRET = 'CAMBIA ESTO'
ACCESS_TOKEN = 'CAMBIA ESTO'
ACCESS_TOKEN_SECRET = 'CAMBIA ESTO'
# authenticate the credentials
auth = tweepy.OAuthHandler(CONSUMER_KEY, CONSUMER_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
# create a twitter client
t = tweepy.API(auth)
# post a greeting
t.update_status('¡Hola! Soy un bot, no me hagas caso.')
Explanation: Interacting with Twitter
Introduction
In this notebook we are going to create a Twitter bot (a program that interacts semi-automatically with Twitter) that shows certain linguistic capabilities.
Below we will go over some snippets or code examples showing the minimal instructions a bot can include in order to be able to publish messages. I will also try to show how to access other, somewhat more complex, functionality.
Developer account on Twitter
To be able to create a bot and, in general, any application that interacts with Twitter, you need an account with developer permissions. Follow these steps:
The first thing you have to do, if you do not have a Twitter account, is register. If you already have an account on this network, I recommend that you use it and do not create a new one. Twitter's sign-up process requires validating a mobile phone number that is unique to each account. So, if you do not have more than one mobile number and do not want headaches, use your everyday account.
Once registered, log in with your account and go to the Twitter app management page, where you can request permissions for a new app. Click on Create New App.
Fill in the required fields of the form that is shown:
Name: it has to be a unique name, so be original. You will probably have to try several names until you find one that is free. If you do not want to fail, use something like progplnbot-yourname.
Description: briefly describe the purpose of your bot.
Website: for the moment, any URL will do. If you do not have your own website and do not know what to put, enter the URL of the Linguistics department website: http://www.ucm.es/linguistica
Read (cough, cough) the text of the agreement, select Yes, I agree and click the button at the bottom of the page that says Create your Twitter application.
If everything went well (basically, if you chose a name for your app that was free), a configuration page will be shown. You will have to change some parameters.
In the Application Settings section, under Access level, make sure you have read and write permissions selected: Read and Write. If not, click on modify app permissions and grant permissions to read and write. Once changed, click on Update Settings.
In the Keys and Access Tokens tab, click the Create my access token button to create the access tokens. These names probably do not mean anything to you, but from this page you have access to the four credentials your bot needs to authenticate itself and publish messages. Take note of them, you will need them later:
Consumer key
Consumer secret
Access token
Access token secret
You now have everything you need to create your bot.
Minimal bot example with Python and tweepy.
The code below contains the minimal lines needed to create a Twitter client and publish a message on Twitter.
Tip: Do not run it more than once or you will risk getting the account banned.
End of explanation
# add this library at the top of your code
import time
# some memorable Yogi Berra quotes (https://es.wikipedia.org/wiki/Yogi_Berra)
citas = '''The future ain't what it used to be.|
You can observe a lot by watching.|
It ain't over till it's over.|
It ain't the heat, it's the humility.|
We made too many wrong mistakes.|
I never said half the things I said.'''.split('|\n')
# iterate over the quotes and publish them one by one
for cita in citas:
t.update_status(cita + ' #yogiberra')
    time.sleep(30) # send a tweet every 30 seconds
Explanation: You have to be careful when sending automated messages. If we want to publish more than one message and control the time that elapses between one message and the next, we can use the time library as follows:
End of explanation
# search for messages containing the expression "gaticos y monetes"
busqueda = t.search(q='gaticos y monetes')
# iterate over these messages
for mensaje in busqueda:
    # grab the user name
    usuario = mensaje.user.screen_name
    # compose the reply message
    miRespuesta = '@%s ¡monetes!' % (usuario)
    # send the reply
    mensaje = t.update_status(miRespuesta, mensaje.id)
Explanation: Another typical feature of Twitter bots is sending messages to other users automatically. If we want our application to search through all messages and reply to the author, we can do the following:
End of explanation
# retrieve the last 5 mentions of my user
menciones = t.mentions_timeline(count=5)
# keep in mind that:
# if your twitter account is new and has no mentions, this will not work
# if you are using your everyday account, as I do,
# you will send messages to those people (or little robots)
for mencion in menciones:
    # grab the user name of whoever sent me the message
    usuario = mencion.user.screen_name
    # compose the reply message
    miRespuesta = '¡Hola, @%s! Soy un robotito. Este es un mensaje automático, no le hagas caso' % (usuario)
    # send the reply
    mensaje = t.update_status(miRespuesta, mencion.id)
Explanation: If we want to reply to mentions, that is, if we want the bot to automatically reply to messages addressed to it, we can use the following code:
End of explanation
# search for messages containing the expression "viejóvenes"
busqueda = t.search(q='viejóvenes')
# to keep things under control, I only want to retweet the three most recent messages
if len(busqueda) >= 3:
    for mensaje in busqueda[:3]:
        # to retweet a message, call the retweet method
        # passing the unique identifier of the message in question
t.retweet(mensaje.id)
else:
for mensaje in busqueda:
t.retweet(mensaje.id)
Explanation: If we want to retweet messages, we can run the following:
End of explanation |
6,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pyparsing Tutorial to capture ML-SQL language
Authors
Written by
Step1: Phone number parser
Mentioned in the tutorial
Grammar
Step2: Chemical Formula parser
Mentioned in the tutorial
Grammar
- integer | Python Code:
from pyparsing import Word, Literal, alphas, Optional, OneOrMore, Group, ParseException
Explanation: Pyparsing Tutorial to capture ML-SQL language
Authors
Written by: Neeraj Asthana (under Professor Robert Brunner)
University of Illinois at Urbana-Champaign
Summer 2016
Acknowledgements
Followed Tutorial at: http://www.onlamp.com/lpt/a/6435
Description
This notebook is meant to experiment with pyparsing in order to abstract the process for use with the ML-SQL language. The goal is to be able to understand ML-SQL syntax and port commands to actionable directives in Python.
Libraries
End of explanation
#Definitions of literals
dash = Literal( "-" )
lparen = Literal( "(" )
rparen = Literal( ")" )
#Variable lengths and patterns of number => Word token
digits = "0123456789"
number = Word( digits )
#Define phone number with And (+'s)
#Literals can also be defined with direct strings
phoneNumber = lparen + number + rparen + number + dash + number
#Create a results name for easy access
areacode = number.setResultsName("areacode")
#Make the area code optional
phoneNumber = Optional( "(" + areacode + ")" ) + number + "-" + number
#List of phone numbers
phoneNumberList = OneOrMore( phoneNumber )
#Using the grammar
inputString = "(978) 844-0961"
data = phoneNumber.parseString( inputString )
data.areacode
#Bad input
inputStringBad = "978) 844-0961"
data2 = phoneNumber.parseString( inputStringBad )
Explanation: Phone number parser
Mentioned in the tutorial
Grammar:
- number :: '0'.. '9'*
- phoneNumber :: [ '(' number ')' ] number '-' number
End of explanation
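phoneNumberList defined above is never exercised in the notebook; assuming the definitions from the previous cell are in scope, the short addition below should parse a string containing several numbers, since pyparsing skips whitespace between tokens by default (the second number is a made-up example):
multiple = "(978) 844-0961 (617) 555-0199"
print(phoneNumberList.parseString(multiple))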
#Define Grammar
caps = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
lowers = caps.lower()
digits = "0123456789"
element = Word( caps, lowers )
#Groups elements so that element and numbers appear together
elementRef = Group( element + Optional( Word( digits ), default="1" ) )
formula = OneOrMore( elementRef )
testString = "CO2"
elements = formula.parseString( testString )
print(elements)
tests = [ "H2O", "C6H5OH", "NaCl" ]
# atomic weights used below; this dict is not defined in the original notebook, values are standard
atomicWeight = { 'H': 1.00794, 'C': 12.011, 'O': 15.9994, 'Na': 22.98977, 'Cl': 35.4527 }
for t in tests:
try:
results = formula.parseString( t )
print (t,"->", results)
except ParseException as pe:
print (pe)
else:
wt = sum( [atomicWeight[elem]*int(qty) for elem,qty in results] )
print ("(%.3f)" % wt)
Explanation: Chemical Formula parser
Mentioned in the tutorial
Grammar
- integer :: '0'..'9'+
- cap :: 'A'..'Z'
- lower :: 'a'..'z'
- elementSymbol :: cap lower*
- elementRef :: elementSymbol [ integer ]
- formula :: elementRef+
End of explanation |
6,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flux Noise Mask Design
General Notes
Step1: CPW
We want to use the same cpw dimensions for resonator and feedline/purcell filter cpw's so the kinetic inductance correction is the same for everything.
Step2: $\lambda/4$ readout resonators
IMPAs from Google will be good in the 4-6GHz range. We will aim for resonators near 6GHz, but have a spread from 5-6.5GHz on the mask. They should be spread every 30MHz or so.
We are changing to fixed resonator frequencies (non variable) for this mask.
The frequency is brought down significantly by the capacitance through to ground through the qubit, as well as the self-capacitance of the coupling cap to ground. These capacitances pull down the transmon frequency more, so we will set Q3 to have no extension, and set the other qubit frequencies around it.
Step3: Qubit parameters
From Ted Thorbeck's notes
Step4: Feedline with and without crossovers
Step5: Inductive Coupling
From [1], we have the dephasing of a qubit
Step6: Purcell Filter
Do we even need a purcell filter? [3]
Without purcell filter
Step7: Loss from XY line
From Thorbeck's notes, we have
$R_p = R_s(1+Q_s^2)$ and $C_p = C_s\left(\frac{Q_s^2}{1+Q_s^2}\right)$
where $Q_s = \frac{1}{\omega R_s C_s}$
and the "s" and "p" subscript refer to just the coupling capacitor and Z0 of the line in a series or parallel configuration. Combining this with the normal LC of the qubit, we can find the loss | Python Code:
qubits = []
for i in range(3):
q = qubit.Qubit('Transmon')
#q.C_g = 3.87e-15
#q.C_q = 75.1e-15
q.C_g = 1.8e-15
q.C_q = 77.1e-15
q.C_resToGnd = 79.1e-15
qubits.append(q)
q = qubit.Qubit('OCSQubit')
#q.C_g = 2.94e-15
#q.C_q = 45e-15
#q.C_g = 2.75e-15
#q.C_q = 45.83e-15
q.C_g = 1.5e-15
q.C_q = 47e-15
q.C_resToGnd = 51.5e-15
qubits.append(q)
Explanation: Flux Noise Mask Design
General Notes:
* Looking at the chip left to right, top to bottom, we have Q1-Q4.
* Q4 is the charge sensitive qubit. All others are normal transmons.
Still to do:
* simulate mutual
Problems with mask:
* capacitances change resonator frequencies so nothing matches
* Alex thinks flux bias line might be bad
* ~~Include open resonator caps on mask~~
* ~~put meanders opposite ways to get max qubit spacing~~
* ~~did spacing for flux get messed up in x-mon?~~
* bandaids
Transmon Selected params:
+ w = 34, l_c = 90, w_c = 150:
+ C_q = 75.6fF
+ C_g = 3.39fF
+ C_resToGnd = 79.1fF
+ d_xy = 80: C_xy = 99.5fF
Charge Sensitive Selected params:
+ w = 12, l_c = 90, w_c = 200:
+ C_q = 45.83fF
+ C_g = 2.75fF
+ C_resToGnd = 107fF
+ d_xy = 50: C_xy = 95.5fF
End of explanation
cpw = cpwtools.CPW(material='al', w=10., s=7.)
print cpw
Explanation: CPW
We want to use the same cpw dimensions for resonator and feedline/purcell filter cpw's so the kinetic inductance correction is the same for everything.
End of explanation
l_curve = 2*pi*50/4
coupling_length = 287
tot_length = l_curve*(1+1+2+2+2+2) + 2*850 + 1156 + 200 + 350 + coupling_length # this coupling length ranges from 45-150 depending on desired Qc.
# Plan for 45, can always trombone down
L4 = cpwtools.QuarterLResonator(cpw, tot_length)
print('Highest resonator frequency = {:.3f} GHz'.format(L4.fl()/1e9))
print
length = L4.setLengthFromFreq(5.5e9)
print('Highest resonator frequency = {:.3f} GHz for extension = {:.2f}um'.format( L4.fl()/1e9, (1e6*length-tot_length)/2 ))
length = L4.setLengthFromFreq(5e9)
print('Lowest resonator frequency = {:.3f} GHz for extension = {:.2f}um'.format( L4.fl()/1e9, (1e6*length-tot_length)/2 ))
print
def L4FromQubit(q):
L4 = cpwtools.QuarterLResonator(cpw, tot_length)
seriesCap = q.C_g*q.C_q/(q.C_g+q.C_q)
L4.addCapacitiveCoupling('g', seriesCap, Z0 = 0)
L4.addCapacitiveCoupling('c_coupler', q.C_resToGnd, Z0 = 0)
return L4
L4 = L4FromQubit(qubits[2])
f0 = 6e9 #L4.fl()
print('{:>8} {:>9} {:>8} {:>8} {:>9}'.format('', 'length', 'f_l', 'C_r', 'extension'))
for i,q in enumerate(qubits):
L4 = L4FromQubit(q)
q.res_length = L4.setLengthFromFreq(f0 + 0.04e9*[-2, -1, 0, 1][i])
if i==len(qubits)-1:
q.res_length = qubits[i-1].res_length - 40e-6
L4.l = q.res_length
q.C_r = L4.C()
q.omega_r = L4.wl()
q.omega_q = 2*pi*(f0-1e9)
print('{:>8}: {:>7.2f}um {:>5.3f}GHz {:>6.2f}fF {:>7.3f}um'.format(
q.name, 1e6*q.res_length, L4.fl()/1e9, 1e15*L4.C(), (1e6*q.res_length - tot_length)/2))
Explanation: $\lambda/4$ readout resonators
IMPAs from Google will be good in the 4-6GHz range. We will aim for resonators near 6GHz, but have a spread from 5-6.5GHz on the mask. They should be spread every 30MHz or so.
We are changing to fixed resonator frequencies (non variable) for this mask.
The frequency is brought down significantly by the capacitance through to ground through the qubit, as well as the self-capacitance of the coupling cap to ground. These capacitances pull down the transmon frequency more, so we will set Q3 to have no extension, and set the other qubit frequencies around it.
End of explanation
qb = deepcopy(qubits[2])
g = 2*pi*30e6 # qubit-resonator coupling in Hz
print('Range of C_q on the mask:')
print "C_q = 30fF: E_c = {:.2f}MHz".format( qb.E_c(30e6)/(2*pi*hbar)*1e15 )
print "C_q = 95fF: E_c = {:.2f}MHz".format( qb.E_c(95e6)/(2*pi*hbar)*1e15 )
print
print('Ideal:')
print "Transmon: E_c = 250MHz: C_sigma = C_q + C_g = {:.2f}fF".format( e**2/2/250e6/(2*pi*hbar)*1e15 )
print "Charge Sensitive: E_c = 385MHz: C_sigma = C_q + C_g = {:.2f}fF".format( e**2/2/410e6/(2*pi*hbar)*1e15 )
# With caps chosen from the mask:
print "{:>8} {:>10} {:>10} {:>10} {:>10} {:>10} {:>10}".format(
'', 'C_q', 'E_c', 'E_j', 'alpha', 'g', 'C_g')
for q in qubits:
print "{:>8}: {:8.2f}fF {:7.2f}MHz {:7.2f}GHz {:7.2f}MHz {:7.2f}MHz {:8.2f}fF".format(
q.name, 1e15*q.C_q, -q.E_c()/(2*pi*hbar)/1e6, q.E_j()/2/pi/hbar/1e9, q.alpha(q.E_c(),q.E_j())/(2*pi)/1e6, g/2/pi/1e6, 1e15*q.cap_g(g))
# We choose the closest g capacitance from the mask
print "{:>8} {:>10} {:>10} {:>10} {:>10} {:>7} {:>10} {:>9} {:>10} {:>9}".format(
'', 'C_g', 'g', 'Chi_0/2pi', 'Chi/2pi', 'Q_r', 'kappa', '1/kappa', 'I_c', 'n_crit')
for q in qubits:
print "{:>8}: {:>8.2f}fF {:>7.2f}MHz {:>7.2f}MHz {:>7.2f}MHz {:>7.0f} {:>7.2f}MHz {:>7.0f}ns {:>8.2f}nA {:>9.0f}".format(
q.name, 1e15*q.cap_g(q.g()), q.g()/2/pi/1e6, 1e-6*q.Chi_0()/2/pi, 1e-6*q.Chi()/2/pi, q.Q_r(), q.omega_r/q.Q_r()*1e-6/2/pi, q.Q_r()/q.omega_r*1e9, q.I_c()*1e9, ((q.omega_q-q.omega_r)/2/q.g())**2)
#print "{}: C_g = {:.2f}fF g = {:.2f}MHz Chi_0/2pi = {:.2f}MHz Chi/2pi = {:.2f}MHz Q_r = {:.0f} kappa = {:.2f}MHz 1/kappa = {:.0f}ns I_c={:.2f}nA n_crit={:.0f}".format(
# q.name, 1e15*q.cap_g(q.g()), q.g()/2/pi/1e6, 1e-6*q.Chi_0()/2/pi, 1e-6*q.Chi()/2/pi, q.Q_r(), q.omega_r/q.Q_r()*1e-6/2/pi, q.Q_r()/q.omega_r*1e9, q.I_c()*1e9, ((q.omega_q-q.omega_r)/2/q.g())**2)
delta = 380e-6; #2\Delta/e in V
Jc = 1e8*673e-9 # A/cm^2
nJJs = [2,1,1,2]
print( '{:>8} {:>7} {:>6} {:>13}'.format('', 'I_c', 'R_N', 'width') )
for i,q in enumerate(qubits):
print("{}: {:>5.2f}nA {:>5.2f}k {} x {:.3f}nm".format(
q.name, q.I_c()*1e9, 1e-3*pi/4*delta/q.I_c(), nJJs[i], 1e9*q.I_c()/(1e4*Jc)/100e-9/nJJs[i] ))
print( '{:>8} {:>6} {:>17}'.format('', 'Ej/Ec', 'Charge dispersion') )
for q in qubits:
print "{}: {:>6.3f} {:>15.3f}MHz".format(q.name, q.E_j()/q.E_c(), q.charge_dispersion()/2/pi/hbar/1e6)
# What variation in C_g should be included on mask for the C_q variation we have?
print( '{:>7} {:>9} {:>7}'.format('C_q', 'g', 'C_g') )
for C_q_ in [85e-15, 29e-15, e**2/2/250e6]:
for g_ in [2*pi*25e6, 2*pi*50e6, 2*pi*200e6]:
qb.C_q = C_q_
print "{:>5.2f}fF {:>6.2f}MHz {:>5.2f}fF".format(
1e15*C_q_, g_/2/pi/1e6, 1e15*qb.cap_g(g_))
Explanation: Qubit parameters
From Ted Thorbeck's notes:
$E_c = \frac{e^2}{2C}$, $E_c/\hbar=\alpha=\text{anharmonicity}$
$E_J = \frac{I_o \Phi_0}{2 \pi} $
$\omega_q = \sqrt{8E_JE_c}/\hbar $
$g = \frac{1}{2} \frac{C_g}{\sqrt{(C_q+C_g)(C_r+C_g)}}\sqrt{\omega_r\omega_q}$
We want g in the range 25-200MHz for an ideal anharmonicity $\alpha$=250MHz
End of explanation
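As a self-contained sanity check of the formulas above (plain numpy/scipy, independent of the notebook's qubit class; the capacitance and critical current below are illustrative values, not the mask values):
import numpy as np
from scipy.constants import e, h, hbar
C_sigma = 80e-15                         # example total capacitance C_q + C_g (F)
I0 = 30e-9                               # example junction critical current (A)
Phi0 = h/(2*e)                           # flux quantum
E_c = e**2/(2*C_sigma)                   # charging energy (J)
E_j = I0*Phi0/(2*np.pi)                  # Josephson energy (J)
print("E_c/h = %.1f MHz" % (E_c/h/1e6))
print("f_q = %.2f GHz" % (np.sqrt(8*E_j*E_c)/hbar/(2*np.pi)/1e9))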
cpw.setKineticInductanceCorrection(False)
print cpw
cpwx = cpwtools.CPWWithBridges(material='al', w=1e6*cpw.w, s=1e6*cpw.s, bridgeSpacing = 250, bridgeWidth = 3, t_oxide=0.16)
cpwx.setKineticInductanceCorrection(False)
print cpwx
Explanation: Feedline with and without crossovers
End of explanation
d = 4
MperL = inductiveCoupling.inductiveCoupling.CalcMutual(cpw.w*1e6, cpw.w*1e6, cpw.s*1e6, cpw.s*1e6, d, 10*cpw.w*1e6)[0]
print( '{:>8} {:>7} {:>15}'.format('', 'M', 'coupling length') )
for q in qubits:
M = 1/(np.sqrt(q.Q_r()*pi/8/cpw.z0()**2)*q.omega_r)
print "{}: {:>5.2f}pH {:>13.2f}um".format(q.name, M*1e12, M/MperL*1e6)
print( '{:>5} {:>8}'.format('Q_c', 'l_c') )
for q in [3000,6000,9000,15000,21000,27000,33000]:
print "{:>5} {:>6.2f}um".format(q,1/(np.sqrt(q*pi/8/cpw.z0()**2)*qubits[2].omega_r)/MperL*1e6)
Explanation: Inductive Coupling
From [1], we have the dephasing of a qubit:
$\Gamma_\phi = \eta\frac{4\chi^2}{\kappa}\bar{n}$, where $\eta=\frac{\kappa^2}{\kappa^2+4\chi^2}$, $\bar{n}=\left(\frac{\Delta}{2g}\right)^2$
$\Gamma_\phi = \frac{4\chi^2\kappa}{\kappa^2+4\chi^2}\left(\frac{\Delta}{2g}\right)^2$
To maximize the efficiency of readout, we want to maximize the rate of information leaving the system (into the readout chain), or equivalently, maximize dephasing.
$\partial_\kappa\Gamma_\phi = 0 = -\frac{4\chi^2(\kappa^2-4\chi^2)}{(\kappa^2+4\chi^2)^2}$ when $2\chi=\kappa$.
$2\chi = \kappa_r = \omega_r/Q_r$
$ Q_{r,c} = \frac{8Z_0^2}{\pi(\omega M)^2}$ [2]
We want a $Q_c$ of 3k-30k
[1] Yan et al. The flux qubit revisited to enhance coherence and reproducibility. Nature Communications, 7, 1–9. http://doi.org/10.1038/ncomms12964
[2] Matt Beck's Thesis, p.39
End of explanation
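Inverting $ Q_{r,c} = \frac{8Z_0^2}{\pi(\omega M)^2}$ for the mutual inductance is a one-liner; the standalone numpy sketch below assumes $Z_0 = 50\,\Omega$ and a 6 GHz resonator purely for illustration (the notebook's cpwtools/inductiveCoupling helpers are not needed for this):
import numpy as np
Z0 = 50.0                      # assumed line impedance (ohm)
omega_r = 2*np.pi*6e9          # assumed readout resonator frequency (rad/s)
for Qc in [3000, 30000]:
    M = np.sqrt(8*Z0**2/(np.pi*Qc))/omega_r   # mutual inductance giving this Q_c
    print("Q_c = %5d -> M = %.2f pH" % (Qc, M*1e12))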
l_curve = 2*pi*50/4
tot_length = l_curve*(1+2+2+2+1)*2 + 4*750 + 2569 + 4*450 + 2*106
purcell = cpwtools.HalfLResonator(cpw,tot_length)
purcell.addCapacitiveCoupling('in', 40e-15)
purcell.addCapacitiveCoupling('out', 130e-15)
print( "f_max = {:.3f}GHz Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcell.fl(), purcell.Qc('in'), purcell.Qc('out') ) )
purcell.l = (tot_length + 503*4)*1e-6
print( "f_min = {:.3f}GHz Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcell.fl(), purcell.Qc('in'), purcell.Qc('out') ) )
print
print('The measured purcell filter (no crossovers) seems to be 150-200MHz below expected. This has been accounted for below.')
f0 = (qubits[1].omega_r + qubits[2].omega_r)/2/2/pi
purcell.setLengthFromFreq(f0 + 175e6) # The measured purcell filter (no crossovers) seems to be 150-200MHz below expected.
print "f = {:.2f}GHz l = {:.3f}um offset = {:.3f}um Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcell.fl(), purcell.l*1e6, (purcell.l*1e6-tot_length)/4, purcell.Qc('in'), purcell.Qc('out') )
print "V_out/V_in =", (purcell.Qc('in')/purcell.Qc('out'))**0.5
print "{:.2f}% power lost through input".format( 100*purcell.Ql()/purcell.Qc('in') )
print "{:.2f}% power lost through output".format( 100*purcell.Ql()/purcell.Qc('out') )
print "{:.2f}% power lost internally".format( 100*purcell.Ql()/purcell.Qint() )
print
print "The purcell filter frequency goes up by 310MHz when crossovers are added:"
purcellx = deepcopy(purcell)
purcellx.cpw = cpwx
print "f = {:.2f}GHz l = {:.3f}um Q_in = {:.2f} Q_out = {:.2f}".format( 1e-9*purcellx.fl(), purcellx.l*1e6, purcellx.Qc('in'), purcellx.Qc('out') )
print "Purcell Filter FWHM = {:.2f}MHz".format(2*pi*f0/purcell.Ql()/2/pi/1e6)
print "Purcell Filter Q_l = {:.2f}".format(purcell.Ql())
print
print('T1 Limits:')
print('{:>8} {:>10} {:>11}'.format('', 'no purcell', 'yes purcell'))
for q in qubits:
kappa_r = q.omega_r/q.Q_r()
Delta = q.omega_q - q.omega_r
#print "{}: T1 limit (no purcell) = {:.2f}us T1 limit (purcell) = {:.2f}us".format(
print "{}: {:>8.2f}us {:>9.2f}us".format(
q.name, (Delta/q.g())**2/kappa_r * 1e6, (Delta/q.g())**2 * (q.omega_r/q.omega_q) * (2*Delta/q.omega_r*purcell.Ql())**2/kappa_r * 1e6 )
Explanation: Purcell Filter
Do we even need a purcell filter? [3]
Without purcell filter: $\kappa_r T_1 \le \left(\frac{\Delta}{g}\right)^2$
With purcell filter: $\kappa_r T_1 \le \left(\frac{\Delta}{g}\right)^2 \left(\frac{\omega_r}{\omega_q}\right) \left(\frac{2\Delta}{\omega_r/Q_{pf}}\right)^2$
$\kappa_r = \omega_r/Q_r$
With the readout resonators spaced ~30MHz apart, we need a bandwidth of at least 4*30MHz=120MHz.
We have a range of readout resonators from 5-6GHz.
[3] Jeffrey et al. Fast accurate state measurement with superconducting qubits. Physical Review Letters, 112(19), 1–5. http://doi.org/10.1103/PhysRevLett.112.190504
End of explanation
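Plugging representative numbers into the two bounds above (a standalone numpy example; $\Delta/2\pi$ = 1 GHz, $g/2\pi$ = 30 MHz, $\omega_r/2\pi$ = 6 GHz, $Q_r$ = 10^4 and $Q_{pf}$ = 30 are assumed illustrative values, not the notebook's computed ones):
import numpy as np
g = 2*np.pi*30e6
Delta = 2*np.pi*1e9
omega_r = 2*np.pi*6e9
omega_q = omega_r - Delta
Q_r, Q_pf = 1e4, 30.0
kappa_r = omega_r/Q_r
print("no purcell filter: %.0f us" % ((Delta/g)**2/kappa_r*1e6))
print("with purcell filter: %.0f us" % ((Delta/g)**2*(omega_r/omega_q)*(2*Delta/(omega_r/Q_pf))**2/kappa_r*1e6))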
C_q = qubits[2].C_q
L_q = 1/(qubits[2].omega_q**2 * C_q)
R_s = 50
C_s = 0.1e-15
Q_s = 1/(qubits[2].omega_q * R_s * C_s)
R_p = R_s*(1 + Q_s**2)
C_p = C_s * Q_s**2/(1 + Q_s**2)
omega = 1/np.sqrt((C_q+C_p)*L_q)
Q_xy = omega*R_p*(C_q+C_p)
print("f: {:.3f}GHz --> {:.3f}GHz".format( 1e-9/np.sqrt(C_q*L_q)/2/pi, 1e-9*omega/2/pi))
print("Q = {:.2f}".format(Q_xy))
print("1/kappa = {:.2f}us".format(1e6*Q_xy/omega))
Explanation: Loss from XY line
From Thorbeck's notes, we have
$R_p = R_s(1+Q_s^2)$ and $C_p = C_s\left(\frac{Q_s^2}{1+Q_s^2}\right)$
where $Q_s = \frac{1}{\omega R_s C_s}$
and the "s" and "p" subscript refer to just the coupling capacitor and Z0 of the line in a series or parallel configuration. Combining this with the normal LC of the qubit, we can find the loss
End of explanation |
6,704 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
How to calculate kurtosis (according to Fisher’s definition) without bias correction? | Problem:
import numpy as np
import scipy.stats
a = np.array([ 1. , 2. , 2.5, 400. , 6. , 0. ])
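# scipy's defaults are fisher=True and bias=True, i.e. Fisher's definition with no bias correction applied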
kurtosis_result = scipy.stats.kurtosis(a) |
6,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas Intro
From this tutorial
What is pandas? Why use Pandas?
Pandas is an open source Python Library. Like most coding languages you can manipulate data in a very easy manner. This means that my days of almost manually puckering data are behind me, since I moved into the green fields of programming. But after cleansing the data, you still needed a program to load it. Proprietary good old Stata for me was very user friendly, but it is expensive! Most user-friendly options for programs are expensive too. And after going through many months of "trial" and hacking my way into getting several trials… After even considering buying a pirated copy of the program I decided that it was going to be much easier to just learn Python.
Data Structures
Pandas has 2 data structures that are built on top of numpy, this makes them faster.
Section | Description
--- | ---
Series | One dimensional Object, simillar to an array. It assigns label indexes to each item
Data Frame | Tabular data structure with rows and coluns
Series
A Series is a one-dimensional object similar to an array, list, or a column in a table, in that it has a labeled index for each item. By default, the indexes go from 0 to N-1
Step1: Series from a dictionary
Step2: Accessing an item from a series
Step3: BOOLEAN indexing for selection
Step4: Not null function
Step5: Data Frame
To create a DataFrame we can pass a dictionary of lits in to the DataFrame constructor. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.set_option('max_columns', 50)
%matplotlib inline
series = pd.Series([1, "number", 6, "Happy Series!"])
series
Explanation: Pandas Intro
From this tutorial
What is pandas? Why use Pandas?
Pandas is an open source Python library. Like most coding languages, it lets you manipulate data in a very easy manner. This means that my days of almost manually wrangling data are behind me, since I moved into the green fields of programming. But after cleansing the data, you still needed a program to load it. Proprietary good old Stata was very user friendly for me, but it is expensive! Most user-friendly alternatives are expensive too. After going through many months of "trial" versions, hacking my way into getting several more trials, and even considering buying a pirated copy of the program, I decided that it was going to be much easier to just learn Python.
Data Structures
Pandas has 2 data structures that are built on top of NumPy, which makes them fast.
Structure | Description
--- | ---
Series | One-dimensional object, similar to an array. It assigns a label index to each item
Data Frame | Tabular data structure with rows and columns
Series
A Series is a one-dimensional object similar to an array, list, or a column in a table, in that it has a labeled index for each item. By default, the indexes go from 0 to N-1
End of explanation
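To make the "labeled index" idea concrete, here is a small illustrative example (not part of the original tutorial) of a Series built with an explicit index instead of the default 0..N-1 positions:

prices = pd.Series([3.50, 2.00, 4.25], index=['taco', 'burrito', 'salsa'])
prices['burrito']  # look up by label instead of position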
dictionary = {'Favorite Food': 'mexican', 'Favorite city': 'Portland', 'Hometown': 'Mexico City'}
favorite = pd.Series(dictionary)
favorite
Explanation: Series from a dictionary:
End of explanation
favorite['Favorite Food']
Explanation: Accessing an item from a series:
End of explanation
favorite[favorite=='mexican']
Explanation: BOOLEAN indexing for selection
End of explanation
favorite.notnull()
favorite[favorite.notnull()]
Explanation: Not null function
End of explanation
data = {'year': [2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012],
'team': ['Bears', 'Bears', 'Bears', 'Packers', 'Packers', 'Lions', 'Lions', 'Lions'],
'wins': [11, 8, 10, 15, 11, 6, 10, 4],
'losses': [5, 8, 6, 1, 5, 10, 6, 12]}
football = pd.DataFrame(data, columns=['year', 'team', 'wins', 'losses'])
football
Explanation: Data Frame
To create a DataFrame we can pass a dictionary of lists into the DataFrame constructor.
End of explanation |
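Boolean indexing works on a DataFrame just like it did on the Series above. As a quick illustrative follow-up using the football DataFrame defined in this cell:

football[football.wins > 10]          # seasons with more than 10 wins
football[football.team == 'Packers']  # rows for a single team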
6,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modularization Check
Research Question
Question
"How well does the domain-oriented module structure fit the development activity?"
Idea
Heuristic
Step1: Consider only pure Java source code
Step2: Analysis
Set a marker for each commit
Step3: Pivot the table
Step4: Calculate the distance between vectors
Step5: (Display the result more nicely)
Step6: Visualization
Dimensionality reduction
Step7: (Display the result more nicely)
Step8: Extract modules
Step9: Create an interactive chart
from ozapfdis import git
git_log = git.log_numstat("../../../dropover/")[['sha', 'file']]
git_log.head()
Explanation: Modularization Check
Research Question
Question
"How well does the domain-oriented module structure fit the development activity?"
Idea
Heuristic: "Are changes within a component made together?"
* Changes => commits from the version control system
* Components => part of the file path
Data Import
Import the Git log
End of explanation
prod_code = git_log.copy()
prod_code = prod_code[prod_code.file.str.contains("src/main/java")]
prod_code = prod_code[~prod_code.file.str.endswith("package-info.java")]
prod_code.head()
Explanation: Consider only pure Java source code
End of explanation
prod_code['hit'] = 1
prod_code.head()
Explanation: Analysis
Set a marker for each commit
End of explanation
commit_matrix = prod_code.reset_index().pivot_table(
index='file',
columns='sha',
values='hit',
fill_value=0)
commit_matrix.iloc[0:5,50:55]
Explanation: Pivot the table
End of explanation
from sklearn.metrics.pairwise import cosine_distances
dissimilarity_matrix = cosine_distances(commit_matrix)
dissimilarity_matrix[:5,:5]
Explanation: Calculate the distance between vectors
End of explanation
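To see what this distance captures, here is a tiny illustrative example with made-up commit hit-vectors (not taken from the repository): files that are modified in mostly the same commits get a small cosine distance, files with no shared commits get a distance of 1.

import numpy as np
from sklearn.metrics.pairwise import cosine_distances

file_a = np.array([[1, 1, 0, 1]])  # hit-vector of file A over four commits
file_b = np.array([[1, 1, 0, 0]])  # mostly changed together with A
file_c = np.array([[0, 0, 1, 0]])  # changed in different commits
print(cosine_distances(file_a, file_b))  # small distance
print(cosine_distances(file_a, file_c))  # 1.0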
import pandas as pd
dissimilarity_df = pd.DataFrame(
dissimilarity_matrix,
index=commit_matrix.index,
columns=commit_matrix.index)
dissimilarity_df.iloc[:5,:2]
Explanation: (Display the result more nicely)
End of explanation
from sklearn.manifold import MDS
# uses a fixed seed for random_state for reproducibility
model = MDS(dissimilarity='precomputed', random_state=0)
dissimilarity_2d = model.fit_transform(dissimilarity_df)
dissimilarity_2d[:5]
Explanation: Visualization
Dimensionality reduction
End of explanation
dissimilarity_2d_df = pd.DataFrame(
dissimilarity_2d,
index=commit_matrix.index,
columns=["x", "y"])
dissimilarity_2d_df.head()
Explanation: (Display the result more nicely)
End of explanation
dissimilarity_2d_df['module'] = dissimilarity_2d_df.index.str.split("/").str[6].values
dissimilarity_2d_df.head()
Explanation: Extract modules
End of explanation
from ausi import pygal
xy = pygal.create_xy_chart(dissimilarity_2d_df,"module")
xy.render_in_browser()
Explanation: Create an interactive chart
End of explanation |
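If the ausi/pygal helper is not available, a plain (non-interactive) matplotlib scatter plot gives a similar picture. This is a sketch that assumes the dissimilarity_2d_df built above:

import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 8))
for module, points in dissimilarity_2d_df.groupby('module'):
    ax.scatter(points.x, points.y, label=module)
ax.legend()
ax.set_title('Files positioned by commit dissimilarity (MDS)')
plt.show()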
6,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PointGraph
This notebook presents the basic concepts behind Menpo's PointGraph class and subclasses.
PointGraph is basically a Graph with geometry (a PointCloud). This means that apart from the edge connections between vertices, a PointGraph also defines spatial location coordinates for each vertex. The PointGraph subclasses are
Step1: 1. PointUndirectedGraph
The following undirected graph
Step2: and printed and visualized as
Step3: 2. PointDirectedGraph
Similarly, the following directed graph with isolated vertices
Step4: 3. PointTree
A Tree in Menpo is defined as a directed graph, thus PointTree is a subclass of PointDirectedGraph. The following tree
Step5: 4. Functionality
For the basic properties of graphs and trees, please refer to the Graph notebook. Herein, we present some more advanced functionality, such as shortest paths and minimum spanning tree.
Initialization from edges
All PointGraph subclasses are constructed using the adjacency matrix. However, it is possible to also create a graph (or tree) using the edges, i.e. a matrix of size (n_edges, 2, ) that contains all the pairs of vertices that are connected with an edge. For example, the following undirected graph
Step6: Paths
We can retrieve a path between two vertices (not the shortest one!) as
Step7: Or get all the possible paths between two vertices
Step8: The paths can be easily visualized as
Step9: Shortest path
The previous functionality returns the first path that was found. One can retrieve the shortest path that connects two vertices as
Step10: Similarly
Step11: Of course there may be no path
Step12: Minimum spanning tree
Let us define the following undirected graph with weights
Step13: The minimum spanning tree of the above graph is
Step14: Mask
All PointGraphs have a from_mask method that applies a mask on the graph's vertices. For example, let's remove vertices 2, 6 and 8 from the previous undirected graph
Step15: 5. Facial PointGraph
PointGraphs are useful when defining landmarks with different semantic groups. For example, let us load and visualize the lenna image that has landmarks in the LJSON format.
Step16: The landmarks are actually a PointUndirectedGraph that can be visualized as normal.
Step17: And each one of the subgroups is a PointUndirectedGraph
Step18: 6. Widget
All PointGraphs can be visualized using widgets. Note that Jupyter widget functionality is provided by the menpowidgets package and must be installed prior to using widgets (conda install -c menpo menpowidgets).
Step19: This basically calls visualize_pointclouds() widget, which can accept a list of different PointGraphs. For example | Python Code:
%matplotlib inline
import numpy as np
from scipy.sparse import csr_matrix
import matplotlib.pyplot as plt
from menpo.shape import PointUndirectedGraph, PointDirectedGraph, PointTree
Explanation: PointGraph
This notebook presents the basic concepts behind Menpo's PointGraph class and subclasses.
PointGraph is basically a Graph with geometry (a PointCloud). This means that apart from the edge connections between vertices, a PointGraph also defines spatial location coordinates for each vertex. The PointGraph subclasses are:
* PointUndirectedGraph: graph with undirected edge connections
* PointDirectedGraph: graph with directed edge connections
* PointTree: directed graph in which any two vertices are connected with exactly one path
For a tutorial on the basics of the Graph class, please refer to the Graph notebook.
This presentation contains the following:
PointUndirectedGraph
PointDirectedGraph
PointTree
Functionality
Facial PointGraph
Widget
First, let's make all the necessary imports:
End of explanation
points = np.array([[10, 30], [0, 20], [20, 20], [0, 10], [20, 10], [0, 0]])
adj_undirected = np.array([[0, 1, 1, 0, 0, 0],
[1, 0, 1, 1, 0, 0],
[1, 1, 0, 0, 1, 0],
[0, 1, 0, 0, 1, 1],
[0, 0, 1, 1, 0, 0],
[0, 0, 0, 1, 0, 0]])
undirected_graph = PointUndirectedGraph(points, adj_undirected)
Explanation: 1. PointUndirectedGraph
The following undirected graph:
|---0---|
| |
| |
1-------2
| |
| |
3-------4
|
|
5
can be defined as:
End of explanation
print(undirected_graph)
undirected_graph.view(image_view=False, render_axes=False, line_width=2);
Explanation: and printed and visualized as:
End of explanation
points = np.array([[10, 30], [0, 20], [20, 20], [0, 10], [20, 10], [0, 0], [20, 0]])
adj_directed = np.array([[0, 0, 0, 0, 0, 0, 0],
[1, 0, 1, 1, 0, 0, 0],
[1, 1, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 1, 1, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
directed_graph = PointDirectedGraph(points, adj_directed)
print(directed_graph)
directed_graph.view(image_view=False, render_axes=False, line_width=2);
Explanation: 2. PointDirectedGraph
Similarly, the following directed graph with isolated vertices:
|-->0<--|
| |
| |
1<----->2
| |
v v
3------>4
|
v
5 6
can be defined as:
End of explanation
points = np.array([[30, 30], [10, 20], [50, 20], [0, 10], [20, 10], [50, 10], [0, 0], [20, 0], [50, 0]])
adj_tree = csr_matrix(([1] * 8, ([0, 0, 1, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6, 7, 8])), shape=(9, 9))
tree = PointTree(points, adj_tree, root_vertex=0)
print(tree)
tree.view(image_view=False, render_axes=False, line_width=2);
Explanation: 3. PointTree
A Tree in Menpo is defined as a directed graph, thus PointTree is a subclass of PointDirectedGraph. The following tree:
0
|
___|___
1 2
| |
_|_ |
3 4 5
| | |
| | |
6 7 8
can be defined as:
End of explanation
points = np.array([[10, 30], [0, 20], [20, 20], [0, 10], [20, 10], [0, 0]])
edges = [[0, 2], [2, 4], [3, 4]]
graph = PointUndirectedGraph.init_from_edges(points, edges)
print(graph)
graph.view(image_view=False, render_axes=False, line_width=2);
Explanation: 4. Functionality
For the basic properties of graphs and trees, please refer to the Graph notebook. Herein, we present some more advanced functionality, such as shortest paths and minimum spanning tree.
Initialization from edges
All PointGraph subclasses are constructed using the adjacency matrix. However, it is possible to also create a graph (or tree) using the edges, i.e. a matrix of size (n_edges, 2, ) that contains all the pairs of vertices that are connected with an edge. For example, the following undirected graph:
```
0---|
|
|
1 2
|
|
3-------4
5
```
can be defined as:
End of explanation
v1 = 2
v2 = 5
print("The path between {} and {} in the undirected_graph is {}.".format(v1, v2, undirected_graph.find_path(v1, v2)))
print("The path between {} and {} in the directed_graph is {}.".format(v1, v2, directed_graph.find_path(v1, v2)))
print("The path between {} and {} in the tree is {}.".format(v1, v2, tree.find_path(v1, v2)))
Explanation: Paths
We can retrieve a path between two vertices (not the shortest one!) as:
End of explanation
v1 = 2
v2 = 4
print("Vertices {} and {} in the directed_graph "
"are connected with the following paths: {}.".format(v1, v2, directed_graph.find_all_paths(v1, v2)))
Explanation: Or get all the possible paths between two vertices:
End of explanation
all_paths = directed_graph.find_all_paths(v1, v2)
paths = []
points = np.array([[10, 30], [0, 20], [20, 20], [0, 10], [20, 10], [0, 0], [20, 0]])
for path_list in all_paths:
path = csr_matrix(([1] * len(path_list[:-1]),
(path_list[:-1], path_list[1:])), shape=(7, 7))
paths.append(PointDirectedGraph(points, path))
renderer = directed_graph.view(new_figure=True, image_view=False, line_width=2);
paths[0].view(figure_id=renderer.figure_id, image_view=False, line_colour='b', line_width=2, render_markers=False);
paths[1].view(figure_id=renderer.figure_id, image_view=False, render_axes=False, line_colour='g', line_width=2);
plt.legend(['Graph edges', 'Graph vertices', 'First path connecting 2 and 4', 'Second path connecting 2 and 4'], loc=8);
Explanation: The paths can be easily visualized as:
End of explanation
v1 = 2
v2 = 4
shortest_path, distance = directed_graph.find_shortest_path(v1, v2)
print("The shortest path connecting vertices {} and {} "
"in the directed_graph is {} and costs {}.".format(v1, v2, shortest_path, distance))
Explanation: Shortest path
The previous functionality returns the first path that was found. One can retrieve the shortest path that connects two vertices as:
End of explanation
v1 = 0
v2 = 4
shortest_path, distance = undirected_graph.find_shortest_path(v1, v2)
print("The shortest path connecting vertices {} and {} "
"in the undirected_graph is {} and costs {}.".format(v1, v2, shortest_path, distance))
Explanation: Similarly:
End of explanation
v1 = 0
v2 = 6
shortest_path, distance = directed_graph.find_shortest_path(v1, v2)
print("The shortest path connecting vertices {} and {} "
"in the directed_graph is {} and costs {}.".format(v1, v2, shortest_path, distance))
Explanation: Of course there may be no path:
End of explanation
points = np.array([[0, 10], [10, 20], [20, 20], [30, 20], [40, 10], [30, 0], [20, 0], [10, 0], [20, 10]])
adj = csr_matrix(([4, 4, 8, 8, 8, 8, 11, 11, 7, 7, 4, 4, 2, 2, 9, 9, 14, 14, 10, 10, 2, 2, 1, 1, 6, 6, 7, 7],
([0, 1, 0, 7, 1, 2, 1, 7, 2, 3, 2, 5, 2, 8, 3, 4, 3, 5, 4, 5, 5, 6, 6, 7, 6, 8, 7, 8],
[1, 0, 7, 0, 2, 1, 7, 1, 3, 2, 5, 2, 8, 2, 4, 3, 5, 3, 5, 4, 6, 5, 7, 6, 8, 6, 8, 7])), shape=(9, 9))
graph = PointUndirectedGraph(points, adj)
print(graph)
graph.view(image_view=False, render_axes=False, axes_x_limits=[-1, 41], axes_y_limits=[-10, 30], line_width=2);
# vertices numbering
for k, p in enumerate(graph.points):
plt.gca().annotate(str(k), xy=(p[0], p[1]),
horizontalalignment='center',
verticalalignment='bottom',
fontsize=14)
Explanation: Minimum spanning tree
Let us define the following undirected graph with weights:
End of explanation
mst = graph.minimum_spanning_tree(root_vertex=0)
print(mst)
mst.view(image_view=False, render_axes=False, axes_x_limits=[-1, 41], axes_y_limits=[-10, 30], line_width=2);
# vertices numbering
for k, p in enumerate(mst.points):
plt.gca().annotate(str(k), xy=(p[0], p[1]),
horizontalalignment='center',
verticalalignment='top',
fontsize=14)
Explanation: The minimum spanning tree of the above graph is:
End of explanation
# Create mask that removes vertices 2, 6 and 8
mask = np.array([True, True, False, True, True, True, False, True, False])
# Mask the graph
masked_graph = graph.from_mask(mask)
# Visualize
print(masked_graph)
masked_graph.view(image_view=False, render_axes=False, axes_x_limits=[-1, 41], axes_y_limits=[-10, 30], line_width=2);
# vertices numbering
for k, p in enumerate(masked_graph.points):
plt.gca().annotate(str(k), xy=(p[0], p[1]),
horizontalalignment='center',
verticalalignment='bottom',
fontsize=14)
Explanation: Mask
All PointGraphs have a from_mask method that applies a mask on the graph's vertices. For example, let's remove vertices 2, 6 and 8 from the previous undirected graph:
End of explanation
import menpo.io as mio
im = mio.import_builtin_asset.lenna_png()
im.view_landmarks(render_legend=True);
Explanation: 5. Facial PointGraph
PointGraphs are useful when defining landmarks with different semantic groups. For example, let us load and visualize the lenna image that has landmarks in the LJSON format.
End of explanation
print(im.landmarks['LJSON'].lms)
im.landmarks['LJSON'].lms.view(render_axes=False);
Explanation: The landmarks are actually a PointUndirectedGraph that can be visualized as normal.
End of explanation
print(im.landmarks['LJSON']['mouth'])
im.landmarks['LJSON']['mouth'].view(render_axes=False);
Explanation: And each one of the subgroups is a PointUndirectedGraph:
End of explanation
tree.view_widget()
Explanation: 6. Widget
All PointGraphs can be visualized using widgets. Note that Jupyter widget functionality is provided by the menpowidgets package and must be installed prior to using widgets (conda install -c menpo menpowidgets).
End of explanation
from menpowidgets import visualize_pointclouds
visualize_pointclouds([undirected_graph, directed_graph, tree, mst])
Explanation: This basically calls visualize_pointclouds() widget, which can accept a list of different PointGraphs. For example:
End of explanation |
6,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network Graph Demo
Step1: Standard Usage
Step2: Look at Base Class
The NetworkView object extends NetworkViewBase. You can also generate a NetworkViewBase object by itself, which does not include the button and dropdown menu widget elements. This could be incorporated into custom widgets.
Step3: Attributes can be changed programmatically
from psst.network.graph import (
NetworkModel, NetworkViewBase, NetworkView
)
from psst.case import read_matpower
case = read_matpower('../cases/case118.m')
Explanation: Network Graph Demo
End of explanation
# Create the model from the case
m = NetworkModel(case, sel_bus='Bus1')
# Create the view from the model
v = NetworkView(model=m)
v
Explanation: Standard Usage
End of explanation
m = NetworkModel(case, sel_bus='Bus112')
v_base = NetworkViewBase(model=m)
v_base
Explanation: Look at Base Class
The NetworkView object extends NetworkViewBase. You can also generate a NetworkViewBase object by itself, which does not include the button and dropdown menu widget elements. This could be incorporated into custom widgets.
End of explanation
v_base.show_gen_names = True
v_base.show_load = False
v_base.show_background_lines = True
Explanation: Attributes can be changed programmatically
End of explanation |
6,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1].
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:200]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
print('Reviews Length: {}, Words Length: {}'.format(np.shape(reviews), len(words)))
all_text[:200]
words[:10]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
# (rank words by frequency so the most common word maps to 1; 0 is kept for padding)
from collections import Counter
word_counts = Counter(words)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(sorted_vocab, 1)}
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for review in reviews:
    reviews_ints.append([vocab_to_int[word] for word in review.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = np.array([1 if label == 'positive' else 0 for label in labels.split('\n')])
print(labels.shape)
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length, keeping the labels aligned with the reviews
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = labels[non_zero_idx]
print(len(reviews_ints))
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, v in enumerate(reviews_ints):
features[i, -len(v):] = np.array(v)[:seq_len]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
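A couple of quick, illustrative sanity checks on the features array built above:

assert features.shape == (len(reviews_ints), seq_len)
print(features.shape)
print(features[0, :10])   # typically zeros here if the first review is shorter than 200 words
print(features[0, -10:])  # the review's word integers sit at the end of the row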
features[:10,:100]
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
split_frac = 0.8
l_f = int(len(features)*0.8)
train_x, val_x = features[:l_f], features[l_f:]
train_y, val_y = labels[:l_f], labels[l_f:]
t_split = int(len(val_x)*0.5)
val_x, test_x = val_x[:t_split], val_x[t_split:]
val_y, test_y = val_y[:t_split], val_y[t_split:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory. TensorFlow is great at matmul; however, it is better to multiply a small number of large matrices than a large number of small matrices.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int) + 1  # +1 because word integers start at 1 and 0 is reserved for padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name= 'inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name= 'labels')
keep_prob = tf.placeholder(tf.float32, name= 'keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform([n_words, embed_size], -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
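As an illustrative usage check of the generator above (this assumes train_x and train_y from the earlier split), one batch can be pulled out directly:

x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)  # expected: (500, 200) and (500,)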
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
6,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP)
Decoding of motor imagery applied to EEG data decomposed using CSP. A
classifier is then applied to features extracted on CSP-filtered signals.
See https
Step1: Classification with linear discrimant analysis
Step2: Look at performance over time | Python Code:
# Authors: Martin Billinger <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from sklearn.pipeline import Pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import ShuffleSplit, cross_val_score
from mne import Epochs, pick_types, events_from_annotations
from mne.channels import make_standard_montage
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
print(__doc__)
# #############################################################################
# # Set parameters and read data
# avoid classification of evoked responses by using epochs that start 1s after
# cue onset.
tmin, tmax = -1., 4.
event_id = dict(hands=2, feet=3)
subject = 1
runs = [6, 10, 14] # motor imagery: hands vs feet
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
eegbci.standardize(raw) # set channel names
montage = make_standard_montage('standard_1005')
raw.set_montage(montage)
# strip channel names of "." characters
raw.rename_channels(lambda x: x.strip('.'))
# Apply band-pass filter
raw.filter(7., 30., fir_design='firwin', skip_by_annotation='edge')
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
picks = pick_types(raw.info, meg=False, eeg=True, stim=False, eog=False,
exclude='bads')
# Read epochs (train will be done only between 1 and 2s)
# Testing will be done with a running classifier
epochs = Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=None, preload=True)
epochs_train = epochs.copy().crop(tmin=1., tmax=2.)
labels = epochs.events[:, -1] - 2
Explanation: Motor imagery decoding from EEG data using the Common Spatial Pattern (CSP)
Decoding of motor imagery applied to EEG data decomposed using CSP. A
classifier is then applied to features extracted on CSP-filtered signals.
See https://en.wikipedia.org/wiki/Common_spatial_pattern and [1]. The EEGBCI
dataset is documented in [2]. The data set is available at PhysioNet [3]_.
References
.. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping
of the abnormal components in the clinical EEG. Electroencephalography
and Clinical Neurophysiology, 79(6):440--447, December 1991.
.. [2] Schalk, G., McFarland, D.J., Hinterberger, T., Birbaumer, N.,
Wolpaw, J.R. (2004) BCI2000: A General-Purpose Brain-Computer Interface
(BCI) System. IEEE TBME 51(6):1034-1043.
.. [3] Goldberger AL, Amaral LAN, Glass L, Hausdorff JM, Ivanov PCh, Mark RG,
Mietus JE, Moody GB, Peng C-K, Stanley HE. (2000) PhysioBank,
PhysioToolkit, and PhysioNet: Components of a New Research Resource for
Complex Physiologic Signals. Circulation 101(23):e215-e220.
End of explanation
# Define a monte-carlo cross-validation generator (reduce variance):
scores = []
epochs_data = epochs.get_data()
epochs_data_train = epochs_train.get_data()
cv = ShuffleSplit(10, test_size=0.2, random_state=42)
cv_split = cv.split(epochs_data_train)
# Assemble a classifier
lda = LinearDiscriminantAnalysis()
csp = CSP(n_components=4, reg=None, log=True, norm_trace=False)
# Use scikit-learn Pipeline with cross_val_score function
clf = Pipeline([('CSP', csp), ('LDA', lda)])
scores = cross_val_score(clf, epochs_data_train, labels, cv=cv, n_jobs=1)
# Printing the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
class_balance))
# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)
csp.plot_patterns(epochs.info, ch_type='eeg', units='Patterns (AU)', size=1.5)
Explanation: Classification with linear discrimant analysis
End of explanation
sfreq = raw.info['sfreq']
w_length = int(sfreq * 0.5) # running classifier: window length
w_step = int(sfreq * 0.1) # running classifier: window step size
w_start = np.arange(0, epochs_data.shape[2] - w_length, w_step)
scores_windows = []
for train_idx, test_idx in cv_split:
y_train, y_test = labels[train_idx], labels[test_idx]
X_train = csp.fit_transform(epochs_data_train[train_idx], y_train)
X_test = csp.transform(epochs_data_train[test_idx])
# fit classifier
lda.fit(X_train, y_train)
# running classifier: test classifier on sliding window
score_this_window = []
for n in w_start:
X_test = csp.transform(epochs_data[test_idx][:, :, n:(n + w_length)])
score_this_window.append(lda.score(X_test, y_test))
scores_windows.append(score_this_window)
# Plot scores over time
w_times = (w_start + w_length / 2.) / sfreq + epochs.tmin
plt.figure()
plt.plot(w_times, np.mean(scores_windows, 0), label='Score')
plt.axvline(0, linestyle='--', color='k', label='Onset')
plt.axhline(0.5, linestyle='-', color='k', label='Chance')
plt.xlabel('time (s)')
plt.ylabel('classification accuracy')
plt.title('Classification score over time')
plt.legend(loc='lower right')
plt.show()
Explanation: Look at performance over time
End of explanation |
6,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linguistics 110
Step1: Exploring TIMIT Data <a id='timit'></a>
We will start off by exploring TIMIT data taken from 8 different regions. These measurements are taken at the midpoint of vowels, where vowel boundaries were determined automatically using forced alignment.
Uploading the data
Prior to being able to work with the data, we have to upload our dataset. The following two lines of code will read in our data and create a dataframe. The last line of code prints the timit dataframe, but instead of printing the whole dataframe, by using the method .head, it only prints the first 5 rows.
Step2: Look at the dataframe you created and try to figure out what each column measures. Each column represents a different attribute, see the following table for more information.
|Column Name|Details|
|---|---|
|speaker|unique speaker ID|
|gender|Speaker’s self-reported gender|
|region|Speaker dialect region number|
|word|Lexical item (from sentence prompt)|
|vowel|Vowel ID|
|duration|Vowel duration (seconds)|
|F1/F2/F3/f0|f0 and F1-F3 in BPM (Hz)|
Sometimes data is encoded with with an identifier, or key, to save space and simplify calculations. Each of those keys corresponds to a specific value. If you look at the region column, you will notice that all of the values are numbers. Each of those numbers corresponds to a region, for example, in our first row the speaker, cjf0, is from region 1. That corresponds to New England. Below is a table with all of the keys for region.
|Key|Region|
|---|---|
|1|New England|
|2|Northern|
|3|North Midland|
|4|South Midland|
|5|Southern|
|6|New York City|
|7|Western|
|8|Army Brat|
Transformations
When inspecting data, you may realize that there are changes to be made -- possibly due to the representation to the data or errors in the recording. Before jumping into analysis, it is important to clean the data.
One thing to notice about timit is that the column vowel contains ARPABET identifiers for the vowels. We want to convert the vowel column to be IPA characters, and will do so in the cell below.
Step3: Most of the speakers will say the same vowel multiple times, so we are going to average those values together. The end result will be a dataframe where each row represents the average values for each vowel for each speaker.
Step4: Splitting on Gender
Using the same dataframe from above, timit_avg, we are going to split into dataframes grouped by gender. To identify the possible values of gender in the gender column, we can use the method .unique on the column.
Step5: You could see that for this specific dataset there are only "female" and "male" values in the column. Given that information, we'll create two subsets based off of gender.
We'll split timit_avg into two separate dataframes, one for females, timit_female, and one for males, timit_male. Creating these subset dataframes does not affect the original timit_avg dataframe.
Step6: Distribution of Formants
We want to inspect the distributions of F1, F2, and F3 for those that self-report as male and those that self-report as female to identify possible trends or relationships. Having our two split dataframes, timit_female and timit_male, eases the plotting process.
Run the cell below to see the distribution of F1.
Step7: Does there seem to be a notable difference between male and female distributions of F1?
Next, we plot F2.
Step8: Finally, we create the same visualization, but for F3.
Step9: Do you see a more pronounced difference across the the different F values? Are they the same throughout? Can we make any meaningful assumptions from these visualizations?
An additional question
Step10: The ID column contains a unique value for each individual. Each individual has a row for each of the different vowels they measured.
Step11: Splitting on Gender
As we did with the TIMIT data, we are going to split class_data based on self-reported gender. We need to figure out what the possible responses for the column were.
Step12: Notice that there are three possible values for the column. We do not have a large enough sample size to responsibly come to conclusions for Prefer not to answer, so for now we'll compare Male and Female. We'll call our new split dataframes class_female and class_male.
Step13: Comparing Distributions
The following visualizations compare the the distribution of formants for males and females, like we did for the TIMIT data.
First, we'll start with F1.
Step14: Next is F2.
Step15: And finally F3.
Step16: Do the spread of values appear to be the same for females and males? Do the same patterns that occur in the TIMIT data appear in the class's data?
Vowel Spaces <a id='vs'></a>
Run the cell below to define some functions that we will be using.
Step17: We are going to be recreating the following graphic from this website.
Before we can get to creating, we need to get a singular value for each column for each of the vowels (so we can create coordinate pairs). To do this, we are going to find the average formant values for each of the vowels in our dataframes. We'll do this for both timit and class_data.
Step18: Each of these new tables has a row for each vowel, which comprisises of the averaged values across all speakers.
Plotting the Vowel Space
Run the cell below to construct a vowel space for the class's data, in which we plot F1 on F2.
Note that both axes are descending.
Step19: Using Logarithmic Axes
In our visualization above, we use linear axes in order to construct our vowel space. The chart we are trying to recreate has logged axes (though the picture does not indicate it). Below we log-transform all of the values in our dataframes.
Step20: Below we plot the vowel space using these new values.
Step21: What effect does using the logged values have, if any? What advantages does using these values have? Are there any negatives? This paper might give some ideas.
Overlaying a Vowel Space Chart
Finally, we are going to overlay a blank vowel space chart outline to see how close our data reflects the theoretical vowel chart.
Step22: How well does it match the original?
Below we generate the same graph, except using the information from the TIMIT dataset.
Step23: How does the TIMIT vowel space compare to the vowel space from our class data? What may be the cause for any differences between our vowel space and the one constructed using the TIMIT data? Do you notice any outliers or do any points that seem off?
Variation in Vowel Spaces <a id='vvs'></a>
In the following visualizations, we are going to show each individual vowel from each person in the F2 and F1 dimensions (logged). Each color corresponds to a different vowel -- see the legend for the exact pairs.
Step24: In the following visualization, we replace the colors with the IPA characters and attempt to clump the vowels together.
Step25: Formants vs Height <a id='fvh'></a>
We are going to compare each of the formants and height to see if there is a relationship between the two. To help visualize that, we are going to plot a regression line, which is also referred to as the line of best fit.
We are going to use the maximum of each formant to compare to height. So for each speaker, we will calculate their greatest F1, F2, and F3 across all vowels, then compare one of those to their height. We create the necessary dataframe in the cell below using the class's data.
Step26: First we will plot Max F1 against Height.
Note
Step27: Is there a general trend for the data that you notice? What do you notice about the different color dots?
Next, we plot Max F2 on Height.
Step28: Finally, Max F3 vs Height.
Step29: Do you notice a difference between the trends for the three formants?
Now we are going to plot two lines of best fit -- one for males, one for females. Before we plotted one line for all of the values, but now we are separating by gender to see if gender explains some of the difference in formants values.
For now, we're going deal with just Max F1.
Step30: Is there a noticeable difference between the two? Did you expect this result?
We're going to repeat the above graph, plotting a different regression line for males and females, but this time, using timit -- having a larger sample size may help expose patterns. Before we do that, we have to repeat the process of calulating the maximum value for each formants for each speaker. Run the cell below to do that and generate the plot. The blue dots are females, the orange dots are males, and the green line is the regression line for all speakers. | Python Code:
# DON'T FORGET TO RUN THIS CELL
import math
import numpy as np
import pandas as pd
import seaborn as sns
import datascience as ds
import matplotlib.pyplot as plt
sns.set_style('darkgrid')
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: Linguistics 110: Vowel Formants
Professor Susan Lin
In this notebook, we use both data from an outside source and data that the class generated to explore the relationships between formants, gender, and height.
Table of Contents
1 - Exploring TIMIT Data
2 - Using the Class's Data
3 - Vowel Spaces
4 - Variation in Vowel Spaces
5 - Formants vs Height
Remember that to run a cell, you can either click the play button in the toolbar, or you can press shift and enter on your keyboard. To get a quick review of Jupyter notebooks, you can look at the VOT Notebook. Make sure to run the following cell before you get started.
End of explanation
timit = pd.read_csv('data/timitvowels.csv')
timit.head()
Explanation: Exploring TIMIT Data <a id='timit'></a>
We will start off by exploring TIMIT data taken from 8 different regions. These measurements are taken at the midpoint of vowels, where vowel boundaries were determined automatically using forced alignment.
Uploading the data
Prior to being able to work with the data, we have to upload our dataset. The following two lines of code will read in our data and create a dataframe. The last line of code prints the timit dataframe, but instead of printing the whole dataframe, by using the method .head, it only prints the first 5 rows.
End of explanation
IPAdict = {"AO" : "ɔ", "AA" : "ɑ", "IY" : "i", "UW" : "u", "EH" : "ɛ", "IH" : "ɪ", "UH":"ʊ", "AH": "ʌ", "AX" : "ə", "AE":"æ", "EY" :"eɪ", "AY": "aɪ", "OW":"oʊ", "AW":"aʊ", "OY" :"ɔɪ", "ER":"ɚ"}
timit['vowel'] = [IPAdict[x] for x in timit['vowel']]
timit.head()
Explanation: Look at the dataframe you created and try to figure out what each column measures. Each column represents a different attribute, see the following table for more information.
|Column Name|Details|
|---|---|
|speaker|unique speaker ID|
|gender|Speaker’s self-reported gender|
|region|Speaker dialect region number|
|word|Lexical item (from sentence prompt)|
|vowel|Vowel ID|
|duration|Vowel duration (seconds)|
|F1/F2/F3/f0|f0 and F1-F3 (Hz)|
Sometimes data is encoded with an identifier, or key, to save space and simplify calculations. Each of those keys corresponds to a specific value. If you look at the region column, you will notice that all of the values are numbers. Each of those numbers corresponds to a region; for example, in our first row the speaker, cjf0, is from region 1. That corresponds to New England. Below is a table with all of the keys for region.
|Key|Region|
|---|---|
|1|New England|
|2|Northern|
|3|North Midland|
|4|South Midland|
|5|Southern|
|6|New York City|
|7|Western|
|8|Army Brat|
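The same kind of dictionary lookup used for the vowel column (see below) could decode the region codes as well; a sketch, not needed for the rest of the analysis:
region_names = {1: "New England", 2: "Northern", 3: "North Midland", 4: "South Midland",
                5: "Southern", 6: "New York City", 7: "Western", 8: "Army Brat"}
# timit['region_name'] = [region_names[r] for r in timit['region']]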
Transformations
When inspecting data, you may realize that there are changes to be made -- possibly due to the representation of the data or errors in the recording. Before jumping into analysis, it is important to clean the data.
One thing to notice about timit is that the column vowel contains ARPABET identifiers for the vowels. We want to convert the vowel column to be IPA characters, and will do so in the cell below.
End of explanation
timit_avg = timit.groupby(['speaker', 'vowel', 'gender', 'region']).mean().reset_index()
timit_avg.head()
Explanation: Most of the speakers will say the same vowel multiple times, so we are going to average those values together. The end result will be a dataframe where each row represents the average values for each vowel for each speaker.
End of explanation
timit_avg.gender.unique()
Explanation: Splitting on Gender
Using the same dataframe from above, timit_avg, we are going to split into dataframes grouped by gender. To identify the possible values of gender in the gender column, we can use the method .unique on the column.
End of explanation
timit_female = timit_avg[timit_avg['gender'] == 'female']
timit_male = timit_avg[timit_avg['gender'] == 'male']
Explanation: You could see that for this specific dataset there are only "female" and "male" values in the column. Given that information, we'll create two subsets based off of gender.
We'll split timit_avg into two separate dataframes, one for females, timit_female, and one for males, timit_male. Creating these subset dataframes does not affect the original timit_avg dataframe.
End of explanation
sns.distplot(timit_female['F1'], kde_kws={"label": "female"})
sns.distplot(timit_male['F1'], kde_kws={"label": "male"})
plt.title('F1')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
Explanation: Distribution of Formants
We want to inspect the distributions of F1, F2, and F3 for those that self-report as male and those that self-report as female to identify possible trends or relationships. Having our two split dataframes, timit_female and timit_male, eases the plotting process.
Run the cell below to see the distribution of F1.
End of explanation
sns.distplot(timit_female['F2'], kde_kws={"label": "female"})
sns.distplot(timit_male['F2'], kde_kws={"label": "male"})
plt.title('F2')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
Explanation: Does there seem to be a notable difference between male and female distributions of F1?
Next, we plot F2.
End of explanation
sns.distplot(timit_female['F3'], kde_kws={"label": "female"})
sns.distplot(timit_male['F3'], kde_kws={"label": "male"})
plt.title('F3')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
Explanation: Finally, we create the same visualization, but for F3.
End of explanation
# reading in the data
class_data = pd.read_csv('data/110_formants.csv')
class_data.head()
Explanation: Do you see a more pronounced difference across the different F values? Are they the same throughout? Can we make any meaningful assumptions from these visualizations?
An additional question: How do you think the fact that we average each vowel together first for each individual affects the shape of the histograms?
Using the Class's Data <a id='cls'></a>
This portion of the notebook will rely on the data that was submitted for HW5. Just like we did for the TIMIT data, we are going to read it into a dataframe and modify the column vowel to reflect the corresponding IPA translation. We will name the dataframe class_data.
End of explanation
# translating the vowel column
class_data['vowel'] = [IPAdict[x] for x in class_data['vowel']]
class_data.head()
Explanation: The ID column contains a unique value for each individual. Each individual has a row for each of the different vowels they measured.
End of explanation
class_data['Gender'].unique()
Explanation: Splitting on Gender
As we did with the TIMIT data, we are going to split class_data based on self-reported gender. We need to figure out what the possible responses for the column were.
End of explanation
class_female = class_data[class_data['Gender'] == 'Female']
class_male = class_data[class_data['Gender'] == 'Male']
Explanation: Notice that there are three possible values for the column. We do not have a large enough sample size to responsibly come to conclusions for Prefer not to answer, so for now we'll compare Male and Female. We'll call our new split dataframes class_female and class_male.
End of explanation
sns.distplot(class_female['F1'], kde_kws={"label": "female"})
sns.distplot(class_male['F1'], kde_kws={"label": "male"})
plt.title('F1')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
Explanation: Comparing Distributions
The following visualizations compare the the distribution of formants for males and females, like we did for the TIMIT data.
First, we'll start with F1.
End of explanation
sns.distplot(class_female['F2'], kde_kws={"label": "female"})
sns.distplot(class_male['F2'], kde_kws={"label": "male"})
plt.title('F2')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
Explanation: Next is F2.
End of explanation
sns.distplot(class_female['F3'], kde_kws={"label": "female"})
sns.distplot(class_male['F3'], kde_kws={"label": "male"})
plt.title('F3')
plt.xlabel("Hz")
plt.ylabel('Proportion per Hz');
Explanation: And finally F3.
End of explanation
def plot_blank_vowel_chart():
im = plt.imread('images/blankvowel.png')
plt.imshow(im, extent=(plt.xlim()[0], plt.xlim()[1], plt.ylim()[0], plt.ylim()[1]))
def plot_vowel_space(avgs_df):
plt.figure(figsize=(10, 8))
plt.gca().invert_yaxis()
plt.gca().invert_xaxis()
vowels = ['eɪ', 'i', 'oʊ', 'u', 'æ', 'ɑ', 'ɚ', 'ɛ', 'ɪ', 'ʊ', 'ʌ'] + ['ɔ']
for i in range(len(avgs_df)):
plt.scatter(avgs_df.loc[vowels[i]]['F2'], avgs_df.loc[vowels[i]]['F1'], marker=r"$ {} $".format(vowels[i]), s=1000)
plt.ylabel('F1')
plt.xlabel('F2')
Explanation: Does the spread of values appear to be the same for females and males? Do the same patterns that occur in the TIMIT data appear in the class's data?
Vowel Spaces <a id='vs'></a>
Run the cell below to define some functions that we will be using.
End of explanation
class_vowel_avgs = class_data.drop('ID', axis=1).groupby('vowel').mean()
class_vowel_avgs.head()
timit_vowel_avgs = timit.groupby('vowel').mean()
timit_vowel_avgs.head()
Explanation: We are going to be recreating the following graphic from this website.
Before we can get to creating, we need to get a single value for each column for each of the vowels (so we can create coordinate pairs). To do this, we are going to find the average formant values for each of the vowels in our dataframes. We'll do this for both timit and class_data.
End of explanation
plot_vowel_space(class_vowel_avgs)
plt.xlabel('F2 (Hz)')
plt.ylabel('F1 (Hz)');
Explanation: Each of these new tables has a row for each vowel, which consists of the averaged values across all speakers.
Plotting the Vowel Space
Run the cell below to construct a vowel space for the class's data, in which we plot F1 on F2.
Note that both axes are descending.
End of explanation
log_timit_vowels = timit_vowel_avgs.apply(np.log)
log_class_vowels = class_vowel_avgs.apply(np.log)
class_data['log(F1)'] = np.log(class_data['F1'])
class_data['log(F2)'] = np.log(class_data['F2'])
log_class_vowels.head()
Explanation: Using Logarithmic Axes
In our visualization above, we use linear axes in order to construct our vowel space. The chart we are trying to recreate has logged axes (though the picture does not indicate it). Below we log-transform all of the values in our dataframes.
End of explanation
plot_vowel_space(log_class_vowels)
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)');
Explanation: Below we plot the vowel space using these new values.
End of explanation
plot_vowel_space(log_class_vowels)
plot_blank_vowel_chart()
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)');
Explanation: What effect does using the logged values have, if any? What advantages does using these values have? Are there any negatives? This paper might give some ideas.
Overlaying a Vowel Space Chart
Finally, we are going to overlay a blank vowel space chart outline to see how close our data reflects the theoretical vowel chart.
End of explanation
plot_vowel_space(log_timit_vowels)
plot_blank_vowel_chart()
plt.xlabel('log(F2) (Hz)')
plt.ylabel('log(F1) (Hz)');
Explanation: How well does it match the original?
Below we generate the same graph, except using the information from the TIMIT dataset.
End of explanation
sns.lmplot('log(F2)', 'log(F1)', hue='vowel', data=class_data, fit_reg=False, size=8, scatter_kws={'s':30})
plt.xlim(8.2, 6.7)
plt.ylim(7.0, 5.7);
Explanation: How does the TIMIT vowel space compare to the vowel space from our class data? What may be the cause for any differences between our vowel space and the one constructed using the TIMIT data? Do you notice any outliers or any points that seem off?
Variation in Vowel Spaces <a id='vvs'></a>
In the following visualizations, we are going to show each individual vowel from each person in the F2 and F1 dimensions (logged). Each color corresponds to a different vowel -- see the legend for the exact pairs.
End of explanation
plt.figure(figsize=(10, 12))
pick_vowel = lambda v: class_data[class_data['vowel'] == v]
colors = ['Greys_r', 'Purples_r', 'Blues_r', 'Greens_r', 'Oranges_r', \
'Reds_r', 'GnBu_r', 'PuRd_r', 'winter_r', 'YlOrBr_r', 'pink_r', 'copper_r']
for vowel, color in list(zip(class_data.vowel.unique(), colors)):
vowel_subset = pick_vowel(vowel)
sns.kdeplot(vowel_subset['log(F2)'], vowel_subset['log(F1)'], n_levels=1, cmap=color, shade=False, shade_lowest=False)
for i in range(1, len(class_data)+1):
plt.scatter(class_data['log(F2)'][i], class_data['log(F1)'][i], color='black', linewidths=.5, marker=r"$ {} $".format(class_data['vowel'][i]), s=40)
plt.xlim(8.2, 6.7)
plt.ylim(7.0, 5.7);
Explanation: In the following visualization, we replace the colors with the IPA characters and attempt to clump the vowels together.
End of explanation
genders = class_data['Gender']
plotting_data = class_data.drop('vowel', axis=1)[np.logical_or(genders == 'Male', genders == 'Female')]
maxes = plotting_data.groupby(['ID', 'Gender']).max().reset_index()[plotting_data.columns[:-2]]
maxes.columns = ['ID', 'Language', 'Gender', 'Height', 'Max F1', 'Max F2', 'Max F3']
maxes_female = maxes[maxes['Gender'] == 'Female']
maxes_male = maxes[maxes['Gender'] == 'Male']
maxes.head()
Explanation: Formants vs Height <a id='fvh'></a>
We are going to compare each of the formants and height to see if there is a relationship between the two. To help visualize that, we are going to plot a regression line, which is also referred to as the line of best fit.
We are going to use the maximum of each formant to compare to height. So for each speaker, we will calculate their greatest F1, F2, and F3 across all vowels, then compare one of those to their height. We create the necessary dataframe in the cell below using the class's data.
End of explanation
sns.regplot('Height', 'Max F1', data=maxes)
sns.regplot('Height', 'Max F1', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F1', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)')
print('female: green')
print('male: orange')
Explanation: First we will plot Max F1 against Height.
Note: Each gender has a different color dot, but the line represents the line of best fit for ALL points.
End of explanation
sns.regplot('Height', 'Max F2', data=maxes)
sns.regplot('Height', 'Max F2', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F2', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F2 (Hz)')
print('female: green')
print('male: orange')
Explanation: Is there a general trend for the data that you notice? What do you notice about the different color dots?
Next, we plot Max F2 on Height.
End of explanation
sns.regplot('Height', 'Max F3', data=maxes)
sns.regplot('Height', 'Max F3', data=maxes_male, fit_reg=False)
sns.regplot('Height', 'Max F3', data=maxes_female, fit_reg=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F3 (Hz)')
print('female: green')
print('male: orange')
Explanation: Finally, Max F3 vs Height.
End of explanation
sns.lmplot('Height', 'Max F1', data=maxes, hue='Gender')
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)');
Explanation: Do you notice a difference between the trends for the three formants?
Now we are going to plot two lines of best fit -- one for males, one for females. Before we plotted one line for all of the values, but now we are separating by gender to see if gender explains some of the difference in formants values.
For now, we're going deal with just Max F1.
End of explanation
timit_maxes = timit.groupby(['speaker', 'gender']).max().reset_index()
timit_maxes.columns = ['speaker', 'gender', 'region', 'height', 'word', 'vowel', 'Max duration', 'Max F1', 'Max F2', 'Max F3', 'Max f0']
plt.xlim(140, 210)
plt.ylim(500, 1400)
sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'female'], scatter_kws={'alpha':0.3})
sns.regplot('height', 'Max F1', data=timit_maxes[timit_maxes['gender'] == 'male'], scatter_kws={'alpha':0.3})
sns.regplot('height', 'Max F1', data=timit_maxes, scatter=False)
plt.xlabel('Height (cm)')
plt.ylabel('Max F1 (Hz)');
Explanation: Is there a noticeable difference between the two? Did you expect this result?
We're going to repeat the above graph, plotting a different regression line for males and females, but this time, using timit -- having a larger sample size may help expose patterns. Before we do that, we have to repeat the process of calculating the maximum value for each formant for each speaker. Run the cell below to do that and generate the plot. The blue dots are females, the orange dots are males, and the green line is the regression line for all speakers.
End of explanation |
6,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
You will learn how to
Step1: 2 - Dataset
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
Step2: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
Step3: You have
Step4: Expected Output
Step5: You can now plot the decision boundary of these models. Run the code below.
Step7: Expected Output
Step9: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step17: Expected output
Step19: Expected Output
Step21: Expected Output
Step22: Expected Output
Step23: Expected Output
Step24: Expected Output
Step25: Interpretation | Python Code:
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases_v2 import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
Explanation: Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
You will learn how to:
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- sklearn provides simple and efficient tools for data mining and data analysis.
- matplotlib is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
End of explanation
X, Y = load_planar_dataset()
Explanation: 2 - Dataset
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables X and Y.
End of explanation
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Explanation: Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = shape_X[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
Explanation: You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data is like.
Exercise: How many training examples do you have? In addition, what is the shape of the variables X and Y?
Hint: How do you get the shape of a numpy array? (help)
End of explanation
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV();
clf.fit(X.T, Y.T);
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
3 - Simple Logistic Regression
Before building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
End of explanation
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
Explanation: You can now plot the decision boundary of these models. Run the code below.
End of explanation
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
Interpretation: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
Here is our model:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
Mathematically:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1]}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2]}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
Reminder: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call nn_model(). Once you've built nn_model() and learnt the right parameters, you can make predictions on new data.
4.1 - Defining the neural network structure
Exercise: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
Hint: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1)) * 0.01
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1)) * 0.01
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
4.2 - Initialize the model's parameters
Exercise: Implement the function initialize_parameters().
Instructions:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: np.random.randn(a,b) * 0.01 to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: np.zeros((a,b)) to initialize a matrix of shape (a,b) with zeros.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
4.3 - The Loop
Question: Implement forward_propagation().
Instructions:
- Look above at the mathematical representation of your classifier.
- You can use the function sigmoid(). It is built-in (imported) in the notebook.
- You can use the function np.tanh(). It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of initialize_parameters()) by using parameters[".."].
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "cache". The cache will be given as an input to the backpropagation function.
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), 1 - Y)
cost = - np.sum(logprobs) / m
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> 0.262818640198 0.091999045227 -1.30766601287 0.212877681719 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "A2"), which contains $a^{2}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 0}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
Exercise: Implement compute_cost() to compute the value of the cost $J$.
Instructions:
- There are many ways to implement the cross-entropy loss. To help you, we give you how we would have implemented
$- \sum\limits_{i=0}^{m} y^{(i)}\log(a^{2})$:
python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
(you can use either np.multiply() and then np.sum() or directly np.dot()).
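For reference, an equivalent np.dot() formulation is sketched below (either version is fine as long as it matches equation (13)):
cost = - (np.dot(Y, np.log(A2).T) + np.dot(1 - Y, np.log(1 - A2).T)) / m   # gives a 1x1 array; np.squeeze then yields a float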
End of explanation
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters['W1']
W2 = parameters['W2']
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache['A1']
A2 = cache['A2']
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = 1 / m * np.dot(dZ2, A1.T)
db2 = 1 / m * np.sum(dZ2, axis=1, keepdims=True)
dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
dW1 = 1 / m * np.dot(dZ1, X.T)
db1 = 1 / m * np.sum(dZ1, axis=1, keepdims=True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td>**cost**</td>
<td> 0.693058761... </td>
</tr>
</table>
Using the cache computed during forward propagation, you can now implement backward propagation.
Question: Implement the function backward_propagation().
Instructions:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
!-->
Tips:
To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using (1 - np.power(A1, 2)).
End of explanation
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads['dW1']
db1 = grads['db1']
dW2 = grads['dW2']
db2 = grads['db2']
## END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 = W1 - dW1 * learning_rate
b1 = b1 - db1 * learning_rate
W2 = W2 - dW2 * learning_rate
b2 = b2 - db2 * learning_rate
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.00301023 -0.00747267]
[ 0.00257968 -0.00641288]
[-0.00156892 0.003893 ]
[-0.00652037 0.01618243]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[ 0.00176201]
[ 0.00150995]
[-0.00091736]
[-0.00381422]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00078841 0.01765429 -0.00084166 -0.01022527]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[-0.16655712]] </td>
</tr>
</table>
Question: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
General gradient descent rule: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
Illustration: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
End of explanation
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=True)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model()
Question: Build your neural network model in nn_model().
Instructions: The neural network model has to use the previous functions in the right order.
End of explanation
# GRADED FUNCTION: predict
def predict(parameters, X):
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = (A2 > 0.5)
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td>
**cost after iteration 0**
</td>
<td>
0.692739
</td>
</tr>
<tr>
<td>
<center> $\vdots$ </center>
</td>
<td>
<center> $\vdots$ </center>
</td>
</tr>
<tr>
<td>**W1**</td>
<td> [[-0.65848169 1.21866811]
[-0.76204273 1.39377573]
[ 0.5792005 -1.10397703]
[ 0.76773391 -1.41477129]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.287592 ]
[ 0.3511264 ]
[-0.2431246 ]
[-0.35772805]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-2.45566237 -3.27042274 2.00784958 3.36773273]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.20459656]] </td>
</tr>
</table>
4.5 Predictions
Question: Use your model to predict by building predict().
Use forward propagation to predict results.
Reminder: predictions = $y_{prediction} = \mathbb 1 \text{{activation > 0.5}} = \begin{cases}
1 & \text{if}\ activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: X_new = (X > threshold)
End of explanation
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.666666666667 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
End of explanation
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
End of explanation
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
4.6 - Tuning hidden layer size (optional/ungraded exercise)
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
End of explanation
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=Y, s=40, cmap=plt.cm.Spectral);
Explanation: Interpretation:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without also incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
Optional questions:
Note: Remember to submit the assignment by clicking the blue "Submit Assignment" button at the upper-right.
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation? (A ReLU sketch is given right after this list.)
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
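For the first question, only the hidden-layer activation and its derivative change; a minimal sketch (not part of the graded functions) of the two lines that differ for ReLU:
A1 = np.maximum(0, Z1)               # forward pass: relu(Z1) instead of np.tanh(Z1)
dZ1 = np.dot(W2.T, dZ2) * (Z1 > 0)   # backward pass: relu'(z) is 1 for z > 0 and 0 otherwise (Z1 is already stored in the cache)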
<font color='blue'>
You've learnt to:
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting.
Nice work!
5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
End of explanation |
6,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
University of Zagreb<br>
Faculty of Electrical Engineering and Computing
Machine Learning
<a href="http
Step1: Contents
Confusion matrix
Basic measures
F-measure
Multiclass classification
Error estimation
Statistical testing
Comparison of classifiers
Confusion matrix
Step2: [Sketch
Step3: [Sketch
Step4: Basic measures
[Sketch
Step5: Example
Step6: Varying the classification threshold
Step7: Precision-recall curve
Step8: ROC and AUC
ROC = Receiver Operating Characteristics
TPR as a function of FPR
AUC = Area Under the (ROC) Curve
Step9: F-measure
F1-measure
Step10: Višeklasna klasifikacija | Python Code:
# Load the basic libraries...
import scipy as sp
import sklearn
import pandas as pd
%pylab inline
Explanation: University of Zagreb<br>
Faculty of Electrical Engineering and Computing
Machine Learning
<a href="http://www.fer.unizg.hr/predmet/su">http://www.fer.unizg.hr/predmet/su</a>
Academic year 2015/2016
Notebook 10: Model Evaluation
(c) 2015 Jan Šnajder
<i>Version: 0.1 (2015-12-19)</i>
End of explanation
y_test = sp.random.choice((0,1), size=10); y_test
y_pred = sp.random.choice((0,1), size=10); y_pred
Explanation: Contents
Confusion matrix
Basic measures
F-measure
Multiclass classification
Error estimation
Statistical testing
Comparison of classifiers
Confusion matrix
End of explanation
def cm(y_true, y_pred):
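    # Confusion matrix laid out as [[TP, FP], [FN, TN]]: predicted class indexes the rows, true class the columns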
tp = 0
fp = 0
fn = 0
tn = 0
for (t, p) in zip(y_true, y_pred):
if t == 0 and p == 1: fp += 1
elif t == 1 and p == 0: fn += 1
elif t == 1 and p == 1: tp += 1
else: tn += 1
return sp.array([[tp, fp], [fn, tn]])
cm(y_test, y_pred)
from sklearn.metrics import confusion_matrix
Explanation: [Sketch: rows -> classification, columns -> actual]
End of explanation
confusion_matrix(y_test, y_pred)
confusion_matrix(y_test, y_pred, labels=[1,0])
Explanation: [Sketch: rows -> actual, columns -> classification]
End of explanation
cm(y_test, y_pred)
from sklearn.metrics import accuracy_score, precision_score, recall_score
accuracy_score(y_test, y_pred)
precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
Explanation: Basic measures
[Sketch: TP-FP-TN-FN]
Accuracy
$$
\mathrm{Acc} = \frac{\mathrm{TP}+\mathrm{TN}}{N} = \frac{\mathrm{TP}+\mathrm{TN}}{\mathrm{TP}+\mathrm{TN}+\mathrm{FP}+\mathrm{FN}}
$$
Precision
$$
\mathrm{P} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}}
$$
Recall, true positive rate, sensitivity
$$
\mathrm{R} = \mathrm{TPR} = \frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FN}}
$$
Fall-out, false positive rate
$$
\mathrm{FPR} = \frac{\mathrm{FP}}{\mathrm{FP}+\mathrm{TN}}
$$
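The same numbers can be read directly off the 2x2 matrix returned by cm() above; a small sketch, equivalent to the sklearn calls in the cell (assuming at least one predicted and one actual positive):
tp, fp, fn, tn = cm(y_test, y_pred).ravel()
acc = float(tp + tn) / (tp + fp + fn + tn)
p = float(tp) / (tp + fp)
r = float(tp) / (tp + fn)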
Example
End of explanation
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import Imputer
titanic_df = pd.read_csv("../data/titanic-train.csv")
titanic_df.drop(['PassengerId'], axis=1, inplace=True)
titanic_df1 = titanic_df[['Pclass', 'Sex', 'Age','Survived']]
titanic_X = titanic_df[['Pclass', 'Sex', 'Age']].as_matrix()
titanic_y = titanic_df['Survived'].as_matrix()
le = LabelEncoder()
titanic_X[:,1] = le.fit_transform(titanic_X[:,1])
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
titanic_X = imp.fit_transform(titanic_X)
titanic_X
titanic_y
shape(titanic_X), shape(titanic_y)
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(titanic_X, titanic_y, train_size=2.0/3, random_state=42)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1)
lr.fit(X_train, y_train)
lr.predict(X_train)
y_pred_lr = lr.predict(X_test); y_pred_lr
y_test
cm(y_test, y_pred_lr)
accuracy_score(y_test, y_pred_lr)
lr.score(X_test, y_test)
lr.score(X_train, y_train)
precision_score(y_test, y_pred_lr, pos_label=1)
recall_score(y_test, y_pred_lr, pos_label=1)
from sklearn.svm import SVC
svm = SVC(C=1)
svm.fit(X_train, y_train)
svm.score(X_test, y_test)
y_pred_svm = svm.predict(X_test); y_pred_svm
cm(y_test, y_pred_svm)
precision_score(y_test, y_pred_svm, pos_label=1)
recall_score(y_test, y_pred_svm, pos_label=1)
Explanation: Example: Titanic dataset
End of explanation
y_scores_lr = lr.predict_proba(X_test)[:,1]; y_scores_lr
print precision_score(y_test, y_pred_lr)
print recall_score(y_test, y_pred_lr)
threshold = 0.4
y_pred_lr_tweaked = map(lambda s : 1 if s > threshold else 0, y_scores_lr)
print y_pred_lr_tweaked
print precision_score(y_test, y_pred_lr_tweaked)
print recall_score(y_test, y_pred_lr_tweaked)
Explanation: Varying the classification threshold
End of explanation
from sklearn.metrics import precision_recall_curve
pr, re, _ = precision_recall_curve(y_test, y_scores_lr, pos_label=1)
pr
re
plt.plot(re, pr)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.show()
from sklearn.metrics import average_precision_score
average_precision_score(y_test, y_scores_lr)
y_scores_svm = svm.decision_function(X_test)[:,0]
print y_scores_svm
pr_lr, re_lr, _ = precision_recall_curve(y_test, y_scores_lr, pos_label=1)
pr_svm, re_svm, _ = precision_recall_curve(y_test, y_scores_svm, pos_label=1)
plt.plot(re_lr, pr_lr, label='LR')
plt.plot(re_svm, pr_svm, label='SVM')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
plt.show()
print average_precision_score(y_test, y_scores_lr)
print average_precision_score(y_test, y_scores_svm)
Explanation: Precision-recall curve
End of explanation
from sklearn.metrics import roc_curve, auc
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_scores_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)
fpr_svm, tpr_svm, _ = roc_curve(y_test, y_scores_svm)
roc_auc_svm = auc(fpr_svm, tpr_svm)
plt.plot(fpr_lr, tpr_lr, label='LR ROC curve (area = %0.2f)' % roc_auc_lr)
plt.plot(fpr_svm, tpr_svm, label='SVM ROC curve (area = %0.2f)' % roc_auc_svm)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend(loc='lower right')
plt.show()
Explanation: ROC and AUC
ROC = Receiver Operating Characteristics
TPR as a function of FPR
AUC = Area Under the (ROC) Curve
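As a cross-check, the same areas can be computed directly from the scores with sklearn's roc_auc_score:
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, y_scores_lr), roc_auc_score(y_test, y_scores_svm)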
End of explanation
def f_beta(p, r, beta):
return ((1 + beta**2) * p * r) / (beta**2 * p + r)
f_beta(0.5, 0.9, 1)
f_beta(0.5, 0.9, 0.5)
f_beta(0.5, 0.9, 2)
(0.5 + 0.9) / 2
sqrt(0.5 * 0.9)
2/(1/0.5 + 1/0.9)
r = 0.5
xs = sp.linspace(0, 1)
plt.plot(xs, (xs + r)/2, label='arithmetic')
plt.plot(xs, sp.sqrt(xs*r), label='geometric')
plt.plot(xs, 2/(1/xs + 1/r), label='harmonic')
plt.legend(loc='lower right')
plt.show()
Explanation: F-measure
F1-measure:
$$
F = \frac{2}{\frac{1}{P}+\frac{1}{R}} = \frac{2PR}{P+R}
$$
F-beta:
$$
F_\beta = \frac{(1+\beta^2)PR}{\beta^2 P +R}
$$
$F_{0.5}$ puts twice the emphasis on precision, $F_{2}$ puts twice the emphasis on recall
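For example, with $P=0.5$ and $R=0.9$ (the values used in the cell above): $F_1 \approx 0.64$, $F_{0.5} \approx 0.55$ (pulled towards the lower precision) and $F_2 \approx 0.78$ (pulled towards the higher recall).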
End of explanation
data = sp.loadtxt("path/to/glass.data", delimiter=",", skiprows=1)
print data
shape(data)
glass_X, glass_y = data[:,1:10], data[:,10]
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(glass_X, glass_y, train_size=2.0/3, random_state=42)
X_train.shape, X_test.shape
from sklearn.svm import SVC
m = SVC() # SVC(C=1, gamma='auto')
m.fit(X_train, y_train)
m.classes_
y_pred = m.predict(X_test); y_pred
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
sp.mean(_)
f1_score(y_test, y_pred, average='macro')
f1_score(y_test, y_pred, average='micro')
Explanation: Multiclass classification
Note on the averaging used above: average='macro' is the unweighted mean of the per-class F1 scores (the same value as sp.mean of the average=None vector), while average='micro' pools the TP/FP/FN counts over all classes first; for single-label multiclass problems the micro-averaged F1 equals overall accuracy.
End of explanation |
6,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib
This notebook is (will be) a small crash course on the functionality of the Matplotlib Python module for creating graphs (and embedding them in notebooks). It is of course no substitute for the proper, thorough Matplotlib documentation.
First we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import, and how it should be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
Step1: We also need to add a bit of IPython magic to tell the notebook backend that we want to display all graphs within the notebook (otherwise they would generate objects instead of displaying into the interface; objects that we later can output to file or display explicitly).
This is done by the following declaration
Step2: Matplotlib allows extensive customization of graph aspect. Some of these customizations come together in "styles". Let's see which styles are available
Step3: Without much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted
Step4: And we plot it
Step5: We can extensively alter the aspect of the plot. For instance, we can add markers and change color
Step6: Matplotlib command has two variants | Python Code:
import matplotlib.pyplot as plt
Explanation: Matplotlib
This notebook is (will be) a small crash course on the functionality of the Matplotlib Python module for creating graphs (and embedding them in notebooks). It is of course no substitute for the proper, thorough Matplotlib documentation.
First we need to import the library in our notebook. There are a number of different ways to do it, depending on what part of matplotlib we want to import, and how it should be imported into the namespace. This is one of the most common ones; it means that we will use the plt. prefix to refer to the Matplotlib API
End of explanation
%matplotlib inline
Explanation: We also need to add a bit of IPython magic to tell the notebook backend that we want to display all graphs within the notebook (otherwise they would generate objects instead of displaying into the interface; objects that we later can output to file or display explicitly).
This is done by the following declaration
End of explanation
print plt.style.available
# Let's choose one style. And while we are at it, define thicker lines and big graphic sizes
plt.style.use('bmh')
plt.rcParams['lines.linewidth'] = 1.5
plt.rcParams['figure.figsize'] = (15, 5)
Explanation: Matplotlib allows extensive customization of a graph's appearance. Some of these customizations come together in "styles". Let's see which styles are available:
End of explanation
import numpy as np
x = np.arange( -10, 11 )
y = x*x
Explanation: Without much more ado, let's display a simple graphic. For that we define a vector variable, and a function of that vector to be plotted
End of explanation
plt.plot(x,y)
plt.xlabel('x');
plt.ylabel('x square');
Explanation: And we plot it
End of explanation
plt.plot(x,y,'ro-');
Explanation: We can extensively alter the appearance of the plot. For instance, we can add markers and change color:
End of explanation
%matplotlib notebook
# import matplotlib
# matplotlib.use('nbagg')
Explanation: Matplotlib command has two variants:
* A declarative syntax, with direct plotting commands. It is inspired by Matlab graphics syntax, so if you know Matlab it will be easy. It is the one used above.
* An object-oriented syntax, more complicated but somewhat more powerful
(a minimal example of the object-oriented syntax is sketched below)
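A minimal sketch (added here for illustration), reusing the x and y arrays defined earlier:
fig, ax = plt.subplots(figsize=(8, 4))   # 'fig' is the Figure, 'ax' the Axes we draw on
ax.plot(x, y, 'ro-')
ax.set_xlabel('x')
ax.set_ylabel('x square')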
Interactive plots
End of explanation |
6,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DATASCI W261
Step1: Part 1
Step2: (1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).
Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.
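For illustration only (this is not the graded answer), a length-4 vector with non-zeros at indices 1 and 3 can be written either way:
from pyspark.mllib.linalg import SparseVector
SparseVector(4, {1: 3.0, 3: 4.0})        # dictionary of index: value
SparseVector(4, [1, 3], [3.0, 4.0])      # separate index and value arrays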
Step3: (1c) OHE features as sparse vectors
Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
Step5: (1d) Define a OHE function
Next we will use the OHE dictionary from Part (1a) to programmatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).
Step6: (1e) Apply OHE to a dataset
Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.
Step7: Part 2
Step8: (2b) OHE Dictionary from distinct features
Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.
In our sample dataset, one valid list of key-value tuples is
Step10: (2c) Automated creation of an OHE dictionary
Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).
Step11: Part 3
Step12: (3a) Loading and splitting the data
We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.
Step14: (3b) Extract features
We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implementation of the parsePoint function.
Step15: (3c) Create an OHE dictionary from the dataset
Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.
Step17: (3d) Apply OHE to the dataset
Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint
Step20: Visualization 1
Step22: (3e) Handling unseen features
We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.
Step23: Part 4
Step25: (4b) Log loss
Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as
Step26: (4c) Baseline log loss
Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.
Step28: (4d) Predicted probability
In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.
Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.
Step30: (4e) Evaluate the model
We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.
Step31: (4f) Validation log loss
Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.
Step32: Visualization 2
Step34: Part 5
Step36: (5b) Creating hashed features
Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint
Step38: (5c) Sparsity
Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number of features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.
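For example (an illustrative sketch using a small vector, not part of the graded exercise):
from pyspark.mllib.linalg import SparseVector
sparse = SparseVector(7, [2, 3], [1.0, 1.0])
print(len(sparse))       # 7 -- the total number of features
print(sparse.indices)    # positions of the non-zero entries, here 2 and 3
print(sparse.values)     # the non-zero entries themselves, here 1.0 and 1.0
print(len(sparse.indices) / float(len(sparse)))  # sparsity of this single vector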
Step39: (5d) Logistic model with hashed features
Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note
Step40: Visualization 3
Step41: (5e) Evaluate on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f). | Python Code:
labVersion = 'MIDS_MLS_week12_v_0_9'
%cd ~/Documents/W261/hw12/
import os
import sys
spark_home = os.environ['SPARK_HOME'] = \
'/Users/davidadams/packages/spark-1.5.1-bin-hadoop2.6/'
if not spark_home:
raise ValueError('SPARK_HOME environment variable is not set')
sys.path.insert(0,os.path.join(spark_home,'python'))
sys.path.insert(0,os.path.join(spark_home,'python/lib/py4j-0.8.2.1-src.zip'))
execfile(os.path.join(spark_home,'python/pyspark/shell.py'))
Explanation: DATASCI W261: Machine Learning at Scale
W261-1 Fall 2015
Week 12: Criteo CTR Project
November 14, 2015
Student name Katrina Adams
Click-Through Rate Prediction Lab
This lab covers the steps for creating a click-through rate (CTR) prediction pipeline. You will work with the Criteo Labs dataset that was used for a recent Kaggle competition.
This lab will cover:
Part 1: Featurize categorical data using one-hot-encoding (OHE)
Part 2: Construct an OHE dictionary
Part 3: Parse CTR data and generate OHE features
Visualization 1: Feature frequency
Part 4: CTR prediction and logloss evaluation
Visualization 2: ROC curve
Part 5: Reduce feature dimension via feature hashing
Visualization 3: Hyperparameter heat map
Note that, for reference, you can look up the details of the relevant Spark methods in Spark's Python API and the relevant NumPy methods in the NumPy Reference
End of explanation
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'bear')] = 0
sampleOHEDictManual[(0,'cat')] = 1
sampleOHEDictManual[(0,'mouse')] = 2
sampleOHEDictManual[(1,'black')] = 3
sampleOHEDictManual[(1,'tabby')] = 4
sampleOHEDictManual[(2,'mouse')] = 5
sampleOHEDictManual[(2,'salmon')] = 6
# TEST One-hot-encoding (1a)
from test_helper import Test
Test.assertEqualsHashed(sampleOHEDictManual[(0,'bear')],
'b6589fc6ab0dc82cf12099d1c2d40ab994e8410c',
"incorrect value for sampleOHEDictManual[(0,'bear')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'cat')],
'356a192b7913b04c54574d18c28d46e6395428ab',
"incorrect value for sampleOHEDictManual[(0,'cat')]")
Test.assertEqualsHashed(sampleOHEDictManual[(0,'mouse')],
'da4b9237bacccdf19c0760cab7aec4a8359010b0',
"incorrect value for sampleOHEDictManual[(0,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'black')],
'77de68daecd823babbb58edb1c8e14d7106e83bb',
"incorrect value for sampleOHEDictManual[(1,'black')]")
Test.assertEqualsHashed(sampleOHEDictManual[(1,'tabby')],
'1b6453892473a467d07372d45eb05abc2031647a',
"incorrect value for sampleOHEDictManual[(1,'tabby')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'mouse')],
'ac3478d69a3c81fa62e60f5c3696165a4e5e6ac4',
"incorrect value for sampleOHEDictManual[(2,'mouse')]")
Test.assertEqualsHashed(sampleOHEDictManual[(2,'salmon')],
'c1dfd96eea8cc2b62785275bca38ac261256e278',
"incorrect value for sampleOHEDictManual[(2,'salmon')]")
Test.assertEquals(len(sampleOHEDictManual.keys()), 7,
'incorrect number of keys in sampleOHEDictManual')
Explanation: Part 1: Featurize categorical data using one-hot-encoding
(1a) One-hot-encoding
We would like to develop code to convert categorical features to numerical ones, and to build intuition, we will work with a sample unlabeled dataset with three data points, with each data point representing an animal. The first feature indicates the type of animal (bear, cat, mouse); the second feature describes the animal's color (black, tabby); and the third (optional) feature describes what the animal eats (mouse, salmon).
In a one-hot-encoding (OHE) scheme, we want to represent each tuple of (featureID, category) via its own binary feature. We can do this in Python by creating a dictionary that maps each tuple to a distinct integer, where the integer corresponds to a binary feature. To start, manually enter the entries in the OHE dictionary associated with the sample dataset by mapping the tuples to consecutive integers starting from zero, ordering the tuples first by featureID and next by category.
Later in this lab, we'll use OHE dictionaries to transform data points into compact lists of features that can be used in machine learning algorithms.
End of explanation
import numpy as np
from pyspark.mllib.linalg import SparseVector
# TODO: Replace <FILL IN> with appropriate code
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4,[(1,3),(3,4)])
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4,[(3,1)])
w = np.array([0.4, 3.1, -1.4, -.5])
print aDense.dot(w)
print aSparse.dot(w)
print bDense.dot(w)
print bSparse.dot(w)
# TEST Sparse Vectors (1b)
Test.assertTrue(isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector')
Test.assertTrue(aDense.dot(w) == aSparse.dot(w),
'dot product of aDense and w should equal dot product of aSparse and w')
Test.assertTrue(bDense.dot(w) == bSparse.dot(w),
'dot product of bDense and w should equal dot product of bSparse and w')
Explanation: (1b) Sparse vectors
Data points can typically be represented with a small number of non-zero OHE features relative to the total number of features that occur in the dataset. By leveraging this sparsity and using sparse vector representations of OHE data, we can reduce storage and computational burdens. Below are a few sample vectors represented as dense numpy arrays. Use SparseVector to represent them in a sparse fashion, and verify that both the sparse and dense representations yield the same results when computing dot products (we will later use MLlib to train classifiers via gradient descent, and MLlib will need to compute dot products between SparseVectors and dense parameter vectors).
Use SparseVector(size, *args) to create a new sparse vector where size is the length of the vector and args is either a dictionary, a list of (index, value) pairs, or two separate arrays of indices and values (sorted by index). You'll need to create a sparse vector representation of each dense vector aDense and bDense.
End of explanation
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
sampleOneOHEFeatManual = SparseVector(7,[(2,1),(3,1)])
sampleTwoOHEFeatManual = SparseVector(7,[(1,1),(4,1),(5,1)])
sampleThreeOHEFeatManual = SparseVector(7,[(0,1),(3,1),(6,1)])
# TEST OHE Features as sparse vectors (1c)
Test.assertTrue(isinstance(sampleOneOHEFeatManual, SparseVector),
'sampleOneOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleTwoOHEFeatManual, SparseVector),
'sampleTwoOHEFeatManual needs to be a SparseVector')
Test.assertTrue(isinstance(sampleThreeOHEFeatManual, SparseVector),
'sampleThreeOHEFeatManual needs to be a SparseVector')
Test.assertEqualsHashed(sampleOneOHEFeatManual,
'ecc00223d141b7bd0913d52377cee2cf5783abd6',
'incorrect value for sampleOneOHEFeatManual')
Test.assertEqualsHashed(sampleTwoOHEFeatManual,
'26b023f4109e3b8ab32241938e2e9b9e9d62720a',
'incorrect value for sampleTwoOHEFeatManual')
Test.assertEqualsHashed(sampleThreeOHEFeatManual,
'c04134fd603ae115395b29dcabe9d0c66fbdc8a7',
'incorrect value for sampleThreeOHEFeatManual')
Explanation: (1c) OHE features as sparse vectors
Now let's see how we can represent the OHE features for points in our sample dataset. Using the mapping defined by the OHE dictionary from Part (1a), manually define OHE features for the three sample data points using SparseVector format. Any feature that occurs in a point should have the value 1.0. For example, the DenseVector for a point with features 2 and 4 would be [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0].
End of explanation
print sampleOHEDictManual
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
sparsevect = []
for feat in rawFeats:
sparsevect.append((OHEDict[feat],1))
return SparseVector(numOHEFeats,sparsevect)
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = len(sampleOHEDictManual.keys())
# Run oneHotEncoding on sampleOne
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)
print sampleOneOHEFeat
# TEST Define an OHE Function (1d)
Test.assertTrue(sampleOneOHEFeat == sampleOneOHEFeatManual,
'sampleOneOHEFeat should equal sampleOneOHEFeatManual')
Test.assertEquals(sampleOneOHEFeat, SparseVector(7, [2,3], [1.0,1.0]),
'incorrect value for sampleOneOHEFeat')
Test.assertEquals(oneHotEncoding([(1, 'black'), (0, 'mouse')], sampleOHEDictManual,
numSampleOHEFeats), SparseVector(7, [2,3], [1.0,1.0]),
'incorrect definition for oneHotEncoding')
Explanation: (1d) Define an OHE function
Next we will use the OHE dictionary from Part (1a) to programmatically generate OHE features from the original categorical data. First write a function called oneHotEncoding that creates OHE feature vectors in SparseVector format. Then use this function to create OHE features for the first sample data point and verify that the result matches the result from Part (1c).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
sampleOHEData = sampleDataRDD.map(lambda point: oneHotEncoding(point,sampleOHEDictManual, numSampleOHEFeats))
print sampleOHEData.collect()
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
Test.assertTrue(len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements')
Test.assertEquals(sampleOHEDataValues[0], SparseVector(7, {2: 1.0, 3: 1.0}),
'incorrect OHE for first sample')
Test.assertEquals(sampleOHEDataValues[1], SparseVector(7, {1: 1.0, 4: 1.0, 5: 1.0}),
'incorrect OHE for second sample')
Test.assertEquals(sampleOHEDataValues[2], SparseVector(7, {0: 1.0, 3: 1.0, 6: 1.0}),
'incorrect OHE for third sample')
Explanation: (1e) Apply OHE to a dataset
Finally, use the function from Part (1d) to create OHE features for all 3 data points in the sample dataset.
End of explanation
print sampleDataRDD.flatMap(lambda x: x).distinct().collect()
# TODO: Replace <FILL IN> with appropriate code
sampleDistinctFeats = sampleDataRDD.flatMap(lambda x: x).distinct()
# TEST Pair RDD of (featureID, category) (2a)
Test.assertEquals(sorted(sampleDistinctFeats.collect()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'incorrect value for sampleDistinctFeats')
Explanation: Part 2: Construct an OHE dictionary
(2a) Pair RDD of (featureID, category)
To start, create an RDD of distinct (featureID, category) tuples. In our sample dataset, the 7 items in the resulting RDD are (0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon'). Notably 'black' appears twice in the dataset but only contributes one item to the RDD: (1, 'black'), while 'mouse' also appears twice and contributes two items: (0, 'mouse') and (2, 'mouse'). Use flatMap and distinct.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
sampleOHEDict = (sampleDistinctFeats
.zipWithIndex().collectAsMap())
print sampleOHEDict
# TEST OHE Dictionary from distinct features (2b)
Test.assertEquals(sorted(sampleOHEDict.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDict has unexpected keys')
Test.assertEquals(sorted(sampleOHEDict.values()), range(7), 'sampleOHEDict has unexpected values')
Explanation: (2b) OHE Dictionary from distinct features
Next, create an RDD of key-value tuples, where each (featureID, category) tuple in sampleDistinctFeats is a key and the values are distinct integers ranging from 0 to (number of keys - 1). Then convert this RDD into a dictionary, which can be done using the collectAsMap action. Note that there is no unique mapping from keys to values, as all we require is that each (featureID, category) key be mapped to a unique integer between 0 and the number of keys. In this exercise, any valid mapping is acceptable. Use zipWithIndex followed by collectAsMap.
In our sample dataset, one valid list of key-value tuples is: [((0, 'bear'), 0), ((2, 'salmon'), 1), ((1, 'tabby'), 2), ((2, 'mouse'), 3), ((0, 'mouse'), 4), ((0, 'cat'), 5), ((1, 'black'), 6)]. The dictionary defined in Part (1a) illustrates another valid mapping between keys and integers.
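A tiny illustration of the two operations (a sketch; the exact ordering of the output is indicative only):
smallRDD = sc.parallelize(['a', 'b', 'c'])
print(smallRDD.zipWithIndex().collect())       # e.g. [('a', 0), ('b', 1), ('c', 2)]
print(smallRDD.zipWithIndex().collectAsMap())  # e.g. {'a': 0, 'b': 1, 'c': 2}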
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def createOneHotDict(inputData):
Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
return inputData.flatMap(lambda x: x).distinct().zipWithIndex().collectAsMap()
sampleOHEDictAuto = createOneHotDict(sampleDataRDD)
print sampleOHEDictAuto
# TEST Automated creation of an OHE dictionary (2c)
Test.assertEquals(sorted(sampleOHEDictAuto.keys()),
[(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),
(1, 'tabby'), (2, 'mouse'), (2, 'salmon')],
'sampleOHEDictAuto has unexpected keys')
Test.assertEquals(sorted(sampleOHEDictAuto.values()), range(7),
'sampleOHEDictAuto has unexpected values')
Explanation: (2c) Automated creation of an OHE dictionary
Now use the code from Parts (2a) and (2b) to write a function that takes an input dataset and outputs an OHE dictionary. Then use this function to create an OHE dictionary for the sample dataset, and verify that it matches the dictionary from Part (2b).
End of explanation
# Run this code to view Criteo's agreement
from IPython.lib.display import IFrame
IFrame("http://labs.criteo.com/downloads/2014-kaggle-display-advertising-challenge-dataset/",
600, 350)
# TODO: Replace <FILL IN> with appropriate code
# Just replace <FILL IN> with the url for dac_sample.tar.gz
import glob
import os.path
import tarfile
import urllib
import urlparse
# Paste url, url should end with: dac_sample.tar.gz
url = 'http://labs.criteo.com/wp-content/uploads/2015/04/dac_sample.tar.gz'
url = url.strip()
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
inputDir = os.path.split(fileName)[0]
def extractTar(check = False):
# Find the zipped archive and extract the dataset
tars = glob.glob('dac_sample*.tar.gz*')
if check and len(tars) == 0:
return False
if len(tars) > 0:
try:
tarFile = tarfile.open(tars[0])
except tarfile.ReadError:
if not check:
print 'Unable to open tar.gz file. Check your URL.'
return False
tarFile.extract('dac_sample.txt', path=inputDir)
print 'Successfully extracted: dac_sample.txt'
return True
else:
print 'You need to retry the download with the correct url.'
print ('Alternatively, you can upload the dac_sample.tar.gz file to your Jupyter root ' +
'directory')
return False
if os.path.isfile(fileName):
print 'File is already available. Nothing to do.'
elif extractTar(check = True):
print 'tar.gz file was already available.'
elif not url.endswith('dac_sample.tar.gz'):
print 'Check your download url. Are you downloading the Sample dataset?'
else:
# Download the file and store it in the same directory as this notebook
try:
urllib.urlretrieve(url, os.path.basename(urlparse.urlsplit(url).path))
except IOError:
print 'Unable to download and store: {0}'.format(url)
extractTar()
import os.path
baseDir = os.path.join('data')
inputPath = os.path.join('cs190', 'dac_sample.txt')
fileName = os.path.join(baseDir, inputPath)
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print rawData.take(1)
Explanation: Part 3: Parse CTR data and generate OHE features
Before we can proceed, you'll first need to obtain the data from Criteo. If you have already completed this step in the setup lab, just run the cells below and the data will be loaded into the rawData variable.
Below is Criteo's data sharing agreement. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below. The file is 8.4 MB compressed. The script below will download the file to the virtual machine (VM) and then extract the data.
If running the cell below does not render a webpage, open the Criteo agreement in a separate browser tab. After you accept the agreement, you can obtain the download URL by right-clicking on the "Download Sample" button and clicking "Copy link address" or "Copy Link Location", depending on your browser. Paste the URL into the # TODO cell below.
Note that the download could take a few minutes, depending upon your connection speed.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()
print nTrain, nVal, nTest, nTrain + nVal + nTest
print rawData.take(1)
# TEST Loading and splitting the data (3a)
Test.assertTrue(all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]),
'you must cache the split data')
Test.assertEquals(nTrain, 79911, 'incorrect value for nTrain')
Test.assertEquals(nVal, 10075, 'incorrect value for nVal')
Test.assertEquals(nTest, 10014, 'incorrect value for nTest')
Explanation: (3a) Loading and splitting the data
We are now ready to start working with the actual CTR data, and our first task involves splitting it into training, validation, and test sets. Use the randomSplit method with the specified weights and seed to create RDDs storing each of these datasets, and then cache each of these RDDs, as we will be accessing them multiple times in the remainder of this lab. Finally, compute the size of each dataset.
End of explanation
point = '0,1,1,5,0,1382,4,15,2,181,1,2,,2,68fd1e64,80e26c9b,fb936136,7b4723c4,25c83c98,7e0ccccf,de7995b8,1f89b562,a73ee510,a8cd5504,b2cb9c98,37c9c164,2824a5f6,1adce6ef,8ba8b39a,891b62e7,e5ba7672,f54016b9,21ddcdc9,b1252a9d,07b5194c,,3a171ecb,c5c50484,e8b83407,9727dd16'
print parsePoint(point)
# TODO: Replace <FILL IN> with appropriate code
def parsePoint(point):
Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples.
feats = point.split(',')
featlist = []
for i,feat in enumerate(feats):
if i==0:
continue
else:
featlist.append((i-1,feat))
return featlist
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect())
print numCategories[2][1]
# TEST Extract features (3b)
Test.assertEquals(numCategories[2][1], 855, 'incorrect implementation of parsePoint')
Test.assertEquals(numCategories[32][1], 4, 'incorrect implementation of parsePoint')
Explanation: (3b) Extract features
We will now parse the raw training data to create an RDD that we can subsequently use to create an OHE dictionary. Note from the take() command in Part (3a) that each raw data point is a string containing several fields separated by some delimiter. For now, we will ignore the first field (which is the 0-1 label), and parse the remaining fields (or raw features). To do this, complete the implementation of the parsePoint function.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
ctrOHEDict = createOneHotDict(parsedTrainFeat)
numCtrOHEFeats = len(ctrOHEDict.keys())
print numCtrOHEFeats
print ctrOHEDict[(0, '')]
# TEST Create an OHE dictionary from the dataset (3c)
Test.assertEquals(numCtrOHEFeats, 233286, 'incorrect number of features in ctrOHEDict')
Test.assertTrue((0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict')
Explanation: (3c) Create an OHE dictionary from the dataset
Note that parsePoint returns a data point as a list of (featureID, category) tuples, which is the same format as the sample dataset studied in Parts 1 and 2 of this lab. Using this observation, create an OHE dictionary using the function implemented in Part (2c). Note that we will assume for simplicity that all features in our CTR dataset are categorical.
End of explanation
from pyspark.mllib.regression import LabeledPoint
# TODO: Replace <FILL IN> with appropriate code
def parseOHEPoint(point, OHEDict, numOHEFeats):
Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary.
feats = point.split(',')
featlist = []
for i,feat in enumerate(feats):
if i==0:
label=feat
else:
featlist.append((i-1,feat))
featSparseVector = oneHotEncoding(featlist, OHEDict, numOHEFeats)
return LabeledPoint(label, featSparseVector)
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print OHETrainData.take(1)
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
Test.assertEquals(numNZ, numNZAlt, 'incorrect implementation of parseOHEPoint')
Test.assertTrue(withOneHot, 'oneHotEncoding not present in parseOHEPoint')
Explanation: (3d) Apply OHE to the dataset
Now let's use this OHE dictionary by starting with the raw training data and creating an RDD of LabeledPoint objects using OHE features. To do this, complete the implementation of the parseOHEPoint function. Hint: parseOHEPoint is an extension of the parsePoint function from Part (3b) and it uses the oneHotEncoding function from Part (1d).
End of explanation
def bucketFeatByCount(featCount):
Bucket the counts by powers of two.
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda (k, v): k != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print featCountsBuckets
%matplotlib inline
import matplotlib.pyplot as plt
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
Template for generating the plot layout.
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
map(lambda position: ax.spines[position].set_visible(False), ['bottom', 'top', 'left', 'right'])
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
Explanation: Visualization 1: Feature frequency
We will now visualize the number of times each of the 233,286 OHE features appears in the training data. We first compute the number of times each feature appears, then bucket the features by these counts. The buckets are sized by powers of 2, so the first bucket corresponds to features that appear exactly once ( $ \scriptsize 2^0 $ ), the second to features that appear twice ( $ \scriptsize 2^1 $ ), the third to features that occur three or four ( $ \scriptsize 2^2 $ ) times, the fourth to features that occur five to eight ( $ \scriptsize 2^3 $ ) times, and so on. The scatter plot below shows the logarithm of the bucket thresholds versus the logarithm of the number of features that have counts that fall in the buckets.
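As a quick sanity check of the bucketing (using the bucketFeatByCount helper defined in the code above; not part of the original lab):
for count in [1, 2, 3, 5, 9, 1000]:
    print((count, bucketFeatByCount(count)))   # expected buckets: 1, 2, 4, 8, 16, 1024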
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indices equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
sparsevect = []
for feat in rawFeats:
try:
sparsevect.append((OHEDict[feat],1))
except KeyError:
continue
return SparseVector(numOHEFeats,sparsevect)
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print OHEValidationData.take(1)
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
Test.assertEquals(numNZVal, 372080, 'incorrect number of features')
Explanation: (3e) Handling unseen features
We naturally would like to repeat the process from Part (3d), e.g., to compute OHE features for the validation and test datasets. However, we must be careful, as some categorical values will likely appear in new data that did not exist in the training data. To deal with this situation, update the oneHotEncoding() function from Part (1d) to ignore previously unseen categories, and then compute OHE features for the validation data.
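An equivalent way to skip unseen categories (a sketch, not the graded solution) is to filter with a membership test instead of catching KeyError:
def oneHotEncodingSketch(rawFeats, OHEDict, numOHEFeats):
    # Keep only the (featureID, value) pairs that exist in the training dictionary,
    # and sort the resulting indices as SparseVector expects.
    kept = sorted((OHEDict[feat], 1.0) for feat in rawFeats if feat in OHEDict)
    return SparseVector(numOHEFeats, kept)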
End of explanation
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# TODO: Replace <FILL IN> with appropriate code
model0 = LogisticRegressionWithSGD.train(OHETrainData, iterations=numIters,
step=stepSize, regParam=regParam,
regType=regType, intercept=includeIntercept)
sortedWeights = sorted(model0.weights)
print sortedWeights[:5], model0.intercept
# TEST Logistic regression (4a)
Test.assertTrue(np.allclose(model0.intercept, 0.56455084025), 'incorrect value for model0.intercept')
Test.assertTrue(np.allclose(sortedWeights[0:5],
[-0.45899236853575609, -0.37973707648623956, -0.36996558266753304,
-0.36934962879928263, -0.32697945415010637]), 'incorrect value for model0.weights')
Explanation: Part 4: CTR prediction and logloss evaluation
(4a) Logistic regression
We are now ready to train our first CTR classifier. A natural classifier to use in this setting is logistic regression, since it models the probability of a click-through event rather than returning a binary response, and when working with rare events, probabilistic predictions are useful. First use LogisticRegressionWithSGD to train a model using OHETrainData with the given hyperparameter configuration. LogisticRegressionWithSGD returns a LogisticRegressionModel. Next, use the LogisticRegressionModel.weights and LogisticRegressionModel.intercept attributes to print out the model's parameters. Note that these are the names of the object's attributes and should be called using a syntax like model.weights for a given model.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
from math import log
def computeLogLoss(p, y):
Calculates the value of log loss for a given probabilty and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon) from it.
Args:
p (float): A probabilty between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value.
epsilon = 10e-12
if p==0:
p = p+epsilon
elif p==1:
p = p-epsilon
if y==1:
return -1.0*log(p)
elif y==0:
return -1.0*log(1-p)
else:
return None
print computeLogLoss(.5, 1)
print computeLogLoss(.5, 0)
print computeLogLoss(.99, 1)
print computeLogLoss(.99, 0)
print computeLogLoss(.01, 1)
print computeLogLoss(.01, 0)
print computeLogLoss(0, 1)
print computeLogLoss(1, 1)
print computeLogLoss(1, 0)
# TEST Log loss (4b)
Test.assertTrue(np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)],
[0.69314718056, 0.0100503358535, 4.60517018599]),
'computeLogLoss is not correct')
Test.assertTrue(np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)],
[25.3284360229, 1.00000008275e-11, 25.3284360229]),
'computeLogLoss needs to bound p away from 0 and 1 by epsilon')
Explanation: (4b) Log loss
Throughout this lab, we will use log loss to evaluate the quality of models. Log loss is defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 and 1 and $ \scriptsize y$ is a label of either 0 or 1. Log loss is a standard evaluation criterion when predicting rare-events such as click-through rate prediction (it is also the criterion used in the Criteo Kaggle competition). Write a function to compute log loss, and evaluate it on some sample inputs.
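The two cases can also be folded into the single expression $$ \scriptsize \ell_{log}(p, y) = -y \log(p) - (1-y)\log(1-p), $$ which is often how the loss appears in code. As a worked check, a confident correct prediction such as $ \scriptsize p = 0.99, y = 1 $ costs about $ \scriptsize -\log(0.99) \approx 0.01 $, while the same prediction when $ \scriptsize y = 0 $ costs $ \scriptsize -\log(0.01) \approx 4.6 $, matching the values printed by the sample calls above.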
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = 1.0*OHETrainData.filter(lambda point: point.label==1).count()/OHETrainData.count()
print classOneFracTrain
logLossTrBase = OHETrainData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()
print 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase)
# TEST Baseline log loss (4c)
Test.assertTrue(np.allclose(classOneFracTrain, 0.22717773523), 'incorrect value for classOneFracTrain')
Test.assertTrue(np.allclose(logLossTrBase, 0.535844), 'incorrect value for logLossTrBase')
Explanation: (4c) Baseline log loss
Next we will use the function we wrote in Part (4b) to compute the baseline log loss on the training data. A very simple yet natural baseline model is one where we always make the same prediction independent of the given datapoint, setting the predicted value equal to the fraction of training points that correspond to click-through events (i.e., where the label is one). Compute this value (which is simply the mean of the training labels), and then use it to compute the training log loss for the baseline model. The log loss for multiple observations is the mean of the individual log loss values.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1.
rawPrediction = x.dot(w)+intercept
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
return 1.0/(1.0+exp(-1.0*rawPrediction))
trainingPredictions = OHETrainData.map(lambda point: getP(point.features, model0.weights, model0.intercept))
print trainingPredictions.take(5)
# TEST Predicted probability (4d)
Test.assertTrue(np.allclose(trainingPredictions.sum(), 18135.4834348),
'incorrect value for trainingPredictions')
Explanation: (4d) Predicted probability
In order to compute the log loss for the model we trained in Part (4a), we need to write code to generate predictions from this model. Write a function that computes the raw linear prediction from this logistic regression model and then passes it through a sigmoid function $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the model's probabilistic prediction. Then compute probabilistic predictions on the training data.
Note that when incorporating an intercept into our predictions, we simply add the intercept to the value of the prediction obtained from the weights and features. Alternatively, if the intercept was included as the first weight, we would need to add a corresponding feature to our data where the feature has the value one. This is not the case here.
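A quick numerical check of the sigmoid (illustrative only) shows why clipping the raw prediction at plus/minus 20 is harmless:
from math import exp
def sigmoid(t):
    return 1.0 / (1.0 + exp(-t))
print(sigmoid(0))     # 0.5 -- a raw prediction of zero is maximally uncertain
print(sigmoid(20))    # ~0.999999998 -- already indistinguishable from 1
print(sigmoid(-20))   # ~2.1e-09     -- already indistinguishable from 0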
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def evaluateResults(model, data):
Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
float: Log loss for the data.
return data.map(lambda point: computeLogLoss(getP(point.features, model.weights, model.intercept), point.label)).mean()
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
Test.assertTrue(np.allclose(logLossTrLR0, 0.456903), 'incorrect value for logLossTrLR0')
Explanation: (4e) Evaluate the model
We are now ready to evaluate the quality of the model we trained in Part (4a). To do this, first write a general function that takes as input a model and data, and outputs the log loss. Then run this function on the OHE training data, and compare the result with the baseline log loss.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
logLossValBase = OHEValidationData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'
.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
Test.assertTrue(np.allclose(logLossValBase, 0.527603), 'incorrect value for logLossValBase')
Test.assertTrue(np.allclose(logLossValLR0, 0.456957), 'incorrect value for logLossValLR0')
Explanation: (4f) Validation log loss
Next, following the same logic as in Parts (4c) and 4(e), compute the validation log loss for both the baseline and logistic regression models. Notably, the baseline model for the validation data should still be based on the label fraction from the training dataset.
End of explanation
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda (k, v): v, reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
Explanation: Visualization 2: ROC curve
We will now visualize how well the model predicts our target. To do this we generate a plot of the ROC curve. The ROC curve shows us the trade-off between the false positive rate and true positive rate, as we liberalize the threshold required to predict a positive outcome. A random model is represented by the dashed line.
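One common single-number summary of this curve is the area under it (AUC); a rough sketch using the rates computed above (the trapezoidal rule is an approximation, and 0.5 corresponds to the random baseline):
auc = np.trapz(truePositiveRate, falsePositiveRate)   # y values first, then x values
print(auc)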
End of explanation
from collections import defaultdict
import hashlib
def hashFunction(numBuckets, rawFeats, printMapping=False):
Calculate a feature dictionary for an observation's features based on hashing.
Note:
Use printMapping=True for debug purposes and to better understand how the hashing works.
Args:
numBuckets (int): Number of buckets to use as features.
rawFeats (list of (int, str)): A list of features for an observation. Represented as
(featureID, value) tuples.
printMapping (bool, optional): If true, the mappings of featureString to index will be
printed.
Returns:
dict of int to float: The keys will be integers which represent the buckets that the
features have been hashed to. The value for a given key will contain the count of the
(featureID, value) tuples that have hashed to that key.
mapping = {}
for ind, category in rawFeats:
featureString = category + str(ind)
mapping[featureString] = int(int(hashlib.md5(featureString).hexdigest(), 16) % numBuckets)
if(printMapping): print mapping
sparseFeatures = defaultdict(float)
for bucket in mapping.values():
sparseFeatures[bucket] += 1.0
return dict(sparseFeatures)
# Reminder of the sample values:
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# TODO: Replace <FILL IN> with appropriate code
# Use four buckets
sampOneFourBuckets = hashFunction(4, sampleOne, True)
sampTwoFourBuckets = hashFunction(4, sampleTwo, True)
sampThreeFourBuckets = hashFunction(4, sampleThree, True)
# Use one hundred buckets
sampOneHundredBuckets = hashFunction(100, sampleOne, True)
sampTwoHundredBuckets = hashFunction(100, sampleTwo, True)
sampThreeHundredBuckets = hashFunction(100, sampleThree, True)
print '\t\t 4 Buckets \t\t\t 100 Buckets'
print 'SampleOne:\t {0}\t\t {1}'.format(sampOneFourBuckets, sampOneHundredBuckets)
print 'SampleTwo:\t {0}\t\t {1}'.format(sampTwoFourBuckets, sampTwoHundredBuckets)
print 'SampleThree:\t {0}\t {1}'.format(sampThreeFourBuckets, sampThreeHundredBuckets)
# TEST Hash function (5a)
Test.assertEquals(sampOneFourBuckets, {2: 1.0, 3: 1.0}, 'incorrect value for sampOneFourBuckets')
Test.assertEquals(sampThreeHundredBuckets, {72: 1.0, 5: 1.0, 14: 1.0},
'incorrect value for sampThreeHundredBuckets')
Explanation: Part 5: Reduce feature dimension via feature hashing
(5a) Hash function
As we just saw, using a one-hot-encoding featurization can yield a model with good statistical accuracy. However, the number of distinct categories across all features is quite large -- recall that we observed 233K categories in the training data in Part (3c). Moreover, the full Kaggle training dataset includes more than 33M distinct categories, and the Kaggle dataset itself is just a small subset of Criteo's labeled data. Hence, featurizing via a one-hot-encoding representation would lead to a very large feature vector. To reduce the dimensionality of the feature space, we will use feature hashing.
Below is the hash function that we will use for this part of the lab. We will first use this hash function with the three sample data points from Part (1a) to gain some intuition. Specifically, run code to hash the three sample points using two different values for numBuckets and observe the resulting hashed feature dictionaries.
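One thing worth noticing before moving on: with very few buckets, distinct features are forced to collide. A small illustration using the hashFunction and sample points defined above (the exact bucket indices depend on the hash, so treat the printed values as indicative):
# sampleThree has three (featureID, value) pairs, so with only 2 buckets at least
# one bucket must receive a count of 2.0 or more (a collision).
print(hashFunction(2, sampleThree, True))
print(hashFunction(100, sampleThree, True))   # with more buckets, collisions become rare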
End of explanation
print 2**15
point = rawTrainData.take(1)[0]
feats = point.split(',')
featlist= []
for i,feat in enumerate(feats):
#print i, feat
if i==0:
label=float(feat)
else:
featlist.append((i-1,feat))
print label, featlist
print hashFunction(2**15, featlist, printMapping=False)
# TODO: Replace <FILL IN> with appropriate code
def parseHashPoint(point, numBuckets):
Create a LabeledPoint for this observation using hashing.
Args:
point (str): A comma separated string where the first value is the label and the rest are
features.
numBuckets: The number of buckets to hash to.
Returns:
LabeledPoint: A LabeledPoint with a label (0.0 or 1.0) and a SparseVector of hashed
features.
feats = point.split(',')
featlist = []
for i,feat in enumerate(feats):
if i==0:
label=float(feat)
else:
featlist.append((i-1,feat))
hashSparseVector = SparseVector(numBuckets, hashFunction(numBuckets, featlist, printMapping=False))
return LabeledPoint(label, hashSparseVector)
numBucketsCTR = 2**15
hashTrainData = rawTrainData.map(lambda point: parseHashPoint(point, numBucketsCTR))
hashTrainData.cache()
hashValidationData = rawValidationData.map(lambda point: parseHashPoint(point, numBucketsCTR))
hashValidationData.cache()
hashTestData = rawTestData.map(lambda point: parseHashPoint(point, numBucketsCTR))
hashTestData.cache()
print hashTrainData.take(1)
# TEST Creating hashed features (5b)
hashTrainDataFeatureSum = sum(hashTrainData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTrainDataLabelSum = sum(hashTrainData
.map(lambda lp: lp.label)
.take(100))
hashValidationDataFeatureSum = sum(hashValidationData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashValidationDataLabelSum = sum(hashValidationData
.map(lambda lp: lp.label)
.take(100))
hashTestDataFeatureSum = sum(hashTestData
.map(lambda lp: len(lp.features.indices))
.take(20))
hashTestDataLabelSum = sum(hashTestData
.map(lambda lp: lp.label)
.take(100))
Test.assertEquals(hashTrainDataFeatureSum, 772, 'incorrect number of features in hashTrainData')
Test.assertEquals(hashTrainDataLabelSum, 24.0, 'incorrect labels in hashTrainData')
Test.assertEquals(hashValidationDataFeatureSum, 776,
'incorrect number of features in hashValidationData')
Test.assertEquals(hashValidationDataLabelSum, 16.0, 'incorrect labels in hashValidationData')
Test.assertEquals(hashTestDataFeatureSum, 774, 'incorrect number of features in hashTestData')
Test.assertEquals(hashTestDataLabelSum, 23.0, 'incorrect labels in hashTestData')
Explanation: (5b) Creating hashed features
Next we will use this hash function to create hashed features for our CTR datasets. First write a function that uses the hash function from Part (5a) with numBuckets = $ \scriptsize 2^{15} \approx 33K $ to create a LabeledPoint with hashed features stored as a SparseVector. Then use this function to create new training, validation and test datasets with hashed features. Hint: parsedHashPoint is similar to parseOHEPoint from Part (3d).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
def computeSparsity(data, d, n):
Calculates the average sparsity for the features in an RDD of LabeledPoints.
Args:
data (RDD of LabeledPoint): The LabeledPoints to use in the sparsity calculation.
d (int): The total number of features.
n (int): The number of observations in the RDD.
Returns:
float: The average of the ratio of features in a point to total features.
total = data.map(lambda point: len(point.features.indices)).sum()
avg_num_feat = 1.0*total/n
return 1.0*avg_num_feat/d
averageSparsityHash = computeSparsity(hashTrainData, numBucketsCTR, nTrain)
averageSparsityOHE = computeSparsity(OHETrainData, numCtrOHEFeats, nTrain)
print 'Average OHE Sparsity: {0:.7e}'.format(averageSparsityOHE)
print 'Average Hash Sparsity: {0:.7e}'.format(averageSparsityHash)
# TEST Sparsity (5c)
Test.assertTrue(np.allclose(averageSparsityOHE, 1.6717677e-04),
'incorrect value for averageSparsityOHE')
Test.assertTrue(np.allclose(averageSparsityHash, 1.1805561e-03),
'incorrect value for averageSparsityHash')
Explanation: (5c) Sparsity
Since we have 33K hashed features versus 233K OHE features, we should expect OHE features to be sparser. Verify this hypothesis by computing the average sparsity of the OHE and the hashed training datasets.
Note that if you have a SparseVector named sparse, calling len(sparse) returns the total number of features, not the number of features with entries. SparseVector objects have the attributes indices and values that contain information about which features are nonzero. Continuing with our example, these can be accessed using sparse.indices and sparse.values, respectively.
End of explanation
numIters = 500
regType = 'l2'
includeIntercept = True
# Initialize variables using values from initial model training
bestModel = None
bestLogLoss = 1e10
# TODO: Replace <FILL IN> with appropriate code
stepSizes = [1.0,10.0]
regParams = [1.0e-6,1.0e-3]
for stepSize in stepSizes:
for regParam in regParams:
model = (LogisticRegressionWithSGD
.train(hashTrainData, numIters, stepSize, regParam=regParam, regType=regType,
intercept=includeIntercept))
logLossVa = evaluateResults(model, hashValidationData)
print ('\tstepSize = {0:.1f}, regParam = {1:.0e}: logloss = {2:.3f}'
.format(stepSize, regParam, logLossVa))
if (logLossVa < bestLogLoss):
bestModel = model
bestLogLoss = logLossVa
print ('Hashed Features Validation Logloss:\n\tBaseline = {0:.6f}\n\tLogReg = {1:.6f}'
.format(logLossValBase, bestLogLoss))
# TEST Logistic model with hashed features (5d)
Test.assertTrue(np.allclose(bestLogLoss, 0.4481683608), 'incorrect value for bestLogLoss')
Explanation: (5d) Logistic model with hashed features
Now let's train a logistic regression model using the hashed features. Run a grid search to find suitable hyperparameters for the hashed features, evaluating via log loss on the validation data. Note: This may take a few minutes to run. Use 1 and 10 for stepSizes and 1e-6 and 1e-3 for regParams.
End of explanation
from matplotlib.colors import LinearSegmentedColormap
# Saved parameters and results. Eliminate the time required to run 36 models
stepSizes = [3, 6, 9, 12, 15, 18]
regParams = [1e-7, 1e-6, 1e-5, 1e-4, 1e-3, 1e-2]
logLoss = np.array([[ 0.45808431, 0.45808493, 0.45809113, 0.45815333, 0.45879221, 0.46556321],
[ 0.45188196, 0.45188306, 0.4518941, 0.4520051, 0.45316284, 0.46396068],
[ 0.44886478, 0.44886613, 0.44887974, 0.44902096, 0.4505614, 0.46371153],
[ 0.44706645, 0.4470698, 0.44708102, 0.44724251, 0.44905525, 0.46366507],
[ 0.44588848, 0.44589365, 0.44590568, 0.44606631, 0.44807106, 0.46365589],
[ 0.44508948, 0.44509474, 0.44510274, 0.44525007, 0.44738317, 0.46365405]])
numRows, numCols = len(stepSizes), len(regParams)
logLoss = np.array(logLoss)
logLoss.shape = (numRows, numCols)
fig, ax = preparePlot(np.arange(0, numCols, 1), np.arange(0, numRows, 1), figsize=(8, 7),
hideLabels=True, gridWidth=0.)
ax.set_xticklabels(regParams), ax.set_yticklabels(stepSizes)
ax.set_xlabel('Regularization Parameter'), ax.set_ylabel('Step Size')
colors = LinearSegmentedColormap.from_list('blue', ['#0022ff', '#000055'], gamma=.2)
image = plt.imshow(logLoss,interpolation='nearest', aspect='auto',
cmap = colors)
pass
Explanation: Visualization 3: Hyperparameter heat map
We will now perform a visualization of an extensive hyperparameter search. Specifically, we will create a heat map where the brighter colors correspond to lower values of logLoss.
The search was run using six step sizes and six values for regularization, which required the training of thirty-six separate models. We have included the results below, but omitted the actual search to save time.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Log loss for the best model from (5d)
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
model_5dbest = LogisticRegressionWithSGD.train(hashTrainData, iterations=numIters,
step=stepSize, regParam=regParam,
regType=regType, intercept=includeIntercept)
logLossTest = evaluateResults(model_5dbest, hashTestData)
# Log loss for the baseline model
classOneFracTrain = 1.0*hashTrainData.filter(lambda point: point.label==1).count()/hashTrainData.count()
logLossTestBaseline = hashTestData.map(lambda point: computeLogLoss(classOneFracTrain, point.label)).mean()
print ('Hashed Features Test Log Loss:\n\tBaseline = {0:.6f}\n\tLogReg = {1:.6f}'
.format(logLossTestBaseline, logLossTest))
# TEST Evaluate on the test set (5e)
Test.assertTrue(np.allclose(logLossTestBaseline, 0.537438),
'incorrect value for logLossTestBaseline')
Test.assertTrue(np.allclose(logLossTest, 0.455616931), 'incorrect value for logLossTest')
Explanation: (5e) Evaluate on the test set
Finally, evaluate the best model from Part (5d) on the test set. Compare the resulting log loss with the baseline log loss on the test set, which can be computed in the same way that the validation log loss was computed in Part (4f).
End of explanation |
6,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Session 7 - Parallel Processing and the Veneer command line
This session looks at options for parallel processing with Veneer - that is, by running multiple copies of Source, each with a Veneer server running, and giving instructions to each running copy in parallel.
You can establish multiple copies of Source/Veneer by running multiple copies of the Source application, loading a project and starting the Web Server Monitoring window on each one. Alternatively, you can use the Veneer Command Line, which presents the same interface to Python and other systems, without the overheads of the user interface.
This session shows how you can run the Veneer command line and use it from Python as you would the main Source application.
Overview
Launching multiple copies of Veneer command line using veneer-py
Running simulations in parallel
Which Model?
Note
Step1: Starting the Command Line
You can run FlowMatters.Source.VeneerCmd.exe program from a windows command prompt, but throughout these tutorials, we will use veneer-py functions for starting and stopping the program.
Specifically, we will use start to start one or more copies of the program and kill_all_now to shutdown the program. (Alternatively, they will shutdown when the Jupyter notebook is shutdown).
Step2: The main things you need, in order to call start are a Source project file (a path to the .rsproj file) and a path to the Veneer command line exe. (The latter we saved to variable veneer_cmd on calling create_command_line).
Step3: We can now start up a number of Veneer command line 'servers'.
We'll specify how many we want using num_copies - Its a good idea to set this based on the number of CPU cores available.
We also set first_port - Which is used for the first server. This number is incremented by one for each extra server.
Step4: You should see a number of lines along the lines of [3] Server started. Ctrl-C to exit... indicating that the servers have started.
These servers will now run until your current python session ends. (To make that happen, without closing the notebook, use the Kernel|Restart menu option in Jupyter)
Parallel Simulations
You can now work with each of these Veneer servers in the same way that you worked with a single server in the earlier sessions.
You will need an instance of the Veneer client object - one for each instance.
Here, we'll create a list of Veneer clients, each connected to a different instance based on the port number
Step5: You can now ask one of these servers to run a model for you
Step6: You could run a model on each server using a for loop
Step7: But that is a sequential run - One run won't start until the previous run has finished.
The run_async option on v.run_model will trigger the run on the server and then allow Python to continue
Step8: The above code block flies through quickly, because it doesn't wait for the simulation to finish. But how will we know when the run has finished, so that we can continue our script?
(We can look at Windows Task Manager to see the CPU load - but its not exactly a precise mechanism...)
When run with run_async=True, v.run_model returns a HTTP connection object, that can be queried for the success code of the run. We can use this to block for a particular run to finish. Assuming we don't want to do anything else in our script until ALL runs are finished, this is a good approach
Step9: You can use this the run_async approach to run multiple, parallel simulations, from a notebook.
Note
Step10: Running a batch simulation with parallel simulations...
In Tutorial 5, we performed exponential sampling for an inflow scaling factor. We ran the model 50 times. Lets run that process again, using parallel processing.
The code looked like this (combining a few notebook cells and removing some interim visualisation)
```python
import numpy as np
NUMBER_OF_SIMULATIONS=50
sampled_scaling_factors = np.random.exponential(size=NUMBER_OF_SIMULATIONS)
sampled_scaling_factors
spill_results=[]
Store our time series criteria in a variable to use it in configuring recording and retrieving results
ts_match_criteria = {'NetworkElement'
Step11: We need a veneer client object for each running server
Step12: The sampling process is the same as before...
Except we'll use more samples and make sure the number of samples is a multiple of the number of servers!
Step13: NOW We will organise our samples based on the number of servers we're running - effectively creating batches
Step14: If we iterate over our samples array now, we'll get groups of four. (The break statement stops after the first itertation of the loop)
Step15: Importantly, In switching on the output recording, we need to do so on each of the running servers
Step16: Now, we want to trigger all our runs. We'll use the run_async=True option and wait for each group of runs to finish before starting the next group.
Step17: Final remarks
The above example isn't as efficient as it could be, but it may be good enough for many circumstances.
The simulations run in parallel, but everything else (configuring recorders, retrieving and post-processing results) is done sequentially. (And no simulations are taking place while that's happening).
Furthermore, if some model runs complete quicker than others, one or more Veneer servers will be idle waiting for further instructions.
The example above is a reasonable approach if the simulations take much longer than the post processing and if the simulations will typically take around the same amount of time. | Python Code:
from veneer.manage import create_command_line
help(create_command_line)
veneer_install = 'D:\\src\\projects\\Veneer\\Compiled\\Source 4.1.1.4484 (public version)'
source_version = '4.1.1'
cmd_directory = 'E:\\temp\\veneer_cmd'
veneer_cmd = create_command_line(veneer_install,source_version,dest=cmd_directory)
veneer_cmd
Explanation: Session 7 - Parallel Processing and the Veneer command line
This session looks at options for parallel processing with Veneer - that is, by running multiple copies of Source, each with a Veneer server running, and giving instructions to each running copy in parallel.
You can establish multiple copies of Source/Veneer by running multiple copies of the Source application, loading a project and starting the Web Server Monitoring window on each one. Alternatively, you can use the Veneer Command Line, which presents the same interface to Python and other systems, without the overheads of the user interface.
This session shows how you can run the Veneer command line and use it from Python as you would the main Source application.
Overview
Launching multiple copies of Veneer command line using veneer-py
Running simulations in parallel
Which Model?
Note: This session uses ExampleProject/RiverModel1.rsproj. You are welcome to work with your own model instead, however you will need to change the notebook text at certain points to reflect the names of nodes, links and functions in your model file.
The Veneer Command Line
The Veneer command line is a standalone executable program that runs the Source engine and exposes the Veneer network interface, without the main Source user interface. This means that all the veneer-py functionality able to be used whether you are running the Source application or the Veneer command line.
Setting up the command line
The Veneer Command Line is distributed with Veneer, although the setup process is a little different to the regular Veneer. The Veneer Command Line is a standalone, executable program (specifically FlowMatters.Source.VeneerCmd.exe) and can be found in the directory where you installed (probably unzipped) Veneer.
Now, where the Veneer plugin DLL can be installed and used from any directory, the Veneer Command Line needs access to all of the main libraries (DLLs) supplied with Source - and so the Veneer Command Line and the Source DLLs must reside in the same directory.
There are two options. You can either
copy the program and the other files supplied with Veneer, into your main Source installation directory, or
copy ALL of the files from the main Source directory into a common directory with Veneer.
Once you've done so, you should be able to run the Veneer command line. You can launch the Veneer command line directory from a Windows command prompt. Alternatively, you can start one or more copies directly from Python using veneer.manage.start (described below).
Because it is a common requirement for everyone to co-locate the Veneer Command Line with the files from the Source software, veneer-py includes a create_command_line function that performs the copy for you:
End of explanation
from veneer.manage import start,kill_all_now
help(start)
Explanation: Starting the Command Line
You can run FlowMatters.Source.VeneerCmd.exe program from a windows command prompt, but throughout these tutorials, we will use veneer-py functions for starting and stopping the program.
Specifically, we will use start to start one or more copies of the program and kill_all_now to shutdown the program. (Alternatively, they will shutdown when the Jupyter notebook is shutdown).
End of explanation
project='ExampleProject/RiverModel1.rsproj'
Explanation: The main things you need, in order to call start are a Source project file (a path to the .rsproj file) and a path to the Veneer command line exe. (The latter we saved to variable veneer_cmd on calling create_command_line).
End of explanation
num_copies=4
first_port=9990
processes, ports = start(project,n_instances=num_copies,ports=first_port,debug=True,remote=False,veneer_exe=veneer_cmd)
Explanation: We can now start up a number of Veneer command line 'servers'.
We'll specify how many we want using num_copies - Its a good idea to set this based on the number of CPU cores available.
We also set first_port - Which is used for the first server. This number is incremented by one for each extra server.
End of explanation
import veneer
ports # Saved when we called start()
vs = [veneer.Veneer(port=p) for p in ports]
Explanation: You should see a number of lines along the lines of [3] Server started. Ctrl-C to exit... indicating that the servers have started.
These servers will now run until your current python session ends. (To make that happen, without closing the notebook, use the Kernel|Restart menu option in Jupyter)
Parallel Simulations
You can now work with each of these Veneer servers in the same way that you worked with a single server in the earlier sessions.
You will need an instance of the Veneer client object - one for each instance.
Here, we'll create a list of Veneer clients, each connected to a different instance based on the port number
End of explanation
vs[0].run_model()
vs[0].retrieve_multiple_time_series(criteria={'RecordingVariable':'Downstream Flow Volume'})[0:10]
Explanation: You can now ask one of these servers to run a model for you:
End of explanation
for v in vs:
veneer.log('Running on port %d'%v.port)
v.run_model()
print('All runs finished')
Explanation: You could run a model on each server using a for loop:
End of explanation
for v in vs:
veneer.log('Running on port %d'%v.port)
v.run_model(run_async=True)
print('All runs started... But when will they finish? And how will we know?')
Explanation: But that is a sequential run - One run won't start until the previous run has finished.
The run_async option on v.run_model will trigger the run on the server and then allow Python to continue:
End of explanation
responses = []
for v in vs:
veneer.log('Running on port %d'%v.port)
responses.append(v.run_model(run_async=True))
veneer.log("All runs started... Now we'll wait until they finish")
for r,v in zip(responses,vs):
code = r.getresponse().getcode()
veneer.log('Run finished on port %d. Returned a HTTP %d code'%(v.port,code))
Explanation: The above code block flies through quickly, because it doesn't wait for the simulation to finish. But how will we know when the run has finished, so that we can continue our script?
(We can look at Windows Task Manager to see the CPU load - but its not exactly a precise mechanism...)
When run with run_async=True, v.run_model returns a HTTP connection object, that can be queried for the success code of the run. We can use this to block for a particular run to finish. Assuming we don't want to do anything else in our script until ALL runs are finished, this is a good approach:
End of explanation
kill_all_now(processes)
Explanation: You can use the run_async approach to run multiple parallel simulations from a notebook.
Note: We will shut down those 4 copies of the server now, as the next exercise will use a different model
End of explanation
project='ExampleProject/RiverModel2.rsproj'
num_copies=4
first_port=9990
processes, ports = start(project,n_instances=num_copies,ports=first_port,debug=True,remote=False,veneer_exe=veneer_cmd)
Explanation: Running a batch simulation with parallel simulations...
In Tutorial 5, we performed exponential sampling for an inflow scaling factor. We ran the model 50 times. Let's run that process again, using parallel processing.
The code looked like this (combining a few notebook cells and removing some interim visualisation)
```python
import numpy as np
NUMBER_OF_SIMULATIONS=50
sampled_scaling_factors = np.random.exponential(size=NUMBER_OF_SIMULATIONS)
sampled_scaling_factors
spill_results=[]
Store our time series criteria in a variable to use it in configuring recording and retrieving results
ts_match_criteria = {'NetworkElement':'Recreational Lake','RecordingVariable':'Spill Volume'}
v.configure_recording(enable=[ts_match_criteria])
for scaling_factor in sampled_scaling_factors:
veneer.log('Running for $InflowScaling=%f'%scaling_factor)
# We are running the multiple many times in this case - so lets drop any results we already have...
v.drop_all_runs()
# Set $InflowScaling to current scaling factor
v.update_function('$InflowScaling',scaling_factor)
v.run_model()
# Retrieve the spill time series, as an annual sum, with the column named for the variable ('Spill Volume')
run_results = v.retrieve_multiple_time_series(criteria=ts_match_criteria,timestep='annual',name_fn=veneer.name_for_variable)
# Store the mean spill volume and the scaling factor we used
spill_results.append({'ScalingFactor':scaling_factor,'SpillVolume':run_results['Spill Volume'].mean()})
Convert the results to a Data Frame
spill_results_df = pd.DataFrame(spill_results)
spill_results_df
```
Let's convert this to something that runs in parallel.
First, let's set up the servers:
End of explanation
vs = [veneer.Veneer(port=p) for p in ports]
Explanation: We need a veneer client object for each running server:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
NUMBER_OF_SIMULATIONS=100
sampled_scaling_factors = np.random.exponential(size=NUMBER_OF_SIMULATIONS)
sampled_scaling_factors
plt.hist(sampled_scaling_factors)
Explanation: The sampling process is the same as before...
Except we'll use more samples and make sure the number of samples is a multiple of the number of servers!
End of explanation
samples = sampled_scaling_factors.reshape(NUMBER_OF_SIMULATIONS // len(ports), len(ports))  # integer division so reshape receives an int
samples
Explanation: NOW We will organise our samples based on the number of servers we're running - effectively creating batches
End of explanation
for row in samples:
print(row)
break
Explanation: If we iterate over our samples array now, we'll get groups of four. (The break statement stops after the first itertation of the loop)
End of explanation
# Store our time series criteria in a variable to use it in configuring recording and retrieving results
ts_match_criteria = {'NetworkElement':'Recreational Lake','RecordingVariable':'Spill Volume'}
for v in vs:
v.configure_recording(enable=[ts_match_criteria])
Explanation: Importantly, In switching on the output recording, we need to do so on each of the running servers:
End of explanation
spill_results=[]
total_runs=0
for group in samples:
group_run_responses = [] # somewhere to collect the asynchronous responses for this group
for i in range(len(vs)): # Will be 0,1.. #ports
total_runs += 1
scaling_factor = group[i]
v = vs[i]
# We are running the multiple many times in this case - so lets drop any results we already have...
v.drop_all_runs()
# Set $InflowScaling to current scaling factor
v.update_function('$InflowScaling',scaling_factor)
response = v.run_model(run_async=True)
group_run_responses.append(response)
#### NOW, All runs for this group have been triggered. Now go back and retrieve results
# Retrieve the spill time series, as an annual sum, with the column named for the variable ('Spill Volume')
for i in range(len(vs)): # Will be 0,1.. #ports
scaling_factor = group[i]
v = vs[i]
r = group_run_responses[i]
code = r.getresponse().getcode() # Wait until the job is finished
run_results = v.retrieve_multiple_time_series(criteria=ts_match_criteria,timestep='annual',name_fn=veneer.name_for_variable)
# Store the mean spill volume and the scaling factor we used
spill_results.append({'ScalingFactor':scaling_factor,'SpillVolume':run_results['Spill Volume'].mean()})
veneer.log('Completed %d runs'%total_runs)
# Convert the results to a Data Frame
import pandas as pd
spill_results_df = pd.DataFrame(spill_results)
spill_results_df
spill_results_df['SpillVolumeGL'] = spill_results_df['SpillVolume'] * 1e-6 # Convert to GL
spill_results_df['SpillVolumeGL'].hist()
Explanation: Now, we want to trigger all our runs. We'll use the run_async=True option and wait for each group of runs to finish before starting the next group.
End of explanation
# Terminate the veneer servers
kill_all_now(processes)
Explanation: Final remarks
The above example isn't as efficient as it could be, but it may be good enough for many circumstances.
The simulations run in parallel, but everything else (configuring recorders, retrieving and post-processing results) is done sequentially. (And no simulations are taking place while that's happening).
Furthermore, if some model runs complete quicker than others, one or more Veneer servers will be idle waiting for further instructions.
The example above is a reasonable approach if the simulations take much longer than the post processing and if the simulations will typically take around the same amount of time.
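One way to reduce that idle time is to give each server its own worker thread, so one server can keep simulating while results from another are being retrieved and post-processed. The sketch below is a hedged illustration using the standard library's ThreadPoolExecutor; run_batch is an illustrative helper (not part of veneer-py) and it assumes the client objects in vs and the ts_match_criteria from the cells above are still available:
```python
from concurrent.futures import ThreadPoolExecutor

def run_batch(v, factors):
    # one thread per server: run this server's share of the samples sequentially,
    # post-processing each run as soon as it finishes
    results = []
    for scaling_factor in factors:
        v.drop_all_runs()
        v.update_function('$InflowScaling', scaling_factor)
        v.run_model()
        run_results = v.retrieve_multiple_time_series(criteria=ts_match_criteria,
                                                      timestep='annual',
                                                      name_fn=veneer.name_for_variable)
        results.append({'ScalingFactor': scaling_factor,
                        'SpillVolume': run_results['Spill Volume'].mean()})
    return results

# give server i every len(vs)-th sample, then run one thread per server
batches = [sampled_scaling_factors[i::len(vs)] for i in range(len(vs))]
with ThreadPoolExecutor(max_workers=len(vs)) as pool:
    spill_results = [r for batch in pool.map(run_batch, vs, batches) for r in batch]
```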
End of explanation |
6,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sympy - Symbolic algebra in Python
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http
Step1: Introduction
There are two notable Computer Algebra Systems (CAS) for Python
Step2: To get nice-looking $\LaTeX$ formatted output run
Step3: Symbolic variables
In SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the Symbol class
Step4: We can add assumptions to symbols when we create them
Step5: Complex numbers
The imaginary unit is denoted I in Sympy.
Step6: Rational numbers
There are three different numerical types in SymPy
Step7: Numerical evaluation
SymPy uses an arbitrary-precision library as its numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as
Step8: When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the subs function
Step9: The subs function can of course also be used to substitute Symbols and expressions
Step10: We can also combine numerical evolution of expressions with NumPy arrays
Step11: However, this kind of numerical evolution can be very slow, and there is a much more efficient way to do it
Step12: The speedup when using "lambdified" functions instead of direct numerical evaluation can be significant, often several orders of magnitude. Even in this simple example we get a significant speed up
Step13: Algebraic manipulations
One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for doing these basic operations in SymPy are demonstrated in this section.
Expand and factor
The first steps in an algebraic manipulation
Step14: The expand function takes a number of keywords arguments which we can tell the functions what kind of expansions we want to have performed. For example, to expand trigonometric expressions, use the trig=True keyword argument
Step15: See help(expand) for a detailed explanation of the various types of expansions the expand functions can perform.
The opposite of a product expansion is of course factoring. To factor an expression in SymPy, use the factor function
Step16: Simplify
The simplify tries to simplify an expression into a nice looking expression, using various techniques. More specific alternatives to the simplify functions also exists
Step17: apart and together
To manipulate symbolic expressions of fractions, we can use the apart and together functions
Step18: Simplify usually combines fractions but does not factor
Step19: Calculus
In addition to algebraic manipulations, the other main use of CAS is to do calculus, like derivatives and integrals of algebraic expressions.
Differentiation
Differentiation is usually simple. Use the diff function. The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative
Step20: For higher order derivatives we can do
Step21: To calculate the derivative of a multivariate expression, we can do
Step22: $\frac{d^3f}{dxdy^2}$
Step23: Integration
Integration is done in a similar fashion
Step24: By providing limits for the integration variable we can evaluate definite integrals
Step25: and also improper integrals
Step26: Remember, oo is the SymPy notation for inifinity.
Sums and products
We can evaluate sums and products using the functions
Step27: Products work much the same way
Step28: Limits
Limits can be evaluated using the limit function. For example,
Step29: We can use 'limit' to check the result of derivation using the diff function
Step30: $\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \frac{f(x+h,y)-f(x,y)}{h}$
Step31: OK!
We can change the direction from which we approach the limiting point using the dir keyword argument
Step32: Series
Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the series function
Step33: By default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly include a value in the function call
Step34: And we can explicitly define to which order the series expansion should be carried out
Step35: The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order
Step36: If we want to get rid of the order information we can use the removeO method
Step37: But note that this is not the correct expansion of $\cos(x)\sin(x)$ to $5$th order
Step38: Linear algebra
Matrices
Matrices are defined using the Matrix class
Step39: With Matrix class instances we can do the usual matrix algebra operations
Step40: And calculate determinants and inverses, and the like
Step41: Solving equations
For solving equations and systems of equations we can use the solve function
Step42: System of equations
Step43: In terms of other symbolic expressions
Step44: Further reading
http | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Sympy - Symbolic algebra in Python
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
End of explanation
from sympy import *
Explanation: Introduction
There are two notable Computer Algebra Systems (CAS) for Python:
SymPy - A python module that can be used in any Python program, or in an IPython session, that provides powerful CAS features.
Sage - Sage is a full-featured and very powerful CAS environment that aims to provide an open source system that competes with Mathematica and Maple. Sage is not a regular Python module, but rather a CAS environment that uses Python as its programming language.
Sage is in some aspects more powerful than SymPy, but both offer very comprehensive CAS functionality. The advantage of SymPy is that it is a regular Python module and integrates well with the IPython notebook.
In this lecture we will therefore look at how to use SymPy with IPython notebooks. If you are interested in an open source CAS environment I also recommend to read more about Sage.
To get started using SymPy in a Python program or notebook, import the module sympy:
End of explanation
init_printing()
# or with older versions of sympy/ipython, load the IPython extension
#%load_ext sympy.interactive.ipythonprinting
# or
#%load_ext sympyprinting
Explanation: To get nice-looking $\LaTeX$ formatted output run:
End of explanation
x = Symbol('x')
(pi + x)**2
# alternative way of defining symbols
a, b, c = symbols("a, b, c")
type(a)
Explanation: Symbolic variables
In SymPy we need to create symbols for the variables we want to work with. We can create a new symbol using the Symbol class:
End of explanation
x = Symbol('x', real=True)
x.is_imaginary
x = Symbol('x', positive=True)
x > 0
Explanation: We can add assumptions to symbols when we create them:
End of explanation
1+1*I
I**2
(x * I + 1)**2
Explanation: Complex numbers
The imaginary unit is denoted I in Sympy.
End of explanation
r1 = Rational(4,5)
r2 = Rational(5,4)
r1
r1+r2
r1/r2
Explanation: Rational numbers
There are three different numerical types in SymPy: Real, Rational, Integer:
End of explanation
pi.evalf(n=50)
y = (x + pi)**2
N(y, 5) # same as evalf
Explanation: Numerical evaluation
SymPy uses an arbitrary-precision library as its numerical backend, and has predefined SymPy expressions for a number of mathematical constants, such as: pi, e, oo for infinity.
To evaluate an expression numerically we can use the evalf function (or N). It takes an argument n which specifies the number of significant digits.
End of explanation
y.subs(x, 1.5)
N(y.subs(x, 1.5))
Explanation: When we numerically evaluate algebraic expressions we often want to substitute a symbol with a numerical value. In SymPy we do that using the subs function:
End of explanation
y.subs(x, a+pi)
Explanation: The subs function can of course also be used to substitute Symbols and expressions:
End of explanation
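subs also accepts a dictionary, or a list of (old, new) pairs, when several symbols need to be replaced at once; a small sketch using the symbols defined earlier:
```python
expr = (x + a)**2 + b
expr.subs({x: 1, a: 2, b: 3})   # -> 12
expr.subs([(a, 0), (b, 0)])     # -> x**2
```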
import numpy
x_vec = numpy.arange(0, 10, 0.1)
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
fig, ax = plt.subplots()
ax.plot(x_vec, y_vec);
Explanation: We can also combine numerical evaluation of expressions with NumPy arrays:
End of explanation
f = lambdify([x], (x + pi)**2, 'numpy') # the first argument is a list of variables that
# f will be a function of: in this case only x -> f(x)
y_vec = f(x_vec) # now we can directly pass a numpy array and f(x) is efficiently evaluated
Explanation: However, this kind of numerical evaluation can be very slow, and there is a much more efficient way to do it: Use the function lambdify to "compile" a Sympy expression into a function that is much more efficient to evaluate numerically:
End of explanation
%%timeit
y_vec = numpy.array([N(((x + pi)**2).subs(x, xx)) for xx in x_vec])
%%timeit
y_vec = f(x_vec)
Explanation: The speedup when using "lambdified" functions instead of direct numerical evaluation can be significant, often several orders of magnitude. Even in this simple example we get a significant speed up:
End of explanation
(x+1)*(x+2)*(x+3)
expand((x+1)*(x+2)*(x+3))
Explanation: Algebraic manipulations
One of the main uses of a CAS is to perform algebraic manipulations of expressions. For example, we might want to expand a product, factor an expression, or simplify an expression. The functions for doing these basic operations in SymPy are demonstrated in this section.
Expand and factor
The first steps in an algebraic manipulation
End of explanation
sin(a+b)
expand(sin(a+b), trig=True)
Explanation: The expand function takes a number of keyword arguments with which we can tell the function what kind of expansions we want to have performed. For example, to expand trigonometric expressions, use the trig=True keyword argument:
End of explanation
factor(x**3 + 6 * x**2 + 11*x + 6)
Explanation: See help(expand) for a detailed explanation of the various types of expansions the expand functions can perform.
The opposite of a product expansion is of course factoring. To factor an expression in SymPy, use the factor function:
End of explanation
# simplify expands a product
simplify((x+1)*(x+2)*(x+3))
# simplify uses trigonometric identities
simplify(sin(a)**2 + cos(a)**2)
simplify(cos(x)/sin(x))
Explanation: Simplify
The simplify tries to simplify an expression into a nice looking expression, using various techniques. More specific alternatives to the simplify functions also exists: trigsimp, powsimp, logcombine, etc.
The basic usages of these functions are as follows:
End of explanation
f1 = 1/((a+1)*(a+2))
f1
apart(f1)
f2 = 1/(a+2) + 1/(a+3)
f2
together(f2)
Explanation: apart and together
To manipulate symbolic expressions of fractions, we can use the apart and together functions:
End of explanation
simplify(f2)
Explanation: Simplify usually combines fractions but does not factor:
End of explanation
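When the goal is specifically to put a rational expression over a common denominator and cancel shared factors, the more targeted cancel function is often more predictable than simplify; a small sketch:
```python
cancel((x**2 - 1)/(x - 1))      # -> x + 1
cancel(1/(a + 1) + 1/(a + 2))   # combines over a common denominator and cancels
```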
y
diff(y**2, x)
Explanation: Calculus
In addition to algebraic manipulations, the other main use of CAS is to do calculus, like derivatives and integrals of algebraic expressions.
Differentiation
Differentiation is usually simple. Use the diff function. The first argument is the expression to take the derivative of, and the second argument is the symbol by which to take the derivative:
End of explanation
diff(y**2, x, x)
diff(y**2, x, 2) # same as above
Explanation: For higher order derivatives we can do:
End of explanation
x, y, z = symbols("x,y,z")
f = sin(x*y) + cos(y*z)
Explanation: To calculate the derivative of a multivariate expression, we can do:
End of explanation
diff(f, x, 1, y, 2)
Explanation: $\frac{d^3f}{dxdy^2}$
End of explanation
f
integrate(f, x)
Explanation: Integration
Integration is done in a similar fashion:
End of explanation
integrate(f, (x, -1, 1))
Explanation: By providing limits for the integration variable we can evaluate definite integrals:
End of explanation
integrate(exp(-x**2), (x, -oo, oo))
Explanation: and also improper integrals
End of explanation
n = Symbol("n")
Sum(1/n**2, (n, 1, 10))
Sum(1/n**2, (n,1, 10)).evalf()
Sum(1/n**2, (n, 1, oo)).evalf()
Explanation: Remember, oo is the SymPy notation for infinity.
Sums and products
We can evaluate sums and products using the functions: 'Sum'
End of explanation
Product(n, (n, 1, 10)) # 10!
Explanation: Products work much the same way:
End of explanation
limit(sin(x)/x, x, 0)
Explanation: Limits
Limits can be evaluated using the limit function. For example,
End of explanation
f
diff(f, x)
Explanation: We can use 'limit' to check the result of the differentiation performed with the diff function:
End of explanation
h = Symbol("h")
limit((f.subs(x, x+h) - f)/h, h, 0)
Explanation: $\displaystyle \frac{\mathrm{d}f(x,y)}{\mathrm{d}x} = \frac{f(x+h,y)-f(x,y)}{h}$
End of explanation
limit(1/x, x, 0, dir="+")
limit(1/x, x, 0, dir="-")
Explanation: OK!
We can change the direction from which we approach the limiting point using the dir keyword argument:
End of explanation
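Limits at infinity work the same way, using oo as the limit point; a quick sketch:
```python
limit((1 + 1/x)**x, x, oo)   # -> E
limit(x**2/exp(x), x, oo)    # -> 0
```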
series(exp(x), x)
Explanation: Series
Series expansion is also one of the most useful features of a CAS. In SymPy we can perform a series expansion of an expression using the series function:
End of explanation
series(exp(x), x, 1)
Explanation: By default it expands the expression around $x=0$, but we can expand around any value of $x$ by explicitly including a value in the function call:
End of explanation
series(exp(x), x, 1, 10)
Explanation: And we can explicitly define to which order the series expansion should be carried out:
End of explanation
s1 = cos(x).series(x, 0, 5)
s1
s2 = sin(x).series(x, 0, 2)
s2
expand(s1 * s2)
Explanation: The series expansion includes the order of the approximation, which is very useful for keeping track of the order of validity when we do calculations with series expansions of different order:
End of explanation
expand(s1.removeO() * s2.removeO())
Explanation: If we want to get rid of the order information we can use the removeO method:
End of explanation
(cos(x)*sin(x)).series(x, 0, 6)
Explanation: But note that this is not the correct expansion of $\cos(x)\sin(x)$ to $5$th order:
End of explanation
m11, m12, m21, m22 = symbols("m11, m12, m21, m22")
b1, b2 = symbols("b1, b2")
A = Matrix([[m11, m12],[m21, m22]])
A
b = Matrix([[b1], [b2]])
b
Explanation: Linear algebra
Matrices
Matrices are defined using the Matrix class:
End of explanation
A**2
A * b
Explanation: With Matrix class instances we can do the usual matrix algebra operations:
End of explanation
A.det()
A.inv()
Explanation: And calculate determinants and inverses, and the like:
End of explanation
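Combining the two objects, a symbolic linear system A x = b can be solved directly; a short sketch (LUsolve is generally preferred over forming the explicit inverse for larger systems):
```python
x_sol = A.inv() * b
simplify(x_sol)
# equivalently, without forming the inverse:
A.LUsolve(b)
```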
solve(x**2 - 1, x)
solve(x**4 - x**2 - 1, x)
Explanation: Solving equations
For solving equations and systems of equations we can use the solve function:
End of explanation
solve([x + y - 1, x - y - 1], [x,y])
Explanation: System of equations:
End of explanation
solve([x + y - a, x - y - c], [x,y])
Explanation: In terms of other symbolic expressions:
End of explanation
%reload_ext version_information
%version_information numpy, matplotlib, sympy
Explanation: Further reading
http://sympy.org/en/index.html - The SymPy projects web page.
https://github.com/sympy/sympy - The source code of SymPy.
http://live.sympy.org - Online version of SymPy for testing and demonstrations.
Versions
End of explanation |
6,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
lexsub
Step1: Run the default solution on dev
Step2: Evaluate the default output | Python Code:
from default import *
import os
Explanation: lexsub: default program
End of explanation
lexsub = LexSub(os.path.join('data','glove.6B.100d.magnitude'))
output = []
with open(os.path.join('data','input','dev.txt')) as f:
for line in f:
fields = line.strip().split('\t')
output.append(" ".join(lexsub.substitutes(int(fields[0].strip()), fields[1].strip().split())))
print("\n".join(output[:10]))
Explanation: Run the default solution on dev
End of explanation
from lexsub_check import precision
with open(os.path.join('data','reference','dev.out'), 'rt') as refh:
ref_data = [str(x).strip() for x in refh.read().splitlines()]
print("Score={:.2f}".format(100*precision(ref_data, output)))
Explanation: Evaluate the default output
End of explanation |
6,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, here's the SPA power function
Step1: Here are two helper functions for computing the dot product over space, and for plotting the results | Python Code:
# imports assumed for this notebook (probably defined in an earlier, unshown cell): numpy, matplotlib and nengo's spa module
import numpy as np
import matplotlib.pyplot as plt
from nengo import spa
def power(s, e):
x = np.fft.ifft(np.fft.fft(s.v) ** e).real
return spa.SemanticPointer(data=x)
Explanation: First, here's the SPA power function:
End of explanation
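As a quick sanity check of power (a hedged sketch; D_test is just an illustrative dimensionality and the usual numpy/nengo spa imports are assumed), an exponent of 1 should return essentially the same vector and an exponent of 0 should give the identity-like vector [1, 0, 0, ...]:
```python
D_test = 64                       # small dimensionality, just for this check
X_test = spa.SemanticPointer(D_test)
X_test.make_unitary()
print(np.allclose(power(X_test, 1).v, X_test.v))            # True, up to floating point error
print(np.allclose(power(X_test, 0).v, np.eye(D_test)[0]))   # True: exponent 0 gives the identity vector
```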
def spatial_dot(v, Xs, Ys, scales, xs, ys, transform=1):
identity = spa.SemanticPointer(data=np.eye(D)[0])
vs = np.zeros((len(ys),len(xs)))
for i,x in enumerate(xs):
for j, y in enumerate(ys):
t = identity
for k,s in enumerate(scales):
t = t*power(Xs[k], x*s)*power(Ys[k], y*s)
t = t*transform
vs[j,i] = np.dot(v.v, t.v)
return vs
def spatial_plot(vs, vmax=1, vmin=-1, colorbar=True):
vs = vs[::-1, :]
plt.imshow(vs, interpolation='none', extent=(xs[0],xs[-1],ys[0],ys[-1]), vmax=vmax, vmin=vmin, cmap='plasma')
if colorbar:
plt.colorbar()
D = 256
phi = (1+np.sqrt(5))/2
scales = phi**(np.arange(5))
print(scales)
Xs = []
Ys = []
for i in range(len(scales)):
X = spa.SemanticPointer(D)
X.make_unitary()
Y = spa.SemanticPointer(D)
Y.make_unitary()
Xs.append(X)
Ys.append(Y)
W = 1
Q = 100
xs = np.linspace(-W, W, Q)
ys = np.linspace(-W, W, Q)
def relu(x):
return np.maximum(x, 0)
M = 3
plt.figure(figsize=(12,12))
for i in range(M):
for j in range(M):
plt.subplot(M, M, i*M+j+1)
spatial_plot(relu(spatial_dot(spa.SemanticPointer(D), Xs, Ys, scales, xs, ys)), vmin=None, vmax=None, colorbar=False)
Explanation: Here are two helper functions for computing the dot product over space, and for plotting the results
End of explanation |
6,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Torneo de los 30
Starting from a .csv file with the results of the tournament's matches, by matchday, we apply a couple of transformations and calculations to obtain some charts and statistics about the now-finished Argentine football tournament.
It also serves as an excuse to experiment with Pandas, matplotlib and Jupyter!
Step1: Leemos los datos del archivo .csv
Step2: Precalculamos información útil
Vamos a extender nuestros datos con información adicional.
En este caso vamos a agregar columnas extras indicando si el resultado del partido fue para el local ('L'), para el visitante ('V') o empate ('E'), y por otro lado, una columna con el total de goles por partido.
Step3: Distribución de los resultados
A partir de los resultados de los partidos vamos a generar un gráfico que refleje qué resultados y cuántos de cada uno se dieron a lo largo del torneo.
Step4: Distribución de Local/Empate/Visitante por fecha
Para cada fecha, cantidad de Local/Empate/Visitante.
Step5: Cantidad de goles por fecha
Cuál fue la fecha con más goles? La de menor cantidad?
Step6: Evolución de los equipos
Vamos a graficar cómo fueron progresando los equipos (en este caso el Top 5 final) a lo largo del torneo. | Python Code:
# imports iniciales
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# queremos que los gráficos se rendericen inline
%matplotlib inline
%pylab inline
# configuración para los gráficos, estilo y dimensiones
matplotlib.style.use('ggplot')
figsize(12, 12)
Explanation: Torneo de los 30
Starting from a .csv file with the results of the tournament's matches, by matchday, we apply a couple of transformations and calculations to obtain some charts and statistics about the now-finished Argentine football tournament.
It also serves as an excuse to experiment with Pandas, matplotlib and Jupyter!
End of explanation
# levantamos los datos del archivo en un DataFrame de Pandas
data = pd.read_csv('data/resultados.csv')
# una muestra de los datos que tenemos disponibles: una fila por cada partido, agrupados por fecha
data.head()
Explanation: Read the data from the .csv file
End of explanation
data['L'], data['E'], data['V'] = data.gl > data.gv, data.gl == data.gv, data.gl < data.gv
data['goles'] = data.gl + data.gv
Explanation: Precompute some useful information
We are going to extend our data with additional information.
In this case we will add extra columns indicating whether the match result was a home win ('L'), an away win ('V') or a draw ('E'), plus a column with the total number of goals per match.
End of explanation
# nos quedamos con las columnas de goles local y goles visitante (ie. los resultados)
goles = data[['gl', 'gv']]
# agrupamos los resultados para quedarnos con una ocurrencia de cada combinación y los contamos
resultados = goles.groupby(['gl', 'gv'])
num_resultados = resultados.size()
# separamos resultados y su cantidad, y también los goles, para generar el gráfico
uniq_resultados, counts = zip(*num_resultados.iteritems())
goles_local, goles_visitante = zip(*uniq_resultados)
plt.scatter(goles_local, goles_visitante, s=list(map(lambda i: i*20, counts)), c=np.random.rand(len(counts)))
Explanation: Distribution of the results
From the match results we will generate a chart showing which scorelines occurred, and how many times each one, over the course of the tournament.
End of explanation
# nos quedamos con las columnas que nos interesan, agrupamos por fecha, y contamos
por_fecha = data[['fecha', 'L', 'E', 'V']].groupby(['fecha']).agg(np.count_nonzero)
por_fecha.plot(kind='bar', stacked=True)
Explanation: Home/Draw/Away distribution per matchday
For each matchday, the number of home wins, draws and away wins.
End of explanation
# nos quedamos con las columnas que nos interesan, agrupamos y sumamos los goles por fecha
goles_por_fecha = data[['fecha', 'goles']].groupby(['fecha']).agg(np.sum)
# además calculamos la media
promedio_por_fecha = int(goles_por_fecha.mean())
goles_por_fecha.plot(kind='bar')
plt.axhline(promedio_por_fecha, color='k')
plt.text(30, promedio_por_fecha, 'media: %d' % promedio_por_fecha)
Explanation: Number of goals per matchday
Which matchday had the most goals? Which one had the fewest?
End of explanation
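To answer those two questions numerically rather than reading them off the bar chart, idxmax and idxmin on the summed column give the matchdays directly:
```python
print('Most goals:   matchday', goles_por_fecha['goles'].idxmax(),
      '-', goles_por_fecha['goles'].max(), 'goals')
print('Fewest goals: matchday', goles_por_fecha['goles'].idxmin(),
      '-', goles_por_fecha['goles'].min(), 'goals')
```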
# nombres de los equipos
equipos = data.local.unique()
# función para calcular los puntos obtenidos por un equipo en un partido dado
func_puntos = lambda team, match: 3 if ((match['local'] == team and match['L']) or (match['visitante'] == team and match['V'])) else (1 if match['E'] else 0)
# puntaje total por equipo
final = {}
# puntos por fecha por equipo
fechas = pd.DataFrame(data.fecha.unique(), columns=['fecha'])
for equipo in equipos:
partidos_por_equipo = data[(data.local == equipo) | (data.visitante == equipo)]
puntos = partidos_por_equipo.apply(lambda row: func_puntos(equipo, row), axis=1)
partidos_por_equipo[equipo] = puntos.cumsum()
fechas = fechas.merge(partidos_por_equipo[['fecha', equipo]], on='fecha')
# guardamos el total
final[equipo] = puntos.sum()
# posiciones finales del torneo
posiciones = sorted(final.items(), key=lambda i: i[1], reverse=True)
posiciones[:10]
N = 5
# nos quedamos con los primeros N equipos en las posiciones finales
top_n = [e for e, pts in posiciones[:N]]
# graficamos la evolución fecha a fecha para esos equipos
fechas[['fecha'] + top_n].plot(x='fecha')
for e, pts in posiciones[:N]:
plt.text(30, pts, ' %s (%d)' % (e, pts))
Explanation: Evolution of the teams
We will plot how the teams (in this case the final Top 5) progressed over the course of the tournament.
End of explanation |
6,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Cleaning the groundtruth images from their spurious pixels
Step2: Download the dataset from its repository at github
https
Step3: Let's compare the groundtruth image before and after cleaning
Step4: Clean the whole dataset
Step5: Save the dataset using hdf5 format | Python Code:
import os
import h5py
from matplotlib import pyplot as plt
import numpy as np
from numpy import newaxis
from skimage import morphology as mo
from scipy.ndimage import distance_transform_bf as distance
def distanceTransform(bIm):
#from pythonvision.org
dist = distance(bIm)
dist = dist.max() - dist
dist -= dist.min()
dist = dist/float(dist.ptp()) * 255
dist = dist.astype(np.uint8)
return dist
def clean_ground_truth(gd_lab, size = 2):
    """Remove spurious pixels from a badly labelled groundtruth image.
    Returns three binary (one-hot) images first and a labelled image."""
mask = gd_lab > 0
dmap = distanceTransform(mask)
cleaned_lab1 = mo.binary_opening(gd_lab == 1, selem = mo.disk(size))
cleaned_lab2 = mo.binary_opening(gd_lab == 2, selem = mo.disk(size))
cleaned_lab3 = mo.binary_opening(gd_lab == 3, selem = mo.disk(size))
seeds = cleaned_lab1+2*cleaned_lab2+3*cleaned_lab3
seg = mo.watershed(dmap, markers = seeds, mask = 1*mask)
chrom_lab1 = seg == 1
chrom_lab2 = seg == 2
overlapp = seg == 3
return chrom_lab1, chrom_lab2, overlapp, seg
Explanation: Cleaning the groundtruth images from their spurious pixels
End of explanation
# download the raw HDF5 file (the /blob/ page is HTML, not the data itself)
!wget https://github.com/jeanpat/DeepFISH/raw/master/dataset/LowRes_13434_overlapping_pairs.h5
filename = './LowRes_13434_overlapping_pairs.h5'
h5f = h5py.File(filename,'r')
pairs = h5f['dataset_1'][:]
h5f.close()
print('dataset is a numpy array of shape:', pairs.shape)
N = 11508
grey = pairs[N,:,:,0]
g_truth = pairs[N,:,:,1]
l1, l2, l3, seg = clean_ground_truth(g_truth, size = 1)
test = np.dstack((grey, g_truth))
print(test.shape)
t2 = np.stack((test,test))
print(t2.shape)
Explanation: Download the dataset from its repository at github
https://github.com/jeanpat/DeepFISH/tree/master/dataset
End of explanation
plt.figure(figsize=(20,10))
plt.subplot(251,xticks=[],yticks=[])
plt.imshow(grey, cmap=plt.cm.gray)
plt.subplot(252,xticks=[],yticks=[])
plt.imshow(g_truth, cmap=plt.cm.flag_r)
plt.subplot(253,xticks=[],yticks=[])
plt.imshow(g_truth == 1, cmap=plt.cm.flag_r)
plt.subplot(254,xticks=[],yticks=[])
plt.imshow(g_truth == 2, cmap=plt.cm.flag_r)
plt.subplot(255,xticks=[],yticks=[])
plt.imshow(g_truth == 3, cmap=plt.cm.flag_r)
#plt.subplot(256,xticks=[],yticks=[])
#plt.imshow(mo.white_tophat(grey, selem = mo.disk(2)) > 0, cmap=plt.cm.jet)
plt.subplot(257,xticks=[],yticks=[])
plt.imshow(l1+2*l2+3*l3, cmap=plt.cm.flag_r)
plt.subplot(258,xticks=[],yticks=[])
plt.imshow(l1, cmap=plt.cm.flag_r)
plt.subplot(259,xticks=[],yticks=[])
plt.imshow(l2, cmap=plt.cm.flag_r)
plt.subplot(2,5,10,xticks=[],yticks=[])
plt.imshow(l3, cmap=plt.cm.flag_r)
Explanation: Let's compare the groundtruth image before and after cleaning
End of explanation
new_data = np.zeros((1,94,93,2), dtype = int)
N = pairs.shape[0]#10
for idx in range(N):
g_truth = pairs[idx,:,:,1]
grey = pairs[idx,:,:,0]
_, _, _, seg = clean_ground_truth(g_truth, size = 1)
paired = np.dstack((grey, seg))
#
#https://stackoverflow.com/questions/7372316/how-to-make-a-2d-numpy-array-a-3d-array/7372678
#
new_data = np.concatenate((new_data, paired[newaxis,:, :, :]))
new_data = new_data[1:,:,:,:]
plt.figure(figsize=(20,10))
N=10580
grey = new_data[N,:,:,0]
g_truth = new_data[N,:,:,1]
plt.subplot(121,xticks=[],yticks=[])
plt.imshow(grey, cmap=plt.cm.gray)
plt.subplot(122,xticks=[],yticks=[])
plt.imshow(g_truth, cmap=plt.cm.flag_r)
Explanation: Clean the whole dataset
End of explanation
filename = './Cleaned_LowRes_13434_overlapping_pairs.h5'
hf = h5py.File(filename,'w')
hf.create_dataset('13434_overlapping_chrom_pairs_LowRes', data=new_data, compression='gzip', compression_opts=9)
hf.close()
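# Optional sanity check (hedged): reopen the file we just wrote and compare shapes.
# The dataset key matches the one passed to create_dataset above.
with h5py.File(filename, 'r') as hf_check:
    reloaded = hf_check['13434_overlapping_chrom_pairs_LowRes'][:]
print(reloaded.shape, reloaded.shape == new_data.shape)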
Explanation: Save the dataset using hdf5 format
End of explanation |
6,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
국민대, 파이썬, 데이터
W03 Python 101
Step1: NOTE
Step2: 위의 예제를 봐서 알겠지만 Python은 변수의 타입을 미리 선언하지 않습니다. 미리 선언하는 언어가 있지요. 하지만 Python은 변수의 타입 선언에 매우 자유롭습니다. 또한 PEP8에 의해 age=20이라고 하지 않고 age = 20이라고 표현한 것을 눈여겨 보시면 좋겠습니다. 처음 공부할 때 부터 바른 습관을 갖는 것도 좋습니다.
2. 연산자 (Operator)
Step3: 다시 age에 20을 할당 시켰던 문법을 다시 보도록 하겠습니다. 제가 강조해서 말하고 있는 것이바로 '할당하고 있다'라는 표현입니다. 즉 age = 20에서 =은 데이터를 할당하는 연산자라고 합니다.
2.1 숫자 연산
Step4: 2.2 비교 연산
Step5: 2.3 확장 연산
Step6: 2.4 문자열 포맷
우리는 위에서 숫자 연산과 비교 연산을 보았습니다. 그렇다면 문자가 연산이 있나요!? 없습니다. 대신에 문자끼리 합치는 것은 있겠죠!? 문자와 문자를 어떻게 합칠까요?
1) + (더하기)
Step7: 문자를 합치는 방법 중 가장 쉬운 것은 + 를 사용하는 것입니다.
Step8: 그러나 + 을 사용하는 방법은 합치는 두 개의 데이터 타입이 같아야만 사용할 수 있습니다. 위에처럼 문자열과 숫자를 합치려고 할 때는 에러가 납니다.
잠깐!
올 해를 자동으로 알 수 있는 방법은 없나요? 잠깐 알아보고 넘어갑시다.
Step9: 위에서 사용한 str()을 변환 함수라고 합니다. 문자열로 바꿔주는 기능이 있습니다.
2) % (나머지 연산자)
Step10: 제가 Python을 처음 공부했을 매우 이해가 안가고 당황했던 것이 바로 나머지 연산자를 사용하는 방법이었습니다. 저만 특이한 것일까요!?ㅎㅎ 이렇게 나머지 연산자를 통해 문자열이 합쳐질 때는 아래처럼 표현해야 합니다. 위를 예로 들면 '%d'라고 되어있는 부분이 변환 지정자이며, '년 입니다.'은 보통의 문자가 되는 것입니다.
포맷 문자열 = (보통의 문자 + 변환 지정자)
문자열 포맷 변환 문자
Step11: 좀 복잡한 듯 보이지만 잘 보면 어렵지니 않습니다. %0.3s에서 s는 문자열인 것은 알겠는데 0.3이라는 것은 뭔가 특이하죠!? 문자열에 왠 상수값? 이상합니다. 0.3이 의미하는 바는 문자열 0번번째부터 시작하여 3번째 직전까지 잘라서(slicing) 표시하라는 뜻입니다. 이렇듯 Python은 0부터 시작한다는 것을 다시 한 번 알 수 있습니다.
3) format 함수 (치환자, formatter)
Step12: 이건 또 도대체 무엇인가요. 함수에 대해서 공부한 적도 없는데 말이지요. 괜찮습니다. 이런게 있다는 것만 알고 넘어가겠습니다. 이 후에 함수를 배우게 되면 이런게 가능하다는 것을 알게 됩니다. 여기서 0과 1은 format 함수의 Argument 0번째 값과 1번째 값을 의미합니다.
formatter에서 지원하는 타입 코드
Step13: 3. 프로그램 제어 흐름 (Control Flow)
3.1 조건문 (Conditionals)
Python에서의 조건문은 아주 간단합니다. 다른 언어에서처럼 Switch라던가 Case가 없고 if만 있습니다.
Step14: 3.2 반복문 (Loops and Iteration)
1) while
Step15: 2) for
for 문은 굉장히 중요합니다. 아래와 같이 사용합니다.
python
for i in s
Step16: range()라는 함수는 0부터 4전까지 연속된 숫자를 만들어 List라는 타입으로 값을 반환해주는 함수입니다. 0부터 4전까지이므로 0, 1, 2, 3 이렇게 숫자 4개를 List라는 객체에 담아 표현해줍니다. 즉 for 반복문이 range(4) 함수를 통해 만들어진 값 안에서(in) 첫번째 값 0을 끄집어 내 item에 할당하고 아래 문장을 실행하는 것입니다. 이 후 두번째 값 1을 끄집어 내 item에 할당하고 다시 아래 문장을 실행하는 것입니다.
여기서 item라는 것은 변수입니다. 따라서 어떤 이름도 넣을 수 있습니다.
3) enumerate()
하아.. 이건 뭘까요?! 먹는건가요?ㅠ
Step17: Python은 몇 가지 내장 함수를 가지고 있습니다. 그 중에 enumerate()라는 함수는 어떤 객체의 인덱스 번호와 인덱스 번호에 해당하는 값까지 모두 반환해주는 역할을 합니다. 유용하게 쓰이니 잘 기억해 두시길 바랍니다.
4. 데이터 타입(Data Type)
Step18: 위에 age라는 변수에 할당된 값은 숫자 20입니다. 즉 20이라는 데이타의 타입은 정수 integer입니다. 그렇다면 Python에서의 데이터 타입에는 어떤 것들이 있을까요!?
Step19: 숫자는 위에 나와있는 것만으로도 충분합니다. 뭔가 생소한 순서열부터 보도록 하겠습니다.
4.1 순서열 (Sequence)
순서열 이라는 것은 정수로 색인되어 순서있는 객체들의 모음을 말합니다.
1) 문자열 (String)
"python"이라는 단어를 우리가 볼 때는 그저 단순한 문자처럼 보이지만 Python은 "python"을 순서있는 객체들의 모음으로 인식합니다. 즉 글자 'p', 'y', 't', 'h', 'o', 'n' 하나 하나가 순서있게 모인 것이라고 생각하면 됩니다.
Step20: Index 번호로 순서가 매겨져 있는 객체들의 모음이기 때문에 객체 하나 하나를 꺼내어 출력이 가능한 것입니다. Index 번호를 위와 같이 확인할 수 있습니다. 그렇다면 숫자는 어떨까요? 숫자도 순서가 매겨져 있는 객체들의 모음일까요?
Step21: 아니네요. not iterable하다고 에러 메시지가 나옵니다.
메서드
메서드|설명
--|--
s.startswith(prefix)|문자열이 prefix로 시작하는지 검사
s.endswith(suffix)|문자열이 suffix로 끝나는지 검사
s.format(args, *kwargs)|s의 포맷을 지정한다.
s.join(t)|s를 분리자로 사용해서 순서열 t에 들어있는 문자열들을 이어 붙인다.
s.replace(old, new, [,maxreplace])|부분문자열을 대체한다.
s.split([sep])|sep를 구분자로 사용해서 문자열을 분할
s.strip([chars])|앞이나 뒤에 나오는 공백이나 chars로 지정된 문자들을 제거
Python의 모든 것은 객체라고 했습니다. 이것은 Python을 이해하는데 있어 매우 중요합니다. 문자열도 마찬가지로 객체이기 때문에 위와 같이 사용할 수 있는 메서드(함수)가 있습니다. 다음 시간에 객체 지향 언어에 대해 자세히 살펴보도록 하겠습니다. 우선 위와 같은 메서드(함수)가 있다는 것을 확인하시기 바랍니다.
Step22: Data Structure
2) List와 Tuple
데이터 타입 중 List와 Tuple이라고 하는 것이 있습니다. 이 두개는 추가/수정/삭제가 되느냐 안되느냐와 모양새가 조금 다를 뿐 거의 비슷한 객체입니다.
List
추가/수정/삭제 가능
ex) obj = [1, 2, 3,]
Step23: 메서드|설명
--|--
list(s)|s를 리스트로 변환
s.append(x)|s의 끝에 새로운 원소 x를 추가
s.extend(t)|s의 끝에 새로운 리스트 t를 추가
s.count(x)|s에서 x가 출현한 횟수를 셈
s.insert(i, x)|색인 i의 위치에 x를 삽입
s.pop([i])|리스트에서 원소 i를 제거하면서 i를 반환, i를 생략하면 마지막 원소가 제거되면서 반환
s.remove(x)|x를 찾아서 s에서 제거
s.reverse()|s의 항목들을 그 자리에서 뒤집음
Step24: Tuple
추가/수정/삭제 불가능
ex) obj = (1, 2, 3,)
Step25: 왜 추가/수정/삭제를 불가능하게 했을까요?
떄론 데이터를 수정하면 안되는 것들이 있습니다. 그럴 때는 데이터가 수정되는 것을 방지하기 위해 Tuple로 데이터 수정 가능성을 원천 차단하는 것입니다.
4.2 매핑 (Mapping)
1) 사전 (Dictionary)
{key1
Step26: 항목|설명
--|--
m.clear()|m에서 모든 항목 제거
m.copy()|복사본 생성
m.get(k [,v])|m[k]가 있으면 m[k]를 반환하고 아니면 v를 반환
m.items()|m의 모든 (키, 값) 쌍들로 구성되는 순서열 반환
m.keys()|m의 모든 키들의 순서열 바환
m.values()|m에 있는 모든 값으로 구성되는 순서열 반환
Step27: 5. Practice Makes Perfect
5.1 띠
Year(ex)|Animal|Remain
--|--
2000|용|8
2001|뱀|9
2002|말|10
2003|양|11
2004|원숭이|0
2005|닭|1
2006|개|2
2007|돼지|3
2008|쥐|4
2009|소|5
2010|호랑이|6
2011|토끼|7
위에 표에 나와있는 값을 이용해서 사용자가 입력한 년도에 해당되는 띠를 알려주세요. 사용자가 어떤 값을 입력해도 말이지요.
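A hedged sketch of one possible solution, built from the remainder column in the table above (remainder 0 corresponds to 원숭이, since 2004 % 12 == 0):
```python
# order the animals by the remainder of year % 12, as in the table
animals = ['원숭이', '닭', '개', '돼지', '쥐', '소', '호랑이', '토끼', '용', '뱀', '말', '양']
year = int(input("태어난 년도를 입력하세요: "))
print("%d년은 %s띠 입니다." % (year, animals[year % 12]))
```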
Step28: 5.2 Hashtag
요새는 Hashtag(#으로 시작되는 단어)를 소셜 미디어에 많이 사용하죠? 이거를 어떻게 추출할까요? 추출해서 List에 담아보세요.
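A hedged sketch (the example sentence is made up): split the text on whitespace and keep the words that start with '#', using startswith from the string methods shown earlier:
```python
text = "국민대 수업에서 #python 과 #data 를 공부합니다 #jupyter"
hashtags = [word for word in text.split() if word.startswith("#")]
print(hashtags)   # -> ['#python', '#data', '#jupyter']
```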
Step29: 5.3 Morse
모스 부호라는 것을 다들 알고 계실거라 생각합니다. 모스 부호를 사전형태로 가져왔는데요. 이 정보를 통해 암호를 풀어보도록 합시다.
Step30: 5.4 카드 52장
카드가 총 52장이 있습니다. 스페이드, 하트, 다이아몬드, 클럽 이렇게 총 4가지 종류가 있고 각자 숫자 2부터 10까지 그리고 Jack, Queen, King, Ace가 있습니다. 즉 모든 카드는 아래처럼 두 개의 글자로 표현할 수 있습니다.
카드|약자
--|--
스페이드 잭|sJ
클럽 2|c2
다이아몬드 10|d10
하트 에이스|hA
스페이드 9|s9
위와 같이 두 개의 글자로 구성되게 52장의 카드를 만들어 하나의 List에 담아보세요. | Python Code:
from IPython.display import Image
Explanation: 국민대, 파이썬, 데이터
W03 Python 101
End of explanation
age = 20
print(type(age))
print(age)
Explanation: NOTE:
이 문서에 사용되는 표(숫자 연산, 비교 연산, 문자열 포맷 변환 문자, 확장 연산, formatter에서 지원하는 타입 코드)는 인사이트 출판사에서 나온 파이썬 완벽 가이드에서 발췌했음을 알려드립니다.
Table of Contents
변수 (Variable)
연산자 (Operator)
숫자 연산
비교 연산
확장 연산
문자열 포맷
+ (더하기)
% (나머지 연산자)
format 함수 (치환자, formatter)
프로그램 제어 흐름 (Control Flow)
조건문 (Conditionals)
반복문 (Loops and Iterations)
while
for
enumerate()
데이터 타입 (Data Type)
순서열 (Sequence)
문자열 (String)
List와 Tuple
매핑 (Mapping)
사전 (Dictionary)
Practice Makes Perfect
1. 변수 (Variable)
변수(Variable)라는 것은 데이터를 담는 그릇입니다. 그리고 이 데이터를 담는 그릇에는 담겨진 내용물의 유형(type)을 가지고 있습니다. 즉 아래와 같이 세 개로 나눠져 있다고 생각하시면 됩니다.
데이터(data)
데이터의 유형(type)
데이터를 담을 수 있는 그릇(identifier)
말은 어렵지만 아래의 간단한 예제를 보도록 하겠습니다. age라는 그릇(identifier)에 데이터 유형(type)이 정수(int)인 20을 넣은 것 뿐입니다. 자료구조 관점으로 보면 좀 더 복잡하긴 합니다. 20이라는 데이터를 저장하는 곳을 가리킨다고 보면 되지만 거기까지는 다루지 않겠습니다.
End of explanation
age = 20
print(type(age))
print(age)
Explanation: 위의 예제를 봐서 알겠지만 Python은 변수의 타입을 미리 선언하지 않습니다. 미리 선언하는 언어가 있지요. 하지만 Python은 변수의 타입 선언에 매우 자유롭습니다. 또한 PEP8에 의해 age=20이라고 하지 않고 age = 20이라고 표현한 것을 눈여겨 보시면 좋겠습니다. 처음 공부할 때 부터 바른 습관을 갖는 것도 좋습니다.
2. 연산자 (Operator)
End of explanation
Image(filename='images/number_operator.png')
print(3 + 4)
3 / 4
3 // 4
Explanation: 다시 age에 20을 할당 시켰던 문법을 다시 보도록 하겠습니다. 제가 강조해서 말하고 있는 것이바로 '할당하고 있다'라는 표현입니다. 즉 age = 20에서 =은 데이터를 할당하는 연산자라고 합니다.
2.1 숫자 연산
End of explanation
Image(filename='images/comparison_operator.png')
1 < 2
1 == 2
"yes" != "no"
Explanation: 2.2 비교 연산
End of explanation
Image(filename='images/extended_operator.png')
a = 3
b = 2
a += b
print(a) # a = a + b
Explanation: 2.3 확장 연산
End of explanation
question = "올해는 몇년도 인가요?"
answer = "2016"
print(question + answer + "년 입니다.")
Explanation: 2.4 문자열 포맷
우리는 위에서 숫자 연산과 비교 연산을 보았습니다. 그렇다면 문자가 연산이 있나요!? 없습니다. 대신에 문자끼리 합치는 것은 있겠죠!? 문자와 문자를 어떻게 합칠까요?
1) + (더하기)
End of explanation
question = "올해는 몇년도 인가요?"
answer = 2016
print(question + answer + "년 입니다.")
Explanation: 문자를 합치는 방법 중 가장 쉬운 것은 + 를 사용하는 것입니다.
End of explanation
from datetime import datetime
print(str(datetime.today().year) + "년")
print(str(datetime.today().month) + "월")
Explanation: 그러나 + 을 사용하는 방법은 합치는 두 개의 데이터 타입이 같아야만 사용할 수 있습니다. 위에처럼 문자열과 숫자를 합치려고 할 때는 에러가 납니다.
잠깐!
올 해를 자동으로 알 수 있는 방법은 없나요? 잠깐 알아보고 넘어갑시다.
End of explanation
print("올해는 몇년도 인가요?")
print("%d년 입니다." % 2016)
"%s 년 입니다." % "2016"
Explanation: 위에서 사용한 str()을 변환 함수라고 합니다. 문자열로 바꿔주는 기능이 있습니다.
2) % (나머지 연산자)
End of explanation
Image(filename='images/formatter.png')
a = "hello"
b = "world"
print("%0.3s %s" % (a, b))
Explanation: 제가 Python을 처음 공부했을 매우 이해가 안가고 당황했던 것이 바로 나머지 연산자를 사용하는 방법이었습니다. 저만 특이한 것일까요!?ㅎㅎ 이렇게 나머지 연산자를 통해 문자열이 합쳐질 때는 아래처럼 표현해야 합니다. 위를 예로 들면 '%d'라고 되어있는 부분이 변환 지정자이며, '년 입니다.'은 보통의 문자가 되는 것입니다.
포맷 문자열 = (보통의 문자 + 변환 지정자)
문자열 포맷 변환 문자
End of explanation
print("올 해는 {0}년 {1}월 입니다.".format(2016, 3))
Explanation: 좀 복잡한 듯 보이지만 잘 보면 어렵지니 않습니다. %0.3s에서 s는 문자열인 것은 알겠는데 0.3이라는 것은 뭔가 특이하죠!? 문자열에 왠 상수값? 이상합니다. 0.3이 의미하는 바는 문자열 0번번째부터 시작하여 3번째 직전까지 잘라서(slicing) 표시하라는 뜻입니다. 이렇듯 Python은 0부터 시작한다는 것을 다시 한 번 알 수 있습니다.
3) format 함수 (치환자, formatter)
End of explanation
Image(filename='images/advanced_formatter.png')
Explanation: 이건 또 도대체 무엇인가요. 함수에 대해서 공부한 적도 없는데 말이지요. 괜찮습니다. 이런게 있다는 것만 알고 넘어가겠습니다. 이 후에 함수를 배우게 되면 이런게 가능하다는 것을 알게 됩니다. 여기서 0과 1은 format 함수의 Argument 0번째 값과 1번째 값을 의미합니다.
formatter에서 지원하는 타입 코드
End of explanation
a = 3
b = 2
if a < b:
print("Yes")
print("a")
else:
print("No")
print("b")
suffix = "htm"
if suffix == "htm":
content = "text/html"
elif suffix == "jpg":
content = "image/jpeg"
elif suffix == "png":
content = "image/png"
else:
raise RuntimeError("Unknown content type")
print(content)
Explanation: 3. 프로그램 제어 흐름 (Control Flow)
3.1 조건문 (Conditionals)
Python에서의 조건문은 아주 간단합니다. 다른 언어에서처럼 Switch라던가 Case가 없고 if만 있습니다.
End of explanation
answer1 = input("Enter password: ")
answer2 = input("Re-enter password: ")
while answer1 != answer2:
print("does not match.")
answer1 = input("Enter password: ")
answer2 = input("Re-enter password: ")
Explanation: 3.2 반복문 (Loops and Iteration)
1) while
End of explanation
for item in range(0, 4):
print(item)
Explanation: 2) for
for 문은 굉장히 중요합니다. 아래와 같이 사용합니다.
python
for i in s:
statements
Python은 인간 친화적인 문법이 특징입니다. 여기서도 위의 문법을 인간미있게 읽어보도록 하곘습니다. "s 안에(in) 한 개를 꺼내서 i에 넣고 아래 문장을 실행하고, 아래 문장 실행이 다 끝났으면 다시 s 안에(in) 그 전에 꺼냈던 것 다음의 것을 꺼내어 다시 i에 넣고 아래 문장을 실행한다." s 안에서 한 개를 꺼낸다? 이게 무슨 말인지 아래를 보도록 하겠습니다.
End of explanation
s = "statement"
for i, x in enumerate(s):
print(i, x)
Explanation: The range() function produces the consecutive numbers from 0 up to, but not including, 4 and returns them as a sequence, so the four numbers 0, 1, 2, 3 are available to iterate over. In other words, the for loop pulls the first value, 0, out of (in) the values produced by range, assigns it to item and runs the statements below; it then pulls out the second value, 1, assigns it to item and runs the statements again.
Here item is simply a variable, so you can use any name you like.
3) enumerate()
Hmm... what is this? Is it something to eat?
End of explanation
age = 20
print(type(age))
print(age)
Explanation: Python ships with a number of built-in functions. One of them, enumerate(), returns both the index number and the value at that index for every element of an object. It comes in handy all the time, so keep it in mind.
4. Data Types
End of explanation
Image(filename='images/data_type.png')
Explanation: The value assigned to the variable age above is the number 20, so the type of that data is integer. What other data types does Python offer?
End of explanation
for char in "python":
print(char)
for idx, char in enumerate("python"):
print(idx, char)
Explanation: For numbers, what is shown above is enough. Let's start with the somewhat unfamiliar sequences.
4.1 Sequences
A sequence is an ordered collection of objects indexed by integers.
1) Strings
To us the word "python" looks like plain text, but Python treats "python" as an ordered collection of objects: think of it as the letters 'p', 'y', 't', 'h', 'o', 'n' gathered in order.
End of explanation
for idx, char in enumerate(12345):
print(idx, char)
Explanation: Because a string is a collection of objects ordered by index number, we can pull each object out and print it, and we can check the index numbers as above. What about numbers, then? Is a number also an ordered collection of objects?
End of explanation
university = "kookmin"
university.startswith("k")
university.endswith("e")
".".join(university)
universities = "kookmin yonse seoul"
print(universities.split())
universities = "kookmin-yonse-seoul"
print(universities.split("-"))
university = "kookmik"
print(" kookmin university".strip())
print(university.strip("k"))
Explanation: It is not; we get an error message saying it is not iterable.
Methods
Method|Description
--|--
s.startswith(prefix)|Check whether the string starts with prefix
s.endswith(suffix)|Check whether the string ends with suffix
s.format(*args, **kwargs)|Format s
s.join(t)|Join the strings in the sequence t, using s as the separator
s.replace(old, new[, maxreplace])|Replace a substring
s.split([sep])|Split the string using sep as the delimiter
s.strip([chars])|Remove leading and trailing whitespace, or the characters given in chars
We said that everything in Python is an object, and this is very important for understanding Python. Strings are objects too, which is why they have methods (functions) like the ones above. We will look at object-oriented languages in detail next time; for now, simply confirm that methods like these exist.
End of explanation
obj = [
1,
2,
3, # trailing comma
]
print(obj)
Explanation: Data Structure
2) List and Tuple
Among the data types are List and Tuple. The two are very similar objects; they differ only slightly in appearance and in whether items can be added, modified or deleted.
List
Items can be added, modified and deleted (mutable)
ex) obj = [1, 2, 3,]
End of explanation
cities = ['Seoul', 'Tokyo', 'New York',]
cities
cities.append('Busan')
cities
cities.extend(['London'])
cities
cities.remove('Tokyo')
cities
cities.append(['Daejeon', 'Yokohama'])
cities
cities.append(123)
cities.append(["123", "123", ["333", ],])
cities
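# A quick demo of a few more of the list methods listed in the table below
# (standard list behaviour, added here only as an illustration)
cities.insert(0, 'Paris')   # put 'Paris' at index 0
cities.count('Seoul')       # how many times 'Seoul' appears
cities.pop()                # remove and return the last element
cities.reverse()            # reverse the list in place
cities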
Explanation: Method|Description
--|--
list(s)|Convert s to a list
s.append(x)|Add a new element x at the end of s
s.extend(t)|Append the elements of the list t to the end of s
s.count(x)|Count how many times x appears in s
s.insert(i, x)|Insert x at index i
s.pop([i])|Remove the element at index i and return it; if i is omitted, remove and return the last element
s.remove(x)|Find x and remove it from s
s.reverse()|Reverse the items of s in place
End of explanation
obj = (1, 2, 3,)
print(obj)
Explanation: Tuple
Items cannot be added, modified or deleted (immutable)
ex) obj = (1, 2, 3,)
End of explanation
final_exam = {
"math": 100,
"English": 90,
}
print(final_exam)
Explanation: Why make adding, modifying and deleting impossible?
Sometimes there is data that must not be changed. In that case a Tuple is used so that any possibility of modifying the data is ruled out from the start.
4.2 Mapping
1) Dictionary
A collection of paired objects of the form {key1: value1, key2: value2, ....}. It is extremely useful and constantly used, so be sure to learn it. Later on, when you fetch data, it will often arrive as JSON, and JSON looks exactly like a Python dictionary.
End of explanation
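Since the text above points out that JSON looks just like a Python dictionary, here is a small sketch using only the standard json module (an added illustration, not part of the original exercise):
import json
as_text = json.dumps(final_exam)   # dict -> JSON string
as_dict = json.loads(as_text)      # JSON string -> dict
print(as_text)
print(as_dict["math"])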
final_exam.get("math")
singer = {
"Talyor Swift": {
"album": {
"1st_album": ['a1', 'b1', 'c1', ],
"2nd_album": ['a2', 'b2', 'c2', ],
},
"age": 123,
},
}
singer['Talyor Swift']['album']['1st_album']
singer['Talyor Swift']['age']
print(singer)
print(singer.get("2nd_album"))
Explanation: Item|Description
--|--
m.clear()|Remove all items from m
m.copy()|Make a copy
m.get(k [,v])|Return m[k] if the key exists, otherwise return v
m.items()|Return a sequence of all (key, value) pairs of m
m.keys()|Return a sequence of all keys of m
m.values()|Return a sequence of all values in m
End of explanation
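A short sketch of the items(), keys() and values() methods from the table above (standard dictionary behaviour, shown purely as an illustration):
for subject, score in final_exam.items():
    print(subject, score)
print(list(final_exam.keys()))
print(list(final_exam.values()))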
# Write your code below
year = input("년도를 입력하세요: ")
year = int(year)
if year % 12 == 0:
print("원숭이")
elif year % 12 == 1:
print("닭")
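# Alternative sketch covering every case: a lookup table indexed by year % 12.
# The animal names follow the table below (remainder 0 -> 원숭이, 1 -> 닭, ...).
animals = ["원숭이", "닭", "개", "돼지", "쥐", "소",
           "호랑이", "토끼", "용", "뱀", "말", "양"]
print(animals[year % 12])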
Explanation: 5. Practice Makes Perfect
5.1 Zodiac animal (띠)
Year (ex)|Animal|Remainder (year % 12)
--|--|--
2000|용 (dragon)|8
2001|뱀 (snake)|9
2002|말 (horse)|10
2003|양 (sheep)|11
2004|원숭이 (monkey)|0
2005|닭 (rooster)|1
2006|개 (dog)|2
2007|돼지 (pig)|3
2008|쥐 (rat)|4
2009|소 (ox)|5
2010|호랑이 (tiger)|6
2011|토끼 (rabbit)|7
Using the values in the table above, tell the user the zodiac animal for the year they enter, whatever year they type in.
End of explanation
# Post title (a social media post)
title = "On top of the world! Life is so fantastic if you just let it. \
I have never been happier. #nyc #newyork #vacation #traveling"
print(title)
# Write your code below.
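# One possible sketch: split on whitespace and keep the words that start with '#'.
hashtags = [word for word in title.split() if word.startswith("#")]
print(hashtags)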
Explanation: 5.2 Hashtag
Hashtags (words starting with #) are used all over social media these days. How would you extract them? Extract them and collect them in a list.
End of explanation
# Morse code table
dic = {
'.-':'A','-...':'B','-.-.':'C','-..':'D','.':'E','..-.':'F',
'--.':'G','....':'H','..':'I','.---':'J','-.-':'K','.-..':'L',
'--':'M','-.':'N','---':'O','.--.':'P','--.-':'Q','.-.':'R',
'...':'S','-':'T','..-':'U','...-':'V','.--':'W','-..-':'X',
'-.--':'Y','--..':'Z'
}
# The encoded message to decode
code = '.... . ... .-.. . . .--. ... . .- .-. .-.. -.--'
# Write your code below.
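# One possible sketch: the letters are separated by spaces, so split and look each code up.
decoded = "".join(dic[symbol] for symbol in code.split())
print(decoded)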
Explanation: 5.3 Morse
You have probably all heard of Morse code. The Morse table is provided above as a dictionary; use that information to decode the message.
End of explanation
# Write your code below.
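# One possible sketch: combine every suit letter with every rank.
suits = ["s", "h", "d", "c"]
ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
cards = [suit + rank for suit in suits for rank in ranks]
print(len(cards))
print(cards)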
Explanation: 5.4 A deck of 52 cards
There are 52 cards in total. There are four suits (spades, hearts, diamonds, clubs), each with the numbers 2 through 10 plus Jack, Queen, King and Ace. Every card can therefore be written as a short code (suit letter plus rank) as below.
Card|Code
--|--
Jack of spades|sJ
2 of clubs|c2
10 of diamonds|d10
Ace of hearts|hA
9 of spades|s9
Build all 52 cards as codes like the ones above and put them in a single list.
End of explanation |
6,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Séquences 1 - Découverte de la programmation objet et du language Python
Activité 1 - Manipuler les objets Python
Compétences visées par cette activité
Step1: On dira que vous avez assigné la valeur 'bol' à la variable objet1.
Maintenant, si je veux savoir qu'est ce que c'est que objet1, je fais
Step2: A ton tour de créer un objet2 étant une assiette et ensuite d'afficher ce prix
Step3: Nous voulons à présent regrouper nos objets dans une liste d'objet que nous appellerons un placard, la syntaxe pour la fabriquer est la suivante
Step4: A vous maintenant d'afficher le contenu de placard à l'aide de l'instruction print.
Step5: Pour créer un objet3 étant une fourchette et la rajouter dans mon placard, je dois faire
Step6: A présent, ré-affichez le contenu de placard en ré-éxécutant une précédente cellule.
Pour ajouter les fourchettes dans le placard, nous avons utilisée une méthode de l'objet placard. Cette méthode s'appelle "append" et permet de rajouter une valeur à une liste. Lorsque je veux que cette méthode "append" agissent sur l'objet placard je met un point "." entre placard et append.
Le point est un signe de ponctuation très important en Python, il permet d'accéder à ce qu'il y a à l'intérieur d'un objet.
A vous de créer un nouvel objet verre et de le rajouter à notre liste placard. Afficher ensuite le contenu de placard | Python Code:
objet1 = 'bol'
Explanation: Sequence 1 - Discovering object-oriented programming and the Python language
Activity 1 - Manipulating Python objects
Skills targeted by this activity:
Know how to create variables of string and list types. Use a method bound to an object through the objet.méthode() syntax.
Mathematics curriculum - seconde (10th grade):
The use of software (calculator or computer), of visualisation and representation tools, of (numerical or symbolic) computation and of simulation and programming develops the possibility to experiment, broadly opens up the dialectic between observation and proof, and profoundly changes the nature of teaching.
http://cache.media.education.gouv.fr/file/30/52/3/programme_mathematiques_seconde_65523.pdf (page 2)
When writing algorithms and small programs, students should be given good habits of rigour
and trained in systematic practices of verification and checking.
http://cache.media.education.gouv.fr/file/30/52/3/programme_mathematiques_seconde_65523.pdf (page 9)
Mathematics curriculum - première (11th grade):
Algorithmics: in seconde, students designed and implemented a few algorithms. This training continues
throughout the final cycle.
http://cache.media.education.gouv.fr/file/special_9/21/1/mathsS_155211.pdf (page 6)
ISN curriculum - terminale: discovering the digital world
SI curriculum - etc...
Modern programming languages are all so-called "object-oriented" languages. It is therefore better to say that we are not writing lines of code but manipulating objects. This principle is fundamental and strongly structures how a program is written.
The activity that follows aims to introduce a few words of vocabulary and punctuation of the Python language used to program our robot.
But all of this is a bit abstract, so let's move on to examples.
Imagine we want to define an object that is a bowl ('bol'). The syntax to use is the following:
End of explanation
print objet1
Explanation: We will say that you have assigned the value 'bol' to the variable objet1.
Now, if I want to know what objet1 is, I do:
End of explanation
# Write your code below and run it by clicking the run button in the menu bar:
objet2 = 'assiette'
print objet2
Explanation: Your turn: create an objet2 that is a plate ('assiette') and then display it:
End of explanation
placard = [objet1,objet2]
Explanation: We now want to group our objects in a list of objects that we will call a placard (cupboard); the syntax to build it is the following:
End of explanation
# Write your code below and run it by clicking the run button in the menu bar:
print placard
Explanation: Now it is your turn to display the contents of placard using the print statement.
End of explanation
objet3 = 'fourchette'
placard.append(objet3)
Explanation: To create an objet3 that is a fork ('fourchette') and add it to my placard, I have to do:
End of explanation
# Write your code below and run it by clicking the run button in the menu bar:
objet4 = 'verre'
placard.append(objet4)
print placard
# !!!!!!!!! This result is part of the assessment !!!!!!!!!
Explanation: Now display the contents of placard again by re-running one of the previous cells.
To add the fork to the placard, we used a method of the placard object. This method is called "append" and lets us add a value to a list. When I want this "append" method to act on the placard object, I put a dot "." between placard and append.
The dot is a very important punctuation sign in Python: it gives access to what is inside an object.
Your turn: create a new glass ('verre') object and add it to our placard list. Then display the contents of placard:
End of explanation |
6,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
calculate LCIA
try first sth from ecoinvent
Step1: try now for bouillon | Python Code:
act = Database("ecoinvent 3.2 cutoff").search("pineapple")
act
act = Database("ecoinvent 3.2 cutoff").search("pineapple")[1]
act
lca = LCA(
{act.key: 1},
method=('IPCC 2013', 'climate change', 'GWP 100a'),
)
lca.lci()
lca.lcia()
lca.score
Explanation: calculate LCIA
try first sth from ecoinvent
End of explanation
bouillon = Database("bouillon")
bou = bouillon.search("Paste")
bou
lca = LCA(
demand={bouillon.random(): 1},
method=('IPCC 2013', 'climate change', 'GWP 100a'),
)
lca.lci()
lca.lcia()
lca.score
Explanation: try now for bouillon
End of explanation |
6,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="float
Step1: Contents
Single point
Sampling
Post-processing
Create spectrum
Single point
Let's start with calculating the vertical excitation energy and oscillator strengths at the ground state minimum (aka Franck-Condon) geometry.
Note that the active space and number of included states here is system-specific.
Step2: This cell print a summary of the possible transitions.
Note
Step3: Sampling
Of course, molecular spectra aren't just a set of discrete lines - they're broadened by several mechanisms. We'll treat vibrations here by sampling the molecule's motion on the ground state at 300 Kelvin.
To do this, we'll sample its geometries as it moves on the ground state by
Step4: Post-processing
Next, we calculate the spectrum at each sampled geometry. Depending on your computer speed and if PySCF is installed locally, this may take up to several minutes to run.
Step5: This cell plots the results - wavelength vs. oscillator strength at each geometry for each transition
Step6: Create spectrum
We're finally ready to calculate a spectrum - we'll create a histogram of all calculated transition wavelengths over all states, weighted by the oscillator strengths. | Python Code:
%matplotlib inline
import numpy as np
from matplotlib.pylab import *
try: import seaborn #optional, makes plots look nicer
except ImportError: pass
import moldesign as mdt
from moldesign import units as u
Explanation: <span style="float:right"><a href="http://moldesign.bionano.autodesk.com/" target="_blank" title="About">About</a> <a href="https://github.com/autodesk/molecular-design-toolkit/issues" target="_blank" title="Issues">Issues</a> <a href="http://bionano.autodesk.com/MolecularDesignToolkit/explore.html" target="_blank" title="Tutorials">Tutorials</a> <a href="http://autodesk.github.io/molecular-design-toolkit/" target="_blank" title="Documentation">Documentation</a></span>
</span>
<br>
<center><h1>Example 2: Using MD sampling to calculate UV-Vis spectra</h1> </center>
This notebook uses basic quantum chemical calculations to calculate the absorption spectra of a small molecule.
Author: Aaron Virshup, Autodesk Research<br>
Created on: September 23, 2016
Tags: excited states, CASSCF, absorption, sampling
End of explanation
qmmol = mdt.from_name('benzene')
qmmol.set_energy_model(mdt.models.CASSCF, active_electrons=6,
active_orbitals=6, state_average=6, basis='sto-3g')
properties = qmmol.calculate()
Explanation: Contents
Single point
Sampling
Post-processing
Create spectrum
Single point
Let's start with calculating the vertical excitation energy and oscillator strengths at the ground state minimum (aka Franck-Condon) geometry.
Note that the active space and number of included states here is system-specific.
End of explanation
for fstate in range(1, len(qmmol.properties.state_energies)):
excitation_energy = properties.state_energies[fstate] - properties.state_energies[0]
print('--- Transition from S0 to S%d ---' % fstate )
print('Excitation wavelength: %s' % excitation_energy.to('nm', 'spectroscopy'))
print('Oscillator strength: %s' % qmmol.properties.oscillator_strengths[0,fstate])
Explanation: This cell prints a summary of the possible transitions.
Note: you can convert excitation energies directly to nanometers using Pint by calling energy.to('nm', 'spectroscopy').
End of explanation
mdmol = mdt.Molecule(qmmol)
mdmol.set_energy_model(mdt.models.GaffSmallMolecule)
mdmol.minimize()
mdmol.set_integrator(mdt.integrators.OpenMMLangevin, frame_interval=250*u.fs,
timestep=0.5*u.fs, constrain_hbonds=False, remove_rotation=True,
remove_translation=True, constrain_water=False)
mdtraj = mdmol.run(5.0 * u.ps)
Explanation: Sampling
Of course, molecular spectra aren't just a set of discrete lines - they're broadened by several mechanisms. We'll treat vibrations here by sampling the molecule's motion on the ground state at 300 Kelvin.
To do this, we'll sample its geometries as it moves on the ground state by:
1. Create a copy of the molecule
2. Assign a forcefield (GAFF2/AM1-BCC)
3. Run dynamics for 5 ps, taking a snapshot every 250 fs, for a total of 20 separate geometries.
End of explanation
post_traj = mdt.Trajectory(qmmol)
for frame in mdtraj:
qmmol.positions = frame.positions
qmmol.calculate()
post_traj.new_frame()
Explanation: Post-processing
Next, we calculate the spectrum at each sampled geometry. Depending on your computer speed and if PySCF is installed locally, this may take up to several minutes to run.
End of explanation
wavelengths_to_state = []
oscillators_to_state = []
for i in range(1, len(qmmol.properties.state_energies)):
wavelengths_to_state.append(
(post_traj.state_energies[:,i] - post_traj.potential_energy).to('nm', 'spectroscopy'))
oscillators_to_state.append([o[0,i] for o in post_traj.oscillator_strengths])
for istate, (w,o) in enumerate(zip(wavelengths_to_state, oscillators_to_state)):
plot(w,o, label='S0 -> S%d'%(istate+1),
marker='o', linestyle='none')
xlabel('wavelength / nm'); ylabel('oscillator strength'); legend()
Explanation: This cell plots the results - wavelength vs. oscillator strength at each geometry for each transition:
End of explanation
from itertools import chain
all_wavelengths = u.array(list(chain(*wavelengths_to_state)))
all_oscs = u.array(list(chain(*oscillators_to_state)))
hist(all_wavelengths, weights=all_oscs, bins=50)
xlabel('wavelength / nm')
Explanation: Create spectrum
We're finally ready to calculate a spectrum - we'll create a histogram of all calculated transition wavelengths over all states, weighted by the oscillator strengths.
End of explanation |
6,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using custom containers with Vertex AI Training
Learning Objectives
Step1: Configure environment settings
Set location paths, connections strings, and other environment settings. Make sure to update REGION, and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for Vertex AI Training and Prediction
ARTIFACT_STORE - A GCS bucket in the created in the same region.
Step2: We now create the ARTIFACT_STORE bucket if it's not there. Note that this bucket should be created in the region specified in the variable REGION (if you have already a bucket with this name in a different region than REGION, you may want to change the ARTIFACT_STORE name so that you can recreate a bucket in REGION with the command in the cell below).
Step3: Importing the dataset into BigQuery
Step4: Explore the Covertype dataset
Step5: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
Create a training split
Step6: Create a validation split
Step7: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
Step8: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
Step9: Run the pipeline locally.
Step10: Calculate the trained model's accuracy.
Step11: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
Step12: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to Vertex AI hyperparameter tuning service.
Step13: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
Step14: Build the docker image.
You use Cloud Build to build the image and push it your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
Step15: Submit an Vertex AI hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier
Step16: Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs".
Retrieve HP-tuning results.
After the job completes you can review the results using GCP Console or programmatically using the following functions (note that this code supposes that the metrics that the hyperparameter tuning engine optimizes is maximized)
Step17: You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below.
Step20: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
Step21: Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS.
Note
Step22: Deploy the model to Vertex AI Prediction
Step23: Uploading the trained model
Step24: Deploying the uploaded model
Step25: Serve predictions
Prepare the input file with JSON formated instances. | Python Code:
import os
import time
import pandas as pd
from google.cloud import aiplatform, bigquery
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
Explanation: Using custom containers with Vertex AI Training
Learning Objectives:
1. Learn how to create a train and a validation split with BigQuery
1. Learn how to wrap a machine learning model into a Docker container and train in on Vertex AI
1. Learn how to use the hyperparameter tuning engine on Vertex AI to find the best hyperparameters
1. Learn how to deploy a trained machine learning model on Vertex AI as a REST API and query it
In this lab, you develop, package as a docker image, and run on Vertex AI Training a training application that trains a multi-class classification model that predicts the type of forest cover from cartographic data. The dataset used in the lab is based on Covertype Data Set from UCI Machine Learning Repository.
The training code uses scikit-learn for data pre-processing and modeling. The code has been instrumented using the hypertune package so it can be used with Vertex AI hyperparameter tuning.
End of explanation
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}-kfp-artifact-store"
DATA_ROOT = f"{ARTIFACT_STORE}/data"
JOB_DIR_ROOT = f"{ARTIFACT_STORE}/jobs"
TRAINING_FILE_PATH = f"{DATA_ROOT}/training/dataset.csv"
VALIDATION_FILE_PATH = f"{DATA_ROOT}/validation/dataset.csv"
API_ENDPOINT = f"{REGION}-aiplatform.googleapis.com"
os.environ["JOB_DIR_ROOT"] = JOB_DIR_ROOT
os.environ["TRAINING_FILE_PATH"] = TRAINING_FILE_PATH
os.environ["VALIDATION_FILE_PATH"] = VALIDATION_FILE_PATH
os.environ["PROJECT_ID"] = PROJECT_ID
os.environ["REGION"] = REGION
Explanation: Configure environment settings
Set location paths, connection strings, and other environment settings. Make sure to update REGION and ARTIFACT_STORE with the settings reflecting your lab environment.
REGION - the compute region for Vertex AI Training and Prediction
ARTIFACT_STORE - a GCS bucket created in the same region
End of explanation
!gsutil ls | grep ^{ARTIFACT_STORE}/$ || gsutil mb -l {REGION} {ARTIFACT_STORE}
Explanation: We now create the ARTIFACT_STORE bucket if it's not there. Note that this bucket should be created in the region specified in the variable REGION (if you have already a bucket with this name in a different region than REGION, you may want to change the ARTIFACT_STORE name so that you can recreate a bucket in REGION with the command in the cell below).
End of explanation
%%bash
DATASET_LOCATION=US
DATASET_ID=covertype_dataset
TABLE_ID=covertype
DATA_SOURCE=gs://workshop-datasets/covertype/small/dataset.csv
SCHEMA=Elevation:INTEGER,\
Aspect:INTEGER,\
Slope:INTEGER,\
Horizontal_Distance_To_Hydrology:INTEGER,\
Vertical_Distance_To_Hydrology:INTEGER,\
Horizontal_Distance_To_Roadways:INTEGER,\
Hillshade_9am:INTEGER,\
Hillshade_Noon:INTEGER,\
Hillshade_3pm:INTEGER,\
Horizontal_Distance_To_Fire_Points:INTEGER,\
Wilderness_Area:STRING,\
Soil_Type:STRING,\
Cover_Type:INTEGER
bq --location=$DATASET_LOCATION --project_id=$PROJECT_ID mk --dataset $DATASET_ID
bq --project_id=$PROJECT_ID --dataset_id=$DATASET_ID load \
--source_format=CSV \
--skip_leading_rows=1 \
--replace \
$TABLE_ID \
$DATA_SOURCE \
$SCHEMA
Explanation: Importing the dataset into BigQuery
End of explanation
%%bigquery
SELECT *
FROM `covertype_dataset.covertype`
Explanation: Explore the Covertype dataset
End of explanation
!bq query \
-n 0 \
--destination_table covertype_dataset.training \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (1, 2, 3, 4)'
!bq extract \
--destination_format CSV \
covertype_dataset.training \
$TRAINING_FILE_PATH
Explanation: Create training and validation splits
Use BigQuery to sample training and validation splits and save them to GCS storage
Create a training split
End of explanation
!bq query \
-n 0 \
--destination_table covertype_dataset.validation \
--replace \
--use_legacy_sql=false \
'SELECT * \
FROM `covertype_dataset.covertype` AS cover \
WHERE \
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), 10) IN (8)'
!bq extract \
--destination_format CSV \
covertype_dataset.validation \
$VALIDATION_FILE_PATH
df_train = pd.read_csv(TRAINING_FILE_PATH)
df_validation = pd.read_csv(VALIDATION_FILE_PATH)
print(df_train.shape)
print(df_validation.shape)
Explanation: Create a validation split
End of explanation
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
("num", StandardScaler(), numeric_feature_indexes),
("cat", OneHotEncoder(), categorical_feature_indexes),
]
)
pipeline = Pipeline(
[
("preprocessor", preprocessor),
("classifier", SGDClassifier(loss="log", tol=1e-3)),
]
)
Explanation: Develop a training application
Configure the sklearn training pipeline.
The training pipeline preprocesses data by standardizing all numeric features using sklearn.preprocessing.StandardScaler and encoding all categorical features using sklearn.preprocessing.OneHotEncoder. It uses stochastic gradient descent linear classifier (SGDClassifier) for modeling.
End of explanation
num_features_type_map = {
feature: "float64" for feature in df_train.columns[numeric_feature_indexes]
}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
Explanation: Convert all numeric features to float64
To avoid warning messages from StandardScaler all numeric features are converted to float64.
End of explanation
X_train = df_train.drop("Cover_Type", axis=1)
y_train = df_train["Cover_Type"]
X_validation = df_validation.drop("Cover_Type", axis=1)
y_validation = df_validation["Cover_Type"]
pipeline.set_params(classifier__alpha=0.001, classifier__max_iter=200)
pipeline.fit(X_train, y_train)
Explanation: Run the pipeline locally.
End of explanation
accuracy = pipeline.score(X_validation, y_validation)
print(accuracy)
Explanation: Calculate the trained model's accuracy.
End of explanation
TRAINING_APP_FOLDER = "training_app"
os.makedirs(TRAINING_APP_FOLDER, exist_ok=True)
Explanation: Prepare the hyperparameter tuning application.
Since the training run on this dataset is computationally expensive you can benefit from running a distributed hyperparameter tuning job on Vertex AI Training.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/train.py
import os
import subprocess
import sys
import fire
import hypertune
import numpy as np
import pandas as pd
import pickle
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
def train_evaluate(job_dir, training_dataset_path, validation_dataset_path, alpha, max_iter, hptune):
df_train = pd.read_csv(training_dataset_path)
df_validation = pd.read_csv(validation_dataset_path)
if not hptune:
df_train = pd.concat([df_train, df_validation])
numeric_feature_indexes = slice(0, 10)
categorical_feature_indexes = slice(10, 12)
preprocessor = ColumnTransformer(
transformers=[
('num', StandardScaler(), numeric_feature_indexes),
('cat', OneHotEncoder(), categorical_feature_indexes)
])
pipeline = Pipeline([
('preprocessor', preprocessor),
('classifier', SGDClassifier(loss='log',tol=1e-3))
])
num_features_type_map = {feature: 'float64' for feature in df_train.columns[numeric_feature_indexes]}
df_train = df_train.astype(num_features_type_map)
df_validation = df_validation.astype(num_features_type_map)
print('Starting training: alpha={}, max_iter={}'.format(alpha, max_iter))
X_train = df_train.drop('Cover_Type', axis=1)
y_train = df_train['Cover_Type']
pipeline.set_params(classifier__alpha=alpha, classifier__max_iter=max_iter)
pipeline.fit(X_train, y_train)
if hptune:
X_validation = df_validation.drop('Cover_Type', axis=1)
y_validation = df_validation['Cover_Type']
accuracy = pipeline.score(X_validation, y_validation)
print('Model accuracy: {}'.format(accuracy))
# Log it with hypertune
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='accuracy',
metric_value=accuracy
)
# Save the model
if not hptune:
model_filename = 'model.pkl'
with open(model_filename, 'wb') as model_file:
pickle.dump(pipeline, model_file)
gcs_model_path = "{}/{}".format(job_dir, model_filename)
subprocess.check_call(['gsutil', 'cp', model_filename, gcs_model_path], stderr=sys.stdout)
print("Saved model in: {}".format(gcs_model_path))
if __name__ == "__main__":
fire.Fire(train_evaluate)
Explanation: Write the tuning script.
Notice the use of the hypertune package to report the accuracy optimization metric to Vertex AI hyperparameter tuning service.
End of explanation
%%writefile {TRAINING_APP_FOLDER}/Dockerfile
FROM gcr.io/deeplearning-platform-release/base-cpu
RUN pip install -U fire cloudml-hypertune scikit-learn==0.20.4 pandas==0.24.2
WORKDIR /app
COPY train.py .
ENTRYPOINT ["python", "train.py"]
Explanation: Package the script into a docker image.
Notice that we are installing specific versions of scikit-learn and pandas in the training image. This is done to make sure that the training runtime in the training container is aligned with the serving runtime in the serving container.
Make sure to update the URI for the base image so that it points to your project's Container Registry.
End of explanation
IMAGE_NAME = "trainer_image"
IMAGE_TAG = "latest"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}:{IMAGE_TAG}"
os.environ["IMAGE_URI"] = IMAGE_URI
!gcloud builds submit --tag $IMAGE_URI $TRAINING_APP_FOLDER
Explanation: Build the docker image.
You use Cloud Build to build the image and push it your project's Container Registry. As you use the remote cloud service to build the image, you don't need a local installation of Docker.
End of explanation
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"forestcover_tuning_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
os.environ["JOB_NAME"] = JOB_NAME
os.environ["JOB_DIR"] = JOB_DIR
%%bash
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
CONFIG_YAML=config.yaml
cat <<EOF > $CONFIG_YAML
studySpec:
metrics:
- metricId: accuracy
goal: MAXIMIZE
parameters:
- parameterId: max_iter
discreteValueSpec:
values:
- 10
- 20
- parameterId: alpha
doubleValueSpec:
minValue: 1.0e-4
maxValue: 1.0e-1
scaleType: UNIT_LINEAR_SCALE
algorithm: ALGORITHM_UNSPECIFIED # results in Bayesian optimization
trialJobSpec:
workerPoolSpecs:
- machineSpec:
machineType: $MACHINE_TYPE
replicaCount: $REPLICA_COUNT
containerSpec:
imageUri: $IMAGE_URI
args:
- --job_dir=$JOB_DIR
- --training_dataset_path=$TRAINING_FILE_PATH
- --validation_dataset_path=$VALIDATION_FILE_PATH
- --hptune
EOF
gcloud ai hp-tuning-jobs create \
--region=$REGION \
--display-name=$JOB_NAME \
--config=$CONFIG_YAML \
--max-trial-count=5 \
--parallel-trial-count=5
echo "JOB_NAME: $JOB_NAME"
Explanation: Submit an Vertex AI hyperparameter tuning job
Create the hyperparameter configuration file.
Recall that the training code uses SGDClassifier. The training application has been designed to accept two hyperparameters that control SGDClassifier:
- Max iterations
- Alpha
The file below configures Vertex AI hypertuning to run up to 5 trials in parallel and to choose from two discrete values of max_iter and the linear range between 1.0e-4 and 1.0e-1 for alpha.
End of explanation
def get_trials(job_name):
jobs = aiplatform.HyperparameterTuningJob.list()
match = [job for job in jobs if job.display_name == JOB_NAME]
tuning_job = match[0] if match else None
return tuning_job.trials if tuning_job else None
def get_best_trial(trials):
metrics = [trial.final_measurement.metrics[0].value for trial in trials]
best_trial = trials[metrics.index(max(metrics))]
return best_trial
def retrieve_best_trial_from_job_name(jobname):
trials = get_trials(jobname)
best_trial = get_best_trial(trials)
return best_trial
Explanation: Go to the Vertex AI Training dashboard and view the progression of the HP tuning job under "Hyperparameter Tuning Jobs".
Retrieve HP-tuning results.
After the job completes you can review the results using GCP Console or programmatically using the following functions (note that this code assumes that the metric the hyperparameter tuning engine optimizes is maximized):
End of explanation
best_trial = retrieve_best_trial_from_job_name(JOB_NAME)
Explanation: You'll need to wait for the hyperparameter job to complete before being able to retrieve the best job by running the cell below.
End of explanation
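As a quick check, you can print what the winning trial looked like. This is a sketch that reuses the trial attributes already referenced in this notebook (final_measurement.metrics and parameters[i].value); the parameter_id attribute is assumed from the Vertex AI Trial object and is not used in the original code.
print("Best accuracy:", best_trial.final_measurement.metrics[0].value)
for parameter in best_trial.parameters:
    print(parameter.parameter_id, "=", parameter.value)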
alpha = best_trial.parameters[0].value
max_iter = best_trial.parameters[1].value
TIMESTAMP = time.strftime("%Y%m%d_%H%M%S")
JOB_NAME = f"JOB_VERTEX_{TIMESTAMP}"
JOB_DIR = f"{JOB_DIR_ROOT}/{JOB_NAME}"
MACHINE_TYPE="n1-standard-4"
REPLICA_COUNT=1
WORKER_POOL_SPEC = f"""\
machine-type={MACHINE_TYPE},\
replica-count={REPLICA_COUNT},\
container-image-uri={IMAGE_URI}\
"""
ARGS = f"""\
--job_dir={JOB_DIR},\
--training_dataset_path={TRAINING_FILE_PATH},\
--validation_dataset_path={VALIDATION_FILE_PATH},\
--alpha={alpha},\
--max_iter={max_iter},\
--nohptune\
"""
!gcloud ai custom-jobs create \
--region={REGION} \
--display-name={JOB_NAME} \
--worker-pool-spec={WORKER_POOL_SPEC} \
--args={ARGS}
print("The model will be exported at:", JOB_DIR)
Explanation: Retrain the model with the best hyperparameters
You can now retrain the model using the best hyperparameters and using combined training and validation splits as a training dataset.
Configure and run the training job
End of explanation
!gsutil ls $JOB_DIR
Explanation: Examine the training output
The training script saved the trained model as the 'model.pkl' in the JOB_DIR folder on GCS.
Note: We need to wait for job triggered by the cell above to complete before running the cells below.
End of explanation
MODEL_NAME = "forest_cover_classifier_2"
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.0-20:latest"
)
SERVING_MACHINE_TYPE = "n1-standard-2"
Explanation: Deploy the model to Vertex AI Prediction
End of explanation
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_NAME,
artifact_uri=JOB_DIR,
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
Explanation: Uploading the trained model
End of explanation
endpoint = uploaded_model.deploy(
machine_type=SERVING_MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
Explanation: Deploying the uploaded model
End of explanation
instance = [
2841.0,
45.0,
0.0,
644.0,
282.0,
1376.0,
218.0,
237.0,
156.0,
1003.0,
"Commanche",
"C4758",
]
endpoint.predict([instance])
Explanation: Serve predictions
Prepare the input file with JSON formated instances.
End of explanation |
6,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy
Uses numpy as its core
Numerical methods for
Step1: <a id=physical_constants></a>
Physical constants
Step2: <a id=fitting></a>
Fitting
<a id=curve_fit></a>
General least-squares fitting using curve_fit
Non-linear least-squares with Levenberg-Marquardt numerical minimization.
Step3: <a id=uncertainties_guesses></a>
Providing uncertainties and initial guesses
Step4: <a id=plot_corr_matrix></a>
Plotting the correlation matrix
Step5: <a id=minimize></a>
Unbinned likelihood fits using minimize
Simple example
Step6: Poisson pdf
Step7: minimize has lots of options for different minimization algorithms
Also able to respect bounds and constraints (with certain algorithms)
It is worth to write down you problems and simplify the (log)Likelihood as much as possible
<a id=minimize_complex></a>
A more complicated example
Fitting a gaussian with an exponential background
Let's say we have two distributions, an exponential and a gaussian
Step8: <a id=odr></a>
Fitting data with x and y errors using scipy.odr
Most fitting routines only handle uncertainties on dependent variable (y-coordinate), and in most cases this is fine.
However, sometimes you may also want to consider errors on the independent variable (x-coordinate). This generally occurs
when you have some (non-negligible) uncertainty associated with your measurement apparatus.
Consider the following random data set with both x and y uncertainties
Step9: Cases like these can be handled using orthogonal distance regression (ODR). When the independent variable is error-free (or the errors are negligibly small), the residuals are computed as the vertical distance between the data points and the fit. This is the ordinary least-squares method.
In the specific case that the x and y uncertainties are equal, which would occur if the same measurement device is used to measure both the independent and dependent variables, the residual to be minimized will actually be perpendicular (orthogonal) to the fit curve. Note that Python's ODR fit routines do not require that the x and y uncertainties are equal.
Step10: Finally, we do a comparison to a fit with ordinary least-squares (curve_fit).
Step11: If the x-uncertainties are relatively small, in general curve_fit will produce a better result. However, if the uncertainties on the independent variable are large and/or there is a rapidly changing region of your curve where the x-errors are important, ODR fitting may produce a better result.
<a id=fft></a>
Fast Fourier Transforms (FFTs)
Step12: <a id=filtering></a>
Signal filtering
Consider this noisy data set with outliers. The data is a so-called S-curve, and we want to identify the midpoint of the falling edge.
Step13: You can see clearly in the data that the mid-point of the S-curve is at about x=3 (which is the real value), but the outliers destroy the fit. We can remove them easily with a median filter. A median filter is particularly suited to edge detection cases, since it tends to preserve edges well.
Step15: Exercise
The following is an example implementation of a low-pass Butterworth filter
Step16: You are the unfortunate recipient of the following noisy data, which contains noise at two different (unknown) frequencies
Step17: Somewhere in this mess is a Gaussian
Step18: Use a FFT to identify the two offending noise frequencies. Then convert the lowpass_filter above into a bandstop filter (hint
Step19: <a id=integration></a>
Integration
Scipy integration routines are discussed in the Scipy documentation. We will look at the two most common routines here.
<a id=function_integration></a>
Function integration
quad is used to evaluate definite 1D numerical integrals. For example, assume we want to integrate a quadratic polynomial $f(x) = 3x^2 + 6x - 9$ over an interval $x \in [0, 5]$. Analytically, the answer is
Step20: The first parameter quad returns is the answer; the second is an estimate of the absolute error in the result.
For 2D, 3D, or n-dimensional integrals , use dblquad, tplquad, or nquad, respectively.
For some more interesting functions, Scipy's other function integration routines might be helpful
Step21: quad struggles with $\mathrm{sinc}$, but it can be easily handled with Gaussian quadrature
Step22: This result agrees with Mathematica to 13 decimal places (even though only 11 are shown). Note that the problem is the singularity at $x=0$; if we change the boundaries to, say, [-10.1, 10], then it works fine. Also, writing our sinc function more cleverly would eliminate the problem.
<a id=sampleintegration></a>
Sample integration
If you have a collection of points that you want to integrate, you could use an interpolation function and pass it to quad. A better alternative is to use the purpose-built functions trapz, romb, and simps.
We will consider the $\mathrm{sinc}$ function again as an example. The most naive (and surprisingly robust) integration method is using the trapazoid rule, which is implemented in trapz
Step23: <a id=interpolation></a>
Interpolation
<a id=linear_interpolation></a>
Linear interpolation
Imagine you would like to interpolate between two points $(x_0, y_0)$ and $(x_1, y_1)$. You could do this by hand
Step24: Right now, if you try to use an x-coordinate outside of the interval $[x_0, x_1]$, a ValueError will be raised
Step25: This is because we haven't told interp1d how we want to handle the boundaries. This is done using the fill_value keyword argument. There are a few options
Step26: <a id=spline_interpolation></a>
Cubic spline interpolation
Cubic splines are what are most commonly used when you want to interpolate between points smoothly.
Cubic spline interpolation is so common, it has its own method, CubicSpline, which produces generally better results.
Step27: <a id=special_functions></a>
Special Functions
A complete list of scipy special functions can be found here.
<a id=bessel></a>
Bessel functions
Step28: <a id=erf></a>
Error function and Gaussian CDF
CDF = cumulative distribution function
$$\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z \exp\left( -t^2 \right) dt $$
$$\mathrm{ndtr}(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^z \exp\left( \frac{-t^2}{2} \right) dt $$
Step29: <a id=ortho_polys></a>
Orthogonal Polynomials
Step30: Exercise
Orthogonal polynomials can be used to construct a series expansion of an arbitrary function, just like $\sin$ and $\cos$ are used to construct a Fourier series. For example, we can express a function $f(x)$ as a series of Legendre polynomials $P_n(x)$ | Python Code:
import scipy as sp
import numpy as np
# we will need to plot stuff later
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 8)
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2
Explanation: SciPy
Uses numpy as its core
Numerical methods for:
integration
solving differential equations
optimizing, minimizing
root finding
fast fourier transforms
Contains the CODATA values for many constants of nature
Mostly built as wrappers around time-proven Fortran libraries (fftpack, lapack, fitpack)
Table of Contents
Physical constants
Fitting
General least-squares fitting using curve_fit
Providing uncertainties and initial guesses
Plotting the correlation matrix
Unbinned likelihood fits using minimize
A more complicated example
Fitting data with x and y errors using scipy.odr
Fast Fourier Transforms (FFTs)
Signal filtering
Integration
Function integration
Sample integration
Interpolation
Linear interpolation
Cubic spline interpolation
Special Functions
Bessel functions
Error function and Gaussian CDF
Orthogonal Polynomials
Notebook Setup (run me first!)
End of explanation
import scipy.constants as const
const.epsilon_0
# convert temperatures:
const.convert_temperature(100, old_scale='C', new_scale='K')
# more constants (including units and errors)!
for k, v in const.physical_constants.items():
print(k, ':', v)
val, unit, uncertainty = const.physical_constants['muon mass energy equivalent in MeV']
val, unit, uncertainty
Explanation: <a id=physical_constants></a>
Physical constants
End of explanation
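A couple of related lookup helpers in scipy.constants that pull a single entry out of the physical_constants table shown above (a small added illustration; the keys are the standard CODATA names):
const.value('speed of light in vacuum')      # the numerical value
const.unit('speed of light in vacuum')       # its unit string
const.precision('electron mass')             # relative uncertainty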
a = -1
b = 5
x = np.linspace(0, 5, 100)
y = np.exp(a * x) + b + np.random.normal(0, 0.1, 100)
plt.plot(x, y, '.', label='data')
from scipy.optimize import curve_fit
def f(x, a, b):
return np.exp(a * x) + b
params, covariance_matrix = curve_fit(f, x, y)
uncertainties = np.sqrt(np.diag(covariance_matrix))
print('a = {:5.2f} ± {:.2f}'.format(params[0], uncertainties[0]))
print('b = {:5.2f} ± {:.2f}'.format(params[1], uncertainties[1]))
x_plot = np.linspace(-0.1, 5.1, 1000)
plt.plot(x, y, '.', label='data')
plt.plot(x_plot, f(x_plot, *params), label='fit result')
plt.legend();
Explanation: <a id=fitting></a>
Fitting
<a id=curve_fit></a>
General least-squares fitting using curve_fit
Non-linear least-squares with Levenberg-Marquardt numerical minimization.
End of explanation
x = np.linspace(0, 1, 100)
y = np.sin(5 * np.pi * x + np.pi / 2)
yerr = np.full_like(y, 0.2)
noise = np.random.normal(0, yerr, 100)
y += noise
def f(x, a, b):
return np.sin(a * x + b)
#params, covariance_matrix = curve_fit(f, x, y)
# params, covariance_matrix = curve_fit(
# f, x, y,
# p0=[15, 2],
#)
params, covariance_matrix = curve_fit(
f, x, y,
p0=[15, 1.5],
sigma=yerr,
absolute_sigma=True,
)
# plot the stuff
x_plot = np.linspace(-0.1, 1.1, 1000)
plt.plot(x, y, '.', label='data')
plt.plot(x_plot, f(x_plot, *params), label='fit result')
plt.legend();
Explanation: <a id=uncertainties_guesses></a>
Providing uncertainties and initial guesses
End of explanation
def cov2cor(cov):
'''Convert the covariance matrix to the correlation matrix'''
D = np.diag(1 / np.sqrt(np.diag(cov)))
return D @ cov @ D
covariance_matrix
correlation_matrix = cov2cor(covariance_matrix)
plt.matshow(correlation_matrix, vmin=-1, vmax=1, cmap='RdBu_r')
plt.colorbar(shrink=0.8);
correlation_matrix
Explanation: <a id=plot_corr_matrix></a>
Plotting the correlation matrix
End of explanation
lambda_ = 15
k = np.random.poisson(lambda_, 100)
# make sure to use bins of integer width, centered around the integer
bin_edges = np.arange(0, 31) - 0.5
plt.hist(k, bins=bin_edges);
Explanation: <a id=minimize></a>
Unbinned likelihood fits using minimize
Simple example: an unbinned negative log-likelihood fit for a poissonian distribution
End of explanation
from scipy.optimize import minimize
def negative_log_likelihood(lambda_, k):
return np.sum(lambda_ - k * np.log(lambda_))
result = minimize(
negative_log_likelihood,
x0=(10, ), # initial guess
args=(k, ), # additional arguments for the function to minimize
)
result
print('True λ = {}'.format(lambda_))
print('Fit: λ = {:.2f} ± {:.2f}'.format(result.x[0], np.sqrt(result.hess_inv[0, 0])))
Explanation: Poisson pdf:
$$
f(k, \lambda) = \frac{\lambda^k}{k!} \mathrm{e}^{-\lambda}
$$
So the likelihood is:
$$
\mathcal{L} = \prod_{i=0}^{N} \frac{\lambda^{k_i}}{k_i!} \mathrm{e}^{-\lambda}
$$
It's often easier to minimize $-\log(\mathcal{L})$, let's see:
$$
-\log(\mathcal{L}) = - \sum_{i=0}^{N}\bigl( k_i \log(\lambda) - \log{k_i!} - \lambda \bigr)
$$
We are interested in the minimum relative to $\lambda$, so we drop the $\log{k_i!}$ term, which is constant with respect to $\lambda$:
$$
-\log(\mathcal{L}) = \sum_{i=0}^{N}\bigl( \lambda - k_i \log(\lambda) \bigr)
$$
This looks indeed easier to minimize than the likelihood.
End of explanation
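As a quick added sanity check (not part of the original text): setting the derivative of the simplified negative log-likelihood to zero gives the analytic maximum-likelihood estimate,
$$
\frac{\mathrm{d}}{\mathrm{d}\lambda} \sum_{i} \bigl( \lambda - k_i \log\lambda \bigr) = \sum_{i} \Bigl( 1 - \frac{k_i}{\lambda} \Bigr) = 0 \quad\Rightarrow\quad \hat{\lambda} = \bar{k},
$$
so the numerical result should simply reproduce the sample mean:
print(k.mean(), result.x[0])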
from scipy.stats import norm, expon
x = np.append(
norm.rvs(loc=15, scale=2, size=500),
expon.rvs(scale=10, size=4500),
)
def pdf(x, mu, sigma, tau, p):
return p*norm.pdf(x, mu, sigma) + (1 - p)*expon.pdf(x, scale=tau)
def negative_log_likelihood(params, x):
mu, sigma, tau, p = params
neg_l = -np.sum(np.log(pdf(x, mu, sigma, tau, p)))
return neg_l
result = minimize(
negative_log_likelihood,
x0=(12, 1.5, 8, 0.2), # initial guess
args=(x, ), # additional arguments for the function to minimize
bounds=[
(None, None), # no bounds for mu
(1e-32, None), # sigma > 0
(1e-32, None), # tau > 0
(0, 1), # 0 <= p <= 1
],
method='L-BFGS-B', # method that supports bounds
)
x_plot = np.linspace(0, 100, 1000)
plt.hist(x, bins=100, density=True)  # 'normed' was removed in newer matplotlib; 'density' is the replacement
plt.plot(x_plot, pdf(x_plot, *result.x))
print(result.hess_inv)
# get the covariance matrix as normal numpy array
covariance_matrix = result.hess_inv.todense()
correlation_matrix = cov2cor(covariance_matrix)
plt.matshow(correlation_matrix, vmin=-1, vmax=1, cmap='RdBu_r')
plt.colorbar(shrink=0.8);
plt.xticks(np.arange(4), ['μ', 'σ', 'τ', 'p'])
plt.yticks(np.arange(4), ['μ', 'σ', 'τ', 'p'])
print(correlation_matrix)
Explanation: minimize has lots of options for different minimization algorithms
Also able to respect bounds and constraints (with certain algorithms)
It is worth writing down your problem and simplifying the (log-)likelihood as much as possible
<a id=minimize_complex></a>
A more complicated example
Fitting a gaussian with an exponential background
Let's say we have two distributions, an exponential and a gaussian:
$$
f(x, \mu, \sigma, \tau, p) = p \cdot \frac{1}{\sqrt{2 \pi}} \mathrm{e}^{-0.5 \frac{(x - \mu)^2}{\sigma^2}} + (1 - p) \cdot \frac{1}{\tau} \mathrm{e}^{- x / \tau}
$$
Likelihood:
$$
\mathcal{L} = \prod_{i = 0}^N \bigl( p \cdot \frac{1}{\sqrt{2 \pi}} \mathrm{e}^{-0.5 \frac{(x_i - \mu)^2}{\sigma^2}} + (1 - p) \cdot \frac{1}{\tau} \mathrm{e}^{- x_i / \tau} \bigr)
$$
Negative log-likelihood:
$$
-\log(\mathcal{L}) = -\sum_{i = 0}^N \log\bigl( p \cdot \frac{1}{\sqrt{2 \pi}} \mathrm{e}^{-0.5 \frac{(x_i - \mu)^2}{\sigma^2}} + (1 - p) \cdot \frac{1}{\tau} \mathrm{e}^{- x_i / \tau} \bigr)
$$
But we can make use of the built in scipy distributions:
End of explanation
import numpy as np
# generate some data
real_values = np.array([1.5, -3])
x = np.linspace(0, 1, 10)
y = real_values[0]*x + real_values[1]
xerr = np.full_like(x, 0.1)
yerr = np.full_like(y, 0.05)
# add noise to the data
x += np.random.normal(0, xerr, 10)
y += np.random.normal(0, yerr, 10)
# plot the data
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o');
Explanation: <a id=odr></a>
Fitting data with x and y errors using scipy.odr
Most fitting routines only handle uncertainties on dependent variable (y-coordinate), and in most cases this is fine.
However, sometimes you may also want to consider errors on the independent variable (x-coordinate). This generally occurs
when you have some (non-negligible) uncertainty associated with your measurement apparatus.
Consider the following random data set with both x and y uncertainties:
End of explanation
import scipy.odr as odr
# function we want to fit (in this case, a line)
def f(B, x):
return B[0]*x + B[1]
# do the fit!
guess = [5, 0]
linear = odr.Model(f)
data = odr.RealData(x, y, sx=xerr, sy=yerr)
odr_fit = odr.ODR(data, linear, beta0=guess)
odr_output = odr_fit.run()
odr_output.pprint() # pprint = 'pretty print' function
# plot data and ODR fit
z = np.linspace(-0.1, 1.1, 100)
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o')
plt.plot(z, f(odr_output.beta, z), 'k--');
Explanation: Cases like these can be handled using orthogonal distance regression (ODR). When the independent variable is error-free (or the errors are negligibly small), the residuals are computed as the vertical distance between the data points and the fit. This is the ordinary least-squares method.
In the specific case that the x and y uncertainties are equal, which would occur if the same measurement device is used to measure both the independent and dependent variables, the residual to be minimized will actually be perpendicular (orthogonal) to the fit curve. Note that Python's ODR fit routines do not require that the x and y uncertainties are equal.
End of explanation
from scipy.optimize import curve_fit
def g(x, m, b):
return m*x + b
params, covariance_matrix = curve_fit(g, x, y, sigma=yerr, p0=guess)
plt.errorbar(x, y, xerr=xerr, yerr=yerr, fmt='o')
plt.plot(z, f(odr_output.beta, z), 'k--', label='ODR Fit')
plt.plot(z, g(z, *params), 'g-.', label='curve_fit')
plt.legend(loc='best')
print('ODR Fit Results: ', odr_output.beta)
print('curve_fit Results:', params)
print('Real Values:', real_values)
Explanation: Finally, we do a comparison to a fit with ordinary least-squares (curve_fit).
End of explanation
freq1 = 5
freq2 = 50
t = np.linspace(0, 1, 1024*10)
y = np.sin(2*np.pi*freq1*t) + np.sin(2*np.pi*freq2*t)
# add some white noise
y += np.random.normal(y, 5)
plt.scatter(t, y, s=10, alpha=0.25, lw=0)
plt.xlabel(r'$t \ /\ \mathrm{s}$');
from scipy import fftpack
z = fftpack.rfft(y)
f = fftpack.rfftfreq(len(t), t[1] - t[0])
plt.axvline(freq1, color='lightgray', lw=5)
plt.axvline(freq2, color='lightgray', lw=5)
plt.plot(f, np.abs(z)**2)
plt.xlabel('f / Hz')
plt.xscale('log')
# plt.yscale('log');
Explanation: If the x-uncertainties are relatively small, in general curve_fit will produce a better result. However, if the uncertainties on the independent variable are large and/or there is a rapidly changing region of your curve where the x-errors are important, ODR fitting may produce a better result.
<a id=fft></a>
Fast Fourier Transforms (FFTs)
End of explanation
from scipy.special import ndtr
def s_curve(x, a, b):
return ndtr(-a*(x - b))
# generate mildly noisy data using Gaussian CDF (see end of this notebook)
real_params = [2.5, 3]
x = np.linspace(0, 5, 20)
y = s_curve(x, *real_params)
y += np.random.normal(0, 0.025, len(y))
# add 4 bad data points
outlier_xcoords = [2, 6, 10, 15]
y[outlier_xcoords] = np.random.uniform(0.2, 2, size=4)
plt.plot(x, y, 'bo')
# attempt to fit
params, __ = curve_fit(s_curve, x, y)
z = np.linspace(0, 5, 100)
plt.plot(z, s_curve(z, *params), 'k--')
print('Real value:', real_params[1])
print('Fit value:', params[1])
Explanation: <a id=filtering></a>
Signal filtering
Consider this noisy data set with outliers. The data is a so-called S-curve, and we want to identify the midpoint of the falling edge.
End of explanation
from scipy.signal import medfilt
filtered_y = medfilt(y)
params, __ = curve_fit(s_curve, x, filtered_y)
print('Real value:', real_params[1])
print('Fit value:', params[1])
z = np.linspace(0, 5, 100)
plt.plot(x, y, 'k*', label='Before Filtering')
plt.plot(x, filtered_y, 'bo', label='After Filtering')
plt.plot(z, s_curve(z, *params), 'g--')
plt.legend();
Explanation: You can see clearly in the data that the mid-point of the S-curve is at about x=3 (which is the real value), but the outliers destroy the fit. We can remove them easily with a median filter. A median filter is particularly suited to edge detection cases, since it tends to preserve edges well.
End of explanation
from scipy.signal import butter, lfilter
def lowpass_filter(data, cutoff, fs, order=5):
'''Digital Butterworth low-pass filter.
data : 1D array of data to be filtered
cutoff : cutoff frequency in Hz
fs : sampling frequency (samples/second)
'''
nyquist_frequency = fs/2
normal_cutoff = cutoff/nyquist_frequency
b, a = butter(order, normal_cutoff, btype='low')
y = lfilter(b, a, data)
return y
Explanation: Exercise
The following is an example implementation of a low-pass Butterworth filter:
End of explanation
data = np.genfromtxt('../resources/scipy_filter_data.dat')
t = data[:, 0]
y = data[:, 1]
sample_freq = (len(t) - 1)/(t[-1])
plt.plot(t, y); # these are your data
Explanation: You are the unfortunate recipient of the following noisy data, which contains noise at two different (unknown) frequencies:
End of explanation
from scipy.stats import norm
def gaussian(x, mu, sigma, A):
return A * norm.pdf(x, mu, sigma)
Explanation: Somewhere in this mess is a Gaussian:
End of explanation
# %load -r 3-52 solutions/07_01_scipy.py
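# One possible sketch (not the reference solution loaded above): find the two noise
# peaks with an FFT, notch them out with a bandstop Butterworth filter, then fit.
from scipy.signal import butter, lfilter
def bandstop_filter(data, low, high, fs, order=5):
    nyq = fs / 2
    b, a = butter(order, [low / nyq, high / nyq], btype='bandstop')
    return lfilter(b, a, data)
# The band edges below are placeholders; read the real peak frequencies off your FFT plot.
clean = bandstop_filter(y, 45, 55, sample_freq)
clean = bandstop_filter(clean, 115, 125, sample_freq)
params, _ = curve_fit(gaussian, t, clean, p0=[0.5, 0.1, 1])  # initial guess is a rough placeholder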
Explanation: Use a FFT to identify the two offending noise frequencies. Then convert the lowpass_filter above into a bandstop filter (hint: it is a trivial modification), and remove the offending noise from the data as much as possible (it won't be perfect). Finally, use curvefit to fit a Gaussian to the data, thereby recovering the original signal.
End of explanation
from scipy.integrate import quad
def f(x):
return 3*x**2 + 6*x - 9
quad(f, 0, 5)
Explanation: <a id=integration></a>
Integration
Scipy integration routines are discussed in the Scipy documentation. We will look at the two most common routines here.
<a id=function_integration></a>
Function integration
quad is used to evaluate definite 1D numerical integrals. For example, assume we want to integrate a quadratic polynomial $f(x) = 3x^2 + 6x - 9$ over an interval $x \in [0, 5]$. Analytically, the answer is:
$$ \int_0^5 3x^2 + 6x - 9 \ dx = \left[ x^3 + 3x^2 - 9x \right]_{x = 0}^{x = 5} = 155 $$
End of explanation
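As an aside (not from the original tutorial), quad also accepts infinite limits via np.inf; a quick sketch using the Gaussian integral, which should come out close to sqrt(pi):
quad(lambda x: np.exp(-x**2), -np.inf, np.inf)   # ~ 1.7725, i.e. sqrt(pi)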
def sinc(x):
return np.sin(x) / x
x = np.linspace(-10, 10, 1000)
y = sinc(x)
plt.plot(x, y)
plt.title('Sinc Function')
print(quad(sinc, -10, 10)) # fails
# numpy's sinc handles the singularity correctly (note: np.sinc is the normalized sinc, sin(pi*x)/(pi*x), so its integral differs from ours)
print(quad(np.sinc, -10, 10))
Explanation: The first parameter quad returns is the answer; the second is an estimate of the absolute error in the result.
For 2D, 3D, or n-dimensional integrals, use dblquad, tplquad, or nquad, respectively.
For some more interesting functions, Scipy's other function integration routines might be helpful:
* quadrature : Gaussian quadrature
* romberg : Romberg integration
For example, consider the $\mathrm{sinc}$ function:
$$
\mathrm{sinc}(x) \equiv
\begin{cases}
1 & x = 0 \\
\sin(x)/x & \mathrm{otherwise}
\end{cases}
$$
End of explanation
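Since dblquad was only mentioned above, here is a tiny sketch of it (an aside, with an arbitrary integrand): note that the integrand takes its arguments as (y, x).
from scipy.integrate import dblquad
dblquad(lambda y, x: x*y, 0, 1, lambda x: 0, lambda x: 1)   # x*y over the unit square; exact answer is 1/4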
from scipy.integrate import quadrature
# quadrature may complain, but it will work in the end
print(quadrature(sinc, -10, 10)[0])
Explanation: quad struggles with $\mathrm{sinc}$, but it can be easily handled with Gaussian quadrature:
End of explanation
from scipy.integrate import trapz
# 50 grid points
x = np.linspace(-10, 10)
y = sinc(x)
print(' 50 points:', trapz(y, x)) # note the order of the arguments: y, x
# 1000 grid points
x = np.linspace(-10, 10, 1000)
y = sinc(x)
print('1000 points:', trapz(y, x))
Explanation: This result agrees with Mathematica to 13 decimal places (even though only 11 are shown). Note that the problem is the singularity at $x=0$; if we change the boundaries to, say, [-10.1, 10], then it works fine. Also, writing our sinc function more cleverly would eliminate the problem.
<a id=sampleintegration></a>
Sample integration
If you have a collection of points that you want to integrate, you could use an interpolation function and pass it to quad. A better alternative is to use the purpose-built functions trapz, romb, and simps.
We will consider the $\mathrm{sinc}$ function again as an example. The most naive (and surprisingly robust) integration method is using the trapezoid rule, which is implemented in trapz:
End of explanation
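For comparison (a small aside, not in the original), simps applies Simpson's rule to the same sampled data:
from scipy.integrate import simps
xs = np.linspace(-10, 10, 1000)
print('simps, 1000 points:', simps(sinc(xs), xs))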
from scipy.interpolate import interp1d
x = (1, 2)
y = (5, 7)
print('Points:', list(zip(x, y)))
f = interp1d(x, y)
z = [1.25, 1.5, 1.75]
print('Interpolation:', list(zip(z, f(z))))
Explanation: <a id=interpolation></a>
Interpolation
<a id=linear_interpolation></a>
Linear interpolation
Imagine you would like to interpolate between two points $(x_0, y_0)$ and $(x_1, y_1)$. You could do this by hand:
$$y(x) = y_0 + (x - x_0) \frac{y_1 - y_0}{x_1 - x_0}$$
Simple enough, but it is annoying to look up or derive the formula. Also, what if you want values less than $x_0$ to stay at the value of $y_0$, and likewise for values greater than $x_1$? Then you need to add if statements, and check the logic, etc. Too much work.
Instead, there is a simple function for almost all of your interpolation needs: interp1d.
End of explanation
# f(2.5) # uncomment to run me
Explanation: Right now, if you try to use an x-coordinate outside of the interval $[x_0, x_1]$, a ValueError will be raised:
End of explanation
z = [0.5, 1, 1.5, 2, 2.5]
f = interp1d(x, y, bounds_error=False, fill_value=0)
print("Option 1:", list(zip(z, f(z))))
f = interp1d(x, y, bounds_error=False, fill_value=y) # fill with endpoint values
print("Option 2:", list(zip(z, f(z))))
f = interp1d(x, y, fill_value='extrapolate') # bounds_error set to False automatically
print("Option 3:", list(zip(z, f(z))))
Explanation: This is because we haven't told interp1d how we want to handle the boundaries. This is done using the fill_value keyword argument. There are a few options:
Set values outside of the interval $[x_0, x_1]$ to a float.
Set values $< x_0$ to below and values $> x_1$ to above by passing a tuple, (below, above).
Extrapolate points outside the interval by passing extrapolate.
We also need to tell interp1d not to raise a ValueError by setting the bounds_error keyword to False.
End of explanation
from scipy.interpolate import CubicSpline
npoints = 5
x = np.arange(npoints)
y = np.random.random(npoints)
plt.plot(x, y, label='linear')
f = interp1d(x, y, kind='cubic')
z = np.linspace(np.min(x), np.max(x), 100)
plt.plot(z, f(z), label='interp1d cubic')
f = CubicSpline(x, y)
z = np.linspace(np.min(x), np.max(x), 100)
plt.plot(z, f(z), label='CubicSpline')
plt.plot(x, y, 'ko')
plt.legend(loc='best');
Explanation: <a id=spline_interpolation></a>
Cubic spline interpolation
Cubic splines are what are most commonly used when you want to interpolate between points smoothly.
Cubic spline interpolation is so common, it has its own method, CubicSpline, which produces generally better results.
End of explanation
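One convenience of CubicSpline worth noting (an aside): the returned object is a piecewise polynomial, so its derivative and definite integral come essentially for free.
cs = CubicSpline(x, y)
print('slope at x=0.5:', cs.derivative()(0.5))
print('integral over the full range:', cs.integrate(x.min(), x.max()))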
from scipy.special import jn
x = np.linspace(0, 10, 100)
for n in range(6):
plt.plot(x, jn(n, x), label=r'$\mathtt{J}_{%i}(x)$' % n)
plt.grid()
plt.legend();
import mpl_toolkits.mplot3d.axes3d as plt3d
from matplotlib.colors import LogNorm
def airy_disk(x):
mask = x != 0
result = np.empty_like(x)
result[~mask] = 1.0
result[mask] = (2 * jn(1, x[mask]) / x[mask])**2
return result
# 2D plot
r = np.linspace(-10, 10, 500)
plt.plot(r, airy_disk(r))
# 3D plot
x = np.arange(-10, 10.1, 0.1)
y = np.arange(-10, 10.1, 0.1)
X, Y = np.meshgrid(x, y)
Z = airy_disk(np.sqrt(X**2 + Y**2))
fig = plt.figure()
ax = plt3d.Axes3D(fig)
ax.plot_surface(X, Y, Z, cmap='gray', norm=LogNorm(), lw=0)
None
Explanation: <a id=special_functions></a>
Special Functions
A complete list of scipy special functions can be found here.
<a id=bessel></a>
Bessel functions
End of explanation
from scipy.special import erf, ndtr
def gaussian(z):
return np.exp(-z**2)
x = np.linspace(-3, 3, 100)
plt.plot(x, gaussian(x), label='Gaussian')
plt.plot(x, erf(x), label='Error Function')
plt.plot(x, ndtr(x), label='Gaussian CDF')
plt.ylim(-1.1, 1.1)
plt.legend(loc='best');
Explanation: <a id=erf></a>
Error function and Gaussian CDF
CDF = cumulative distribution function
$$\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z \exp\left( -t^2 \right) dt $$
$$\mathrm{ndtr}(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^z \exp\left( \frac{-t^2}{2} \right) dt $$
End of explanation
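The two functions are related by $\mathrm{ndtr}(z) = \tfrac{1}{2}\left(1 + \mathrm{erf}(z/\sqrt{2})\right)$; a quick numerical check:
z = 0.7
print(ndtr(z), 0.5*(1 + erf(z/np.sqrt(2))))   # the two values agree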
from scipy.special import eval_legendre, eval_laguerre, eval_hermite, eval_chebyt
ortho_poly_dict = {'Legendre': eval_legendre,
'Laguerre': eval_laguerre,
'Hermite': eval_hermite,
'Chebyshev T': eval_chebyt}
def plot_ortho_poly(name):
plt.figure()
f = ortho_poly_dict[name]
x = np.linspace(-1, 1, 100)
for n in range(5):
plt.plot(x, f(n, x), label='n = %i' % n)
    if name == 'Legendre' or 'Chebyshev' in name:
plt.ylim(-1.1, 1.1)
plt.legend(loc='best', fontsize=16)
plt.title(name + ' Polynomials')
plot_ortho_poly('Legendre')
# plot_ortho_poly('Laguerre')
# plot_ortho_poly('Hermite')
# plot_ortho_poly('Chebyshev T')
Explanation: <a id=ortho_polys></a>
Orthogonal Polynomials
End of explanation
# %load -r 57-80 solutions/07_01_scipy.py
Explanation: Exercise
Orthogonal polynomials can be used to construct a series expansion of an arbitrary function, just like $\sin$ and $\cos$ are used to construct a Fourier series. For example, we can express a function $f(x)$ as a series of Legendre polynomials $P_n(x)$:
$$ f(x) = \sum_{n=0}^{\infty} a_n P_n(x) $$
The Legendre polynomials are orthogonal on the interval $x \in [-1, 1]$, where they obey the following orthogonality relationship:
$$ \int_{-1}^{1} P_n(x) \, P_m(x) \, dx = \frac{2}{2 m + 1} \delta_{mn} $$
With $f(x) = \sin(\pi x)$, write a function to calculate the coefficients $a_n$ of the Legendre series. Then plot $f(x)$ and the Legendre series for $x \in [-1, 1]$. Calculate as many coefficients as are needed for the series to be essentially the same as $f(x)$ (it will be fewer than ten).
If you are struggling with the math, look here.
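One intermediate step that may help (not spelled out in the exercise): multiplying both sides of the series by $P_m(x)$, integrating over $[-1, 1]$, and applying the orthogonality relation isolates each coefficient:
$$ a_m = \frac{2m+1}{2} \int_{-1}^{1} f(x) \, P_m(x) \, dx $$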
End of explanation |
6,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating a Filter, Edge Detection
Import resources and display image
Step1: Convert the image to grayscale
Step2: TODO
Step3: Test out other filters!
You're encouraged to create other kinds of filters and apply them to see what happens! As an optional exercise, try the following | Python Code:
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
import numpy as np
%matplotlib inline
# Read in the image
image = mpimg.imread(fname='images/curved_lane.jpg')
plt.imshow(X=image)
Explanation: Creating a Filter, Edge Detection
Import resources and display image
End of explanation
# Convert to grayscale for filtering
gray = cv2.cvtColor(src=image,
code=cv2.COLOR_RGB2GRAY)
plt.imshow(X=gray, cmap='gray')
Explanation: Convert the image to grayscale
End of explanation
# Create a custom kernel
# 3x3 array for edge detection
sobel_y = np.array([[ -1, -2, -1],
[ 0, 0, 0],
[ 1, 2, 1]])
## TODO: Create and apply a Sobel x operator
sobel_x = np.array([[ -1, 0, 1],
[ -2, 0, 2],
[ -1, 0, 1]])
# Filter the image using filter2D, which has inputs: (grayscale image, bit-depth, kernel)
filtered_image = cv2.filter2D(src=gray,
ddepth=-1,
kernel=sobel_y)
filtered_image2 = cv2.filter2D(src=gray,
ddepth=-1,
kernel=sobel_x)
plt.imshow(X=filtered_image,
cmap='gray')
plt.imshow(X=filtered_image2,
cmap='gray')
Explanation: TODO: Create a custom kernel
Below, you've been given one common type of edge detection filter: a Sobel operator.
The Sobel filter is very commonly used in edge detection and in finding patterns in intensity in an image. Applying a Sobel filter to an image is a way of taking (an approximation) of the derivative of the image in the x or y direction, separately. The operators look as follows.
<img src="images/sobel_ops.png" width=200 height=200>
It's up to you to create a Sobel x operator and apply it to the given image.
For a challenge, see if you can put the image through a series of filters: first one that blurs the image (takes an average of pixels), and then one that detects the edges.
End of explanation
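A sketch of the optional blur-then-edge pipeline mentioned above (one possible approach; the kernel size is an arbitrary choice here):
blurred = cv2.GaussianBlur(src=gray, ksize=(5, 5), sigmaX=0)
blurred_edges = cv2.filter2D(src=blurred, ddepth=-1, kernel=sobel_y)
plt.imshow(X=blurred_edges, cmap='gray')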
image2 = mpimg.imread(fname='images/white_lines.jpg')
gray2 = cv2.cvtColor(src=image2,
code=cv2.COLOR_RGB2GRAY)
sample_filter = np.array([[ 0, 0, 2, 0, 0],
[ 0, 0, 1, 0, 0],
[ 2, -1, 1, -1, 2],
[ 0, 0, 1, 0, 0],
[ 0, 0, 2, 0, 0]])
filtered_image3 = cv2.filter2D(src=gray2,
ddepth=-1,
kernel=sample_filter)
plt.imshow(X=gray2,
cmap='gray')
plt.imshow(X=filtered_image3,
cmap='gray')
Explanation: Test out other filters!
You're encouraged to create other kinds of filters and apply them to see what happens! As an optional exercise, try the following:
* Create a filter with decimal value weights.
* Create a 5x5 filter
* Apply your filters to the other images in the images directory.
End of explanation |
6,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UCDSML Lecture 12 Part 5
Neural Networks (MLP)
Prof. James Sharpnack
Step1: Importing and installing tensorflow
install tensorflow 2.0 with conda (you do not need to install tensorflow-gpu for the course)
tensorflow, build and execute computational graphs
tensorflow 1.0 and 2.0 differ mainly by making eager execution default, removing sessions
Step2: Loading data
tensorflow has many built in utilities for getting data
you could just as easily use requests/pandas
Step3: Tensorflow datasets API
Datasets API loads and readies data for use in stochastic gradient descent type iteration
the batch size tells it how many samples for the mini-batch
Dataset has methods to shuffle the data and apply transformations
Step4: Adding Layers to Keras Model
keras model can include more layers
simplest way is with tf.keras.Sequential
can make custom layers (beyond scope of class) | Python Code:
# This was modified from Tensorflow tutorial: https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough
# All appropriate copywrites are retained, use of this material is guided by fair use for teaching
# Some modifications made for course STA 208 by James Sharpnack [email protected]
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf  # imported here as well so this first cell runs on its own
tf.keras.backend.set_floatx('float64')
Explanation: UCDSML Lecture 12 Part 5
Neural Networks (MLP)
Prof. James Sharpnack
End of explanation
import os
import matplotlib.pyplot as plt
import tensorflow as tf
import pandas as pd
print("TensorFlow version: {}".format(tf.__version__))
print("Eager execution: {}".format(tf.executing_eagerly()))
Explanation: Importing and installing tensorflow
install tensorflow 2.0 with conda (you do not need to install tensorflow-gpu for the course)
tensorflow, build and execute computational graphs
tensorflow 1.0 and 2.0 differ mainly by making eager execution default, removing sessions
End of explanation
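A tiny illustration of the eager execution mentioned above (an aside, with made-up values): operations run immediately and return concrete results, no session needed.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
print(a @ tf.transpose(a))   # evaluated right away, prints the numeric result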
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(fname=os.path.basename(train_dataset_url),
origin=train_dataset_url)
print("Local copy of the dataset file: {}".format(train_dataset_fp))
train_df = pd.read_csv(train_dataset_fp)
train_dataset = tf.data.Dataset.from_tensor_slices((train_df.values[:,:-1],train_df.values[:,-1]))
Explanation: Loading data
tensorflow has many built in utilities for getting data
you could just as easily use requests/pandas
End of explanation
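As noted above, the keras download helper is optional; pandas can read the CSV straight from the URL (a one-liner sketch, assuming network access; the variable name is arbitrary):
iris_df_direct = pd.read_csv(train_dataset_url)   # equivalent result, without local caching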
batch_size = 32
train_dataset = train_dataset.shuffle(1000)
train_dataset = train_dataset.batch(batch_size)
## sets batchsize and shuffles
X,y = next(iter(train_dataset))
X
Explanation: Tensorflow datasets API
Datasets API loads and readies data for use in stochastic gradient descent type iteration
the batch size tells it how many samples for the mini-batch
Dataset has methods to shuffle the data and apply transformations
End of explanation
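The "apply transformations" part refers to Dataset.map; a small optional sketch that rescales the feature columns into a new dataset (the name and the factor of 10 are arbitrary):
scaled_dataset = train_dataset.map(lambda features, label: (features / 10.0, label))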
train_dataset.element_spec
lin_layers = tf.keras.layers.Dense(3)
lin_layers(X)
## Builds and calls the layer
lin_layers.trainable_weights
lin_layers.trainable_variables
## previous model
model = tf.keras.Sequential([
tf.keras.layers.Dense(3),
])
## model is callable outputs decision function
logits = model(X)
logits[:5]
model.summary()
## new model
model = tf.keras.Sequential([
tf.keras.layers.Dense(10,activation="relu"),
tf.keras.layers.Dense(3)
])
logits = model(X)
model.summary()
## Create the losses
logistic_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
logistic_loss(y,logits)
def loss(model, x, y, training):
# training=training is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=training)
return logistic_loss(y,logits)
l = loss(model, X, y, training=False)
print("Loss test: {}".format(l))
## Gradient tape lets TF know with respect to what to take gradients
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, training=True)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
## Create optimizer (chooses learning schedule etc)
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
loss_value, grads = grad(model, X, y)
print("Step: {}, Initial Loss: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
## Optimizer has apply_gradients step which will modify all training variables appropriately
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("Step: {}, Loss: {}".format(optimizer.iterations.numpy(),
loss(model, X, y, training=True).numpy()))
## Note: Rerunning this cell uses the same model variables
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
## Evaluate on test set
test_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
test_fp = tf.keras.utils.get_file(fname=os.path.basename(test_url),
origin=test_url)
test_df = pd.read_csv(test_fp)
test_dataset = tf.data.Dataset.from_tensor_slices((test_df.values[:,:-1],test_df.values[:,-1]))
test_dataset = test_dataset.batch(batch_size)
## Compute test accuracy
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
## new model
model = tf.keras.Sequential([
tf.keras.layers.Dense(6,activation="relu"),
tf.keras.layers.Dense(6,activation="relu"),
tf.keras.layers.Dense(3)
])
logits = model(X)
model.summary()
## Note: Rerunning this cell uses the same model variables
## Create optimizer (chooses learning schedule etc)
optimizer = tf.keras.optimizers.Adam()
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 40
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
## Compute test accuracy
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
Explanation: Adding Layers to Keras Model
keras model can include more layers
simplest way is with tf.keras.Sequential
can make custom layers (beyond scope of class)
End of explanation |
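A minimal sketch of the "custom layers" idea mentioned above (not covered in the lecture; the layer below is a made-up example): subclass tf.keras.layers.Layer and define build/call.
class ScaledDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units
    def build(self, input_shape):
        # weights are created lazily, once the input dimension is known
        self.w = self.add_weight(shape=(int(input_shape[-1]), self.units),
                                 initializer="glorot_uniform", trainable=True)
        self.b = self.add_weight(shape=(self.units,), initializer="zeros", trainable=True)
    def call(self, inputs):
        return tf.nn.relu(tf.matmul(inputs, self.w) + self.b)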
6,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deadline
Wednesday, November 22, 2017, 11
Step1: Question 1
Measure taken
Step2: From this naive analysis, we can see that participating in the job training (JTP) program does not imply a significant increase of the income, it is even doing worst looking at the general trend. Indeed, we see that the median income from people who did not take the job training program is slightly higher from the ones who did take it. A few people (outliers) seemed to have fully benefit from the program, but as it concerns only a very small proportion, we cannot draw similar conclusion for the overall group. We also notice that the two categories are not evenly represented
Step3: We see that there is not a very similar distribution of ages. For the distribution of married people among the groups, there is an even number of married and single people who did not participate in the JTP, but there is a significant difference among the ones who did participate.
Step4: For the school background, we see that both components are represented in a similar fashion, if we ignore the misrepresentation of the two catgories.
Step5: If we compare the black race to the other ethnic groups we see that the category of people who participated in the JTP is much more represented. We have the exact opposite for the hispanic race. We can conlcude that the representation of the different ethnic origins is very badly represented.
Step6: The earnings do not fit very well between the treat and control groups.
All these observations allow us to realize that the sample selected for this analysis is very poorly represented in terms of the selected features. There is too much disparity. These inequalities skew the conclusions we can draw from the results obtained in the naive analysis. A much more sophisticated analysis is necessary to couteract these inequities.
3. A propensity score model
Step7: 4. Balancing the dataset via matching
Step8: We see that there is a much better fit between the two categories compared to the same plot of part 1.
Step9: The matches are not perfect yet but the difference between them has been greatly decreased. As a result, the conclusions drawn from the analysis will not be as biased as in the previous case.
5. Balancing the groups further
Looking at all the features, we see that the one that needs to be improved in priority is the black feature.
Step10: Let's analyze the results of the different features now that we have dealt with the black feature.
Step11: We see that processing the black feature have also enhanced the distribution of the social status and the school background features. The earnings features do not seem to have changed drastically. However, the attentive reader will notice a slightly worse distribution. Therefore, we can conclude that the two groups are now better balanced.
6. A less naive analysis
Step12: In the histogram of the earnings of 1978, we see that there is more people from the control group that have a very low salary (first bin). In the boxplot, we see that the the mean of the salary his higher in the treat group. The value of the third quantile is also higher. Hence, we can conclude (with more certainty than part 1) that the job training program has, although not so big, a positive effect on salaries.
Question 2
Step13: Firstly, we split the dataset into a training and a testing set. We build a stratified 10-fold cross validation indices generator and finally, we vectorize the (textual) data by fitting a TF-IDF vectorizer on the training set.
Step14: Here, we fit a Random Forest model with just the default parameters and we compute our accuracy on the testing set.
Step15: We commented the cell below since its execution take a long time. We stored the results into a csv. If you want to re-run the code, you can just uncomment this cell. We optimized the parameters of the Random Forest model using a Grid Search with 10-fold stratified cross validation. Then, we show the results in the form of a heatmap. For computing time consideration, we restricted ourselves to a domain from 0 to 200 with a 20 step for the max_depth and n_estimators parameters. The best results we get was at max_dept=140 and n_estimators=200. Unfortunately, the optimum value were on the "border" of the grid, this suggests that we could have even better parameters by extending the domain of our grid. We didn't do it since it take a long time especially for models with high-valued parameters (it increases the complexity and thus the time to fit).
Step16: Now we show the confusion matrix on the testing set. Here, it's interesting to see how the different classes are confused by the model. For instance, if we take the talk.religion.misc class, we can see that it is poorly classified (only 54% accuracy) and it's mainly confused with the class soc.religion.christian. This confusion makes sense since these two classes seem to be strongly related. Another example is the talk.politics.misc which is confused the most with talk.politics.gun which also makes sense, the two topics seem to be close and it's harder for the model to discriminate them.
Step17: Here, we explore the importance of the different features. First, we show the top 10 most and least important features. As we can see, words which represent clearly a specific topic/context are the most important ones. For instance "clipper", "bike", "car", "sale", ... are words that clearly mark a specific context (car for automobile). There are however some exceptions like "the", "of", ... <br />
The least significant ones are the outliers like we can see in the top 10 least important features | Python Code:
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('ggplot')
from multiprocessing import Pool
Explanation: Deadline
Wednesday, November 22, 2017, 11:59PM
Important notes
When you push your Notebook to GitHub, all the cells must already have been evaluated.
Don't forget to add a textual description of your thought process and of any assumptions you've made.
Please write all your comments in English, and use meaningful variable names in your code.
Question 1: Propensity score matching
In this exercise, you will apply propensity score matching, which we discussed in lecture 5 ("Observational studies"), in order to draw conclusions from an observational study.
We will work with a by-now classic dataset from Robert LaLonde's study "Evaluating the Econometric Evaluations of Training Programs" (1986).
The study investigated the effect of a job training program ("National Supported Work Demonstration") on the real earnings of an individual, a couple of years after completion of the program.
Your task is to determine the effectiveness of the "treatment" represented by the job training program.
Dataset description
treat: 1 if the subject participated in the job training program, 0 otherwise
age: the subject's age
educ: years of education
race: categorical variable with three possible values: Black, Hispanic, or White
married: 1 if the subject was married at the time of the training program, 0 otherwise
nodegree: 1 if the subject has earned no school degree, 0 otherwise
re74: real earnings in 1974 (pre-treatment)
re75: real earnings in 1975 (pre-treatment)
re78: real earnings in 1978 (outcome)
If you want to brush up your knowledge on propensity scores and observational studies, we highly recommend Rosenbaum's excellent book on the "Design of Observational Studies". Even just reading the first chapter (18 pages) will help you a lot.
1. A naive analysis
Compare the distribution of the outcome variable (re78) between the two groups, using plots and numbers.
To summarize and compare the distributions, you may use the techniques we discussed in lectures 4 ("Read the stats carefully") and 6 ("Data visualization").
What might a naive "researcher" conclude from this superficial analysis?
2. A closer look at the data
You're not naive, of course (and even if you are, you've learned certain things in ADA), so you aren't content with a superficial analysis such as the above.
You're aware of the dangers of observational studies, so you take a closer look at the data before jumping to conclusions.
For each feature in the dataset, compare its distribution in the treated group with its distribution in the control group, using plots and numbers.
As above, you may use the techniques we discussed in class for summarizing and comparing the distributions.
What do you observe?
Describe what your observations mean for the conclusions drawn by the naive "researcher" from his superficial analysis.
3. A propensity score model
Use logistic regression to estimate propensity scores for all points in the dataset.
You may use sklearn to fit the logistic regression model and apply it to each data point to obtain propensity scores:
python
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
Recall that the propensity score of a data point represents its probability of receiving the treatment, based on its pre-treatment features (in this case, age, education, pre-treatment income, etc.).
To brush up on propensity scores, you may read chapter 3.3 of the above-cited book by Rosenbaum or this article.
Note: you do not need a train/test split here. Train and apply the model on the entire dataset. If you're wondering why this is the right thing to do in this situation, recall that the propensity score model is not used in order to make predictions about unseen data. Its sole purpose is to balance the dataset across treatment groups.
(See p. 74 of Rosenbaum's book for an explanation why slight overfitting is even good for propensity scores.
If you want even more information, read this article.)
4. Balancing the dataset via matching
Use the propensity scores to match each data point from the treated group with exactly one data point from the control group, while ensuring that each data point from the control group is matched with at most one data point from the treated group.
(Hint: you may explore the networkx package in Python for predefined matching functions.)
Your matching should maximize the similarity between matched subjects, as captured by their propensity scores.
In other words, the sum (over all matched pairs) of absolute propensity-score differences between the two matched subjects should be minimized.
After matching, you have as many treated as you have control subjects.
Compare the outcomes (re78) between the two groups (treated and control).
Also, compare again the feature-value distributions between the two groups, as you've done in part 2 above, but now only for the matched subjects.
What do you observe?
Are you closer to being able to draw valid conclusions now than you were before?
5. Balancing the groups further
Based on your comparison of feature-value distributions from part 4, are you fully satisfied with your matching?
Would you say your dataset is sufficiently balanced?
If not, in what ways could the "balanced" dataset you have obtained still not allow you to draw valid conclusions?
Improve your matching by explicitly making sure that you match only subjects that have the same value for the problematic feature.
Argue with numbers and plots that the two groups (treated and control) are now better balanced than after part 4.
6. A less naive analysis
Compare the outcomes (re78) between treated and control subjects, as you've done in part 1, but now only for the matched dataset you've obtained from part 5.
What do you conclude about the effectiveness of the job training program?
Question 2: Applied ML
We are going to build a classifier of news to directly assign them to 20 news categories. Note that the pipeline that you will build in this exercise could be of great help during your project if you plan to work with text!
Load the 20newsgroup dataset. It is, again, a classic dataset that can directly be loaded using sklearn (link).
TF-IDF, short for term frequency–inverse document frequency, is of great help when if comes to compute textual features. Indeed, it gives more importance to terms that are more specific to the considered articles (TF) but reduces the importance of terms that are very frequent in the entire corpus (IDF). Compute TF-IDF features for every article using TfidfVectorizer. Then, split your dataset into a training, a testing and a validation set (10% for validation and 10% for testing). Each observation should be paired with its corresponding label (the article category).
Train a random forest on your training set. Try to fine-tune the parameters of your predictor on your validation set using a simple grid search on the number of estimator "n_estimators" and the max depth of the trees "max_depth". Then, display a confusion matrix of your classification pipeline. Lastly, once you assessed your model, inspect the feature_importances_ attribute of your random forest and discuss the obtained results.
End of explanation
def plot_re78(treat_group, control_group):
lower_bound_left = min(min(treat_group['re78']), min(control_group['re78']))
upper_bound_left = max(max(treat_group['re78']), max(control_group['re78']))
range_ = (lower_bound_left, upper_bound_left)
num_bins = 15
fig, axes = plt.subplots(1, 1, figsize=(15,5), sharey=True)
fig.suptitle('Real earnings in 1978', fontsize=14)
treat_group['re78'].plot.hist(bins = num_bins, range=range_, ax=axes, alpha=0.7, label='treat')
control_group['re78'].plot.hist(bins = num_bins, range=range_, ax=axes, alpha=0.5, label='control')
axes.legend(loc='upper right')
axes.set_xlabel('Income')
plt.show()
def boxplot_re78(treat_group, control_group):
merge = pd.concat([treat_group['re78'], control_group['re78']], axis=1)
merge.columns = ['treat', 'control']
fig, axes = plt.subplots(1, 1, figsize=(15,10))
fig.suptitle("Boxplots", fontsize=14)
merge.plot.box(ax=axes, sym='k.')
axes.set_ylabel('Income')
plt.show()
df = pd.read_csv('lalonde.csv')
treat_group_o = df[df['treat'] == 1]
control_group_o = df[df['treat'] == 0]
plot_re78(treat_group_o, control_group_o)
boxplot_re78(treat_group_o, control_group_o)
num_people_treat_o = treat_group_o['re78'].shape[0]
num_people_control_o = control_group_o['re78'].shape[0]
print('Number of people in treat group: ', num_people_treat_o, '\nNumber of people in control group: ', num_people_control_o)
Explanation: Question 1
Measure taken: we didn't created the white race category as it wasn't explicitely stated in the dataset.
1. A naive analysis
End of explanation
def plot_(treat_group, control_group, title, left_column, left_xlabel, left_legend, right_column, right_xlabel, right_legend):
lower_bound_left = min(min(treat_group[left_column]), min(control_group[left_column]))
upper_bound_left = max(max(treat_group[left_column]), max(control_group[left_column]))
range_left = (lower_bound_left, upper_bound_left)
lower_bound_right = min(min(treat_group[right_column]), min(control_group[right_column]))
upper_bound_right = max(max(treat_group[right_column]), max(control_group[right_column]))
range_right = (lower_bound_right, upper_bound_right)
num_bins=15
fig, axes = plt.subplots(1, 2, figsize=(15,5), sharey=True)
fig.suptitle(title, fontsize=14)
treat_group[left_column].plot.hist(bins=num_bins, range=range_left, ax=axes[0], alpha=0.7, label='treat')
control_group[left_column].plot.hist(bins=num_bins, range=range_left, ax=axes[0], alpha=0.5, label='control')
axes[0].legend(loc=left_legend)
axes[0].set_xlabel(left_xlabel)
treat_group[right_column].plot.hist(bins=num_bins, range=range_right, ax=axes[1], alpha=0.7, label='treat')
control_group[right_column].plot.hist(bins=num_bins, range=range_right, ax=axes[1], alpha=0.5, label='control')
axes[1].legend(loc=right_legend)
axes[1].set_xlabel(right_xlabel)
plt.show()
import matplotlib.ticker as ticker
def plot_1binary_(treat_group, control_group, title, left_column, left_xlabel, left_legend, right_column, right_xlabel, right_legend):
lower_bound_left = min(min(treat_group[left_column]), min(control_group[left_column]))
upper_bound_left = max(max(treat_group[left_column]), max(control_group[left_column]))
range_left = (lower_bound_left, upper_bound_left)
num_bins=15
fig, axes = plt.subplots(1, 2, figsize=(15,5), sharey=True)
fig.suptitle(title, fontsize=14)
treat_group[left_column].plot.hist(bins=num_bins, range=range_left, ax=axes[0], alpha=0.7, label='treat')
control_group[left_column].plot.hist(bins=num_bins, range=range_left, ax=axes[0], alpha=0.5, label='control')
axes[0].legend(loc=left_legend)
axes[0].set_xlabel(left_xlabel)
treat_group[right_column].plot.hist(bins=[0, 0.2, 0.4, 0.6, 0.8, 1, 1.2], align = 'left', ax=axes[1], alpha=0.7, label='treat')
control_group[right_column].plot.hist(bins=[0, 0.2, 0.4, 0.6, 0.8, 1, 1.2], align = 'left', ax=axes[1], alpha=0.5, label='control')
axes[1].legend(loc=right_legend)
axes[1].set_xlabel(right_xlabel)
axes[1].xaxis.set_major_locator(ticker.MaxNLocator(integer=True))
plt.show()
def plot_2binary_(treat_group, control_group, title, left_column, left_xlabel, left_legend, right_column, right_xlabel, right_legend):
fig, axes = plt.subplots(1, 2, figsize=(15,5), sharey=True)
fig.suptitle(title, fontsize=14)
treat_group[left_column].plot.hist(bins=[0, 0.2, 0.4, 0.6, 0.8, 1, 1.2], align = 'left',ax=axes[0], alpha=0.7, label='treat')
control_group[left_column].plot.hist(bins=[0, 0.2, 0.4, 0.6, 0.8, 1, 1.2], align = 'left',ax=axes[0], alpha=0.5, label='control')
axes[0].legend(loc=left_legend)
axes[0].set_xlabel(left_xlabel)
axes[0].xaxis.set_major_locator(ticker.MaxNLocator(integer=True))
treat_group[right_column].plot.hist(bins=[0, 0.2, 0.4, 0.6, 0.8, 1, 1.2], align = 'left', ax=axes[1], alpha=0.7, label='treat')
control_group[right_column].plot.hist(bins=[0, 0.2, 0.4, 0.6, 0.8, 1, 1.2], align = 'left', ax=axes[1], alpha=0.5, label='control')
axes[1].legend(loc=right_legend)
axes[1].set_xlabel(right_xlabel)
axes[1].xaxis.set_major_locator(ticker.MaxNLocator(integer=True))
plt.show()
plot_1binary_(treat_group_o, control_group_o, 'Age and Marriage analysis', 'age', 'Age', 'upper right',
'married', 'Married', 'upper center')
Explanation: From this naive analysis, we can see that participating in the job training program (JTP) does not imply a significant increase in income; it even looks worse when we consider the general trend. Indeed, the median income of people who did not take the job training program is slightly higher than that of the ones who did take it. A few people (outliers) seem to have fully benefited from the program, but as this concerns only a very small proportion, we cannot draw a similar conclusion for the overall group. We also notice that the two categories are not evenly represented: there are many more people who did not participate in the JTP.
2. A closer look at the data
In this section, we focus the analysis of the data in four different contexts:
social status,
school background,
ethnic origin,
real earnings.
End of explanation
plot_1binary_(treat_group_o, control_group_o, 'Education and School analysis', 'educ', 'Year of education', 'upper right',
'nodegree', 'No earned school degree', 'upper center')
Explanation: We see that there is not a very similar distribution of ages. For the distribution of married people among the groups, there is an even number of married and single people who did not participate in the JTP, but there is a significant difference among the ones who did participate.
End of explanation
plot_2binary_(treat_group_o, control_group_o, 'Ethnic origin analysis', 'black', 'Black race', 'upper center',
'hispan', 'Hispanic race', 'upper center')
Explanation: For the school background, we see that both components are represented in a similar fashion, if we ignore the misrepresentation of the two categories.
End of explanation
plot_(treat_group_o, control_group_o, 'Earnings analysis', 're74', 'Real earnings from 1974', 'upper right',
're75', 'Real earnings from 1975', 'upper right')
Explanation: If we compare the black race to the other ethnic groups, we see that the category of people who participated in the JTP is much more represented. We have the exact opposite for the hispanic race. We can conclude that the different ethnic origins are very unevenly represented across the two groups.
End of explanation
from sklearn import linear_model
logistic = linear_model.LogisticRegression()
# the features
X = df[['age', 'educ', 'black', 'hispan', 'married', 'nodegree', 're74', 're75']]
# the output
y = df['treat']
model = logistic.fit(X, y)
# accuracy of the model
score = model.score(X,y)
# probability of each datapoint to be in treat group
probs_distr = model.predict_proba(X)[:,1]
print('Accuracy of logistic regression model: {0:.2f}%' .format(score*100))
Explanation: The earnings do not fit very well between the treat and control groups.
All these observations allow us to realize that the sample selected for this analysis is very poorly balanced in terms of the selected features. There is too much disparity. These inequalities skew the conclusions we can draw from the results obtained in the naive analysis. A much more sophisticated analysis is necessary to counteract these inequities.
3. A propensity score model
End of explanation
import networkx as nx
idx_treat = np.where(df['treat'] == 1)[0] # participated
idx_control = np.where(df['treat'] == 0)[0] # not participated
num_people_part = min(idx_treat.shape[0], idx_control.shape[0])
# create a graph
G = nx.Graph()
# add all the indexes as nodes in the graph
for idx in df.index:
G.add_node(idx)
# build the graph with weight between nodes as the distance we want to optimize
for idx_t in idx_treat:
for idx_c in idx_control:
w = 1 - (np.abs(probs_distr[idx_t] - probs_distr[idx_c]))
G.add_edge(idx_t, idx_c, weight=w)
# dictionary with pairs where the weight is maximum
tot_matching = nx.max_weight_matching(G)
from itertools import islice
def take(n, iterable):
"Return first n items of the iterable as a list"
return list(islice(iterable, n))
# take the first 185 (in our case) pairs of key-value
eq_matching = take(num_people_part, tot_matching.items())
idx_treat_sel = [x[0] for x in eq_matching]
idx_control_sel = [x[1] for x in eq_matching]
# define the new treat and control groups
treat_group = df.loc[idx_treat]
control_group = df.loc[idx_control_sel]
plot_re78(treat_group, control_group)
Explanation: 4. Balancing the dataset via matching
End of explanation
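A quick optional sanity check on the matching (an aside, reusing the pairs computed above): the homework objective is to minimize the total absolute propensity-score difference, so it is worth printing it.
total_gap = sum(abs(probs_distr[t] - probs_distr[c]) for t, c in eq_matching)
print('Total propensity-score gap over the matched pairs:', total_gap)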
# social status
plot_1binary_(treat_group, control_group, 'Age and Marriage analysis', 'age', 'Age', 'upper right',
'married', 'Married', 'upper center')
# school background
plot_1binary_(treat_group, control_group, 'Education and School analysis', 'educ', 'Year of education', 'upper right',
'nodegree', 'No earned school degree', 'upper center')
# ethnic origin
plot_2binary_(treat_group, control_group, 'Ethnic origin analysis', 'black', 'Black race', 'upper center',
'hispan', 'Hispanic race', 'upper center')
# real earnings
plot_(treat_group, control_group, 'Earnings analysis', 're74', 'Real earnings from 1974', 'upper right',
're75', 'Real earnings from 1975', 'upper right')
Explanation: We see that there is a much better fit between the two categories compared to the same plot of part 1.
End of explanation
num_black_treat = treat_group_o.black.sum()
num_black_control = control_group_o.black.sum()
print('There is {} black people in the treat group, and {} black people in the control group.'
.format(num_black_treat, num_black_control))
num_people_treat = treat_group_o.shape[0]
num_people_control = control_group_o.shape[0]
print('There is {} people in the treat group and {} people in the control group in total.'
. format(num_people_treat, num_people_control))
num_black = min(num_black_treat, num_black_control)
num_no_black = min(num_people_treat-num_black_treat, num_people_control-num_black_control)
group_size = num_black+num_no_black
print('This means that we will have {} people from each group.' .format(group_size))
# create a graph
G_black = nx.Graph()
# add all the indexes as nodes in the graph
for idx in df.index:
G_black.add_node(idx)
# build the graph with weight between nodes as the distance we want to optimize making sure we match only subjects
# that have the same 'black' value
for idx_t in idx_treat:
for idx_c in idx_control:
if df.loc[idx_t, 'black'] == df.loc[idx_c, 'black']:
w = 1 - (np.abs(probs_distr[idx_t] - probs_distr[idx_c]))
G_black.add_edge(idx_t, idx_c, weight=w)
# dictionary with pairs where the weight is maximum
tot_matching = nx.max_weight_matching(G_black)
# take the first 'group_size' pairs of key-value
eq_matching = take(group_size, tot_matching.items())
idx_treat_sel = [x[0] for x in eq_matching]
idx_control_sel = [x[1] for x in eq_matching]
# define the new treat and control groups
treat_group_b = df.loc[idx_treat_sel]
control_group_b = df.loc[idx_control_sel]
Explanation: The matches are not perfect yet but the difference between them has been greatly decreased. As a result, the conclusions drawn from the analysis will not be as biased as in the previous case.
5. Balancing the groups further
Looking at all the features, we see that the one that needs to be improved in priority is the black feature.
End of explanation
# social status
plot_1binary_(treat_group_b, control_group_b, 'Age and Marriage analysis', 'age', 'Age', 'upper right',
'married', 'Married', 'upper center')
# school background
plot_1binary_(treat_group_b, control_group_b, 'Education and School analysis', 'educ', 'Year of education', 'upper right',
'nodegree', 'No earned school degree', 'upper center')
# ethnic origin
plot_2binary_(treat_group_b, control_group_b, 'Ethnic origin analysis', 'black', 'Black race', 'upper center',
'hispan', 'Hispanic race', 'upper center')
# real earnings
plot_(treat_group_b, control_group_b, 'Earnings analysis', 're74', 'Real earnings from 1974', 'upper right',
're75', 'Real earnings from 1975', 'upper right')
Explanation: Let's analyze the results of the different features now that we have dealt with the black feature.
End of explanation
plot_re78(treat_group_b, control_group_b)
boxplot_re78(treat_group_b, control_group_b)
num_people_treat_b = treat_group_b['re78'].shape[0]
num_people_control_b = control_group_b['re78'].shape[0]
print('Number of people in treat group: ', num_people_treat_b, '\nNumber of people in control group: ', num_people_control_b)
Explanation: We see that processing the black feature has also enhanced the distribution of the social status and the school background features. The earnings features do not seem to have changed drastically; however, the attentive reader will notice a slightly worse distribution. Therefore, we can conclude that the two groups are now better balanced.
6. A less naive analysis
End of explanation
news = fetch_20newsgroups(subset='all')
Explanation: In the histogram of the earnings of 1978, we see that there are more people from the control group with a very low salary (first bin). In the boxplot, we see that the mean salary is higher in the treat group. The value of the third quantile is also higher. Hence, we can conclude (with more certainty than in part 1) that the job training program has a positive, albeit modest, effect on salaries.
Question 2
End of explanation
X = news.data
Y = news.target
# Split data into a training and a testing set using stratified sampling
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,
test_size=0.1, random_state=7, stratify=Y)
# Create a stratified 10-fold cross-validation indices generator
SKF = StratifiedKFold(10, shuffle=True, random_state=7)
# Fit a TF-IDF vectorizer on the training set only (indeed we don't want to leak information from the testing
# set in the training set)
vectorizer = TfidfVectorizer().fit(X_train)
# We vectorize the textual data using the vectorizer (fitted only on the training set) for the testing and
# training sets
X_train = vectorizer.transform(X_train)
X_test = vectorizer.transform(X_test)
Explanation: Firstly, we split the dataset into a training and a testing set. We build a stratified 10-fold cross validation indices generator and finally, we vectorize the (textual) data by fitting a TF-IDF vectorizer on the training set.
End of explanation
# We start by fitting a Random Forest model on the training set using the default parameters
model = RandomForestClassifier()
model.fit(X_train, Y_train)
# We compute our prediction on the test set
pred = model.predict(X_test)
# We compute our accuracy based on our prediction on the testing set
print('Accuracy on the testing set with default parameters: {0:.2f}%' .format(accuracy_score(pred, Y_test)*100))
Explanation: Here, we fit a Random Forest model with just the default parameters and we compute our accuracy on the testing set.
End of explanation
## We choose a domain for each parameter we want to optimize
## For computation time consideration we restricted ourselves to a grid from 0 to 200 for each parameter
## with a step of 20
#tuned_parameters = {
# 'n_estimators': list(range(0, 201, 20))[1:],
# 'max_depth': list(range(0, 201, 20))[1:]
#}
#
## We then build a Grid Search object with our stratified cross validation indices generator precedently created
## So for each combination of parameters of the grid we'll run a 10-fold cross validation
#clf = GridSearchCV(RandomForestClassifier(), tuned_parameters, cv=SKF,
# scoring='accuracy', n_jobs=15, refit=False,
# return_train_score=True, verbose=2)
#
## We run the grid search
#clf.fit(X_train, Y_train)
#
## We save it since it's a relatively long operation
#pd.DataFrame(clf.cv_results_).to_csv('grid_search.csv')
# Code to show the results of the grid search through a heat map
res = pd.read_csv('grid_search.csv')
params = list(map(eval, res.params.values))
params = pd.DataFrame(params)
hm = pd.concat([params, res.mean_test_score, res.rank_test_score], axis=1)
best_params = hm[hm['rank_test_score'] == 1][['max_depth', 'n_estimators', 'mean_test_score']]
max_depth_labs = hm['max_depth'].unique()
nb_est_labs = hm['n_estimators'].unique()
mat = np.zeros(shape=(len(max_depth_labs), len(nb_est_labs)))
for i, depth in enumerate(max_depth_labs):
for j, n_est in enumerate(nb_est_labs):
mat[i, j] = hm[(hm['max_depth']==depth) & (hm['n_estimators']==n_est)]['mean_test_score'].values[0]
mat = pd.DataFrame(mat, index=hm.max_depth.unique(), columns=hm.n_estimators.unique())
plt.figure(figsize=(10, 10))
sns.heatmap(mat, square=True, annot=True)
plt.title('Heatmap: mean of cross-validation accuracy')
plt.show()
best_params
# Here we refit the model on the best parameter found using grid search
model = RandomForestClassifier(max_depth=best_params.max_depth.values[0],
n_estimators=best_params.n_estimators.values[0])
model.fit(X_train, Y_train)
pred = model.predict(X_test)
# We show the accuracy on the testing set for the optimal parameters
print('Accuracy on the testing set with optimal parameters: {0:.2f}%' .format(accuracy_score(pred, Y_test)*100))
Explanation: We commented the cell below since its execution takes a long time. We stored the results into a csv. If you want to re-run the code, you can just uncomment this cell. We optimized the parameters of the Random Forest model using a Grid Search with 10-fold stratified cross validation. Then, we show the results in the form of a heatmap. For computing time considerations, we restricted ourselves to a domain from 0 to 200 with a step of 20 for the max_depth and n_estimators parameters. The best result we got was at max_depth=140 and n_estimators=200. Unfortunately, the optimum values were on the "border" of the grid, which suggests that we could get even better parameters by extending the domain of our grid. We didn't do it since it takes a long time, especially for models with high-valued parameters (it increases the complexity and thus the time to fit).
End of explanation
cm = confusion_matrix(Y_test, pred)
cm = pd.DataFrame(cm, index=news.target_names,
columns=news.target_names)
cm = cm.div(cm.sum(axis=1), axis=0)
plt.figure(figsize=(15, 15))
sns.heatmap(cm, square=True, annot=True, linewidths=0.1)
plt.title('Confusion matrix on testing set')
plt.show()
Explanation: Now we show the confusion matrix on the testing set. Here, it's interesting to see how the different classes are confused by the model. For instance, if we take the talk.religion.misc class, we can see that it is poorly classified (only 54% accuracy) and it's mainly confused with the class soc.religion.christian. This confusion makes sense since these two classes seem to be strongly related. Another example is the talk.politics.misc which is confused the most with talk.politics.gun which also makes sense, the two topics seem to be close and it's harder for the model to discriminate them.
End of explanation
# Get a mapping from index to word
idx_to_word = {v:k for k,v in vectorizer.vocabulary_.items()}
# Get feature importance
importances = model.feature_importances_
df = pd.DataFrame([{
'word': idx_to_word[feature],
'feature': feature,
'importance': importances[feature]
} for feature in range(len(importances))])
df = df.sort_values('importance', ascending=False).reset_index()
del df['index']
print('Top 10 most important features')
print(df.iloc[:10])
print()
print('Top 10 least important features')
print(df.iloc[-10:])
fig = df.importance.plot(rot=45)
fig.autoscale(tight=False)
plt.title('Plot of the importance of features ordered from the most important to the least')
plt.show()
fig = df.importance.iloc[:10000].plot(rot=45)
fig.autoscale(tight=False)
plt.title('Same plot but "zoomed" to the 10000 first most important features')
plt.show()
kept_words = [idx_to_word[f] for f in np.argsort(importances)[::-1][:2000]]
X = news.data
Y = news.target
# Split data into a training and a testing set using stratified sampling
X_train, X_test, Y_train, Y_test = train_test_split(X, Y,
test_size=0.1, random_state=7, stratify=Y)
# Create a stratified 10-fold cross-validation indices generator
SKF = StratifiedKFold(10, shuffle=True, random_state=7)
# Fit a TF-IDF vectorizer on the training set only (indeed we don't want to leak information from the testing
# set in the training set)
vectorizer = TfidfVectorizer(vocabulary=kept_words).fit(X_train)
# We vectorize the textual data using the vectorizer (fitted only on the training set) for the testing and
# training sets
X_train = vectorizer.transform(X_train)
X_test = vectorizer.transform(X_test)
model = RandomForestClassifier(max_depth=best_params.max_depth.values[0],
n_estimators=best_params.n_estimators.values[0])
model.fit(X_train, Y_train)
pred = model.predict(X_test)
print('Accuracy on the testing set by keeping the first 2000 most important features: {0:.2f}%' .format(accuracy_score(pred, Y_test)*100))
Explanation: Here, we explore the importance of the different features. First, we show the top 10 most and least important features. As we can see, words which represent clearly a specific topic/context are the most important ones. For instance "clipper", "bike", "car", "sale", ... are words that clearly mark a specific context (car for automobile). There are however some exceptions like "the", "of", ... <br />
The least significant ones are the outliers like we can see in the top 10 least important features: "62494hj6t8yv", "jarlehto", "6245". Indeed, these words are likely not frequent outliers which are not really representing a specific topic. <br />
Bonus
Lastly, we tried to plot the feature importances ordered from most to least important. We see here an exponentially decaying function, which indicates that a lot of features might not be relevant. So we tried to keep the 2000 most important features and build a new model with the same parameters as the optimal ones we found with Grid Search. Doing so, we still have a model with good accuracy (only 3% below the other ones). Conclusion: by keeping a bit more than 1% of the data we lost only 3% of accuracy. This shows that a few words matter for classifying the topics well, and there are a lot of words which bring only a tiny part of the information needed.
End of explanation |
6,731 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
load doc into memory
| Python Code::
def load_doc(filename):
# open the file as read only
file = open(filename, 'r')
# read all text
text = file.read()
# close the file
file.close()
return text
|
6,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data analysis tools
For Week 1 assignment I'm checking the association between race and alcohol usage using the NESARC data
Null hypothesis (H0) - there is no difference in alcohol usage between races.
Alternative hypothesis (Ha) - there is a difference in alcohol usage between races
Step1: Then OLS regression test is run
Step2: And as Prob (F-statistic) is less than 0.05, I can reject the null hypothesis.
Following block gives means and std deviations
Step3: Tukey's HSD post hoc test | Python Code:
import numpy
import pandas
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
data = pandas.read_csv('nesarc_pds.csv', low_memory=False)
# S2AQ8A - HOW OFTEN DRANK ANY ALCOHOL IN LAST 12 MONTHS (99 - Unknown)
# S2AQ8B - NUMBER OF DRINKS OF ANY ALCOHOL USUALLY CONSUMED ON DAYS WHEN DRANK ALCOHOL IN LAST 12 MONTHS (99 - Unknown)
# S2AQ3 - DRANK AT LEAST 1 ALCOHOLIC DRINK IN LAST 12 MONTHS
#setting variables you will be working with to numeric
data['S2AQ8A'] = data['S2AQ8A'].convert_objects(convert_numeric=True)
data['S2AQ8B'] = data['S2AQ8B'].convert_objects(convert_numeric=True)
data['S2AQ3'] = data['S2AQ3'].convert_objects(convert_numeric=True)
#subset data to adults aged 19 to 34 who have drunk alcohol in the past 12 months
subset=data[(data['AGE']>=19) & (data['AGE']<=34) & (data['S2AQ3']==1)]
subset['S2AQ8A']=subset['S2AQ8A'].replace(99, numpy.nan)
subset['S3BD4Q2DR']=subset['S3BD4Q2DR'].replace(99, numpy.nan)
alcohol_usage_map = {
1: 365,
2: 330,
3: 182,
4: 104,
5: 52,
6: 30,
7: 12,
8: 9,
9: 5,
10: 2,
}
subset['ALCO_FREQMO'] = subset['S2AQ8A'].map(alcohol_usage_map)
#converting new variable ALCO_FREQMO to numeric
subset['ALCO_FREQMO'] = subset['ALCO_FREQMO'].convert_objects(convert_numeric=True)
subset['ALCO_NUM_EST'] = subset['ALCO_FREQMO'] * subset['S2AQ8B']
ct1 = subset.groupby('ALCO_NUM_EST').size()
subset_race = subset[['ALCO_NUM_EST', 'ETHRACE2A']].dropna()
Explanation: Data analysis tools
For Week 1 assignment I'm checking the association between race and alcohol usage using the NESARC data
Null hypothesis (H0) - there is no difference in alcohol usage between races.
Alternative hypothesis (Ha) - there is a difference in alcohol usage between races
End of explanation
# using ols function for calculating the F-statistic and associated p value
model1 = smf.ols(formula='ALCO_NUM_EST ~ C(ETHRACE2A)', data=subset_race)
results1 = model1.fit()
print (results1.summary())
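# Illustrative aside (not in the original write-up): the overall F-test and its p-value quoted from the
# summary can also be read off the fitted results object directly via its fvalue / f_pvalue attributes.
print ('F-statistic:', results1.fvalue)
print ('Prob (F-statistic):', results1.f_pvalue)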
Explanation: Then OLS regression test is run
End of explanation
print ('means for ALCO_NUM_EST by race')
m2= subset_race.groupby('ETHRACE2A').mean()
print (m2)
print ('standard dev for ALCO_NUM_EST by race')
sd2 = subset_race.groupby('ETHRACE2A').std()
print (sd2)
Explanation: And as Prob (F-statistic) is less than 0.05, I can reject the null hypothesis.
Following block gives means and std deviations:
End of explanation
mc1 = multi.MultiComparison(subset_race['ALCO_NUM_EST'], subset_race['ETHRACE2A'])
res1 = mc1.tukeyhsd()
print(res1.summary())
Explanation: Tukey's HSD post hoc test
End of explanation |
6,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Python Homework
Write a line of code that stores the value of the $atan(5)$ in the variable y.
Step1: In words, what the math.ceil and math.floor functions do?
math.ceil rounds a number up to the nearest integer; math.floor rounds it down to the nearest integer.
Store the result of $5x^{4} - 3x^{2} + 0.5x - 20$ in the variable y, where $x$ is 2.
Step2: Construct a conditional that prints $x$ if it is smaller than 20 but greater than -5
Step3: Construct a conditional that prints $x$ if it is not between 5 and 12.
Step4: What will the following code print? (Don't just copy and paste -- reason through it!)
It will print c. x does not meet the first two conditionals (x > 2 or x < 0, and not x == 2; both are false for x = 2), so it will reach the x == 2 check. Since it equals 2, this will print c and skip the last line printing d.
Write a loop that prints every 5th number between 1 and 2000. (HINT: try help(range))
Step5: What will the following program print out?
It will print 0 to 9 and then -10 to -99.
Write a loop that calculates a Riemann sum for $x^2$ for $x \in [-5,5]$.
Step6: Create a list of all integers between 2 and 30.
Step7: Create a list called some_list that has all sin(x) for x $\in$ [$0$, $\pi/4$, $\pi /2$, $3 \pi /4$... $2 \pi$].
Step8: + Create a list called some_list from -100 to 0 and then replace every even number with its positive value. (You'll want to use the (modulo operator)[https://stackoverflow.com/questions/4432208/how-does-work-in-python]).
Step9: Write a loop that creates a dictionary that uses the letters A-E as keys to the values 0-4.
Step10: Create a $3 \times 3$ numpy array with the integers 0-8
Step11: Repeat the exercise above using a numpy array. Use a numpy array to calculate all sin(x) for x $\in$ [$0$, $\pi/4$, $\pi /2$, $3 \pi /4$... $2 \pi$].
Step12: Write a function that takes a string and returns it in all uppercase. (Hint, google this one)
Step13: Use matplotlib to plot $sin(x)$ for x $\in$ [$0$, $\pi/4$, $\pi /2$, $3 \pi /4$... $2 \pi$]. Use both orange points and a green line.
Step14: You measure the doubling times of bacterial strains A-D under identical conditions.
| strain | doubling time (min) |
|
Step15: Create a dictionary called doubling that keys the name of each strain to its population after 12 hours.
Step16: Use matplotlib to create a single graph that shows $N(t)$ for all four bacterial strains from 0 to 18 hr. Make sure you label your axes appropriately. | Python Code:
import numpy as np
y = np.arctan(5)
Explanation: Intro to Python Homework
Write a line of code that stores the value of the $atan(5)$ in the variable y.
End of explanation
x = 2
y = 5*(x**4) - 3*x**2 + 0.5*x - 20
Explanation: In words, what the math.ceil and math.floor functions do?
math.ceil rounds a number up to the nearest integer; math.floor rounds it down to the nearest integer.
Store the result of $5x^{4} - 3x^{2} + 0.5x - 20$ in the variable y, where $x$ is 2.
End of explanation
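# Quick illustrative check of the ceil/floor answer above (added sketch, not part of the original homework)
import math
print(math.ceil(2.1), math.floor(2.9))   # 3 2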
if x > -5 and x < 20:
print(x)
Explanation: Construct a conditional that prints $x$ if it is smaller than 20 but greater than -5
End of explanation
if x <= 5 or x >= 12:  # 'or', not 'and': no value can be both <= 5 and >= 12
print(x)
Explanation: Construct a conditional that prints $x$ if it is not between 5 and 12.
End of explanation
for i in range(1,2001,5):
print(i)
Explanation: What will the following code print? (Don't just copy and paste -- reason through it!)
It will print c. x does not meet the first two conditionals (x > 2 or x < 0, and not x == 2; both are false for x = 2), so it will reach the x == 2 check. Since it equals 2, this will print c and skip the last line printing d.
Write a loop that prints every 5th number between 1 and 2000. (HINT: try help(range))
End of explanation
# left hand integral
dx = 1
integral = 0
for x in range(-5,5):
integral = integral + dx*x**2
print(integral)
## A better, higher accuracy way
dx = 0.001
midpoints = np.arange(-5,5,dx) + dx/2
print(np.sum(midpoints**2)*dx)
Explanation: What will the following program print out?
It will print 0 to 9 and then -10 to -99.
Write a loop that calculates a Riemann sum for $x^2$ for $x \in [-5,5]$.
End of explanation
some_list = []
for i in range(2,31):
some_list.append(i)
## another (better!) way is to cast it; faster
some_list = list(range(2,31))
Explanation: Create a list of all integers between 2 and 30.
End of explanation
import numpy as np
some_list = []
for i in range(0,9):
some_list.append(np.sin(np.pi*i/4))
Explanation: Create a list called some_list that has all sin(x) for x $\in$ [$0$, $\pi/4$, $\pi /2$, $3 \pi /4$... $2 \pi$].
End of explanation
some_list = []
for i in range(-100,1):
if i % 2:
some_list.append(i)
else:
some_list.append(-i)
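## An equivalent one-liner (added for illustration), using the same modulo test
some_list = [i if i % 2 else -i for i in range(-100, 1)]
print(some_list[:5])   # [100, -99, 98, -97, 96]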
Explanation: + Create a list called some_list from -100 to 0 and then replace every even number with its positive value. (You'll want to use the (modulo operator)[https://stackoverflow.com/questions/4432208/how-does-work-in-python]). The output should look like:
```python
print(some_list)
[100,-99,98,-97,96,...,0]
```
End of explanation
letters = "ABCDE"
some_dict = {}
for i in range(5):
some_dict[letters[i]] = i
## A different way using a cool function called enumerate
some_dict = {}
for number, letter in enumerate("ABCDE"):
some_dict[letter] = number
## Or even MORE compact using list comprehension
some_dict = dict([(letter,number) for number, letter in enumerate("ABCDE")])
Explanation: Write a loop that creates a dictionary that uses the letters A-E as keys to the values 0-4.
End of explanation
some_list = [[0,1,2],[3,4,5],[6,7,8]]
for i in range(3):
for j in range(3):
some_list[i][j] = some_list[i][j]*5
some_list[i][j] = np.log(some_list[i][j])
total = 0
for j in range(3):
total = total + some_list[j][2]
print(total)
Explanation: Create a $3 \times 3$ numpy array with the integers 0-8:
python
[[0,1,2],
[3,4,5],
[6,7,8]]
Multiply the whole array by 5 and then take the natural log of all values (elementwise). What is the sum of the right-most column?
End of explanation
some_array = np.array([[0,1,2],[3,4,5],[6,7,8]],dtype=int)
## OR
some_array = np.zeros((3,3),dtype=int)
total = 0
for i in range(3):
for j in range(3):
some_array[i,j] = total
total += 1
## OR (probably most efficient of the set)
some_array = np.array(range(9),dtype=int)
some_array = some_array.reshape((3,3))
print(np.sum(np.log((5*some_array))[:,2]))
np.log(some_array*5)
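## The step above also asks for sin(x) with a numpy array; a minimal sketch (variable name x_vals is mine)
x_vals = np.arange(0, 2.25*np.pi, 0.25*np.pi)
print(np.sin(x_vals))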
Explanation: Repeat the exercise above using a numpy array. Use a numpy array to calculate all sin(x) for x $\in$ [$0$, $\pi/4$, $\pi /2$, $3 \pi /4$... $2 \pi$].
End of explanation
def capitalize(some_string):
return some_string.upper()
capitalize("test")
Explanation: Write a function that takes a string and returns it in all uppercase. (Hint, google this one)
End of explanation
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
x = np.arange(0,2.25*np.pi,0.25*np.pi)
y = np.sin(x)
plt.plot(x,y,"-",color="green")
plt.plot(x,y,"1",color="orange",markersize=12)
Explanation: Use matplotlib to plot $sin(x)$ for x $\in$ [$0$, $\pi/4$, $\pi /2$, $3 \pi /4$... $2 \pi$]. Use both orange points and a green line.
End of explanation
def num_bacteria(t,doubling_time):
return 2**(t/doubling_time)
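# Quick sanity check (illustrative, not part of the original assignment): after exactly one doubling
# time the population should double, and strain A (d=20 min) after 12 hours is 2**36 cells.
print(num_bacteria(20, 20))      # 2.0
print(num_bacteria(12*60, 20))   # 2**36 = 68719476736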
Explanation: You measure the doubling times of bacterial strains A-D under identical conditions.
| strain | doubling time (min) |
|:------:|:-------------------:|
| A | 20 |
| B | 25 |
| C | 39 |
| D | 53 |
Assuming you start with a single cell and have nutrients in excess, you can calculate the number of bacteria $N(t)$ in a culture after $t$ minutes according to:
$$N(t) = 2^{t/d}$$
Write a function called num_bacteria that takes the time and doubling time and returns the number of bacteria present.
End of explanation
doubling = {}
doubling["A"] = num_bacteria(12*60,20)
doubling["B"] = num_bacteria(12*60,25)
doubling["C"] = num_bacteria(12*60,39)
doubling
Explanation: Create a dictionary called doubling that keys the name of each strain to its population after 12 hours.
End of explanation
for k in doubling.keys():
print(k)
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
t = np.arange(0,18*60+1,1)
some_dict = {"A":20.0,"B":25.0}
for k in some_dict.keys():
plt.plot(t,num_bacteria(t,some_dict[k]),".")
plt.yscale("log")
Explanation: Use matplotlib to create a single graph that shows $N(t)$ for all four bacterial strains from 0 to 18 hr. Make sure you label your axes appropriately.
End of explanation |
6,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
data from first 5 rows
Step1: instantiate KNN classifier
Step2: see default settings
Step3: Get labels for prediction
Step4: using different value of K
Step5: applying logistic regression
Step6: Okay, we've been working on using the whole dataset. Now we're going to divide the dataset into training and testing
dataset to estimate the accuracy of the model
Step7: above is classification accuracy on whole dataset, let's try train/split
Step8: using knn
Step9: best value of K to select is 5<K<17
=> K = (5+17)/2 = 11
CROSS VALIDATION TO GET BEST GUESS FOR OUT OF SAMPLE ACCURACY
Step10: we take largest K to get least complex model
COMPARING MODELS
Step11: USING GRIDSEARCH TO GET BEST PARAMETERS
Step12: searching multiple parameters
Step13: USING RANDOMCV SEARCH. USEFUL WHEN COMPUTATION OF GRIDCV IS TOO COSTLY | Python Code:
iris['data'][:5]
print(iris['DESCR'] + "\n")
iris['data'].shape
iris['target'].shape
X= iris['data']
y= iris['target']
Explanation: data from first 5 rows
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn= KNeighborsClassifier(n_neighbors=1)
Explanation: instantiate KNN classifier
End of explanation
print knn
knn.fit(X, y)
knn.predict([[3, 4, 5, 2]])  # predict expects a 2D array: one row per sample
X_new= ([3, 4, 5, 2], [4, 3, 2, 0.1])
prediction_1= knn.predict(X_new)
prediction_1
Explanation: see default settings
End of explanation
iris['target_names'][prediction_1]
Explanation: Get labels for prediction
End of explanation
knn= KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)
knn.predict(X_new)
Explanation: using different value of K
End of explanation
from sklearn.linear_model import LogisticRegression
logreg= LogisticRegression()
logreg.fit(X, y)
logreg.predict(X_new)
Explanation: applying logistic regression
End of explanation
y_pred= logreg.predict(X)
len(y_pred)
from sklearn import metrics
print metrics.accuracy_score(y_pred, y)
Explanation: Okay, we've been working on using the whole dataset. Now we're going to divide the dataset into training and testing
dataset to estimate the accuracy of the model
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size= 0.4, random_state= 4)
print X_train.shape
print X_test.shape
print
print y_train.shape
print y_test.shape
logreg = LogisticRegression()
logreg.fit(X_train, y_train)
y_pred= logreg.predict(X_test)
print metrics.accuracy_score(y_test, y_pred)
Explanation: above is classification accuracy on whole dataset, let's try train/split
End of explanation
k_range= range(1, 30)
scores= []
for k in k_range:
knn= KNeighborsClassifier(n_neighbors= k)
knn.fit(X_train, y_train)
y_pred= knn.predict(X_test)
scores.append(metrics.accuracy_score(y_test, y_pred))
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(k_range, scores)
plt.xlabel('K value in KNN')
plt.ylabel('Testing accuracy')
Explanation: using knn
End of explanation
from sklearn.cross_validation import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
knn= KNeighborsClassifier(n_neighbors=5)
scores= cross_val_score(knn, X, y, cv=10, scoring= 'accuracy')
print scores
print
print "The final cv score is {}".format(scores.mean())
k_range= range(1, 31)
k_scores= []
for k in k_range:
knn= KNeighborsClassifier(n_neighbors=k)
scores= cross_val_score(knn, X, y, cv=10, scoring= 'accuracy')
k_scores.append(scores.mean())
print k_scores
plt.plot(k_range, k_scores)
plt.xlabel('value of K in knn')
plt.ylabel('cross validated accuracy')
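# Illustrative addition (not in the original notebook): the "best" K can also be read off
# programmatically from the cross-validated scores computed above
best_k = k_range[k_scores.index(max(k_scores))]
print best_k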
Explanation: best value of K to select is 5<K<17
=> K = (5+17)/2 = 11
CROSS VALIDATION TO GET BEST GUESS FOR OUT OF SAMPLE ACCURACY
End of explanation
knn= KNeighborsClassifier(n_neighbors=20)
print cross_val_score(knn, X, y, cv=10, scoring='accuracy').mean()
logreg= LogisticRegression()
print cross_val_score(logreg, X, y, cv=10, scoring='accuracy').mean()
Explanation: we take largest K to get least complex model
COMPARING MODELS
End of explanation
from sklearn.grid_search import GridSearchCV
k_range= range(1, 31)
print k_range
param_grid= dict(n_neighbors= k_range)
print param_grid
grid= GridSearchCV(knn, param_grid=param_grid, cv=10, scoring='accuracy')
grid.fit(X,y)
grid.grid_scores_
grid_mean_scores= [result.mean_validation_score for result in grid.grid_scores_]
print grid_mean_scores
plt.plot(k_range, grid_mean_scores)
plt.xlabel("value of k in knn")
plt.ylabel("grid mean score")
print grid.best_score_
print "=========================================================="
print grid.best_params_
print "=========================================================="
print grid.best_estimator_
print "=========================================================="
Explanation: USING GRIDSEARCH TO GET BEST PARAMETERS
End of explanation
k_range= range(1, 31)
weight_options= ['uniform', 'distance']
param_grid= dict(n_neighbors= k_range, weights= weight_options)
print param_grid
grid= GridSearchCV(knn, param_grid, cv=10, scoring='accuracy')
grid.fit(X, y)
grid.grid_scores_
print grid.best_score_
print "=========================================================="
print grid.best_params_
print "=========================================================="
print grid.best_estimator_
print "=========================================================="
pred_new= grid.predict([[3, 4, 5, 2]])
print pred_new
iris['target_names'][pred_new]
Explanation: searching multiple parameters
End of explanation
from sklearn.grid_search import RandomizedSearchCV
param_grid= dict(n_neighbors= k_range, weights= weight_options)
rand = RandomizedSearchCV(knn, param_grid, cv=10, n_iter= 10, random_state= 5, scoring='accuracy')
rand.fit(X, y)
rand.grid_scores_
print rand.best_score_
print "=========================================================="
print rand.best_params_
print "=========================================================="
print rand.best_estimator_
print "=========================================================="
Explanation: USING RANDOMCV SEARCH. USEFUL WHEN COMPUTATION OF GRIDCV IS TOO COSTLY
End of explanation |
6,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Assignment 2
Step3: Warmups
We'll start by implementing some simpler search and optimization methods before the real exercises.
Warmup 1
Step7: Warm-up 2
Step9: Warmup 3
Step13: Warmup 4
Step15: Exercises
The following exercises will require you to implement several kinds of bidirectional and tridirectional searches.
For the following exercises, we will be using Atlanta OpenStreetMap data for our searches. If you want to run tests in iPython notebook using this data (rather than just calling the tests in search_tests), you'll need to load the data from file in the cell below.
Step17: Visualizing search results
When using a geographic network, you may want to visualize your searches. We can do this by converting the search results to a GeoJSON file which we then visualize on Gist by importing the file.
We provide a method for doing this in visualize_graph.py called plot_search(), which takes as parameters the graph, the name of the file to write, the nodes on the path, and the set of all nodes explored. This produces a GeoJSON file named as specified, which you can upload to Gist to visualize the search path and explored nodes.
Step19: Exercise 1
Step21: Exercise 2
Step23: Exercise 3
Step25: Exercise 4
Step27: Race!
Here's your chance to show us your best stuff. This part is mandatory if you want to compete in the race for extra credit. Implement custom_search() using whatever strategy you'd like. Remember that 'goals' will be a list of the three nodes between which you should route. | Python Code:
from __future__ import division
import random
import matplotlib.pyplot as plt
import pickle
import sys
sys.path.append('lib')
import networkx
Romania map data from Russell and Norvig, Chapter 3.
romania = pickle.load(open('romania_graph.pickle', 'rb'))
Explanation: Assignment 2: Graph Search
In this assignment you will be implementing a variety of graph search algorithms, with the eventual goal of solving tridirectional search.
Before you start, you will need to install:
networkx, which is a package for processing networks. This assignment will be easier if you take some time to test out and get familiar with the basic methods of networkx.
matplotlib for basic network visualization.
We will be using two undirected networks for this assignment: a simplified map of Romania (from Russell and Norvig) and a full street map of Atlanta.
End of explanation
import heapq
class PriorityQueue():
Implementation of a priority queue
to store nodes during search.
# HINT look up the module heapq.
def __init__(self):
self.queue = []
self.current = 0
def next(self):
if self.current >=len(self.queue):
self.current
raise StopIteration
out = self.queue[self.current]
self.current += 1
return out
def pop(self):
return heapq.heappop(self.queue)
def __iter__(self):
return self
def __str__(self):
return 'PQ:[%s]'%(', '.join([str(i) for i in self.queue]))
def append(self, node):
heapq.heappush(self.queue, node)
#def push(self,cost,node,path):
# TODO: finish this
def __contains__(self, key):
self.current = 0
return key in [n for v,n in self.queue]
def __eq__(self, other):
return self == other
def clear(self):
self.queue = []
#def has_element(self, element):
__next__ = next
from search_tests import priority_queue_tests
priority_queue_tests(PriorityQueue)
Explanation: Warmups
We'll start by implementing some simpler search and optimization methods before the real exercises.
Warmup 1: Priority queue
5 points
In all searches that involve calculating path cost or heuristic (e.g. uniform-cost), we have to order our search frontier. It turns out the way that we do this can impact our overall search runtime.
To show this, you'll implement a priority queue and demonstrate its performance benefits. For large graphs, sorting all input to a priority queue is impractical. As such, the datastructure you implement should have an amortized O(1) insertion and removal time. It should do better than the queue you've been provided in InsertionSortQueue().
Hints:
1. The heapq module has been imported for you.
2. Each edge has an associated weight.
End of explanation
pathes = pickle.load(open('romania_test_paths.pickle', 'r'))
pathes[('z', 'z')]
#pq.length
def breadth_first_search_goal(graph, start, goal):
Run a breadth-first search from start
to goal and return the goal as well as
all nodes explored.
frontiers = [start]
explored = []
while frontiers:
current = frontiers.pop(0)
explored.append(current)
edges = graph.edges(current)
for i in edges:
if i[1]!=goal:
if not(i[1] in explored):
frontiers.append(i[1])
else:
break
return goal, explored
def breadth_first_search(graph, start, goal):
Run a breadth-first search from start
to goal and return the path as well as
all nodes explored.
path = []
frontiers = [start]
explored = []
parent = {}
flag = 0
while frontiers and flag==0:
current = frontiers.pop(0)
if current == goal:
flag = 1
break
explored.append(current)
edges = graph.edges(current)
for i in edges:
if i[1]!=goal:
if not(i[1] in explored) and not(i[1] in frontiers):
parent[i[1]] = current
frontiers.append(i[1])
else:
parent[i[1]] = current
flag=1
break
if start!=goal:
path.append(goal)
else:
return None, explored
while path[-1] != start:
path.append(parent[path[-1]])
path.reverse()
return path, explored
# This function exists to help you visually debug your code.
# Feel free to modify it in any way you like.
# graph should be a networkx graph
# node_positions should be a dictionary mapping nodes to x,y coordinates
def draw_graph(graph, node_positions={}, start=None, goal=None, explored=[], path=[]):
if not node_positions:
node_positions = networkx.spring_layout(graph)
networkx.draw_networkx_nodes(graph, node_positions)
networkx.draw_networkx_edges(graph, node_positions, style='dashed')
networkx.draw_networkx_nodes(graph, node_positions, nodelist=explored, node_color='g')
if path:
edges = [(path[i], path[i+1]) for i in range(0, len(path)-1)]
networkx.draw_networkx_edges(graph, node_positions, edgelist=edges, edge_color='b')
if start:
networkx.draw_networkx_nodes(graph, node_positions, nodelist=[start], node_color='b')
if goal:
networkx.draw_networkx_nodes(graph, node_positions, nodelist=[goal], node_color='y')
labels={}
for node in romania.nodes():
labels[node] = node
networkx.draw_networkx_labels(graph,node_positions,labels,font_size=16)
plt.plot()
plt.show()
%pdb off
#path, explored = breadth_first_search(romania, 'n', 'o')
#print path
#print explored
#pathes[('n', 'o')]
Testing and visualizing breadth-first search
in the notebook.
start = 'n'
goal = 'o'
#%debug
#%debug --breakpoint search_tests.py:58
#%run -d
goal, explored = breadth_first_search_goal(romania, start, goal)
path, explored = breadth_first_search(romania, start, goal)
node_locations = {n: romania.node[n]['position'] for n in romania.node.keys()}
draw_graph(romania, node_locations, start, goal, explored, path)
#print explored
#print path
from search_tests import bfs_tests
bfs_tests(breadth_first_search)
Explanation: Warm-up 2: BFS
5 pts
To get you started with handling graphs in networkx, implement and test breadth-first search over the test network. You'll do this by writing two methods:
1. breadth_first_search_goal, which returns the goal node and the set of all nodes explored, but no path.
2. breadth_first_search, which returns the path and the set of all nodes explored.
Hint: networkx.edges() will return all edges connected to a given node.
End of explanation
def uniform_cost_search(graph, start, goal):
Run uniform-cost search from start
to goal and return the path, the nodes
explored, and the total cost.
frontiers = PriorityQueue()
path = []
explored = []
parent = {}
flag = 0
j=1
cost = 0
frontiers.append([0,start])
while frontiers and flag==0:
current = frontiers.pop()
if current[1] == goal:
flag = 1
break
explored.append(current[1])
edges = graph.edges(current[1])
for i in edges:
if i[1]!=goal:
if not(i[1] in explored) and not(i[1] in frontiers):
parent[i[1]] = current[1]
frontiers.append([j, i[1]])
j+=1
else:
parent[i[1]] = current[1]
flag=1
break
if start!=goal:
path.append(goal)
else:
return None, explored, 0
while path[-1] != start:
path.append(parent[path[-1]])
cost += 1
path.reverse()
return path, explored, cost
from search_tests import ucs_tests
ucs_tests(uniform_cost_search)
pathes[('z', 'h')][0]
Explanation: Warmup 3: Uniform-cost search
10 points
Implement uniform-cost search using PriorityQueue() as your frontier. From now on, PriorityQueue() should be your default frontier.
uniform_cost_search() should return the same arguments as breadth-first search: the path to the goal node, the set of all nodes explored, and the total cost of the path.
End of explanation
print romania.edge['a']['s']['weight']
print romania.node['a']['position']
def null_heuristic(graph, u, v, goal):
Return 0 for all nodes.
return 0
import numpy
def heuristic_euclid(graph, u, v, goal):
Return the Euclidean distance from
the node to the goal. u is current node,
v is node under consideration.
currPos = numpy.array(graph.node[u]['position'])
nextPos = numpy.array(graph.node[v]['position'])
goalPos = numpy.array(graph.node[goal]['position'])
#costU2Goal = numpy.linalg.norm(currPos, goalPos)
costU2V = graph.edge[u][v]['edge']
costV2Goal = numpy.linalg.norm(nextPos-goalPos)
return costU2V+costV2Goal
#raise NotImplementedError
def a_star(graph, start, goal, heuristic):
Run A* search from the start to
goal using the specified heuristic
function, and return the final path
and the nodes explored.
frontiers = PriorityQueue()
path = []
explored = []
parent = {}
cost = {}
flag = 0
frontiers.append([0,start])
while frontiers and flag==0:
current = frontiers.pop()
print ["current - ",current[1]]
if current[1] == goal:
flag = 1
break
explored.append(current[1])
edges = graph.edges(current[1])
for i in edges:
if i[1]!=goal:
if not(i[1] in explored) and (i[1] in frontiers):
parent[i[1]] = current[1]
cost[i[1]] = heuristic(graph,current[1],i[1],goal)
frontiers.append([cost[i[1]], i[1]])
else:
newCost = heuristic(graph,current[1],i[1],goal)
if i[1] in cost:
if newCost < cost[i[1]]:
cost[i[1]] = newCost
parent[i[1]] = current[1]
else:
cost[i[1]] = newCost
else:
parent[i[1]] = current[1]
cost[i[1]] = heuristic(graph,current[1],i[1],goal)
flag=1
break
if start!=goal:
path.append(goal)
else:
return None, explored, 0
while path[-1] != start:
path.append(parent[path[-1]])
path.reverse()
return path, explored
from search_tests import a_star_tests
#%debug
a_star_tests(a_star, null_heuristic, heuristic_euclid)
pathes[('z', 'h')]
Explanation: Warmup 4: A* search
10 points
Implement A* search using Euclidean distance as your heuristic. You'll need to implement heuristic_euclid() then pass that function to a_star() as the heuristic parameter. We provide null_heuristic() as a baseline heuristic to test against when calling a_star_tests().
End of explanation
Loading Atlanta map data.
atlanta = pickle.load(open('atlanta_osm.pickle','r'))
Explanation: Exercises
The following exercises will require you to implement several kinds of bidirectional and tridirectional searches.
For the following exercises, we will be using Atlanta OpenStreetMap data for our searches. If you want to run tests in iPython notebook using this data (rather than just calling the tests in search_tests), you'll need to load the data from file in the cell below.
End of explanation
Example of how to visualize search results
with two sample nodes in Atlanta.
from visualize_graph import plot_search
path, explored = bidirectional_ucs(atlanta, '69244359', '557989279')
plot_search(graph, 'atlanta_search.json', path, explored)
# then upload 'atlanta_search.json' to Gist
Explanation: Visualizing search results
When using a geographic network, you may want to visualize your searches. We can do this by converting the search results to a GeoJSON file which we then visualize on Gist by importing the file.
We provide a method for doing this in visualize_graph.py called plot_search(), which takes as parameters the graph, the name of the file to write, the nodes on the path, and the set of all nodes explored. This produces a GeoJSON file named as specified, which you can upload to Gist to visualize the search path and explored nodes.
End of explanation
def bidirectional_ucs(graph, start, goal):
Run bidirectional uniform-cost search
between start and goal, and return the path,
the nodes explored from start-initial
search, and the nodes explored from the
goal-initial search.
# TODO: finish this function
raise NotImplementedError
#return path, (start_explored, goal_explored)
from search_test import bidirectional_tests
bidirectional_tests(bidirectional_ucs)
Explanation: Exercise 1: Bidirectional uniform-cost search
15 points
Implement bidirectional uniform-cost search. Remember that this requires starting your search at both the start and end states.
This function will return the goal, the set of explored nodes from the start node's search, and the set of explored nodes from the goal node's search.
End of explanation
def bidirectional_a_star(graph, start, goal):
Run bidirectional A* search between
start and goal, and return the path from
start to goal, the nodes explored from
start-initial search, and the nodes explored
from the goal-initial search.
# TODO: finish this function
raise NotImplementedError
#return path, (start_explored, goal_explored)
Explanation: Exercise 2: Bidirectional A* search
20 points
Implement bidirectional A* search. Remember that you need to calculate a heuristic for both the start-to-goal search and the goal-to-start search.
This function will return the final search path, the set of nodes explored during the start-to-goal search, and the set of nodes explored during the goal-to-start search.
End of explanation
def tridirectional_search(graph, goals):
Run tridirectional uniform-cost search
between the goals, and return the path
and the nodes explored.
# TODO: finish this function
raise NotImplementedError
#return path, nodes_explored
Explanation: Exercise 3: Tridirectional search
20 points
Implement tridirectional search in the naive way: starting from each goal node, perform a uniform-cost search and keep expanding until two of the three searches meet.
This will return the path computed and the set of all nodes explored.
End of explanation
def tridirectional_search_advanced(graph, goals):
Run an improved tridirectional search between
goals, and return the path and nodes explored.
# TODO: finish this function
raise NotImplementedError
#return path, nodes_explored
Explanation: Exercise 4: Tridirectional search
15 points
This is the heart of the assignment. Implement tridirectional search in such a way as to consistently improve on the performance of your previous implementation. This means consistently exploring fewer nodes during your search in order to reduce runtime.
The specifics are up to you, but we have a few suggestions:
- Tridirectional A*
- choosing landmarks and precomputing reach values
- ATL (A*, landmarks, and triangle-inequality)
- shortcuts (skipping nodes with low reach values)
This function will return the path computed and the set of all nodes explored.
End of explanation
def custom_search(graph, goals):
Run your best tridirectional search between
goals, and return the path and nodes explored.
raise NotImplementedError
# return path, nodes_explored
Explanation: Race!
Here's your chance to show us your best stuff. This part is mandatory if you want to compete in the race for extra credit. Implement custom_search() using whatever strategy you'd like. Remember that 'goals' will be a list of the three nodes between which you should route.
End of explanation |
6,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: YAML component connections
We can define the netlist connections of a component by a netlist in YAML format
Note that you define the connections as instance_source.port ->
instance_destination.port so the order is important and therefore you can only
change the position of the instance_destination
For example, this coupler has the center coupling region at (100, 0)
Step5: While this one has the sbend_left_coupler sl centered at (100, 0)
Step7: You can rotate and instance specifying the angle in degrees
Step9: You can also define ports for the component
Step11: Routes problem
As we saw in routing_bundles notebooks, for routing bundles of ports we need to use a bundle router
Step13: Routes Solution
You can define several bundle_routes routed with a bundle router | Python Code:
import pp
gap = 0.2
wg_width = 0.5
length = 10
yaml = f
instances:
sl:
component: coupler_symmetric
settings:
gap: {gap}
wg_width: {wg_width}
sr:
component: coupler_symmetric
settings:
gap: {gap}
wg_width: {wg_width}
cs:
component: coupler_straight
settings:
gap: {gap}
width: {wg_width}
length: {length}
placements:
cs:
x: 100
y: 0
connections:
sl,W0: cs,W0
sr,W0: cs,E0
ports:
w0: sl,E0
w1: sl,E1
e0: sr,E0
e1: sr,E1
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
Explanation: YAML component connections
We can define the netlist connections of a component by a netlist in YAML format
Note that you define the connections as instance_source.port ->
instance_destination.port so the order is important and therefore you can only
change the position of the instance_destination
For example, this coupler has the center coupling region at (100, 0)
End of explanation
gap = 0.2
wg_width = 0.5
length = 10
yaml = f
instances:
sl:
component: coupler_symmetric
settings:
gap: {gap}
wg_width: {wg_width}
sr:
component: coupler_symmetric
settings:
gap: {gap}
wg_width: {wg_width}
cs:
component: coupler_straight
settings:
gap: {gap}
width: {wg_width}
length: {length}
placements:
sl:
x: 100
y: 0
rotation: 180
connections:
cs,W0: sl,W0
sr,W0: cs,E0
ports:
w0: sl,E0
w1: sl,E1
e0: sr,E0
e1: sr,E1
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
import pp
yaml =
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
x: 100
y: 100
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
import pp
yaml =
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
x: 100
y: 100
routes:
mmi_short,E1: mmi_long,W0
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
Explanation: While this one has the sbend_left_coupler sl centered at (100, 0)
End of explanation
import pp
yaml =
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
rotation: 180
x: 100
y: 100
routes:
mmi_short,E1: mmi_long,E0
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
Explanation: You can rotate and instance specifying the angle in degrees
End of explanation
import pp
yaml =
instances:
mmi_long:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 10
mmi_short:
component: mmi1x2
settings:
width_mmi: 4.5
length_mmi: 5
placements:
mmi_long:
rotation: 180
x: 100
y: 100
routes:
mmi_short,E1: mmi_long,E0
ports:
E0: mmi_short,W0
W0: mmi_long,W0
c = pp.component_from_yaml(yaml)
pp.show(c)
pp.plotgds(c)
r = c.routes['mmi_short,E1:mmi_long,E0']
r
r.parent.length
c.instances
c.routes
Explanation: You can also define ports for the component
End of explanation
sample_2x2_connections_problem =
name:
connections_2x2_problem
instances:
mmi_bottom:
component: mmi2x2
mmi_top:
component: mmi2x2
placements:
mmi_top:
x: 100
y: 100
routes:
mmi_bottom,E0: mmi_top,W0
mmi_bottom,E1: mmi_top,W1
def test_connections_2x2_problem():
c = pp.component_from_yaml(sample_2x2_connections_problem)
return c
c = test_connections_2x2_problem()
pp.qp(c)
pp.show(c)
Explanation: Routes problem
As we saw in routing_bundles notebooks, for routing bundles of ports we need to use a bundle router
End of explanation
import pp
sample_2x2_connections_solution =
name:
connections_2x2_problem
instances:
mmi_bottom:
component: mmi2x2
mmi_top:
component: mmi2x2
placements:
mmi_top:
x: 100
y: 100
bundle_routes:
mmis:
mmi_bottom,E0: mmi_top,W0
mmi_bottom,E1: mmi_top,W1
def test_connections_2x2_solution():
c = pp.component_from_yaml(sample_2x2_connections_solution)
return c
c = test_connections_2x2_solution()
pp.qp(c)
pp.show(c)
Explanation: Routes Solution
You can define several bundle_routes routed with a bundle router
End of explanation |
6,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Economics Simulation
This is a simulation of an economic marketplace in which there is a population of actors, each of which has a level of wealth. On each time step two actors (chosen by an interaction function) engage in a transaction that exchanges wealth between them (according to a transaction function). The idea is to understand the evolution of the population's wealth over time. I heard about the problem when I visited the Bard College Computer Science Department.
<img src="money.png" width=200>
Why is this interesting?
- It is an example of using simulation to model the world. The model is simple but captures some aspects of a complex world.
- Many students will have preconceptions about how economies work that will be challenged by the results shown here.
- It reveals subtle differences between computational thinking, mathematical thinking, and statistical thinking.
Population Distributions
We will model a population as a list of N numbers, each number being one actor's wealth. We'll start with a Gaussian distribution (also known as a normal distribution or bell-shaped curve), with a mean wealth of 100 simoleons and a standard deviation of 1/5 the mean
Step1: Population Statistics and Visualization
How evenly is the wealth in a population distributed? The traditional measure is the Gini coefficient, which Wikipedia says is computed by this formula (which assumes the y values are sorted)
Step2: We'll define the function hist to plot a histogram of a population. Our hist wraps plt.hist, but with some specific keyword values
Step3: Transactions
In a transaction, two actors come together and exchange some of their wealth. For now we will use a wealth-conserving transaction function in which all the wealth from both actors is put into a pot, which is then split randomly and uniformly between the two actors
Step4: Interactions
How do we decide which parties interact with each other? We will define an interaction function that, given the size of the population, randomly selects any two actors in the populations (denoted by their index numbers in the list). We'll call this function anyone, meaning that any actor can interact with any other actor
Step5: Simulation
The function simulate takes an initial population, calls an interaction function to select two actors, and a transaction function to split their wealth, and repeats this T times. After each transaction, we yield the population, so simulate yields the complete history of the simulation.
Step6: Here is a simple example of simulating a population of 4 actors for 8 time steps
Step7: SImulation Visualization
If we want to do larger simulations we'll need a better way to visualize the results.
The function show does that
Step8: There are three parts to this output
Step9: Now we can easily make an initial population from a distribution function. I'll start with a uniform distribution
Step10: And try a constant distribution, where everyone starts out the same
Step11: The resulting histogram looks different, but only because the starting distribution is so narrow and tall; the end distribution has a Gini coefficient of about 1/2 and standard deviation of about 100, just like we get from the other starting distributions.
Here is one that statisticians call the beta distribution (with carefully chosen parameters)
Step12: Surprise
Step13: Now the results look very different
Step14: Another surprise
Step15: The status_quo transaction increases inequality from the initial population, but not as much as the other transaction functions.
Effect of Interaction Function
We have been using anyone as our interaction function
Step16: Surprise
Step17: It is still surprising that we still have no efect from restricting trade.
United States Distribution
We've drawn from mathematical distributions; let's look at the actual distribution of family income in the United States. Each row in the following table is a tuple giving the lower bound and upper bound (in thousands of dollars of income), followed by the cumulative percentage of families in the row or a previous row. The table I got this from actually had "\$250,000 or above" as the final row; I had to cut it off somewhere, and arbitrarily chose \$300,000.
Step18: Let's see what it looks like
Step19: Hey—that looks like the beta distribution. Let's compare | Python Code:
import random
N = 5000 # Default size of the population
MU = 100. # Default mean of the population
population = [random.gauss(mu=MU, sigma=MU/5) for actor in range(N)]
Explanation: Economics Simulation
This is a simulation of an economic marketplace in which there is a population of actors, each of which has a level of wealth. On each time step two actors (chosen by an interaction function) engage in a transaction that exchanges wealth between them (according to a transaction function). The idea is to understand the evolution of the population's wealth over time. I heard about the problem when I visited the Bard College Computer Science Department.
<img src="money.png" width=200>
Why is this interesting?
- It is an example of using simulation to model the world. The model is simple but captures some aspects of a complex world.
- Many students will have preconceptions about how economies work that will be challenged by the results shown here.
- It reveals subtle differences between computational thinking, mathematical thinking, and statistical thinking.
Population Distributions
We will model a population as a list of N numbers, each number being one actor's wealth. We'll start with a Gaussian distribution (also known as a normal distribution or bell-shaped curve), with a mean wealth of 100 simoleons and a standard deviation of 1/5 the mean:
End of explanation
def gini(y):
"Compute the Gini coefficient (a measure of equality/inequality) in a population, y."
y = sorted(y)
n = len(y)
numer = 2 * sum((i+1) * y[i] for i in range(n))
denom = n * sum(y)
return (numer / denom) - (n + 1) / n
Explanation: Population Statistics and Visualization
How evenly is the wealth in a population distributed? The traditional measure is the Gini coefficient, which Wikipedia says is computed by this formula (which assumes the y values are sorted):
A Gini index of 0 means total equality (everyone has the same amount), and values closer to 1 mean more inequality (most of the money in the hands of a few individuals). Here's a table of Gini coefficients for several countries:
<table>
<tr><td>Sweden <td> 0.250
<tr><td>Canada <td> 0.326
<tr><td>Switzerland <td> 0.337
<tr><td>United States<td> 0.408
<tr><td>Chile <td> 0.521
<tr><td>South Africe <td> 0.631
</table>
The Gini coefficient is traditionally computed over income, but we will be dealing with wealth. Here is the computation:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def hist(population, label='pop', **kwargs):
"A custom version of `hist` with better defaults."
label = label + ': G=' + str(round(gini(population), 2))
h = plt.hist(list(population), bins=30, alpha=0.5, label=label, **kwargs)
plt.xlabel('wealth'); plt.ylabel('count'); plt.grid(True)
plt.legend()
hist(population)
Explanation: We'll define the function hist to plot a histogram of a population. Our hist wraps plt.hist, but with some specific keyword values:
End of explanation
def random_split(A, B):
"Take all the money uin the pot and divide it randomly between the two actors."
pot = A + B
share = random.uniform(0, pot)
return share, pot - share
random_split(100, 100)
Explanation: Transactions
In a transaction, two actors come together and exchange some of their wealth. For now we will use a wealth-conserving transaction function in which all the wealth from both actors is put into a pot, which is then split randomly and uniformly between the two actors:
End of explanation
def anyone(N): return random.sample(range(N), 2)
anyone(N)
Explanation: Interactions
How do we decide which parties interact with each other? We will define an interaction function that, given the size of the population, randomly selects any two actors in the populations (denoted by their index numbers in the list). We'll call this function anyone, meaning that any actor can interact with any other actor:
End of explanation
def simulate(population, T, transaction=random_split, interaction=anyone):
"Run simulation on population for T transactions; yield (t, pop) at each time step."
population = population.copy()
yield population
for t in range(1, T + 1):
i, j = interaction(len(population))
population[i], population[j] = transaction(population[i], population[j])
yield population
Explanation: Simulation
The function simulate takes an initial population, calls an interaction function to select two actors, and a transaction function to split their wealth, and repeats this T times. After each transaction, we yield the population, so simulate yields the complete history of the simulation.
End of explanation
for pop in simulate([100] * 4, 8):
print(pop)
Explanation: Here is a simple example of simulating a population of 4 actors for 8 time steps:
End of explanation
import statistics
def show(population, k=40, percentiles=(1, 10, 50, 90, 99), **kwargs):
"Run a simulation for k*N steps, printing statistics and displaying a plot and histogram."
N = len(population)
start = list(population)
results = [(t, sorted(pop)) # Sort results so that percentiles work
for (t, pop) in enumerate(simulate(population, k * N, **kwargs))
if t % (N / 10) == 0]
times = [t for (t, pop) in results]
# Printout:
print(' t Gini stdev' + (' {:3d}%' * len(percentiles)).format(*percentiles))
print('------- ---- -----' + ' ----' * len(percentiles))
fmt = '{:7,d} {:.2f} {:5.1f}' + ' {:4.0f}' * len(percentiles)
for (t, pop) in results:
if t % (4 * N) == 0:
data = [percent(pct, pop) for pct in percentiles]
print(fmt.format(t, gini(pop), statistics.stdev(pop), *data))
# Plot:
plt.hold(True); plt.xlabel('wealth'); plt.ylabel('time'); plt.grid(True)
for pct in percentiles:
line = [percent(pct, pop) for (t, pop) in results]
plt.plot(line, times)
plt.show()
# Histogram:
R = (min(pop+start), max(pop+start))
hist(start, 'start', range=R)
hist(pop, 'end', range=R)
def percent(pct, items):
"The item that is pct percent through the sorted list of items."
return items[min(len(items)-1, len(items) * pct // 100)]
show(population)
Explanation: SImulation Visualization
If we want to do larger simulations we'll need a better way to visualize the results.
The function show does that:
End of explanation
def samples(distribution, *args, n=N, mu=MU):
"Sample from the distribution n times, then normalize results to have mean mu."
numbers = [distribution(*args) for _ in range(N)]
return normalize(numbers, mu)
def normalize(numbers, mu):
"Make the numbers non-negative, and scale them so they have mean mu."
numbers = [max(0, n) for n in numbers]
factor = len(numbers) * mu / sum(numbers)
return [x * factor for x in numbers]
Explanation: There are three parts to this output:
The printout: For the starting population and for every 10,000 transactions along the way, we
print the Gini coefficient and standard deviation of the population, and the wealths at five percentile points in the population: the 1%, 10%, 50% (median), 90% and 99% marks.
The plot: This shows the same information as the printout (except for the Gini index), but with more data points along the way. The leftmost (blue) line is the 1% mark, the rightmost (purple) is the 99% mark, and the inner lines are the 10%, 50% and 90% marks, respectively. For the plot, time goes from bottom to top rather than top to bottom. So, the 99% (purple) line starts at around 150, and over time increases to over 400, indicating that the richest 1% are getting richer. The fact that the lines are going more or less straight up after about 50,000 transactions suggests that the system has converged.
The histogram: The starting and ending populations are plotted as histograms.
The results show that income inequality is increasing over time. How can you tell? Because the Gini coefficient is increasing over time, the standard deviation is increasing, and the 1% and 10% marks are decreasing (the blue and olive lines are moving left as time increases) while the 90% and 99% marks are increasing (the aqua and purple lines are moving right as time increases).
Would the population continue to change if we let the simulation run longer? It looks like only the 1% line is changing, the other lines remain pretty much in one place from about T=15,000 to T=25,000. This suggests that running the simulation longer would not have too much effect.
Effect of Starting Population
What happens to the final result if we vary the starting population? I'll introduce the function samples to sample from a distribution function n times, normalizing the result to have the specified mean:
End of explanation
show(samples(random.uniform, 0, 200))
Explanation: Now we can easily make an initial population from a distribution function. I'll start with a uniform distribution:
End of explanation
def constant(mu=MU): return mu
show(samples(constant))
Explanation: And try a constant distribution, where everyone starts out the same:
End of explanation
def beta(): return random.betavariate(0.9, 12)
show(samples(beta))
Explanation: The resulting histogram looks different, but only because the starting distribution is so narrow and tall; the end distribution has a Gini coefficient of about 1/2 and standard deviation of about 100, just like we get from the other starting distributions.
Here is one that statisticians call the beta distribution (with carefully chosen parameters):
End of explanation
def winner_take_all(A, B): return random.choice(([A + B, 0], [0, A + B]))
show(population, transaction=winner_take_all)
Explanation: Surprise: We can confirm that the starting population doesn't matter much. I thought it would make a real difference, but we showed that three very different starting populations—Gaussian, uniform, and beta—all ended up with very similar final populations; all with G around 1/2 and standard deviation around 100. The final distribution in all three cases looks similar to the normalized beta(0.9, 12) distribution.
Effect of Transaction Function
Does the transaction function have an effect on the outcome? So far we've only used the random_split transaction function; we'll now compare that to the winner_take_all function, in which the wealth from both actors is thrown into a pot, and one of them takes all of it:
End of explanation
def redistribute(A, B, rate=0.31):
"Tax both parties at rate; split the tax revenue evenly, and randomly split the rest."
tax = rate * (A + B)
Arand, Brand = random_split(A + B - tax, 0)
return tax / 2 + Arand, tax / 2 + Brand
show(population, transaction=redistribute)
Explanation: Now the results look very different: most of the wealth goes to the 99th percentile (purple line on the far right of the plot), with everybody else getting wiped out (although the 90th percentile holds out until around 50,000 transactions). The Gini coefficient is all the way up to 0.99 and the standard deviation is over 800, and still rising.
That makes sense: any time two actors with non-zero wealth interact, one of them will end up with zero—the number of actors with zero wealth increases monotonically until all the wealth is with one actor, and from then on the wealth just gets swapped around.
At the other end of the spectrum, let's try a transaction function, redistribute, that taxes both parties 31% (the average income tax rate in the US) and splits that tax revenue evenly among the two parties; the non-taxed part is split with random_split:
End of explanation
def status_quo(A, B):
"A transaction that is most likely to leave things unchanged, but could move any amount of wealth around."
a = random.triangular(0, (A + B) / 2, A / 2)
return (A / 2 + a), (A + B) - (A / 2 + a)
show(population, transaction=status_quo)
Explanation: Another surprise: This transaction function does indeed lead to less inequality than split_randomly or winner_take_all, but surpprisingly (to me) it still increases inequality compared to the initial (Gaussian) population.
Here's one more interaction function, status_quo, in which both actors keep half of their wealth out of the transaction, and the other half is randomly split using a triangular distribution in such a way that the most likely outcome is that each actor keeps what they started with, but from there probability falls off on either side, making larger and larger deviations from the status quo less and less likely:
End of explanation
def neighborhood(n, width=5):
"Choose two agents in the same neighborhood"
i = random.randrange(n - width)
return random.sample(range(i, i + width + 1), 2)
show(population, interaction=neighborhood)
Explanation: The status_quo transaction increases inequality from the initial population, but not as much as the other transaction functions.
Effect of Interaction Function
We have been using anyone as our interaction function: anyone can enter into a transaction with anyone else. Suppose that transactions are constrained to be local—that you can only do business with your close neighbors. Will that make income more equitable, because there will be no large, global conglomorates?
End of explanation
def adjacent(n): return neighborhood(n, 1)
show(population, interaction=adjacent)
Explanation: Surprise: The neighborhood interaction is not too different from the anyone interaction.
Let's get even more local, allowing trade only with your immediate neighbor (to either side):
End of explanation
USA_table = [
(0, 10, 7.63),
(10, 20, 19.20),
(20, 30, 30.50),
(30, 40, 41.08),
(40, 50, 49.95),
(50, 60, 57.73),
(60, 70, 64.56),
(70, 80, 70.39),
(80, 90, 75.02),
(90, 100, 79.02),
(100, 110, 82.57),
(110, 120, 85.29),
(120, 130, 87.60),
(130, 140, 89.36),
(140, 150, 90.95),
(150, 160, 92.52),
(160, 170, 93.60),
(170, 180, 94.55),
(180, 190, 95.23),
(190, 200, 95.80),
(200, 250, 97.70),
(250, 300, 100.0)]
def USA():
"Sample from the USA distribution."
p = random.uniform(0, 100)
for (lo, hi, cum_pct) in USA_table:
if p <= cum_pct:
return random.uniform(lo, hi)
Explanation: It is still surprising that we still have no efect from restricting trade.
United States Distribution
We've drawn from mathematical distributions; let's look at the actual distribution of family income in the United States. Each row in the following table is a tuple giving the lower bound and upper bound (in thousands of dollars of income), followed by the cumulative percentage of families in the row or a previous row. The table I got this from actually had "\$250,000 or above" as the final row; I had to cut it off somewhere, and arbitrarily chose \$300,000.
End of explanation
hist(samples(USA), label='USA')
Explanation: Let's see what it looks like:
End of explanation
hist(samples(beta), label='beta')
hist(samples(USA), label='USA')
show(samples(USA))
Explanation: Hey—that looks like the beta distribution. Let's compare:
End of explanation |
6,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_modifying_data_inplace
Step1: It is often necessary to modify data once you have loaded it into memory.
Common examples of this are signal processing, feature extraction, and data
cleaning. Some functionality is pre-built into MNE-python, though it is also
possible to apply an arbitrary function to the data.
Step2: Signal processing
Most MNE objects have in-built methods for filtering
Step3: In addition, there are functions for applying the Hilbert transform, which is
useful to calculate phase / amplitude of your signal
Step4: Finally, it is possible to apply arbitrary functions to your data to do what you want.
Here we will use this to take the amplitude and phase of the Hilbert
transformed data. (note that you can use amplitude=True in the call to | Python Code:
from __future__ import print_function
import mne
import os.path as op
import numpy as np
from matplotlib import pyplot as plt
Explanation: .. _tut_modifying_data_inplace:
Modifying data in-place
End of explanation
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True, verbose=False)
raw = raw.crop(0, 2)
print(raw)
Explanation: It is often necessary to modify data once you have loaded it into memory.
Common examples of this are signal processing, feature extraction, and data
cleaning. Some functionality is pre-built into MNE-python, though it is also
possible to apply an arbitrary function to the data.
End of explanation
filt_bands = [(1, 3), (3, 10), (10, 20), (20, 60)]
f, (ax, ax2) = plt.subplots(2, 1, figsize=(15, 10))
_ = ax.plot(raw._data[0])
for fband in filt_bands:
raw_filt = raw.copy()
raw_filt.filter(*fband)
_ = ax2.plot(raw_filt._data[0])
ax2.legend(filt_bands)
ax.set_title('Raw data')
ax2.set_title('Band-pass filtered data')
Explanation: Signal processing
Most MNE objects have in-built methods for filtering:
End of explanation
# Filter signal, then take hilbert transform
raw_band = raw.copy()
raw_band.filter(12, 18)
raw_hilb = raw_band.copy()
hilb_picks = mne.pick_types(raw_band.info, meg=False, eeg=True)
raw_hilb.apply_hilbert(hilb_picks)
print(raw_hilb._data.dtype)
Explanation: In addition, there are functions for applying the Hilbert transform, which is
useful to calculate phase / amplitude of your signal
End of explanation
# Take the amplitude and phase
raw_amp = raw_hilb.copy()
raw_amp.apply_function(np.abs, hilb_picks, float, 1)
raw_phase = raw_hilb.copy()
raw_phase.apply_function(np.angle, hilb_picks, float, 1)
f, (a1, a2) = plt.subplots(2, 1, figsize=(15, 10))
a1.plot(raw_band._data[hilb_picks[0]])
a1.plot(raw_amp._data[hilb_picks[0]])
a2.plot(raw_phase._data[hilb_picks[0]])
a1.set_title('Amplitude of frequency band')
a2.set_title('Phase of frequency band')
Explanation: Finally, it is possible to apply arbitrary functions to your data to do what you want.
Here we will use this to take the amplitude and phase of the Hilbert
transformed data. (note that you can use amplitude=True in the call to
:func:mne.io.Raw.apply_hilbert to do this automatically).
End of explanation |
6,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An RNN model for temperature data
This time we will be working with real data
Step1: Please ignore any compatibility warnings and errors.
Make sure to <b>restart</b> your kernel to ensure this change has taken place.
Step2: Download Data
Step3: <a name="hyperparameters"></a>
<a name="assignment1"></a>
Hyperparameters
<div class="alert alert-block alert-info">
***Assignment #1*** Temperatures have a periodicity of 365 days. We would need to unroll the RNN over 365 steps (=SEQLEN) to capture that. That is way too much. We will have to work with averages over a handful of days instead of daily temperatures. Bump the unrolling length to SEQLEN=128 and then try averaging over 3 to 5 days (RESAMPLE_BY=3, 4, 5). Look at the data visualisations in [Resampling](#resampling) and [Training sequences](#trainseq). The training sequences should capture a recognizable part of the yearly oscillation.
***In the end, use these values
Step4: Temperature data
This is what our temperature dataset looks like
Step5: <a name="resampling"></a>
Resampling
Our RNN would need to be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temperatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
Step6: <a name="trainseq"></a>
Visualize training sequences
This is what the neural network will see during training.
Step7: <a name="assignment2"></a>
<div class="alert alert-block alert-info">
***Assignment #2*** Temperatures are noisy. If we ask the model to predict the next data point, noise might drown the trend and the model will not train. The trend should be clearer if we ask the model to look further ahead. You can use the [hyperparameter](#hyperparameters) N_FORWARD to shift the target sequences by more than 1. Try values between 4 and 16 and see how [training sequences](#trainseq) look.<br/>
<br/>
If the model predicts N_FORWARD in advance, you will also need it to output N_FORWARD predicted values instead of 1. Please check that the output of your model is indeed `Yout = Yr[
Step8: Instantiate the model
Step9: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
Step10: <a name="train"></a>
The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
Step11: <a name="inference"></a>
Inference
This is a generative model
Step12: <a name="valid"></a>
Validation | Python Code:
!pip install tensorflow==1.15.3
Explanation: An RNN model for temperature data
This time we will be working with real data: daily (Tmin, Tmax) temperature series from 36 weather stations spanning 50 years. It is to be noted that a pretty good predictor model already exists for temperatures: the average of temperatures on the same day of the year in N previous years. It is not clear if RNNs can do better but we will see how far they can go.
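For reference, a minimal sketch of that simple baseline predictor (hypothetical data layout: temps[year][day] holds the (Tmin, Tmax) pair for one station):
```
import numpy as np

def same_day_baseline(temps, year, day, n_years=5):
    "Predict (Tmin, Tmax) as the average of the same calendar day over the n previous years."
    history = [temps[year - k][day] for k in range(1, n_years + 1)]
    return np.mean(history, axis=0)
```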
<div class="alert alert-block alert-info">
Things to do:<br/>
<ol start="0">
<li>Run the notebook as it is. Look at the data visualisations. Then look at the predictions at the end. Not very good...
<li>First play with the data to find good values for RESAMPLE_BY and SEQLEN in hyperparameters ([Assignment #1](#assignment1)).
<li>Now implement the RNN model in the model function ([Assignment #2](#assignment2)).
<li>Temperatures are noisy, let's try something new: predicting N data points ahead instead of only 1 ahead ([Assignment #3](#assignment3)).
<li>Now we will adjust more traditional hyperparameters and add regularisations. ([Assignment #4](#assignment4))
<li>
Look at the save-restore code. The model is saved at the end of the [training loop](#train) and restored when running [validation](#valid). Also see how the restored model is used for [inference](#inference).
<br/><br/>
You are ready to run in the cloud on all 1666 weather stations. Use [this bash notebook](../run-on-cloud-ml-engine.ipynb) to convert your code to a regular Python file and invoke the Google Cloud ML Engine command line.
When the training is finished on ML Engine, change one line in [validation](#valid) to load the SAVEDMODEL from its cloud bucket and display.
</div>
End of explanation
import math
import sys
import time
import numpy as np
sys.path.insert(0, '../temperatures/utils/') #so python can find the utils_ modules
import utils_batching
import utils_args
import tensorflow as tf
from tensorflow.python.lib.io import file_io as gfile
print("Tensorflow version: " + tf.__version__)
from matplotlib import pyplot as plt
import utils_prettystyle
import utils_display
Explanation: Please ignore any compatibility warnings and errors.
Make sure to <b>restart</b> your kernel to ensure this change has taken place.
End of explanation
%%bash
DOWNLOAD_DIR=../temperatures/data
mkdir $DOWNLOAD_DIR
gsutil -m cp gs://cloud-training-demos/courses/machine_learning/deepdive/09_sequence/temperatures/* $DOWNLOAD_DIR
Explanation: Download Data
End of explanation
NB_EPOCHS = 5 # number of times the model sees all the data during training
N_FORWARD = 1 # train the network to predict N in advance (traditionnally 1)
RESAMPLE_BY = 1 # averaging period in days (training on daily data is too much)
RNN_CELLSIZE = 80 # size of the RNN cells
N_LAYERS = 1 # number of stacked RNN cells (needed for tensor shapes but code must be changed manually)
SEQLEN = 32 # unrolled sequence length
BATCHSIZE = 64 # mini-batch size
DROPOUT_PKEEP = 0.7 # probability of neurons not being dropped (should be between 0.5 and 1)
ACTIVATION = tf.nn.tanh # Activation function for GRU cells (tf.nn.relu or tf.nn.tanh)
JOB_DIR = "temperature_checkpoints"
DATA_DIR = "../temperatures/data"
# potentially override some settings from command-line arguments
if __name__ == '__main__':
JOB_DIR, DATA_DIR = utils_args.read_args1(JOB_DIR, DATA_DIR)
ALL_FILEPATTERN = DATA_DIR + "/*.csv" # pattern matches all 1666 files
EVAL_FILEPATTERN = DATA_DIR + "/USC000*2.csv" # pattern matches 8 files
# pattern USW*.csv -> 298 files, pattern USW*0.csv -> 28 files
print('Reading data from "{}".\nWriting checkpoints to "{}".'.format(DATA_DIR, JOB_DIR))
Explanation: <a name="hyperparameters"></a>
<a name="assignment1"></a>
Hyperparameters
<div class="alert alert-block alert-info">
***Assignment #1*** Temperatures have a periodicity of 365 days. We would need to unroll the RNN over 365 steps (=SEQLEN) to capture that. That is way too much. We will have to work with averages over a handful of days instead of daily temperatures. Bump the unrolling length to SEQLEN=128 and then try averaging over 3 to 5 days (RESAMPLE_BY=3, 4, 5). Look at the data visualisations in [Resampling](#resampling) and [Training sequences](#trainseq). The training sequences should capture a recognizable part of the yearly oscillation.
***In the end, use these values: SEQLEN=128, RESAMPLE_BY=5.***
</div>
End of explanation
all_filenames = gfile.get_matching_files(ALL_FILEPATTERN)
eval_filenames = gfile.get_matching_files(EVAL_FILEPATTERN)
train_filenames = list(set(all_filenames) - set(eval_filenames))
# By default, this utility function loads all the files and places data
# from them as-is in an array, one file per line. Later, we will use it
# to shape the dataset as needed for training.
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames)
evtemps, _, evdates, _, _ = next(ite) # gets everything
print('Pattern "{}" matches {} files'.format(ALL_FILEPATTERN, len(all_filenames)))
print('Pattern "{}" matches {} files'.format(EVAL_FILEPATTERN, len(eval_filenames)))
print("Evaluation files: {}".format(len(eval_filenames)))
print("Training files: {}".format(len(train_filenames)))
print("Initial shape of the evaluation dataset: " + str(evtemps.shape))
print("{} files, {} data points per file, {} values per data point"
" (Tmin, Tmax, is_interpolated) ".format(evtemps.shape[0], evtemps.shape[1],evtemps.shape[2]))
# You can adjust the visualisation range and dataset here.
# Interpolated regions of the dataset are marked in red.
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 18262 # 18262 = Dec 31st 2009
visu_temperatures = evtemps[WEATHER_STATION,START_DATE:END_DATE]
visu_dates = evdates[START_DATE:END_DATE]
utils_display.picture_this_4(visu_temperatures, visu_dates)
Explanation: Temperature data
This is what our temperature dataset looks like: sequences of daily (Tmin, Tmax) from 1960 to 2010. They have been cleaned up and eventual missing values have been filled by interpolation. Interpolated regions of the dataset are marked in red on the graph.
End of explanation
# This time we ask the utility function to average temperatures over 5-day periods (RESAMPLE_BY=5)
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames, RESAMPLE_BY, tminmax=True)
evaltemps, _, evaldates, _, _ = next(ite)
# display five years worth of data
WEATHER_STATION = 0 # 0 to 7 in default eval dataset
START_DATE = 0 # 0 = Jan 2nd 1950
END_DATE = 365*5//RESAMPLE_BY # 5 years
visu_temperatures = evaltemps[WEATHER_STATION, START_DATE:END_DATE]
visu_dates = evaldates[START_DATE:END_DATE]
plt.fill_between(visu_dates, visu_temperatures[:,0], visu_temperatures[:,1])
plt.show()
Explanation: <a name="resampling"></a>
Resampling
Our RNN would need to be unrolled across 365 steps to capture the yearly temperature cycles. That's a bit too much. We will resample the temperatures and work with 5-day averages for example. This is what resampled (Tmin, Tmax) temperatures look like.
End of explanation
# The function rnn_multistation_sampling_temperature_sequencer puts one weather station per line in
# a batch and continues with data from the same station in corresponding lines in the next batch.
# Features and labels are returned with shapes [BATCHSIZE, SEQLEN, 2]. The last dimension of size 2
# contains (Tmin, Tmax).
ite = utils_batching.rnn_multistation_sampling_temperature_sequencer(eval_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
nb_epochs=1,
tminmax=True)
# load 6 training sequences (each one contains data for all weather stations)
visu_data = [next(ite) for _ in range(6)]
# Check that consecutive training sequences from the same weather station are indeed consecutive
WEATHER_STATION = 4
utils_display.picture_this_5(visu_data, WEATHER_STATION)
Explanation: <a name="trainseq"></a>
Visualize training sequences
This is what the neural network will see during training.
End of explanation
def model_rnn_fn(features, Hin, labels, step, dropout_pkeep):
print('features: {}'.format(features))
X = features # shape [BATCHSIZE, SEQLEN, 2], 2 for (Tmin, Tmax)
batchsize = tf.shape(X)[0] # allow for variable batch size
seqlen = tf.shape(X)[1] # allow for variable sequence length
cell = tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE)
Hr, H = tf.nn.dynamic_rnn(cell,X,initial_state=Hin)
Yn = tf.reshape(Hr, [batchsize*seqlen, RNN_CELLSIZE])
Yr = tf.layers.dense(Yn, 2) # Yr [BATCHSIZE*SEQLEN, 2] predicting vectors of 2 element
Yr = tf.reshape(Yr, [batchsize, seqlen, 2]) # Yr [BATCHSIZE, SEQLEN, 2]
Yout = Yr[:,-N_FORWARD:,:] # Last N_FORWARD outputs. Yout [BATCHSIZE, N_FORWARD, 2]
loss = tf.losses.mean_squared_error(Yr, labels) # labels[BATCHSIZE, SEQLEN, 2]
optimizer = tf.train.AdamOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)
return Yout, H, loss, train_op, Yr
Explanation: <a name="assignment2"></a>
<div class="alert alert-block alert-info">
***Assignment #2*** Temperatures are noisy. If we ask the model to predict the next data point, noise might drown the trend and the model will not train. The trend should be clearer if we ask the model to look further ahead. You can use the [hyperparameter](#hyperparameters) N_FORWARD to shift the target sequences by more than 1. Try values between 4 and 16 and see how [training sequences](#trainseq) look (a tiny sketch of this shift follows after this note).<br/>
<br/>
If the model predicts N_FORWARD in advance, you will also need it to output N_FORWARD predicted values instead of 1. Please check that the output of your model is indeed `Yout = Yr[:,-N_FORWARD:,:]`. The inference part has already been adjusted to generate the sequence by blocks of N_FORWARD points. You can have a [look at it](#inference).<br/>
<br/>
Train and evaluate to see if you are getting better results. ***In the end, use this value: N_FORWARD=8***
</div>
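As a tiny illustration of the target shift described above (plain 1-D indexing only; the provided sequencer handles this for you):
```
# one training window taken from a 1-D series `data`, starting at position i
X_seq = data[i : i + SEQLEN]                          # inputs
Y_seq = data[i + N_FORWARD : i + N_FORWARD + SEQLEN]  # targets, shifted N_FORWARD steps ahead
```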
<a name="assignment3"></a>
<div class="alert alert-block alert-info">
***Assignment #3*** Try adjusting the following parameters:<ol><ol>
<li> Use a stacked RNN cell with 2 layers in the model:<br/>
```
cells = [tf.nn.rnn_cell.GRUCell(RNN_CELLSIZE) for _ in range(N_LAYERS)]
cell = tf.nn.rnn_cell.MultiRNNCell(cells, state_is_tuple=False)
```
<br/>Do not forget to set N_LAYERS=2 in [hyperparameters](#hyperparameters)
</li>
<li>Regularisation: add dropout between cell layers.<br/>
```
cells = [tf.nn.rnn_cell.DropoutWrapper(cell, output_keep_prob = dropout_pkeep) for cell in cells]
```
<br/>
Check that you have a good value for DROPOUT_PKEEP in [hyperparameters](#hyperparameters). 0.7 should do. Also check that dropout is deactivated i.e. dropout_pkeep=1.0 during [inference](#inference).
</li>
<li>Increase RNN_CELLSIZE -> 128 to allow the cells to model more complex behaviors.</li>
</ol></ol>
Play with these options until you get a good fit for at least 1.5 years.
</div>
<div style="text-align: right; font-family: monospace">
X shape [BATCHSIZE, SEQLEN, 2]<br/>
Y shape [BATCHSIZE, SEQLEN, 2]<br/>
H shape [BATCHSIZE, RNN_CELLSIZE*NLAYERS]
</div>
When executed, this function instantiates the Tensorflow graph for our model.
End of explanation
tf.reset_default_graph() # restart model graph from scratch
# placeholder for inputs
Hin = tf.placeholder(tf.float32, [None, RNN_CELLSIZE * N_LAYERS])
features = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
labels = tf.placeholder(tf.float32, [None, None, 2]) # [BATCHSIZE, SEQLEN, 2]
step = tf.placeholder(tf.int32)
dropout_pkeep = tf.placeholder(tf.float32)
# instantiate the model
Yout, H, loss, train_op, Yr = model_rnn_fn(features, Hin, labels, step, dropout_pkeep)
Explanation: Instantiate the model
End of explanation
# variable initialization
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run([init])
saver = tf.train.Saver(max_to_keep=1)
Explanation: Initialize Tensorflow session
This resets all neuron weights and biases to initial random values
End of explanation
losses = []
indices = []
last_epoch = 99999
last_fileid = 99999
for i, (next_features, next_labels, dates, epoch, fileid) in enumerate(
utils_batching.rnn_multistation_sampling_temperature_sequencer(train_filenames,
RESAMPLE_BY,
BATCHSIZE,
SEQLEN,
N_FORWARD,
NB_EPOCHS, tminmax=True)):
# reinintialize state between epochs or when starting on data from a new weather station
if epoch != last_epoch or fileid != last_fileid:
batchsize = next_features.shape[0]
H_ = np.zeros([batchsize, RNN_CELLSIZE * N_LAYERS])
print("State reset")
#train
feed = {Hin: H_, features: next_features, labels: next_labels, step: i, dropout_pkeep: DROPOUT_PKEEP}
Yout_, H_, loss_, _, Yr_ = sess.run([Yout, H, loss, train_op, Yr], feed_dict=feed)
# print progress
if i%20 == 0:
print("{}: epoch {} loss = {} ({} weather stations this epoch)".format(i, epoch, np.mean(loss_), fileid+1))
sys.stdout.flush()
if i%10 == 0:
losses.append(np.mean(loss_))
indices.append(i)
# This visualisation can be helpful to see how the model "locks" on the shape of the curve
# if i%100 == 0:
# plt.figure(figsize=(10,2))
# plt.fill_between(dates, next_features[0,:,0], next_features[0,:,1]).set_alpha(0.2)
# plt.fill_between(dates, next_labels[0,:,0], next_labels[0,:,1])
# plt.fill_between(dates, Yr_[0,:,0], Yr_[0,:,1]).set_alpha(0.8)
# plt.show()
last_epoch = epoch
last_fileid = fileid
# save the trained model
SAVEDMODEL = JOB_DIR + "/ckpt" + str(int(time.time()))
tf.saved_model.simple_save(sess, SAVEDMODEL,
inputs={"features":features, "Hin":Hin, "dropout_pkeep":dropout_pkeep},
outputs={"Yout":Yout, "H":H})
plt.ylim(ymax=np.amax(losses[1:])) # ignore first value for scaling
plt.plot(indices, losses)
plt.show()
Explanation: <a name="train"></a>
The training loop
You can re-execute this cell to continue training. <br/>
<br/>
Training data must be batched correctly, one weather station per line, continued on the same line across batches. This way, output states computed from one batch are the correct input states for the next batch. The provided utility function rnn_multistation_sampling_temperature_sequencer does the right thing.
End of explanation
def prediction_run(predict_fn, prime_data, run_length):
H = np.zeros([1, RNN_CELLSIZE * N_LAYERS]) # zero state initially
Yout = np.zeros([1, N_FORWARD, 2])
data_len = prime_data.shape[0]-N_FORWARD
# prime the state from data
if data_len > 0:
Yin = np.array(prime_data[:-N_FORWARD])
Yin = np.reshape(Yin, [1, data_len, 2]) # reshape as one sequence of pairs (Tmin, Tmax)
r = predict_fn({'features': Yin, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
    # initially, put real data on the inputs, not predictions
Yout = np.expand_dims(prime_data[-N_FORWARD:], axis=0)
    # Yout shape [1, N_FORWARD, 2]: batch of a single sequence of length N_FORWARD of (Tmin, Tmax) data points
# run prediction
# To generate a sequence, run a trained cell in a loop passing as input and input state
# respectively the output and output state from the previous iteration.
results = []
for i in range(run_length//N_FORWARD+1):
r = predict_fn({'features': Yout, 'Hin':H, 'dropout_pkeep':1.0}) # no dropout during inference
Yout = r["Yout"]
H = r["H"]
results.append(Yout[0]) # shape [N_FORWARD, 2]
return np.concatenate(results, axis=0)[:run_length]
Explanation: <a name="inference"></a>
Inference
This is a generative model: run a trained RNN cell in a loop. This time, with N_FORWARD>1, we generate the sequence by blocks of N_FORWARD data points instead of point by point. The RNN is unrolled across N_FORWARD steps, takes in the last N_FORWARD data points and predicts the next N_FORWARD data points and so on in a loop. State must be passed around correctly.
End of explanation
QYEAR = 365//(RESAMPLE_BY*4)
YEAR = 365//(RESAMPLE_BY)
# Try starting predictions from January / March / July (resp. OFFSET = YEAR or YEAR+QYEAR or YEAR+2*QYEAR)
# Some start dates are more challenging for the model than others.
OFFSET = 4*YEAR+1*QYEAR
PRIMELEN=5*YEAR
RUNLEN=3*YEAR
RMSELEN=3*365//(RESAMPLE_BY*2) # accuracy of predictions 1.5 years in advance
# Restore the model from the last checkpoint saved previously.
# Alternative checkpoints:
# Once you have trained on all 1666 weather stations on Google Cloud ML Engine, you can load the checkpoint from there.
# SAVEDMODEL = "gs://{BUCKET}/sinejobs/sines_XXXXXX_XXXXXX/ckptXXXXXXXX"
# A sample checkpoint is provided with the lab. You can try loading it for comparison.
# You will have to use the following parameters and re-run the entire notebook:
# N_FORWARD = 8, RESAMPLE_BY = 5, RNN_CELLSIZE = 128, N_LAYERS = 2
# SAVEDMODEL = "temperatures_best_checkpoint"
predict_fn = tf.contrib.predictor.from_saved_model(SAVEDMODEL)
for evaldata in evaltemps:
prime_data = evaldata[OFFSET:OFFSET+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
utils_display.picture_this_6(evaldata, evaldates, prime_data, results, PRIMELEN, RUNLEN, OFFSET, RMSELEN)
rmses = []
bad_ones = 0
for offset in [YEAR, YEAR+QYEAR, YEAR+2*QYEAR]:
for evaldata in evaltemps:
prime_data = evaldata[offset:offset+PRIMELEN]
results = prediction_run(predict_fn, prime_data, RUNLEN)
rmse = math.sqrt(np.mean((evaldata[offset+PRIMELEN:offset+PRIMELEN+RMSELEN] - results[:RMSELEN])**2))
rmses.append(rmse)
if rmse>7: bad_ones += 1
print("RMSE on {} predictions (shaded area): {}".format(RMSELEN, rmse))
print("Average RMSE on {} weather stations: {} ({} really bad ones, i.e. >7.0)".format(len(evaltemps), np.mean(rmses), bad_ones))
sys.stdout.flush()
Explanation: <a name="valid"></a>
Validation
End of explanation |
6,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Illustrate usage of DAPPER to benchmark multiple DA methods.
Imports
<b>NB
Step1: DA method configurations
Step2: With Lorenz-96 instead
Step3: Other models (suitable xp's listed in HMM files)
Step4: Launch
Write some more non-arg parameters to the xps. In this case we set the seed,
so that repeat experiments produce exactly the same result.
Step5: Adjust experiment duration
Step6: Run/assimilate (for each xp in xps)
Step7: Print results | Python Code:
%matplotlib notebook
import dapper as dpr
import dapper.da_methods as da
Explanation: Illustrate usage of DAPPER to benchmark multiple DA methods.
Imports
<b>NB:</b> If you're on <mark><b>Google Colab</b></mark>,
then replace %matplotlib notebook below by
!python -m pip install git+https://github.com/nansencenter/DAPPER.git .
Also note that liveplotting does not work on Colab.
End of explanation
from dapper.mods.Lorenz63.sakov2012 import HMM # Expected rmse.a:
xps = dpr.xpList()
xps += da.Climatology() # 7.6
xps += da.OptInterp() # 1.25
xps += da.Var3D(xB=0.1) # 1.03
xps += da.ExtKF(infl=90) # 0.87
xps += da.EnKF('Sqrt' , N=3 , infl=1.30) # 0.82
xps += da.EnKF('Sqrt' , N=10 , infl=1.02, rot=True) # 0.63
xps += da.EnKF('PertObs', N=500 , infl=0.95, rot=False) # 0.56
xps += da.EnKF_N( N=10 , rot=True) # 0.54
xps += da.iEnKS('Sqrt' , N=10 , infl=1.02, rot=True) # 0.31
xps += da.PartFilt( N=100 , reg=2.4 , NER=0.3) # 0.38
xps += da.PartFilt( N=800 , reg=0.9 , NER=0.2) # 0.28
# xps += da.PartFilt( N=4000, reg=0.7 , NER=0.05) # 0.27
# xps += da.PFxN(xN=1000, N=30 , Qs=2 , NER=0.2) # 0.56
Explanation: DA method configurations
End of explanation
# from dapper.mods.Lorenz96.sakov2008 import HMM # Expected rmse.a:
# xps = dpr.xpList()
# xps += da.Climatology() # 3.6
# xps += da.OptInterp() # 0.95
# xps += da.Var3D(xB=0.02) # 0.41
# xps += da.ExtKF(infl=6) # 0.24
# xps += da.EnKF('PertObs', N=40, infl=1.06) # 0.22
# xps += da.EnKF('Sqrt' , N=28, infl=1.02, rot=True) # 0.18
# # More sophisticated:
# xps += da.EnKF_N( N=24, rot=True) # 0.21
# xps += da.EnKF_N( N=24, rot=True, xN=2) # 0.18
# xps += da.iEnKS('Sqrt' , N=40, infl=1.01, rot=True) # 0.17
# # With localisation:
# xps += da.LETKF( N=7 , infl=1.04, rot=True, loc_rad=4) # 0.22
# xps += da.SL_EAKF( N=7 , infl=1.07, rot=True, loc_rad=6) # 0.23
Explanation: With Lorenz-96 instead
End of explanation
# from dapper.mods.LA .evensen2009 import HMM
# from dapper.mods.KS .bocquet2019 import HMM
# from dapper.mods.LotkaVolterra.settings101 import HMM
Explanation: Other models (suitable xp's listed in HMM files):
End of explanation
for xp in xps:
xp.seed = 3000
Explanation: Launch
Write some more non-arg parameters to the xps. In this case we set the seed,
so that repeat experiments produce exactly the same result.
End of explanation
HMM.tseq.T = 50
Explanation: Adjust experiment duration
End of explanation
save_as = xps.launch(HMM, liveplots=False)
Explanation: Run/assimilate (for each xp in xps)
End of explanation
print(xps.tabulate_avrgs())
Explanation: Print results
End of explanation |
6,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 7 | Principle Component Analysis and K-Means Clustering
Step1: Part 1
Step2: The closest centroids to the first 3 examples should be [0, 2, 1] respectively.
Step3: Part 2
Step4: Centroids computed after initial finding of closest centroids
Step5: The centroids should be
Step6: Part 3
Step7: Part 4
Step8: Reshape the image into an Nx3 matrix where N = number of pixels.
Each row will contain the Red, Green and Blue pixel values
This gives us our dataset matrix X that we will use K-Means on.
Step9: You should now complete the code in kmeans_init_centroids.
Step10: Run your K-Means algorithm on this data.
You should try different values of K and max_iters here
Step11: When using K-Means, it is important to initialize the centroids
randomly.
You should complete the code in kMeansInitCentroids before proceeding
Step12: Part 5
Step13: Essentially, now we have represented the image X as in terms of the
indices in idx.
We can now recover the image from the indices (idx) by mapping each pixel
(specified by its index in idx) to the centroid value
Step14: Here are the centroid colors
Step15: And the images, original and compressed. | Python Code:
import random
import numpy as np
import matplotlib.pyplot as plt
import scipy.io
from PIL import Image
%matplotlib inline
ex7data1 = scipy.io.loadmat('ex7data2.mat')
X = ex7data1['X']
Explanation: Exercise 7 | Principle Component Analysis and K-Means Clustering
End of explanation
def find_closest_centroids(X, centroids):
#FINDCLOSESTCENTROIDS computes the centroid memberships for every example
# idx = FINDCLOSESTCENTROIDS (X, centroids) returns the closest centroids
# in idx for a dataset X where each row is a single example. idx = m x 1
# vector of centroid assignments (i.e. each entry in range [1..K])
#
K = centroids.shape[0]
# You need to return the following variables correctly.
idx = np.zeros(X.shape[0], dtype='int')
# ====================== YOUR CODE HERE ======================
# Instructions: Go over every example, find its closest centroid, and store
# the index inside idx at the appropriate location.
# Concretely, idx(i) should contain the index of the centroid
# closest to example i. Hence, it should be a value in the
# range 1..K
#
# Note: You can use a for-loop over the examples to compute this.
#
# =============================================================
return idx
Explanation: Part 1: Find Closest Centroids
To help you implement K-Means, we have divided the learning algorithm
into two functions -- findClosestCentroids and computeCentroids. In this
part, you should complete the code in the findClosestCentroids function.
End of explanation
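For orientation only, one possible vectorized way to do this (the exercise still expects you to fill in find_closest_centroids yourself):
```
import numpy as np

def find_closest_centroids_sketch(X, centroids):
    "Example only: distance from every example to every centroid, then argmin per example."
    dists = np.linalg.norm(X[:, np.newaxis, :] - centroids[np.newaxis, :, :], axis=2)
    return np.argmin(dists, axis=1)
```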
K = 3
initial_centroids = np.array([[3, 3], [6, 2], [8, 5]])
idx = find_closest_centroids(X, initial_centroids)
idx[:3]
Explanation: The closest centroids to the first 3 examples should be [0, 2, 1] respectively.
End of explanation
def compute_centroids(X, idx, K):
    #COMPUTECENTROIDS returns the new centroids by computing the means of the
#data points assigned to each centroid.
# centroids = COMPUTECENTROIDS(X, idx, K) returns the new centroids by
# computing the means of the data points assigned to each centroid. It is
# given a dataset X where each row is a single data point, a vector
# idx of centroid assignments (i.e. each entry in range [1..K]) for each
# example, and K, the number of centroids. You should return a matrix
# centroids, where each row of centroids is the mean of the data points
# assigned to it.
#
m, n = X.shape
# You need to return the following variables correctly.
centroids = np.zeros((K, n))
# ====================== YOUR CODE HERE ======================
# Instructions: Go over every centroid and compute mean of all points that
# belong to it. Concretely, the row vector centroids(i, :)
# should contain the mean of the data points assigned to
# centroid i.
#
# Note: You can use a for-loop over the centroids to compute this.
#
# =============================================================
return centroids
Explanation: Part 2: Compute Means
After implementing the closest centroids function, you should now
complete the compute_centroids function.
End of explanation
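Purely as an illustration of the idea (a mean per cluster, assuming no cluster is empty), not the required solution:
```
import numpy as np

def compute_centroids_sketch(X, idx, K):
    "Example only: each new centroid is the mean of the points assigned to it."
    return np.array([X[idx == k].mean(axis=0) for k in range(K)])
```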
compute_centroids(X, idx, K)
Explanation: Centroids computed after initial finding of closest centroids:
End of explanation
def plot_data_points(X, idx, K, ax):
palette = plt.get_cmap('hsv', np.max(idx) + 2)
colors = palette(idx)
ax.scatter(X[:, 0], X[:, 1], c=colors)
def plot_kmeans_progress(X, centroids, previous_centroids, idx, K, iteration_number, ax):
plot_data_points(X, idx, K, ax)
ax.scatter(centroids[:, 0], centroids[:, 1], c='black', marker='x', s=50, color='black', linewidths=4)
if previous_centroids is not None:
for c, pc in zip(centroids, previous_centroids):
ax.plot([c[0], pc[0]], [c[1], pc[1]], 'b-')
ax.set_title('Iteration {}'.format(iteration_number))
Explanation: The centroids should be:
array([[ 2.42830111, 3.15792418],
[ 5.81350331, 2.63365645],
[ 7.11938687, 3.6166844 ]])
End of explanation
K = 3
initial_centroids = np.array([[3, 3], [6, 2], [8, 5]])
max_iters = 10
def run_kmeans(X, initial_centroids, max_iters, plot_progress=False):
if plot_progress:
fig, ax = plt.subplots(figsize=(6, 6))
m, n = X.shape
K = initial_centroids.shape[0]
centroids = initial_centroids
previous_centroids = None
for i in range(max_iters):
idx = find_closest_centroids(X, centroids)
if plot_progress:
plot_kmeans_progress(X, centroids, previous_centroids, idx, K, i+1, ax)
previous_centroids = centroids
centroids = compute_centroids(X, idx, K)
return centroids, idx
_, __ = run_kmeans(X, initial_centroids, 10, True)
Explanation: Part 3: K-Means Clustering
After you have completed the two functions computeCentroids and
findClosestCentroids, you have all the necessary pieces to run the
kMeans algorithm. In this part, you will run the K-Means algorithm on
the example dataset we have provided.
End of explanation
# Load an image of a bird
im = Image.open('bird_small.png')
X = np.array(im)
X = X/255 # Divide by 255 so that all values are in the range 0 - 1
Explanation: Part 4: K-Means Clustering on Pixels
In this exercise, you will use K-Means to compress an image. To do this,
you will first run K-Means on the colors of the pixels in the image and
then you will map each pixel on to it's closest centroid.
End of explanation
X = X.reshape((128*128, 3))
img_size = X.shape
img_size, X.dtype
Explanation: Reshape the image into an Nx3 matrix where N = number of pixels.
Each row will contain the Red, Green and Blue pixel values
This gives us our dataset matrix X that we will use K-Means on.
End of explanation
def kmeans_init_centroids(X, K):
#KMEANSINITCENTROIDS This function initializes K centroids that are to be
#used in K-Means on the dataset X
# centroids = KMEANSINITCENTROIDS(X, K) returns K initial centroids to be
# used with the K-Means on the dataset X
#
centroids = np.zeros((K, X.shape[1]))
# ====================== YOUR CODE HERE ======================
# Instructions: You should set centroids to randomly chosen examples from
# the dataset X
#
# =============================================================
return centroids
Explanation: You should now complete the code in kmeans_init_centroids.
End of explanation
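One common way to do this, shown only as a sketch (the assignment asks you to write your own version):
```
import numpy as np

def kmeans_init_centroids_sketch(X, K):
    "Example only: pick K distinct training examples at random as the initial centroids."
    return X[np.random.permutation(X.shape[0])[:K]]
```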
K = 16
max_iters = 10
Explanation: Run your K-Means algorithm on this data.
You should try different values of K and max_iters here
End of explanation
initial_centroids = kmeans_init_centroids(X, K)
centroids, idx = run_kmeans(X, initial_centroids, max_iters)
Explanation: When using K-Means, it is important to initialize the centroids
randomly.
You should complete the code in kMeansInitCentroids before proceeding
End of explanation
idx = find_closest_centroids(X, centroids)
Explanation: Part 5: Image Compression
In this part of the exercise, you will use the clusters of K-Means to
compress an image. To do this, we first find the closest clusters for
each example.
End of explanation
X_recovered = centroids[idx,:]
X_recovered = X_recovered.reshape([128, 128, 3])
X_recovered *= 255
X_recovered = np.array(X_recovered, dtype='uint8')
X_recovered.shape
Explanation: Essentially, now we have represented the image X as in terms of the
indices in idx.
We can now recover the image from the indices (idx) by mapping each pixel
(specified by its index in idx) to the centroid value
End of explanation
fig, axes = plt.subplots(nrows=4, ncols=4)
axes = axes.flat
for centroid, ax in zip(centroids, axes):
c = np.array(centroid)
ax.set_axis_off()
ax.scatter(1,1,c=c,s=1000)
Explanation: Here are the centroid colors:
End of explanation
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(8,10))
axes[0].imshow(X_recovered)
axes[0].set_title('Compressed')
axes[1].imshow(np.array(Image.open('bird_small.png')))
axes[1].set_title('Original')
for ax in axes:
ax.set_axis_off()
Explanation: And the images, original and compressed.
End of explanation |
6,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análisis de los datos obtenidos
Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción. La regulación del diámetro se hace mediante el control del filawinder. Los datos analizados son del día 16 de Junio del 2015
Los datos del experimento
Step1: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica
Step2: Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como cuarta aproximación, vamos a modificar las velocidades de tracción. El rango de velocidades propuesto es de 1.5 a 5.3, manteniendo los incrementos del sistema experto como en el actual ensayo.
Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento
Step3: Filtrado de datos
Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.
Step4: Representación de X/Y
Step5: Analizamos datos del ratio
Step6: Límites de calidad
Calculamos el número de veces que traspasamos unos límites de calidad.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
#Importamos las librerías utilizadas
import numpy as np
import pandas as pd
import seaborn as sns
#Mostramos las versiones usadas de cada librerías
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
#Abrimos el fichero csv con los datos 2de la muestra
datos = pd.read_csv('ensayo1.CSV')
%pylab inline
#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar
columns = ['Diametro X','Diametro Y', 'RPM TRAC']
#Mostramos un resumen de los datos obtenidoss
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Análisis de los datos obtenidos
Uso de ipython para el análsis y muestra de los datos obtenidos durante la producción. La regulación del diámetro se hace mediante el control del filawinder. Los datos analizados son del día 16 de Junio del 2015
Los datos del experimento:
* Hora de inicio: 11:50
* Hora final : 12:20
* $T: 150ºC$
End of explanation
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como cuarta aproximación, vamos a modificar las velocidades de tracción. El rango de velocidades propuesto es de 1.5 a 5.3, manteniendo los incrementos del sistema experto como en el actual ensayo.
Comparativa de Diametro X frente a Diametro Y para ver el ratio del filamento
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Filtrado de datos
Las muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: Representación de X/Y
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: Analizamos datos del ratio
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Límites de calidad
Calculamos el número de veces que traspasamos unos límites de calidad.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation |
6,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Exploration & Feature Engineering
1. Data Exploration
Step1: Loading data
Step2: 2. Data Cleaning
Imputation
Step3: 2. Feature Engineering
Step4: Step2
Step5: Step 3
Step6: Step 4
Step7: Step 5
Step8: Step 6
Step9: Step7 | Python Code:
import pandas as pd
import numpy as np
Explanation: Data Exploration & Feature Engineering
1. Data Exploration
End of explanation
#Read files:
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")
#Combine test and train into one file
train['source']='train'
test['source']='test'
data = pd.concat([train, test],ignore_index=True)
print train.shape, test.shape, data.shape
#Check missing values:
data.apply(lambda x: sum(x.isnull()))
#Numerical data summary:
data.describe()
#Number of unique values in each:
data.apply(lambda x: len(x.unique()))
#Filter categorical variables
categorical_columns = [x for x in data.dtypes.index if data.dtypes[x]=='object']
#Exclude ID cols and source:
categorical_columns = [x for x in categorical_columns if x not in ['Item_Identifier','Outlet_Identifier','source']]
#Print frequency of categories
for col in categorical_columns:
print '\nFrequency of Categories for varible %s'%col
print data[col].value_counts()
Explanation: Loading data:
The files can be downloaded from: http://datahack.analyticsvidhya.com/contest/practice-problem-bigmart-sales-prediction
End of explanation
#Determine the average weight per item:
item_avg_weight = data.pivot_table(values='Item_Weight', index='Item_Identifier')
#Get a boolean variable specifying missing Item_Weight values
miss_bool = data['Item_Weight'].isnull()
#Impute data and check #missing values before and after imputation to confirm
print 'Orignal #missing: %d'% sum(miss_bool)
data.loc[miss_bool,'Item_Weight'] = data.loc[miss_bool,'Item_Identifier'].apply(lambda x: item_avg_weight[x])
print 'Final #missing: %d'% sum(data['Item_Weight'].isnull())
#Import mode function:
from scipy.stats import mode
#Determing the mode for each
outlet_size_mode = data.pivot_table(values='Outlet_Size', columns='Outlet_Type',aggfunc=(lambda x:mode(x).mode[0]) )
print 'Mode for each Outlet_Type:'
print outlet_size_mode
#Get a boolean variable specifying missing Item_Weight values
miss_bool = data['Outlet_Size'].isnull()
#Impute data and check #missing values before and after imputation to confirm
print '\nOrignal #missing: %d'% sum(miss_bool)
data.loc[miss_bool,'Outlet_Size'] = data.loc[miss_bool,'Outlet_Type'].apply(lambda x: outlet_size_mode[x])
print sum(data['Outlet_Size'].isnull())
Explanation: 2. Data Cleaning
Imputation
End of explanation
#Check the mean sales by type:
data.pivot_table(values='Item_Outlet_Sales',index='Outlet_Type')
Explanation: 2. Feature Engineering:
Step1: Consider combining categories in Outlet_Type
End of explanation
#Determine average visibility of a product
visibility_avg = data.pivot_table(values='Item_Visibility', index='Item_Identifier')
#Impute 0 values with mean visibility of that product:
miss_bool = (data['Item_Visibility'] == 0)
print 'Number of 0 values initially: %d'%sum(miss_bool)
data.loc[miss_bool,'Item_Visibility'] = data.loc[miss_bool,'Item_Identifier'].apply(lambda x: visibility_avg[x])
print 'Number of 0 values after modification: %d'%sum(data['Item_Visibility'] == 0)
#Determine another variable with means ratio
data['Item_Visibility_MeanRatio'] = data.apply(lambda x: x['Item_Visibility']/visibility_avg[x['Item_Identifier']], axis=1)
print data['Item_Visibility_MeanRatio'].describe()
Explanation: Step2: Modify Item_Visibility
End of explanation
#Item type combine:
data['Item_Identifier'].value_counts()
data['Item_Type_Combined'] = data['Item_Identifier'].apply(lambda x: x[0:2])
data['Item_Type_Combined'] = data['Item_Type_Combined'].map({'FD':'Food',
'NC':'Non-Consumable',
'DR':'Drinks'})
data['Item_Type_Combined'].value_counts()
Explanation: Step 3: Create a broad category of Type of Item
End of explanation
#Years:
data['Outlet_Years'] = 2013 - data['Outlet_Establishment_Year']
data['Outlet_Years'].describe()
Explanation: Step 4: Determine the years of operation of a store
End of explanation
#Change categories of low fat:
print 'Original Categories:'
print data['Item_Fat_Content'].value_counts()
print '\nModified Categories:'
data['Item_Fat_Content'] = data['Item_Fat_Content'].replace({'LF':'Low Fat',
'reg':'Regular',
'low fat':'Low Fat'})
print data['Item_Fat_Content'].value_counts()
#Mark non-consumables as separate category in low_fat:
data.loc[data['Item_Type_Combined']=="Non-Consumable",'Item_Fat_Content'] = "Non-Edible"
data['Item_Fat_Content'].value_counts()
Explanation: Step 5: Modify categories of Item_Fat_Content
End of explanation
#Import library:
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
#New variable for outlet
data['Outlet'] = le.fit_transform(data['Outlet_Identifier'])
var_mod = ['Item_Fat_Content','Outlet_Location_Type','Outlet_Size','Item_Type_Combined','Outlet_Type','Outlet']
le = LabelEncoder()
for i in var_mod:
data[i] = le.fit_transform(data[i])
#One Hot Coding:
data = pd.get_dummies(data, columns=['Item_Fat_Content','Outlet_Location_Type','Outlet_Size','Outlet_Type',
'Item_Type_Combined','Outlet'])
data.dtypes
data[['Item_Fat_Content_0','Item_Fat_Content_1','Item_Fat_Content_2']].head(10)
Explanation: Step 6: Numerical and One-Hot Coding of Categorical variables
End of explanation
#Drop the columns which have been converted to different types:
data.drop(['Item_Type','Outlet_Establishment_Year'],axis=1,inplace=True)
#Divide into test and train:
train = data.loc[data['source']=="train"]
test = data.loc[data['source']=="test"]
#Drop unnecessary columns:
test.drop(['Item_Outlet_Sales','source'],axis=1,inplace=True)
train.drop(['source'],axis=1,inplace=True)
#Export files as modified versions:
train.to_csv("train_modified.csv",index=False)
test.to_csv("test_modified.csv",index=False)
Explanation: Step7: Exporting Data
End of explanation |
6,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Astronomical Application of Machine Learning
Step1: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database
Step2: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after later completing the problem
The final data preparation step it to create an independent test set to evalute the generalization error of the final tuned model. Independent test sets are generated by witholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called
Step3: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
Step4: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
Step5: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifer() object. The higher the value, the more important feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1c?
Step6: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data
Step7: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
Step8: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
Step9: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimentional-feature space. Whether the model is smooth or coarse is application dependent -- be weary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter
Step10: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
Step11: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
Step12: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
Step13: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
Step14: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assuming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
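A sketch of the ROC computation (assuming 'STAR' is the positive class and rf_clf is the trained model):
```
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt

star_prob = rf_clf.predict_proba(test_X)[:, list(rf_clf.classes_).index('STAR')]
fpr, tpr, thresh = roc_curve(test_y == 'STAR', star_prob)
plt.plot(fpr, tpr)
plt.xlim(0, 0.1)   # zoom on the low false-positive-rate region of interest
plt.xlabel('FPR')
plt.ylabel('TPR')
```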
Step15: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
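Building on the fpr, tpr, thresh arrays from the ROC sketch above (names assumed, not part of the original notebook):
```
import numpy as np

i99 = np.argmax(tpr >= 0.99)   # first threshold that recovers 99% of the stars
print('decision threshold ~ {:.3f}'.format(thresh[i99]))
print('fraction of galaxies misclassified ~ {:.3f}'.format(fpr[i99]))
```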
Step16: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
Step17: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
Step18: Problem 6b
Calculate the accuracy of the model predictions on the new data.
Step19: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: An Astronomical Application of Machine Learning:
Separating Stars and Galaxies from SDSS
Version 0.3
By AA Miller 2017 Jan 22
AA Miller 2022 Mar 06 (v0.03)
The problems in the following notebook develop an end-to-end machine learning model using actual astronomical data to separate stars and galaxies. There are 5 steps in this machine learning workflow:
Data Preparation
Model Building
Model Evaluation
Model Optimization
Model Predictions
The data come from the Sloan Digital Sky Survey (SDSS), an imaging survey that has several similarities to LSST (though the telescope was significantly smaller and the survey did not cover as large an area).
Science background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring the structure of the Milky Way without knowing which sources are galaxies and which are stars.
During this exercise, we will utilize supervised machine learning methods to separate extended sources (galaxies) and point sources (stars) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
End of explanation
sdss_df = pd.read_hdf("sdss_training_set.h5")
sns.pairplot(sdss_df, hue = 'class', diag_kind = 'hist')
Explanation: Problem 1) Examine the Training Data
For this problem the training set, i.e. sources with known labels, includes stars and galaxies that have been confirmed with spectroscopic observations. The machine learning model is needed because there are $\gg 10^8$ sources with photometric observations in SDSS, and only $4 \times 10^6$ sources with spectroscopic observations. The model will allow us to translate our knowledge from the spectroscopic observations to the entire data set. The features include each $r$-band magnitude measurement made by SDSS (don't worry if you don't know what this means...). This yields 8 features to train the models (significantly fewer than the 454 properties measured for each source in SDSS).
If you are curious (and it is fine if you are not) this training set was constructed by running the following query on the SDSS database:
SELECT TOP 20000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
First download the training set and the blind test set for this problem.
Problem 1a
Visualize the training set data. The data have 8 features ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r', 'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r'], and a 9th column ['class'] corresponding to the labels ('STAR' or 'GALAXY' in this case).
Hint - just execute the cell below.
End of explanation
from sklearn.model_selection import train_test_split
rs = 1851
# complete
X = # complete
y = # complete
train_X, test_X, train_y, test_y = # complete
Explanation: Problem 1b
Based on your plots of the data, which feature do you think will be the most important for separating stars and galaxies? Why?
write your answer here - do not change it after later completing the problem
The final data preparation step is to create an independent test set to evaluate the generalization error of the final tuned model. Independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$.
sklearn.model_selection has a useful helper function train_test_split.
Problem 1c Split the 20k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.
Hint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.
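One possible solution sketch (not the only way to do it; the eight feature names are taken from the SQL query above):
feats = ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r',
         'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r']
X = sdss_df[feats].values              # 2D feature array
y = sdss_df['class'].values            # label array ('STAR' or 'GALAXY')
train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.3, random_state=rs)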
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn_clf = # complete
# complete
Explanation: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Problem 2) Model Building
After curating the data, you must select a specific machine learning algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try two (or three, or four, or five) different models and choose whichever works the best.
Problem 2a
Train a $k$-nearest neighbors model on the star-galaxy training set. Select $k$ = 25 for this model.
Hint - the KNeighborsClassifier object in the sklearn.neighbors module may be useful for this task.
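A minimal sketch of one possible solution:
knn_clf = KNeighborsClassifier(n_neighbors=25)   # k = 25 neighbors
knn_clf.fit(train_X, train_y)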
End of explanation
from sklearn.ensemble import RandomForestClassifier
rf_clf = # complete
# complete
Explanation: Problem 2b
Train a Random Forest (RF) model (Breiman 2001) on the training set. Include 50 trees in the forest using the n_estimators parameter. Again, set random_state = rs.
Hint - use the RandomForestClassifier object from the sklearn.ensemble module. Also - be sure to set n_jobs = -1 in every call of RandomForestClassifier.
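For example, one possible way to set this up:
rf_clf = RandomForestClassifier(n_estimators=50, random_state=rs, n_jobs=-1)   # 50 trees in the forest
rf_clf.fit(train_X, train_y)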
End of explanation
feat_str = ',\n'.join(['{}'.format(feat) for feat in np.array(feats)[np.argsort(rf_clf.feature_importances_)[::-1]]])
print('From most to least important: \n{}'.format(feat_str))
Explanation: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() object. The higher the value, the more important the feature.
Problem 2c
Calculate the relative importance of each feature.
Which feature is most important? Does this match your answer from 1b?
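For reference, a short sketch that pairs each feature name with its importance (it assumes feats is the list of the eight feature names defined when building X):
for feat, imp in sorted(zip(feats, rf_clf.feature_importances_), key=lambda pair: pair[1], reverse=True):
    print('{:>12}: {:.3f}'.format(feat, imp))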
End of explanation
from sklearn.metrics import accuracy_score
phot_y = # complete
# complete
# complete
# complete
print("The baseline FoM = {:.4f}".format( # complete
Explanation: write your answer here
Problem 3) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. For our current application we want to maximize the accuracy of the model.
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag_r} - \mathtt{cModelMag_r} > 0.145.$$
Sources that satisfy this criterion are considered galaxies.
Problem 3a
Determine the baseline figure of merit by measuring the accuracy of the SDSS photometric classifier on the training set.
Hint - the accuracy_score function in the sklearn.metrics module may be useful.
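One way the baseline calculation might look (galaxies are sources with psfMag_r - cModelMag_r > 0.145):
phot_y = np.where(sdss_df['psfMag_r'] - sdss_df['cModelMag_r'] > 0.145, 'GALAXY', 'STAR')
print("The baseline FoM = {:.4f}".format(accuracy_score(sdss_df['class'], phot_y)))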
End of explanation
from sklearn.model_selection import cross_val_score
knn_cv = cross_val_score( # complete
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format( # complete
Explanation: Problem 3b
Use 10-fold cross validation to estimate the FoM for the $k$NN model. Take the mean value across all folds as the FoM estimate.
Hint - the cross_val_score function from the sklearn.model_selection module performs the necessary calculations.
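A possible solution sketch:
knn_cv = cross_val_score(knn_clf, train_X, train_y, cv=10)
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format(np.mean(knn_cv), np.std(knn_cv)))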
End of explanation
rf_cv = cross_val_score( # complete
print('The RF model FoM = {:.4f} +/- {:.4f}'.format( # complete
Explanation: Problem 3c
Use 10-fold cross validation to estimate the FoM for the random forest model.
End of explanation
for k in [1,10,100]:
# complete
print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format( # complete
Explanation: Problem 3d
Do the machine-learning models outperform the SDSS photometric classifier?
write your answer here
Problem 4) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters, a process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
$N_\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn
$m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn
Pruning criteria - defined stopping criteria for ending continued growth of the tree, there are many choices for this in sklearn (My preference is min_samples_leaf (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)
Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
Before globally optimizing the model, let's develop some intuition for how the tuning parameters affect the final model predictions.
Problem 4a
Determine the 10-fold cross validation accuracy for $k$NN models with $k$ = 1, 10, 100.
How do you expect changing the number of neighbors to affect the results?
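A sketch of one way to run this comparison:
for k in [1, 10, 100]:
    knn_cv = cross_val_score(KNeighborsClassifier(n_neighbors=k), train_X, train_y, cv=10)
    print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format(k, np.mean(knn_cv), np.std(knn_cv)))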
End of explanation
for ntree in [1,10,30,100,300]:
# complete
print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format( # complete
Explanation: write your answer here
Problem 4b
Determine the 10-fold cross validation accuracy for RF models with $N_\mathrm{tree}$ = 1, 10, 30, 100, and 300.
How do you expect changing the number of trees to affect the results?
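A possible sketch:
for ntree in [1, 10, 30, 100, 300]:
    rf_cv = cross_val_score(RandomForestClassifier(n_estimators=ntree, random_state=rs, n_jobs=-1),
                            train_X, train_y, cv=10)
    print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format(ntree, np.mean(rf_cv), np.std(rf_cv)))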
End of explanation
phot_y = # complete
# complete
# complete
# complete
print("The baseline FoM = {:.4f}".format( # complete
Explanation: write your answer here
Now you are ready for the moment of truth!
Problem 5) Model Predictions
Problem 5a
Calculate the FoM for the SDSS photometric model on the test set.
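One way to apply the same hard cut to the test set (using column positions within the assumed feats list):
psf = test_X[:, feats.index('psfMag_r')]
cmod = test_X[:, feats.index('cModelMag_r')]
phot_y = np.where(psf - cmod > 0.145, 'GALAXY', 'STAR')
print("The baseline FoM = {:.4f}".format(accuracy_score(test_y, phot_y)))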
End of explanation
rf_clf = RandomForestClassifier( # complete
# complete
# complete
print("The RF model has FoM = {:.4f}".format( # complete
Explanation: Problem 5b
Using the optimal number of trees from 4b calculate the FoM for the random forest model.
Hint - remember that the model should be trained on the training set, but the predictions are for the test set.
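A sketch of one possible solution (300 trees is assumed, for illustration, to have been the best value from 4b; substitute whatever your own CV found):
rf_clf = RandomForestClassifier(n_estimators=300, random_state=rs, n_jobs=-1)
rf_clf.fit(train_X, train_y)
print("The RF model has FoM = {:.4f}".format(accuracy_score(test_y, rf_clf.predict(test_X))))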
End of explanation
from sklearn.metrics import confusion_matrix
print(confusion_matrix( # complete
Explanation: Problem 5c
Calculate the confusion matrix for the test set. Is there symmetry to the misclassifications?
Hint - the confusion_matrix function in sklearn.metrics will help.
End of explanation
from sklearn.metrics import roc_curve
test_y_int = # complete
# complete
test_preds_proba = rf_clf.predict_proba( # complete
fpr, tpr, thresh = roc_curve( # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_xlabel('FPR')
ax.set_ylabel('TPR')
Explanation: write your answer here
Problem 5d
Calculate (and plot the region of interest) the ROC curve assuming that stars are the positive class.
Hint 1 - you will need to calculate probabilistic classifications for the test set using the predict_proba() method.
Hint 2 - the roc_curve function in the sklearn.metrics module will be useful.
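One possible way to build the curve (treating 'STAR' as the positive class):
test_y_int = np.array(test_y == 'STAR', dtype=int)
test_preds_proba = rf_clf.predict_proba(test_X)
star_col = list(rf_clf.classes_).index('STAR')        # column holding P(STAR)
fpr, tpr, thresh = roc_curve(test_y_int, test_preds_proba[:, star_col])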
End of explanation
tpr_99_thresh = # complete
print('This model requires a classification threshold of {:.4f}'.format(tpr_99_thresh))
fpr_at_tpr_99 = # complete
print('This model misclassifies {:.2f}% of galaxies'.format(fpr_at_tpr_99*100))
Explanation: Problem 5e
Suppose that (like me) you really care about supernovae. In this case you want a model that correctly classifies 99% of all stars, so that stellar flares do not fool you into thinking you have found a new supernova.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model misclassify?
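A sketch of one way to read these numbers off the ROC arrays computed above:
tpr_99 = np.argmin(np.abs(tpr - 0.99))     # index of the ROC point closest to TPR = 0.99
tpr_99_thresh = thresh[tpr_99]
fpr_at_tpr_99 = fpr[tpr_99]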
End of explanation
new_data_df = pd.read_hdf("blind_test_set.h5")
Explanation: Problem 6) Classify New Data
Run the cell below to load in some new data (which in this case happens to have known labels, but in practice this will almost never be the case...)
End of explanation
new_X = # complete
new_y = # complete
Explanation: Problem 6a
Create a feature and label array for the new data.
Hint - copy the code you developed above in Problem 2.
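For example (assuming the new file carries the same columns as the training set):
new_X = new_data_df[feats].values
new_y = new_data_df['class'].values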
End of explanation
new_preds = # complete
print("The model has an accuracy of {:.4f}".format( # complete
Explanation: Problem 6b
Calculate the accuracy of the model predictions on the new data.
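A possible sketch, reusing the trained RF model:
new_preds = rf_clf.predict(new_X)
print("The model has an accuracy of {:.4f}".format(accuracy_score(new_y, new_preds)))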
End of explanation
from sklearn.model_selection import GridSearchCV
# complete
print('The best model has {}'.format( # complete
Explanation: Problem 6c
Can you explain why the accuracy for the new data is significantly lower than what you calculated previously?
If you can build and train a better model (using the training data) for classifying the new data - I will be extremely impressed.
write your answer here
Challenge Problem) Full RF Optimization
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters? Brute force.
We will optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit.
Use GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
What are the optimal tuning parameters for the model?
Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.
Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid
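One possible (deliberately small) grid to start from -- the exact values below are illustrative, not prescriptive:
grid_results = GridSearchCV(RandomForestClassifier(n_jobs=-1, random_state=rs),
                            {'n_estimators': [30, 100, 300],
                             'max_features': [1, 3, 7],
                             'min_samples_leaf': [1, 10]},
                            cv=3)
grid_results.fit(train_X, train_y)
print('The best model has {}'.format(grid_results.best_params_))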
End of explanation |
6,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train and export the model
Step3: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter
Step4: Write it out to a .tflite file
Step5: To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is the supported type on the target platform
Step6: Finally, convert the model like usual. Note, by default the converted model will still use float input and outputs for invocation convenience.
Step7: Note how the resulting file is approximately 1/2 the size.
Step8: Run the TensorFlow Lite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
Step9: Test the models on one image
Step10: Evaluate the models
Step11: Repeat the evaluation on the float16 quantized model to obtain | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import logging
logging.getLogger("tensorflow").setLevel(logging.DEBUG)
import tensorflow as tf
from tensorflow import keras
import numpy as np
import pathlib
tf.float16
Explanation: Post-training float16 quantization
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/performance/post_training_float16_quant"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/lite/g3doc/performance/post_training_float16_quant.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TensorFlow Lite now supports
converting weights to 16-bit floating point values during model conversion from TensorFlow to TensorFlow Lite's flat buffer format. This results in a 2x reduction in model size. Some hardware, like GPUs, can compute natively in this reduced precision arithmetic, realizing a speedup over traditional floating point execution. The Tensorflow Lite GPU delegate can be configured to run in this way. However, a model converted to float16 weights can still run on the CPU without additional modification: the float16 weights are upsampled to float32 prior to the first inference. This permits a significant reduction in model size in exchange for a minimal impact to latency and accuracy.
In this tutorial, you train an MNIST model from scratch, check its accuracy in TensorFlow, and then convert the model into a Tensorflow Lite flatbuffer
with float16 quantization. Finally, check the accuracy of the converted model and compare it to the original float32 model.
Build an MNIST model
Setup
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=1,
validation_data=(test_images, test_labels)
)
Explanation: Train and export the model
End of explanation
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
Explanation: For the example, you trained the model for just a single epoch, so it only trains to ~96% accuracy.
Convert to a TensorFlow Lite model
Using the Python TFLiteConverter, you can now convert the trained model into a TensorFlow Lite model.
Now load the model using the TFLiteConverter:
End of explanation
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
Explanation: Write it out to a .tflite file:
End of explanation
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
Explanation: To instead quantize the model to float16 on export, first set the optimizations flag to use default optimizations. Then specify that float16 is the supported type on the target platform:
End of explanation
tflite_fp16_model = converter.convert()
tflite_model_fp16_file = tflite_models_dir/"mnist_model_quant_f16.tflite"
tflite_model_fp16_file.write_bytes(tflite_fp16_model)
Explanation: Finally, convert the model like usual. Note, by default the converted model will still use float input and outputs for invocation convenience.
End of explanation
!ls -lh {tflite_models_dir}
Explanation: Note how the resulting file is approximately 1/2 the size.
End of explanation
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
interpreter_fp16 = tf.lite.Interpreter(model_path=str(tflite_model_fp16_file))
interpreter_fp16.allocate_tensors()
Explanation: Run the TensorFlow Lite models
Run the TensorFlow Lite model using the Python TensorFlow Lite Interpreter.
Load the model into the interpreters
End of explanation
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
interpreter.set_tensor(input_index, test_image)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
test_image = np.expand_dims(test_images[0], axis=0).astype(np.float32)
input_index = interpreter_fp16.get_input_details()[0]["index"]
output_index = interpreter_fp16.get_output_details()[0]["index"]
interpreter_fp16.set_tensor(input_index, test_image)
interpreter_fp16.invoke()
predictions = interpreter_fp16.get_tensor(output_index)
plt.imshow(test_images[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(test_labels[0]),
predict=str(np.argmax(predictions[0]))))
plt.grid(False)
Explanation: Test the models on one image
End of explanation
# A helper function to evaluate the TF Lite model using "test" dataset.
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for test_image in test_images:
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
# Compare prediction results with ground truth labels to calculate accuracy.
accurate_count = 0
for index in range(len(prediction_digits)):
if prediction_digits[index] == test_labels[index]:
accurate_count += 1
accuracy = accurate_count * 1.0 / len(prediction_digits)
return accuracy
print(evaluate_model(interpreter))
Explanation: Evaluate the models
End of explanation
# NOTE: Colab runs on server CPUs. At the time of writing this, TensorFlow Lite
# doesn't have super optimized server CPU kernels. For this reason this may be
# slower than the above float interpreter. But for mobile CPUs, considerable
# speedup can be observed.
print(evaluate_model(interpreter_fp16))
Explanation: Repeat the evaluation on the float16 quantized model to obtain:
End of explanation |
6,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
METHOD 1
Step1: METHOD 2
Step2: METHOD 3
Step3: METHOD 4 | Python Code:
chunksize = 1000
step = 0
#Get variant list. Should always be the first step after running ANNOVAR
open_file = myvariant_parsing_utils.VariantParsing()
list_file = open_file.get_variants_from_vcf(vcf_file)
#Name Collection & DB
collection_name = 'ANNOVAR_MyVariant_chunks'
db_name = 'My_Variant_Database'
#Run process, and export (export happens every time 1000 variants are processed and joined)
as_batch = annotate_batch.AnnotationMethods()
as_batch.by_chunks(list_file, chunksize, step, csv_file, collection_name, db_name)
Explanation: METHOD 1: export data to MongoDB by chunks, iteratively.
This method is well suited for large files. Only 1000 documents at a time are held in memory and processed, instead of attempting to parse and process an entire csv file at once.
As soon as you run the scripts from variantannotation, the data will automatically be stored to MongoDB. Database and collection name should be specified, and there must be a running MongoDB connection. The script will set up a client to communicate between python (through pymongo) and the database.
In general, the shell command:
mongod --dbpath ../data/db
(data/db is the designated location where the data will be stored) will initiate MongoDB. After this, the script should store data to the directory automatically.
For pymongo, and more information on how to set up a Mongo Database: https://docs.mongodb.com/getting-started/python/
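For reference, a minimal pymongo sketch of the kind of connection the helper modules are assumed to open (local mongod, default port; count_documents requires a recent pymongo):
from pymongo import MongoClient
client = MongoClient('localhost', 27017)
collection = client['My_Variant_Database']['ANNOVAR_MyVariant_chunks']
print(collection.count_documents({}))   # number of variant documents exported so far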
End of explanation
#get variant list. Should always be the first step after running ANNOVAR
open_file = myvariant_parsing_utils.VariantParsing()
list_file = open_file.get_variants_from_vcf(vcf_file)
#Run process, data saved to joint_list
as_one_file = annotate_batch.AnnotationMethods()
joint_list = as_one_file.full_file(list_file, csv_file)
#Name Collection & DB
collection_name = 'ANNOVAR_MyVariant_full'
db_name = 'My_Variant_Database'
#Export, all at once
exporting_function = mongo_DB_export.export
exporting_function(joint_list, collection_name, db_name)
Explanation: METHOD 2: using the full file, and holding it in memory
Works well for small files.
End of explanation
#Get variant list from vcf file
open_file = myvariant_parsing_utils.VariantParsing()
list_file = open_file.get_variants_from_vcf(vcf_file)
#Run process
my_variants = annotate_batch.AnnotationMethods()
myvariant_data = my_variants.my_variant_at_once(list_file)
#Name Collection & DB
collection_name = 'My_Variant_Info_Collection_Full'
db_name = 'My_Variant_Database'
#Export
exporting_function = mongo_DB_export.export
exporting_function(myvariant_data, collection_name, db_name)
Explanation: METHOD 3: ignore annovar, get data solely from myvariant
Easier to run, doesn't require annovar
Will however be incomplete (some variants will have no information).
End of explanation
chunksize = 1000
step = 0
#Get variant list from vcf file
open_file = myvariant_parsing_utils.VariantParsing()
list_file = open_file.get_variants_from_vcf(vcf_file)
#Name Collection & DB
collection_name = 'My_Variant_Info_Collection_Chunks'
db_name = 'My_Variant_Database'
#Run process, export to MongoDB in-built
my_variants = annotate_batch.AnnotationMethods()
myvariant_data = my_variants.myvariant_chunks(list_file, chunksize, step, collection_name, db_name)
Explanation: METHOD 4: ignore annovar, get data solely from myvariant
Easier to run, doesn't require annovar. Will however be incomplete (some variants will have no information).
Do so BY CHUNKS. Export function is built in the methods myvariant_chunks
End of explanation |
6,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
本文使用skip-gram来实现word2vec算法,word2vec主要是在NLP中使用
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
当我们处理语言的时候,我们如果使用one-hot encoding的方法,可能这个向量会有上万,千万,非常的没有效率
word2vec试图找到一个更有效的表示方法,能够使用更低的维度来对单词进行编码
有两种模型:CBOW (Continuous Bag-Of-Words) 和 Skip-gram
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation【标点符号】 into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset【去除出现少于5次的单词】. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
创建两个关系,index => word, word => index, word按照出现的频率降序排列
Step4: Subsampling【抽样】
由于一些高频词(如the,of),并没有提供什么有效的信息,因此我们要将这些词过滤掉
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
这个跟我之前做的一篇神经网络实践之情感分类相似
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling 【负采样】
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
This notebook implements the word2vec algorithm with the skip-gram architecture; word2vec is mainly used in NLP.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
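A tiny numpy illustration of the lookup shortcut (toy sizes and a hypothetical token index):
import numpy as np
embed_weights = np.random.rand(10000, 300)   # vocab_size x n_hidden weight matrix
heart_idx = 958                              # integer token assumed for "heart"
hidden_layer = embed_weights[heart_idx]      # equivalent to multiplying a one-hot row vector by embed_weights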
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
When we deal with language, one-hot encoding may produce vectors with tens of thousands or even tens of millions of entries, which is very inefficient.
word2vec tries to find a more efficient representation that can encode words with far fewer dimensions.
There are two models: CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text) # the preprocessing replaces punctuation with tokens like <PERIOD> and removes words that occur fewer than 5 times
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
# 此处我们将words变为了 integers 的数组
Explanation: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
创建两个关系,index => word, word => index, word按照出现的频率降序排列
End of explanation
from collections import Counter
#int_words[:10] # 每个出现的次数
word_counts = Counter(int_words)
word_counts.most_common()
total_count = len(int_words) # total number of words
freqs = {word: count/total_count for word, count in word_counts.items()}
threshold = 1e-5 # t
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
## Your code here
import random
train_words = [word for word in int_words if p_drop[word] < random.random()]
#train_words = # The final subsampled word list
Explanation: Subsampling
Because some high-frequency words (such as "the" and "of") don't provide much useful information, we want to filter them out.
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
np.random.randint(1, 5)
words[0:2] # 这是一个前逼后开的区间
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
small_words = words[:30]
print(small_words)
# for small in small_words:
# print(int_to_vocab[small])
get_target(words,5,2)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
#10 // 3 # 整除
#range(0,10,3)
#words[0:3]
# get_batches 返回 x,x,x,x,x => word1,word2,word3,...
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size] # keep only full batches; drop the leftover words that don't fill one
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
batch_size = 5
words = train_words[:30]
n_batches = len(words)//5
words = words[:n_batches*batch_size]
print(words,n_batches)
# for idx in range(0, len(words), batch_size):
# print(idx)
x, y = [], []
batch = words[0:0+5]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, 2)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
print(x,y)
# batches = get_batches(train_words[:30],5)
# for x,y in batches:
# print(x,y)
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, shape=[None], name='inputs')
#labels = tf.placeholder(tf.int32, [None, 1], name='labels')
labels = tf.placeholder(tf.int32, shape=[None, 1], name='labels')
inputs.get_shape()
labels.get_shape()
Explanation: Building the graph
This is similar to my earlier neural-network practice notebook on sentiment classification.
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
# tf.random_uniform samples from a uniform distribution
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1.0, 1.0))
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
embed.get_shape()
embedding.get_shape()
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# tf.transpose performs a transpose
## tf.nn.sampled_softmax_loss ~~ tf.nn.softmax(tf.matmul(inputs, tf.transpose(weights)) + biases).
# Number of negative labels to sample
# sampled softmax loss is only used during training
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
# softmax_w = tf.Variable(tf.truncated_normal((n_embedding, n_vocab), stddev=0.1))
softmax_b = tf.Variable(tf.zeros([n_vocab]))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
# loss = tf.nn.nce_loss(weights=softmax_w,
# biases=softmax_b,
# labels=labels,
# inputs=embed,
# num_sampled=n_sampled,
# num_classes=n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling 【负采样】
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
# valid_embedding shape (None,200)
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
# compute similarity: each row of `similarity` corresponds to one validation word, each column its similarity to another vocabulary word
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
random.sample(range(100),16//2) # randomly pick 16 // 2 numbers from the range
tf.reduce_sum(np.array([[1,2,3],[2,4,6]]),1,keep_dims=True) # result is 2 rows x 1 column: we summed along each row
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
#np.array([1,2,3])[:,None]
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
6,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
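For illustration, a completed authors cell would follow the pattern below; the name and e-mail shown are hypothetical placeholders, not actual document authors.
# Illustrative only - replace with the real document author(s)
DOC.set_author("Jane Doe", "jane.doe@example.org")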
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
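As an illustration of how a free-text (STRING, cardinality 1.1) property is answered, the cell above could be completed as follows; the wording is a made-up placeholder, not a description of the actual CCCMA land model.
# Hypothetical example of a STRING 1.1 answer
DOC.set_value("Land surface scheme with prognostic soil temperature, soil moisture and snow, coupled to the atmosphere at every model time step.")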
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
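To illustrate a multi-valued ENUM (cardinality 0.N): following the plural "PROPERTY VALUE(S)" comment in the cell above, one set_value call per selected choice is assumed here. The selections are purely illustrative and are copied from the valid choices listed in that cell.
# Hypothetical example: several fluxes selected from the controlled vocabulary
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")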
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
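A BOOLEAN property takes a bare True or False, for example (illustrative value only, not the actual CCCMA answer):
# Hypothetical example of a BOOLEAN answer
DOC.set_value(False)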
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
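An INTEGER property takes a plain number; for a land time step this would be a value in seconds, e.g. a 30-minute step (illustrative value only):
# Hypothetical example of an INTEGER answer (seconds)
DOC.set_value(1800)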
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
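A single-valued ENUM (cardinality 1.1) is answered with exactly one string copied verbatim from the valid choices listed above, for instance (illustrative pick, not the actual CCCMA choice):
# Hypothetical example of an ENUM 1.1 answer
DOC.set_value("Explicit diffusion")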
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
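For illustration, a filled-in value cell for an ENUM property such as this one could look like the line below; the chosen value is only a hypothetical example taken from the valid choices listed above.
# Illustration (hypothetical choice from the valid options above):
DOC.set_value("flood plains")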
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled with rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are endorheic basins (basins not flowing to the ocean) included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
6,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis - Programming
Week 1
Exercises with solutions
Exercise 1. Write a Python program that produces the truth table of the following expression
Step1: Exercise 2. The expressions from assignments 3 and 4 (in the slides) have names of their own:
$(\neg{A}) \lor B$ is called implication and is written as $A \implies B$,
$(A \lor B) \land \neg{(A \land B)}$ is called exclusive or (xor) and is written as $A \oplus B$.
<br/>The operator $\Leftrightarrow$ is called equivalence (equality).
The expression $A \Leftrightarrow B$ only yields True when the value of $A$ equals that of $B$.
Come up with an expression in $A$ and $B$, using the operators or, and and not, that produces exactly the result of the equivalence operator, and check your expression with a Python program.
<table width="50%">
<tr><th>$A$</th><th>$B$</th><th>$A \Leftrightarrow B$</th></tr>
<tr><td>False</td><td>False</td><td>True</td></tr>
<tr><td>False</td><td>True</td><td>False</td></tr>
<tr><td>True</td><td>False</td><td>False</td></tr>
<tr><td>True</td><td>True</td><td>True</td></tr>
</table>
Exercise 2 - solution
For technical reasons, the derivation below uses a slightly different notation
Step2: Exercise 4. A tautology is an expression that always evaluates to true, regardless of the values of the variables.
The expression $A \lor \neg{A}$ is an example of a tautology.
Write Python programs to determine whether the expressions below are tautologies.
Note | Python Code:
## Exercise 1 - solution
for A in [False, True]:
for B in [False, True]:
print(A, B, not(A or B))
Explanation: Data Analysis - Programming
Week 1
Exercises with solutions
Exercise 1. Write a Python program that produces the truth table of the following expression:
$\neg{(A \lor B)}$ (Quine's Dagger)
End of explanation
## Exercise 3 - solution
# check that -(-A | -B) == A & B
for A in [False, True]:
    for B in [False, True]:
        print(A, B, not(not A or not B), A and B)
# automated check
for A in [False, True]:
    for B in [False, True]:
        if not(not A or not B) != (A and B):  # parentheses needed: '!=' binds tighter than 'and'
            print("The expression -(-A | -B) is not equal to (A & B) for A", A, "and B", B)
Explanation: Exercise 2. The expressions from assignments 3 and 4 (in the slides) have names of their own:
$(\neg{A}) \lor B$ is called implication and is written as $A \implies B$,
$(A \lor B) \land \neg{(A \land B)}$ is called exclusive or (xor) and is written as $A \oplus B$.
<br/>The operator $\Leftrightarrow$ is called equivalence (equality).
The expression $A \Leftrightarrow B$ only yields True when the value of $A$ equals that of $B$.
Come up with an expression in $A$ and $B$, using the operators or, and and not, that produces exactly the result of the equivalence operator, and check your expression with a Python program.
<table width="50%">
<tr><th>$A$</th><th>$B$</th><th>$A \Leftrightarrow B$</th></tr>
<tr><td>False</td><td>False</td><td>True</td></tr>
<tr><td>False</td><td>True</td><td>False</td></tr>
<tr><td>True</td><td>False</td><td>False</td></tr>
<tr><td>True</td><td>True</td><td>True</td></tr>
</table>
Exercise 2 - solution
For technical reasons, the derivation below uses a slightly different notation:
| operator in mathematical notation | operator in alternative notation |
|:-:|:-:|
| $\neg{}$ | - |
| $\lor$ | \| |
| $\land$ | & |
Derivation of $A \Leftrightarrow B$ using $\neg{}$, $\lor$ and $\land$:
A | B | A \| B | -(A \| B) | A & B | -(A \| B) \| (A & B)
:-:|:-:|:-:|:-:|:-:|:-:
0 | 0 | 0 | 1 | 0 | 1
0 | 1 | 1 | 0 | 0 | 0
1 | 0 | 1 | 0 | 0 | 0
1 | 1 | 1 | 0 | 1 | 1
Exercise 3. Strictly speaking, the extra operators from exercise 2 are not needed, because you can build them from or, and and not.
In fact, you can even get by with only or and not!
Come up with a combination of or and not whose truth table is the same as that of and. Check your expression with a Python program.
<table width="50%">
<tr><th>$A$</th><th>$B$</th><th>$A \land B$</th></tr>
<tr><td>False</td><td>False</td><td>False</td></tr>
<tr><td>False</td><td>True</td><td>False</td></tr>
<tr><td>True</td><td>False</td><td>False</td></tr>
<tr><td>True</td><td>True</td><td>True</td></tr>
</table>
<br/><br/>
Exercise 3 - solution
See the solution of exercise 2 for an explanation of the notation used.
Derivation of $A \land B$ using $\neg{}$ and $\lor$:
A | B | -A | -B | -A \| -B | -(-A \| -B) | A & B
:-:|:-:|:-:|:-:|:-:|:-:|:-:
0 | 0 | 1 | 1 | 1 | 0 | 0
0 | 1 | 1 | 0 | 1 | 0 | 0
1 | 0 | 0 | 1 | 1 | 0 | 0
1 | 1 | 0 | 0 | 0 | 1 | 1
End of explanation
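As a quick extra check of the Exercise 2 derivation, the truth table of -(A | B) | (A & B) can be compared directly against A == B, which for booleans is exactly the equivalence operator:
# Extra check: -(A or B) or (A and B) should equal A <=> B (i.e. A == B)
for A in [False, True]:
    for B in [False, True]:
        print(A, B, not(A or B) or (A and B), A == B)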
## Exercise 4 - solution
# automated check: (A or B) -> (A or B)
# expression translated as -(A or B) or (A or B)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        expr = not(A or B) or (A or B)
        if not expr:
            is_tautology = False
if is_tautology:
    print("The expression (A or B) -> (A or B) is a tautology")
else:
    print("The expression (A or B) -> (A or B) is not a tautology")
# automated check: (A or B) -> (A and B)
# expression translated as -(A or B) or (A and B)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        expr = not(A or B) or (A and B)
        if not expr:
            is_tautology = False
if is_tautology:
    print("The expression (A or B) -> (A and B) is a tautology")
else:
    print("The expression (A or B) -> (A and B) is not a tautology")
# automated check: (-A -> B) and (-A -> -B)
# expression translated as (-(-A) or B) and (-(-A) or -B)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        expr = (not(not(A)) or B) and (not(not(A)) or not(B))
        if not expr:
            is_tautology = False
if is_tautology:
    print("The expression (-A -> B) and (-A -> -B) is a tautology")
else:
    print("The expression (-A -> B) and (-A -> -B) is not a tautology")
# automated check: ((A -> B) and (B -> C)) -> (A -> C)
# expression translated as -((-A or B) and (-B or C)) or (-A or C)
is_tautology = True  ## until proven otherwise
for A in [False, True]:
    for B in [False, True]:
        for C in [False, True]:
            expr = not((not A or B) and (not B or C)) or (not A or C)
            if not expr:
                is_tautology = False
if is_tautology:
    print("The expression ((A -> B) and (B -> C)) -> (A -> C) is a tautology")
else:
    print("The expression ((A -> B) and (B -> C)) -> (A -> C) is not a tautology")
Explanation: Exercise 4. A tautology is an expression that always evaluates to true, regardless of the values of the variables.
The expression $A \lor \neg{A}$ is an example of a tautology.
Write Python programs to determine whether the expressions below are tautologies.
Note: the implication $A \implies B$ can be computed, as described in exercise 2, with $(\neg{A}) \lor B$
1. $(A \lor B) \implies (A \lor B)$
2. $(A \lor B) \implies (A \land B)$
3. $(\neg{A} \implies B) \land (\neg{A} \implies \neg{B})$
4. $((A \implies B) \land (B \implies C)) \implies (A \implies C)$
End of explanation |
6,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Resit Assignment part A
Deadline
Step2: Please make sure you can load the English spaCy model
Step3: Exercise 1
Step4: Please test your function using the following function call
Step5: Exercise 2
Step7: Exercise 3
Step8: Please test you function by running the following cell
Step9: Exercise 4
Step10: tip 2
Step11: tip 3
Step12: tip 4
Step14: Define a function called extract_statistics that has the following parameters
Step15: Exercise 5
Step17: Define a function called process_all_txt_files that has the following parameters
Step18: Exercise 6 | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Assignments-colab/ASSIGNMENT_RESIT_A.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
import spacy
Explanation: Resit Assignment part A
Deadline: Friday, November 13, 2020 before 17:00
Please name your files:
ASSIGNMENT-RESIT-A.ipynb
utils.py (from part B)
raw_text_to_coll.py (from part B)
Please name your zip file as follows: RESIT-ASSIGNMENT.zip and upload it via Canvas (Resit Assignment).
- Please submit your assignment on Canvas: Resit Assignment
- If you have questions about this topic, please contact [email protected].
Questions and answers will be collected in this Q&A document,
so please check if your question has already been answered.
All of the covered chapters are important to this assignment. However, please pay special attention to:
- Chapter 10 - Dictionaries
- Chapter 11 - Functions and scope
* Chapter 14 - Reading and writing text files
* Chapter 15 - Off to analyzing text
- Chapter 17 - Data Formats II (JSON)
- Chapter 19 - More about Natural Language Processing Tools (spaCy)
In this assignment:
* we are going to process the texts in ../Data/Dreams/*txt
* for each file, we are going to determine:
* the number of characters
* the number of sentences
* the number of words
* the longest word
* the longest sentence
Note
This notebook should be placed in the same folder as the other Assignments!
Loading spaCy
Please make sure that spaCy is installed on your computer
End of explanation
nlp = spacy.load('en_core_web_sm')
Explanation: Please make sure you can load the English spaCy model:
End of explanation
# your code here
Explanation: Exercise 1: get paths
Define a function called get_paths that has the following parameter:
* input_folder: a string
The function:
* stores all paths to .txt files in the input_folder in a list
* returns a list of strings, i.e., each string is a file path
End of explanation
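One possible implementation, shown here only as a reference sketch (it uses the standard library glob and os modules; other approaches, e.g. pathlib, work just as well):
import glob
import os

def get_paths(input_folder):
    # collect the paths of all .txt files directly inside input_folder
    return glob.glob(os.path.join(input_folder, '*.txt'))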
paths = get_paths(input_folder='../Data/Dreams')
print(paths)
Explanation: Please test your function using the following function call
End of explanation
# your code here
Explanation: Exercise 2: load text
Define a function called load_text that has the following parameter:
* txt_path: a string
The function:
* opens the txt_path for reading and loads the contents of the file as a string
* returns a string, i.e., the content of the file
End of explanation
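A minimal reference sketch for this exercise (one possible solution, not the only one):
def load_text(txt_path):
    # open the file for reading and return its full contents as one string
    with open(txt_path, encoding='utf-8') as infile:
        return infile.read()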
def return_the_longest(list_of_strings):
    """
given a list of strings, return the longest string
if multiple strings have the same length, return one of them.
    :param str list_of_strings: a list of strings
    """
Explanation: Exercise 3: return the longest
Define a function called return_the_longest that has the following parameter:
* list_of_strings: a list of strings
The function:
* returns the string with the highest number of characters. If multiple strings have the same length, return one of them.
End of explanation
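One concise way to implement this, as a reference sketch (any loop-based solution is equally valid):
def return_the_longest(list_of_strings):
    # max with key=len returns one of the longest strings
    return max(list_of_strings, key=len)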
a_list_of_strings = ["this", "is", "a", "sentence"]
longest_string = return_the_longest(a_list_of_strings)
error_message = f'the longest string should be "sentence", you provided {longest_string}'
assert longest_string == 'sentence', error_message
Explanation: Please test your function by running the following cell:
End of explanation
a_text = 'this is one sentence. this is another.'
doc = nlp(a_text)
Explanation: Exercise 4: extract statistics
We are going to use spaCy to extract statistics from Vickie's dreams! Here are a few tips below about how to use spaCy:
tip 1: process text with spaCy
End of explanation
num_chars = len(doc.text)
print(num_chars)
Explanation: tip 2: the number of characters is the length of the document
End of explanation
for sent in doc.sents:
sent = sent.text
print(sent)
Explanation: tip 3: loop through the sentences of a document
End of explanation
for token in doc:
word = token.text
print(word)
Explanation: tip 4: loop through the words of a document
End of explanation
def extract_statistics(nlp, txt_path):
    """
given a txt_path
-use the load_text function to load the text
-process the text using spaCy
:param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))
:param str txt_path: path to txt file
:rtype: dict
:return: a dictionary with the following keys:
-"num_sents" : the number of sentences
-"num_chars" : the number of characters
-"num_tokens" : the number of words
-"longest_sent" : the longest sentence
-"longest_word" : the longest word
stats = extract_statistics(nlp, txt_path=paths[0])
stats
Explanation: Define a function called extract_statistics that has the following parameters:
* nlp: the result of calling spacy.load('en_core_web_sm')
* txt_path: path to a txt file, e.g., '../Data/Dreams/vickie8.txt'
The function:
* loads the content of the file using the function load_text
* processes the content of the file using nlp(content) (see tip 1 of this exercise)
The function returns a dictionary with five keys:
* num_sents: the number of sentences in the document
* num_chars: the number of characters in the document
* num_tokens: the number of words in the document
* longest_sent: the longest sentence in the document
* Please make a list with all the sentences and call the function return_the_longest to retrieve the longest sentence
* longest_word: the longest word in the document
* Please make a list with all the words and call the function return_the_longest to retrieve the longest word
Test the function on one of the files from Vickie's dreams.
End of explanation
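A reference sketch of how the pieces could fit together (it assumes load_text and return_the_longest are defined as in the earlier exercises):
def extract_statistics(nlp, txt_path):
    # process the raw text with spaCy
    doc = nlp(load_text(txt_path))
    sentences = [sent.text for sent in doc.sents]
    words = [token.text for token in doc]
    return {
        'num_sents': len(sentences),
        'num_chars': len(doc.text),
        'num_tokens': len(words),
        'longest_sent': return_the_longest(sentences),
        'longest_word': return_the_longest(words),
    }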
import os
basename = os.path.basename('../Data/Dreams/vickie1.txt')[:-4]
print(basename)
Explanation: Exercise 5: process all txt files
tip 1: how to obtain the basename of a file
End of explanation
def process_all_txt_files(nlp, input_folder):
    """
given a list of txt_paths
-process each with the extract_statistics function
:param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))
:param list txt_paths: list of paths to txt files
:rtype: dict
:return: dictionary mapping:
    -basename -> output of extract_statistics function
    """
basename_to_stats = process_all_txt_files(nlp, input_folder='../Data/Dreams')
basename_to_stats
Explanation: Define a function called process_all_txt_files that has the following parameters:
* nlp: the result of calling spacy.load('en_core_web_sm')
* input_folder: a string (we will test it using '../Data/Dreams')
The function:
* obtains a list of txt paths using the function get_paths with input_folder as an argument
* loops through the txt paths one by one
* for each iteration, the extract_statistics function is called with txt_path as an argument
The function returns a dictionary:
* the keys are the basenames of the txt files (see tip 1 of this exercise)
* the values are the output of calling the function extract_statistics for a specific file
Test your function using '../Data/Dreams' as a value for the parameter input_folder.
End of explanation
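One way to put it together, as a sketch that reuses get_paths and extract_statistics from above:
import os

def process_all_txt_files(nlp, input_folder):
    # map each file's basename (without the .txt extension) to its statistics
    basename_to_stats = {}
    for txt_path in get_paths(input_folder):
        basename = os.path.basename(txt_path)[:-4]
        basename_to_stats[basename] = extract_statistics(nlp, txt_path)
    return basename_to_stats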
import json
for basename, stats in basename_to_stats.items():
pass
Explanation: Exercise 6: write to disk
In this exercise, you are going to write the results to your computer.
Please loop through basename_to_stats and create one JSON file for each dream.
the path is f'{basename}.json', i.e., 'vickie1.json', 'vickie2.json', etc. (please write them to the same folder as this notebook)
the content of each JSON file is each value of basename_to_stats
End of explanation |
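A sketch of how the loop above could be completed (it assumes the values in basename_to_stats are plain strings and integers, as in the sketches above, so they are JSON serializable):
import json

for basename, stats in basename_to_stats.items():
    # one JSON file per dream, written next to this notebook
    with open(f'{basename}.json', 'w', encoding='utf-8') as outfile:
        json.dump(stats, outfile, indent=4)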
6,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
UMSI course recommender database
Author
Step1: This part calculates the cosine similarity
Step2: Create a new text file called allpairs.txt
Step3: 2) Create database to store course pairs
Create a database courseSimilarity.db and create tables for the data in allpairs.txt
Create queries to insert the data into courseSimilarity.db (the source course, the target course, and the cosine similarity value)
Source code | Python Code:
import re
import math
from operator import itemgetter
enrolled = {}
numstudents = {}
numincommon = {}
scores = {}
titles = {}
for line in open("courseenrollment.txt", "r"):
line = line.rstrip('\s\r\n')
(student, graddate, spec, term, dept, courseno) = line.split('\t')
# Create a variable course that consists of the dept. abbreviation
# Followed by a space and course number
course=dept+' '+courseno
if course not in enrolled:
enrolled[course] = {student: 1}
if student not in enrolled[course]:
enrolled[course][student] = 1
if course not in numstudents:
numstudents[course] = 0
numstudents[course] += 1
Explanation: UMSI course recommender database
Author: Chisheng Li
1) Output every pair of courses (Source Course and Target Course) and their Cosine Similarity scores to allpairs.txt
End of explanation
for course1 in enrolled:
for course2 in enrolled:
# Initialize each value in the 2d dict.
if course1 not in numincommon:
numincommon[course1] = {course2: 0}
if course2 not in numincommon[course1]:
numincommon[course1][course2] = 0
for student in enrolled[course2]:
if student in enrolled[course1]:
# If the same student is enrolled in both courses
# Increment the counter for students in common
numincommon[course1][course2] += 1
denominator = math.sqrt(numstudents[course1] * numstudents[course2])
# Same initialization as numincommon
if course1 not in scores:
scores[course1] = {course2: 0}
if course2 not in scores[course1]:
scores[course1][course2] = 0
scores[course1][course2] = numincommon[course1][course2]/denominator
for line in open("coursetitles.txt", "r"):
line = line.rstrip('\s\r\n')
(course, title) = line.split('\t')
# Strip trailing sections "-1" and spaces from the course numbers
# and replace underscores with spaces.
# Assign the result to the variable "course2"
course=course.replace('-1',"")
course2 = course.replace("_"," ")
# Regexes for the subsequent if/continue checks
si_re = re.compile(r"^SI \d+.*")
found_re = re.compile(r"SI 50[01234].*")
Explanation: This part calculates the cosine similarity:
End of explanation
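For reference, the score computed in the loop above is the cosine similarity between the two courses' enrollment sets: if $S(c)$ denotes the set of students enrolled in course $c$, then
$$\mathrm{sim}(c_1, c_2) = \frac{|S(c_1) \cap S(c_2)|}{\sqrt{|S(c_1)|\,|S(c_2)|}},$$
which is exactly numincommon divided by the square root of the product of the two enrollment counts.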
test = open("allpairs.txt",'w')
test.write("Source Course"+"\t"+"Target Course"+"\t"+"Cosine Similarity")
for course1 in sorted(scores):
i = 1
# skip if course was not sufficiently popular
if (numstudents[course1] < 5):
continue
# skip if the course is not an SI course (use regexp)
if (si_re.match(course1) is None):
continue
# skip if the course is one of the foundations: 500,501,502,503,504
if found_re.match(course1):
continue
for course2,score in sorted(scores[course1].items(),
key=itemgetter(1),reverse = True):
# skip if the course (course2) is the one we're asking about (course1)
if course2 == course1:
continue
# skip if (course2) is one of the old foundations: 501,502,503,504
if found_re.match(course2):
continue
# only consider course2 if the number of students in common is >=1
if(numincommon[course1][course2] < 1):
continue
# write the data set to pairs.txt
test.write("\n%s\t%s\t%s" % (course1,course2,score))
test.close()
Explanation: Create a new text file called allpairs.txt
End of explanation
import sqlite3 as lite
import sys
con = None
pairs=[]
try:
    con = lite.connect('courseSimilarity.db')
cur = con.cursor()
cur.execute("select * from courses where score >= 0 and score <= 0.25")
query1 = open('cosine0.25.txt','w')
query1.write("Source Course" + "\t" + "Target Course")
print "%s" % ("Course pairs with cosine similarity score >= 0 and <= 0.25")
print "-------------------------------------------------------"
print "%s\t%s" % ("Source Course","Target Course")
for row in cur:
t1 = row[1] + row[0]
t2 = row[0] + row[1]
if t1 in pairs: continue
pairs.append(t2)
query1.write("\n%s\t%s" %(row[0],row[1]))
print "%s\t%s" % (row[0],row[1])
print
cur.execute("select * from courses where score > 0.25 and score <= 0.5")
query2 = open('cosine0.25-0.5.txt','w')
query2.write("Source Course" + "\t" + "Target Course")
print "%s" % ("Course pairs with cosine similarity score > 0.25 and <= 0.5")
print "-------------------------------------------------------"
print "%s\t%s" % ("Source Course","Target Course")
for row in cur:
t1 = row[1] + row[0]
t2 = row[0] + row[1]
if t1 in pairs: continue
pairs.append(t2)
query2.write("\n%s\t%s" %(row[0],row[1]))
print "%s\t%s" % (row[0],row[1])
print
cur.execute("select * from courses where score > 0.5 and score <= 0.75")
query3=open('cosine0.5-0.75.txt','w')
query3.write("Source Course" + "\t" + "Target Course")
print "%s" % ("Course pairs with cosine similarity score > 0.5 and <= 0.75")
print "-------------------------------------------------------"
print "%s\t%s" % ("Source Course","Target Course")
for row in cur:
t1=row[1]+row[0]
t2=row[0]+row[1]
if t1 in pairs: continue
pairs.append(t2)
query3.write("\n%s\t%s" %(row[0],row[1]))
print "%s\t%s" % (row[0],row[1])
print
cur.execute("select * from courses where score > 0.75 and score <= 1")
query4=open('cosine0.75-1.txt','w')
query4.write("Source Course" + "\t" + "Target Course")
print "%s" % ("Course pairs with cosine similarity score > 0.75 and <= 1")
print "-------------------------------------------------------"
print "%s\t%s" % ("Source Course","Target Course")
for row in cur:
t1=row[1]+row[0]
t2=row[0]+row[1]
if t1 in pairs: continue
pairs.append(t2)
query4.write("\n%s\t%s" %(row[0],row[1]))
print "%s\t%s" % (row[0],row[1])
except lite.Error, e:
print "Error %s:" % e.args[0]
sys.exit(1)
finally:
if con:
con.close()
Explanation: 2) Create database to store course pairs
Create a database courseSimilarity.db and create tables for the data in allpairs.txt
Create queries to insert the data into courseSimilarity.db (the source course, the target course, and the cosine similarity value)
Source code: createDB.py
3) Query courseSimilarity.db for courses within certain ranges of cosine similarity
Output the course pairs in distinct ranges of cosine similarity:
- for values from 0 <= x <= 0.25 (cosine0.25.txt)
- for values from 0.25 < x <= 0.5 (cosine0.25-0.5.txt)
- for values from 0.5 < x <= 0.75 (cosine0.5-0.75.txt)
- for values from 0.75 < x <= 1 (cosine0.75-1.txt)
End of explanation |
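createDB.py itself is not shown above. A minimal sketch of what such a script could look like is given below; the table name courses and the score column follow the queries in section 3, while the source and target column names are illustrative assumptions.
# createDB.py -- illustrative sketch, not the original script
import sqlite3 as lite

con = lite.connect('courseSimilarity.db')
cur = con.cursor()
cur.execute("create table if not exists courses (source text, target text, score real)")

with open("allpairs.txt") as infile:
    next(infile)  # skip the header line written by the previous step
    for line in infile:
        source, target, score = line.rstrip("\n").split("\t")
        cur.execute("insert into courses values (?, ?, ?)", (source, target, float(score)))

con.commit()
con.close()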
6,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
    """
    Download file from <url>
    :param url: URL to file
    :param file: Local file path
    """
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
    """
    Uncompress features and labels from a zip file
    :param file: The zip file to extract the data from
    """
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
                    # Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
    """
    Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
    :param image_data: The image data to be normalized
    :return: Normalized image data
    """
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
x_min = np.amin(image_data)
x_max = np.amax(image_data)
return a + np.true_divide((image_data - x_min) * (b - a), (x_max - x_min))
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
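As a quick worked example of the formula above: for a raw pixel value of $X = 128$, the scaled value is $X' = 0.1 + \frac{(128 - 0)(0.9 - 0.1)}{255 - 0} \approx 0.502$, comfortably inside the $[0.1, 0.9]$ range.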
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and come back to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32, [None, features_count])
labels = tf.placeholder(tf.float32, [None, labels_count])
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal([features_count, labels_count]))  # truncated normal, as described above
biases = tf.Variable(tf.zeros([labels_count]))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 1
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
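If you prefer to compare the candidate learning rates programmatically instead of editing the cell above by hand, a rough sweep could look like the sketch below; it reuses the tensors defined earlier and only trains for a handful of batches per candidate, so the numbers are indicative only.
# Sketch: rough comparison of the configuration 1 learning rates
for candidate_lr in [0.8, 0.5, 0.1, 0.05, 0.01]:
    candidate_optimizer = tf.train.GradientDescentOptimizer(candidate_lr).minimize(loss)
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())
        for batch_i in range(100):
            batch_features = train_features[batch_i * batch_size:(batch_i + 1) * batch_size]
            batch_labels = train_labels[batch_i * batch_size:(batch_i + 1) * batch_size]
            session.run(candidate_optimizer, feed_dict={features: batch_features, labels: batch_labels})
        print(candidate_lr, session.run(accuracy, feed_dict=valid_feed_dict))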
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
6,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLEU score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which we will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.
At last, we'll benchmark our results using the industry standard BLEU score.
Step1: Downloading the Data
We'll use a language dataset provided by http
Step2: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following
Step3: Sentence Integerizing
The utils_preproc.tokenize() method does the following
Step4: The outputted tokenizer can be used to get back the actual words
from the integers representing them
Step5: Creating the tf.data.Dataset
load_and_preprocess
Let's first implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the English
preprocessed sentences, while the second component contains the
Spanish ones
Step6: load_and_integerize
Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple
Step7: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set a variable for that
Step8: Now let's load and integerize the sentence pairs and store the tokenizer for the source and the target language into the int_lang and targ_lang variables respectively
Step9: Let us store the maximal sentence length of both languages into two variables
Step10: We are now using scikit-learn train_test_split to create our splits
Step11: Let's make sure the number of examples in each split looks good
Step12: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0
Step13: Create tf.data dataset for train and eval
Below we implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples of the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integer versions of the source-target language pairs and shifted_target is the same as target_sentence but with indices shifted by 1.
Remark
Step14: Let's now create the actual train and eval dataset using the function above
Step15: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
Step16: Let's implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
Step17: We now implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the source language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important
Step18: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output
Step19: To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to the types of input/output in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder
Step20: Let's now train the model!
Step21: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models
Step23: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target language tokenizer we will need to extract back words from predicted word integers
* max_decode_length which is the length after which we stop decoding if the <stop> token has not been predicted
Note
Step24: Now we're ready to predict!
Step25: Checkpoint Model
Now let us save the full training encoder-decoder model, as well as the separate encoder and decoder models, to disk for later reuse
Step26: Evaluation Metric (BLEU)
Unlike say, image classification, there is no one right answer for a machine translation. However our current loss metric, cross entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1-4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info
Step27: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes. | Python Code:
!pip install nltk
import os
import pickle
import sys
import nltk
import numpy as np
import pandas as pd
import tensorflow as tf
import utils_preproc
from sklearn.model_selection import train_test_split
from tensorflow.keras.layers import GRU, Dense, Embedding, Input
from tensorflow.keras.models import Model, load_model
print(tf.__version__)
SEED = 0
MODEL_PATH = "translate_models/baseline"
DATA_URL = (
"http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip"
)
LOAD_CHECKPOINT = False
tf.random.set_seed(SEED)
Explanation: Simple RNN Encode-Decoder for Translation
Learning Objectives
1. Learn how to create a tf.data.Dataset for seq2seq problems
1. Learn how to train an encoder-decoder model in Keras
1. Learn how to save the encoder and the decoder as separate models
1. Learn how to piece together the trained encoder and decoder into a translation function
1. Learn how to use the BLUE score to evaluate a translation model
Introduction
In this lab we'll build a translation model from Spanish to English using a RNN encoder-decoder model architecture.
We will start by creating train and eval datasets (using the tf.data.Dataset API) that are typical for seq2seq problems. Then we will use the Keras functional API to train an RNN encoder-decoder model, which will save as two separate models, the encoder and decoder model. Using these two separate pieces we will implement the translation function.
At last, we'll benchmark our results using the industry standard BLEU score.
End of explanation
path_to_zip = tf.keras.utils.get_file(
"spa-eng.zip", origin=DATA_URL, extract=True
)
path_to_file = os.path.join(os.path.dirname(path_to_zip), "spa-eng/spa.txt")
print("Translation data stored at:", path_to_file)
data = pd.read_csv(
path_to_file, sep="\t", header=None, names=["english", "spanish"]
)
data.sample(3)
Explanation: Downloading the Data
We'll use a language dataset provided by http://www.manythings.org/anki/. The dataset contains Spanish-English translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
The dataset is a curated list of 120K translation pairs from http://tatoeba.org/, a platform for community contributed translations by native speakers.
End of explanation
raw = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?",
]
processed = [utils_preproc.preprocess_sentence(s) for s in raw]
processed
Explanation: From the utils_preproc package we have written for you,
we will use the following functions to pre-process our dataset of sentence pairs.
Sentence Preprocessing
The utils_preproc.preprocess_sentence() method does the following:
1. Converts sentence to lower case
2. Adds a space between punctuation and words
3. Replaces tokens that aren't a-z or punctuation with space
4. Adds <start> and <end> tokens
For example:
End of explanation
integerized, tokenizer = utils_preproc.tokenize(processed)
integerized
Explanation: Sentence Integerizing
The utils_preproc.tokenize() method does the following:
Splits each sentence into a token list
Maps each token to an integer
Pads to length of longest sentence
It returns an instance of a Keras Tokenizer
containing the token-integer mapping along with the integerized sentences:
End of explanation
tokenizer.sequences_to_texts(integerized)
Explanation: The outputted tokenizer can be used to get back the actual words
from the integers representing them:
End of explanation
def load_and_preprocess(path, num_examples):
    with open(path) as fp:
lines = fp.read().strip().split("\n")
# TODO 1a
sentence_pairs = [
[utils_preproc.preprocess_sentence(sent) for sent in line.split("\t")]
for line in lines[:num_examples]
]
return zip(*sentence_pairs)
en, sp = load_and_preprocess(path_to_file, num_examples=10)
print(en[-1])
print(sp[-1])
Explanation: Creating the tf.data.Dataset
load_and_preprocess
Let's first implement a function that will read the raw sentence-pair file
and preprocess the sentences with utils_preproc.preprocess_sentence.
The load_and_preprocess function takes as input
- the path where the sentence-pair file is located
- the number of examples one wants to read in
It returns a tuple whose first component contains the english
preprocessed sentences, while the second component contains the
spanish ones:
End of explanation
def load_and_integerize(path, num_examples=None):
targ_lang, inp_lang = load_and_preprocess(path, num_examples)
# TODO 1b
input_tensor, inp_lang_tokenizer = utils_preproc.tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = utils_preproc.tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
Explanation: load_and_integerize
Using utils_preproc.tokenize, let us now implement the function load_and_integerize that takes as input the data path along with the number of examples we want to read in and returns the following tuple:
python
(input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer)
where
input_tensor is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
target_tensor is an integer tensor of shape (num_examples, max_length_targ) containing the integerized versions of the target language sentences
inp_lang_tokenizer is the source language tokenizer
targ_lang_tokenizer is the target language tokenizer
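As a quick added check (not part of the original notebook), we can integerize a handful of sentence pairs and confirm that the returned tensors have the expected (num_examples, max_length) shapes:
inp_small, targ_small, _, _ = load_and_integerize(path_to_file, num_examples=5)
print(inp_small.shape)   # (5, length of the longest integerized Spanish sentence among the 5)
print(targ_small.shape)  # (5, length of the longest integerized English sentence among the 5)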
End of explanation
TEST_PROP = 0.2
NUM_EXAMPLES = 30000
Explanation: Train and eval splits
We'll split this data 80/20 into train and validation, and we'll use only the first 30K examples, since we'll be training on a single GPU.
Let us set variables for that:
End of explanation
input_tensor, target_tensor, inp_lang, targ_lang = load_and_integerize(
path_to_file, NUM_EXAMPLES
)
Explanation: Now let's load and integerize the sentence pairs and store the tokenizers for the source and the target language into the inp_lang and targ_lang variables respectively:
End of explanation
max_length_targ = target_tensor.shape[1]
max_length_inp = input_tensor.shape[1]
Explanation: Let us store the maximal sentence length of both languages into two variables:
End of explanation
splits = train_test_split(
input_tensor, target_tensor, test_size=TEST_PROP, random_state=SEED
)
input_tensor_train = splits[0]
input_tensor_val = splits[1]
target_tensor_train = splits[2]
target_tensor_val = splits[3]
Explanation: We are now using scikit-learn train_test_split to create our splits:
End of explanation
(
len(input_tensor_train),
len(target_tensor_train),
len(input_tensor_val),
len(target_tensor_val),
)
Explanation: Let's make sure the number of examples in each split looks good:
End of explanation
print("Input Language; int to word mapping")
print(input_tensor_train[0])
print(utils_preproc.int2word(inp_lang, input_tensor_train[0]), "\n")
print("Target Language; int to word mapping")
print(target_tensor_train[0])
print(utils_preproc.int2word(targ_lang, target_tensor_train[0]))
Explanation: The utils_preproc.int2word function allows you to transform back the integerized sentences into words. Note that the <start> token is always encoded as 1, while the <end> token is always encoded as 0:
End of explanation
def create_dataset(encoder_input, decoder_input):
# TODO 1c
# shift ahead by 1
target = tf.roll(decoder_input, -1, 1)
# replace last column with 0s
zeros = tf.zeros([target.shape[0], 1], dtype=tf.int32)
target = tf.concat((target[:, :-1], zeros), axis=-1)
dataset = tf.data.Dataset.from_tensor_slices(
((encoder_input, decoder_input), target)
)
return dataset
Explanation: Create tf.data dataset for train and eval
Below we implement the create_dataset function that takes as input
* encoder_input which is an integer tensor of shape (num_examples, max_length_inp) containing the integerized versions of the source language sentences
* decoder_input which is an integer tensor of shape (num_examples, max_length_targ)containing the integerized versions of the target language sentences
It returns a tf.data.Dataset containing examples of the form
python
((source_sentence, target_sentence), shifted_target_sentence)
where source_sentence and target_sentence are the integer versions of source-target language pairs and shifted_target_sentence is the same as target_sentence but with indices shifted by 1.
Remark: In the training code, source_sentence (resp. target_sentence) will be fed as the encoder (resp. decoder) input, while shifted_target will be used to compute the cross-entropy loss by comparing the decoder output with the shifted target sentences.
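A small added illustration (mirroring the same tf.roll/tf.concat logic used above) shows the effect of the shift on a toy batch: each row is rolled one position to the left along the time axis and the last column is zeroed.
toy = tf.constant([[1, 5, 7, 0], [1, 9, 0, 0]])
toy_shifted = tf.roll(toy, -1, 1)
toy_zeros = tf.zeros([toy.shape[0], 1], dtype=toy.dtype)
toy_shifted = tf.concat((toy_shifted[:, :-1], toy_zeros), axis=-1)
# toy:         [[1, 5, 7, 0], [1, 9, 0, 0]]
# toy_shifted: [[5, 7, 0, 0], [9, 0, 0, 0]]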
End of explanation
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
train_dataset = (
create_dataset(input_tensor_train, target_tensor_train)
.shuffle(BUFFER_SIZE)
.repeat()
.batch(BATCH_SIZE, drop_remainder=True)
)
eval_dataset = create_dataset(input_tensor_val, target_tensor_val).batch(
BATCH_SIZE, drop_remainder=True
)
Explanation: Let's now create the actual train and eval datasets using the function above:
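An optional added sanity check is to pull one batch and confirm the ((encoder_input, decoder_input), shifted_target) structure and shapes:
(enc_in, dec_in), shifted_target = next(iter(train_dataset))
print(enc_in.shape, dec_in.shape, shifted_target.shape)
# expected: (64, max_length_inp) (64, max_length_targ) (64, max_length_targ)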
End of explanation
EMBEDDING_DIM = 256
HIDDEN_UNITS = 1024
INPUT_VOCAB_SIZE = len(inp_lang.word_index) + 1
TARGET_VOCAB_SIZE = len(targ_lang.word_index) + 1
Explanation: Training the RNN encoder-decoder model
We use an encoder-decoder architecture, however we embed our words into a latent space prior to feeding them into the RNN.
End of explanation
encoder_inputs = Input(shape=(None,), name="encoder_input")
# TODO 2a
encoder_inputs_embedded = Embedding(
input_dim=INPUT_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_inp,
)(encoder_inputs)
encoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer="glorot_uniform",
)
encoder_outputs, encoder_state = encoder_rnn(encoder_inputs_embedded)
Explanation: Let's implement the encoder network with Keras functional API. It will
* start with an Input layer that will consume the source language integerized sentences
* then feed them to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
The output of the encoder will be the encoder_outputs and the encoder_state.
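A brief added check of the symbolic shapes makes the two outputs concrete:
print(encoder_outputs.shape)  # (None, None, HIDDEN_UNITS): one hidden vector per time step
print(encoder_state.shape)    # (None, HIDDEN_UNITS): the final state, later used to seed the decoder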
End of explanation
decoder_inputs = Input(shape=(None,), name="decoder_input")
# TODO 2b
decoder_inputs_embedded = Embedding(
input_dim=TARGET_VOCAB_SIZE,
output_dim=EMBEDDING_DIM,
input_length=max_length_targ,
)(decoder_inputs)
decoder_rnn = GRU(
units=HIDDEN_UNITS,
return_sequences=True,
return_state=True,
recurrent_initializer="glorot_uniform",
)
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=encoder_state
)
Explanation: We now implement the decoder network, which is very similar to the encoder network.
It will
* start with an Input layer that will consume the target language integerized sentences
* then feed that input to an Embedding layer of EMBEDDING_DIM dimensions
* which in turn will pass the embeddings to a GRU recurrent layer with HIDDEN_UNITS
Important: The main difference with the encoder, is that the recurrent GRU layer will take as input not only the decoder input embeddings, but also the encoder_state as outputted by the encoder above. This is where the two networks are linked!
The output of the decoder will be the decoder_outputs and the decoder_state.
End of explanation
decoder_dense = Dense(TARGET_VOCAB_SIZE, activation="softmax")
predictions = decoder_dense(decoder_outputs)
Explanation: The last part of the encoder-decoder architecture is a softmax Dense layer that will create the next word probability vector or next word predictions from the decoder_output:
End of explanation
# TODO 2c
model = Model(inputs=[encoder_inputs, decoder_inputs], outputs=predictions)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
Explanation: To be able to train the encoder-decoder network defined above, we now need to create a trainable Keras Model by specifying which are the inputs and the outputs of our problem. They should correspond exactly to the types of inputs/outputs in our train and eval tf.data.Dataset, since that's what will be fed to the inputs and outputs we declare while instantiating the Keras Model.
While compiling our model, we should make sure that the loss is the sparse_categorical_crossentropy so that we can compare the true word indices for the target language as outputted by our train tf.data.Dataset with the next word predictions vector as outputted by the decoder:
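As a small added aside, sparse categorical cross-entropy compares an integer class index directly against a probability vector, so no one-hot encoding of the targets is needed:
print(tf.keras.losses.sparse_categorical_crossentropy([2], [[0.1, 0.2, 0.7]]).numpy())  # ~ -log(0.7) ~ 0.357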
End of explanation
STEPS_PER_EPOCH = len(input_tensor_train) // BATCH_SIZE
EPOCHS = 1
history = model.fit(
train_dataset,
steps_per_epoch=STEPS_PER_EPOCH,
validation_data=eval_dataset,
epochs=EPOCHS,
)
Explanation: Let's now train the model!
End of explanation
if LOAD_CHECKPOINT:
encoder_model = load_model(os.path.join(MODEL_PATH, "encoder_model.h5"))
decoder_model = load_model(os.path.join(MODEL_PATH, "decoder_model.h5"))
else:
# TODO 3a
encoder_model = Model(inputs=encoder_inputs, outputs=encoder_state)
decoder_state_input = Input(
shape=(HIDDEN_UNITS,), name="decoder_state_input"
)
# Reuses weights from the decoder_rnn layer
decoder_outputs, decoder_state = decoder_rnn(
decoder_inputs_embedded, initial_state=decoder_state_input
)
# Reuses weights from the decoder_dense layer
predictions = decoder_dense(decoder_outputs)
decoder_model = Model(
inputs=[decoder_inputs, decoder_state_input],
outputs=[predictions, decoder_state],
)
Explanation: Implementing the translation (or decoding) function
We can't just use model.predict(), because we don't know all the inputs we used during training. We only know the encoder_input (source language) but not the decoder_input (target language), which is what we want to predict (i.e., the translation of the source language)!
We do however know the first token of the decoder input, which is the <start> token. So using this plus the state of the encoder RNN, we can predict the next token. We will then use that token to be the second token of decoder input, and continue like this until we predict the <end> token, or we reach some defined max length.
So, the strategy now is to split our trained network into two independent Keras models:
an encoder model with signature encoder_inputs -> encoder_state
a decoder model with signature [decoder_inputs, decoder_state_input] -> [predictions, decoder_state]
This way, we will be able to encode the source language sentence into the vector encoder_state using the encoder and feed it to the decoder model along with the <start> token at step 1.
Given that input, the decoder will produce the first word of the translation, by sampling from the predictions vector (for simplicity, our sampling strategy here will be to take the next word to be the one whose index has the maximum probability in the predictions vector) along with a new state vector, the decoder_state.
At this point, we can feed the predicted first word as well as the new decoder_state back to the decoder to predict the second word of the translation.
This process can be continued until the decoder produces the token <end>.
This is how we will implement our translation (or decoding) function, but let us first extract a separate encoder and a separate decoder from our trained encoder-decoder model.
Remark: If we have already trained and saved the models (i.e, LOAD_CHECKPOINT is True) we will just load the models, otherwise, we extract them from the trained network above by explicitly creating the encoder and decoder Keras Models with the signature we want.
End of explanation
def decode_sequences(input_seqs, output_tokenizer, max_decode_length=50):
    """
    Arguments:
    input_seqs: int tensor of shape (BATCH_SIZE, SEQ_LEN)
    output_tokenizer: Tokenizer used to convert from int to words
    Returns translated sentences
    """
# Encode the input as state vectors.
states_value = encoder_model.predict(input_seqs)
# Populate the first character of target sequence with the start character.
batch_size = input_seqs.shape[0]
target_seq = tf.ones([batch_size, 1])
decoded_sentences = [[] for _ in range(batch_size)]
# TODO 4: Sampling loop
for i in range(max_decode_length):
output_tokens, decoder_state = decoder_model.predict(
[target_seq, states_value]
)
# Sample a token
sampled_token_index = np.argmax(output_tokens[:, -1, :], axis=-1)
tokens = utils_preproc.int2word(output_tokenizer, sampled_token_index)
for j in range(batch_size):
decoded_sentences[j].append(tokens[j])
# Update the target sequence (of length 1).
target_seq = tf.expand_dims(tf.constant(sampled_token_index), axis=-1)
# Update states
states_value = decoder_state
return decoded_sentences
Explanation: Now that we have a separate encoder and a separate decoder, let's implement a translation function, to which we will give the generic name of decode_sequences (to stress that this procedure is general to all seq2seq problems).
decode_sequences will take as input
* input_seqs which is the integerized source language sentence tensor that the encoder can consume
* output_tokenizer which is the target language tokenizer we will need to extract back words from predicted word integers
* max_decode_length which is the length after which we stop decoding if the <end> token has not been predicted
Note: Now that the encoder and decoder have been turned into Keras models, to feed them their input, we need to use the .predict method.
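To make the loop concrete, here is an added single decode step for one validation sentence, simply mirroring the body of decode_sequences with the two models split out above:
state = encoder_model.predict(input_tensor_val[:1])            # (1, HIDDEN_UNITS)
probs, state = decoder_model.predict([tf.ones([1, 1]), state])
print(np.argmax(probs[:, -1, :], axis=-1))                     # index of the predicted first target word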
End of explanation
sentences = [
"No estamos comiendo.",
"Está llegando el invierno.",
"El invierno se acerca.",
"Tom no comio nada.",
"Su pierna mala le impidió ganar la carrera.",
"Su respuesta es erronea.",
"¿Qué tal si damos un paseo después del almuerzo?",
]
reference_translations = [
"We're not eating.",
"Winter is coming.",
"Winter is coming.",
"Tom ate nothing.",
"His bad leg prevented him from winning the race.",
"Your answer is wrong.",
"How about going for a walk after lunch?",
]
machine_translations = decode_sequences(
utils_preproc.preprocess(sentences, inp_lang), targ_lang, max_length_targ
)
for i in range(len(sentences)):
print("-")
print("INPUT:")
print(sentences[i])
print("REFERENCE TRANSLATION:")
print(reference_translations[i])
print("MACHINE TRANSLATION:")
print(machine_translations[i])
Explanation: Now we're ready to predict!
End of explanation
if not LOAD_CHECKPOINT:
os.makedirs(MODEL_PATH, exist_ok=True)
# TODO 3b
model.save(os.path.join(MODEL_PATH, "model.h5"))
encoder_model.save(os.path.join(MODEL_PATH, "encoder_model.h5"))
decoder_model.save(os.path.join(MODEL_PATH, "decoder_model.h5"))
with open(os.path.join(MODEL_PATH, "encoder_tokenizer.pkl"), "wb") as fp:
pickle.dump(inp_lang, fp)
with open(os.path.join(MODEL_PATH, "decoder_tokenizer.pkl"), "wb") as fp:
pickle.dump(targ_lang, fp)
Explanation: Checkpoint Model
Now let us save the full training encoder-decoder model, as well as the separate encoder and decoder models, to disk for later reuse:
End of explanation
def bleu_1(reference, candidate):
reference = list(filter(lambda x: x != "", reference)) # remove padding
candidate = list(filter(lambda x: x != "", candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (1,), smoothing_function
)
def bleu_4(reference, candidate):
reference = list(filter(lambda x: x != "", reference)) # remove padding
candidate = list(filter(lambda x: x != "", candidate)) # remove padding
smoothing_function = nltk.translate.bleu_score.SmoothingFunction().method1
return nltk.translate.bleu_score.sentence_bleu(
reference, candidate, (0.25, 0.25, 0.25, 0.25), smoothing_function
)
Explanation: Evaluation Metric (BLEU)
Unlike, say, image classification, there is no one right answer for a machine translation. However, our current loss metric, cross-entropy, only gives credit when the machine translation matches the exact same word in the same order as the reference translation.
Many attempts have been made to develop a better metric for natural language evaluation. The most popular currently is Bilingual Evaluation Understudy (BLEU).
It is quick and inexpensive to calculate.
It allows flexibility for the ordering of words and phrases.
It is easy to understand.
It is language independent.
It correlates highly with human evaluation.
It has been widely adopted.
The score is from 0 to 1, where 1 is an exact match.
It works by counting matching n-grams between the machine and reference texts, regardless of order. BLEU-4 counts matching n-grams from 1 to 4 (1-gram, 2-gram, 3-gram and 4-gram). It is common to report both BLEU-1 and BLEU-4.
It still is imperfect, since it gives no credit to synonyms and so human evaluation is still best when feasible. However BLEU is commonly considered the best among bad options for an automated metric.
The NLTK framework has an implementation that we will use.
We can't calculate BLEU during training, because at that time the correct decoder input is used. Instead we'll calculate it now.
For more info: https://machinelearningmastery.com/calculate-bleu-score-for-text-python/
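A minimal added example of NLTK's sentence_bleu (note that the references argument is a list of reference token lists):
toy_ref = [['winter', 'is', 'coming']]
toy_cand = ['winter', 'is', 'here']
print(nltk.translate.bleu_score.sentence_bleu(toy_ref, toy_cand, weights=(1, 0, 0, 0)))  # unigram precision 2/3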
End of explanation
%%time
num_examples = len(input_tensor_val)
bleu_1_total = 0
bleu_4_total = 0
for idx in range(num_examples):
# TODO 5
reference_sentence = utils_preproc.int2word(
targ_lang, target_tensor_val[idx][1:]
)
decoded_sentence = decode_sequences(
input_tensor_val[idx : idx + 1], targ_lang, max_length_targ
)[0]
bleu_1_total += bleu_1(reference_sentence, decoded_sentence)
bleu_4_total += bleu_4(reference_sentence, decoded_sentence)
print(f"BLEU 1: {bleu_1_total / num_examples}")
print(f"BLEU 4: {bleu_4_total / num_examples}")
Explanation: Let's now average the bleu_1 and bleu_4 scores for all the sentence pairs in the eval set. The next cell takes some time to run, the bulk of which is decoding the 6000 sentences in the validation set. Please wait until it completes.
End of explanation |
6,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: There are the following multiple risk factor valuation classes available
Step2: We assume a positive correlation between the two risk factors.
Step3: Valuation Environment
Similar to the instantiation of a derivatives_portfolio object, a valuation environment is needed (unifying certain parameters/assumptions for all relevant risk factors of a derivative).
Step4: valuation_mcs_european_multi
As an example for a multi-risk derivative with European exercise, consider a maximum call option. With multiple risk factors, payoff functions are defined by adding keys (the name strings) to the maturity_value array object. As with the portfolio valuation class, the multi-risk factor valuation classes get passed market_environment objects only and not the risk factor model objects themselves.
Step5: At instantiation, the respective risk factor model objects are instantiated as well.
Step6: Correlations are stored as well and the resulting correlation and Cholesky matrices are generated.
Step7: The payoff of a European option is a one-dimensional ndarray object.
Step8: Present value estimations are generated by a call of the present_value method.
Step9: The update method allows updating of certain parameters.
Step10: Let us reset the values to the original parameters.
Step11: When calculating Greeks the risk factor now has to be specified by providing its name.
Step12: Sensitivities Positive Correlation
Almost in complete analogy to the single-risk valuation classes, sensitivities can be estimated for the multi-risk valuation classes.
Sensitivities Risk Factor 1
Consider first the case from before with positive correlation between the two risk factors. The following estimates and plots the sensitivities for the first risk factor gbm1.
Step13: Sensitivities Risk Factor 2
Now the sensitivities for the second risk factor.
Step14: Sensitivities with Negative Correlation
The second case is for highly negatively correlated risk factors.
Step15: Sensitivities Risk Factor 1
Again, sensitivities for the first risk factor first.
Step16: Sensitivities Risk Factor 2
Finally, the sensitivities for the second risk factor for this second scenario.
Step17: Surfaces for Positive Correlation Case
Let us return to the case of positive correlation between the two relevant risk factors.
Step18: Value Surface
We are now interested in the value surface of the derivative instrument for both different initial values of the first and second risk factor.
Step19: The following estimates the present value of the European maximum call option for all possible combinations of the initial values, given the assumptions from above.
Step20: The resulting plot then looks as follows. Here, a helper plot function of DX Analytics is used.
Step21: Delta Surfaces
Applying a very similar approach, a delta surface for all possible combinations of the initial values is as easily generated.
Step22: The plot for the delta surface of the first risk factor.
Step23: And the plot for the delta of the second risk factor.
Step24: Vega Surfaces
The same approach can of course be applied to generate vega surfaces.
Step25: The surface for the first risk factor.
Step26: And the one for the second risk factor.
Step27: Finally, we reset the initial values and the volatilities for the two risk factors.
Step28: valuation_mcs_american_multi
In general, the modeling and handling of the valuation classes for American exercise is not too different from those for European exercise. The major difference is in the definition of payoff function.
Present Values
This example models an American minimum put on the two risk factors from before.
Step29: The present value surface is generated in the same way as before for the European option on the two risk factors. The computational burden is of course much higher for the American option, which is valued by the use of the Least-Squares Monte Carlo approach (LSM) according to Longstaff-Schwartz (2001).
Step30: Delta Surfaces
The same exercise as before for the two delta surfaces.
Step31: Vega Surfaces
And finally for the vega surfaces.
Step32: More than Two Risk Factors
The principles of working with multi-risk valuation classes can be illustrated quite well in the two risk factor case. However, there is---in theory---no limitation on the number of risk factors used for derivatives modeling.
Four Asset Basket Option
Consider a maximum basket option on four different risk factors. We add a jump diffusion as well as a stochastic volatility model to the mix
Step33: In this case, we need to specify three correlation values.
Step34: The payoff function in this case gets a bit more complex.
Step35: However, the instantiation of the valuation class remains the same.
Step36: Example Output and Calculations
The following just displays some example output and the results from certain calculations.
Step37: The correlation and Cholesky matrices now are of shape 4x4.
Step38: Delta and vega estimates are generated in exactly the same fashion as in the two risk factor case.
Step39: Delta for Jump Diffusion and Stochastic Vol Process
Of course, we cannot visualize Greek surfaces dependent on initial values for all four risk factors but still for two. In what follows we generate the delta surfaces with respect to the jump diffusion- and stochastic volatility-based risk factors.
Step40: Vega for Jump Diffusion and Stochastic Vol Process
Now the same exercise for the vega surfaces for the same two risk factors.
Step41: American Exercise
As a final illustration consider the case of an American minimum put option on the four risk factors. This again is a step that leads to a much increased computational burden due to the necessity to apply the least-squares regression approach.
Step42: However, another illustration that even such a complex instrument can be handled as elegantly as the most simple one (i.e. European option on single risk factor). Let us compare the present value estimates for both the European and American maximum basket options. | Python Code:
from dx import *
import seaborn as sns; sns.set()
import time
t0 = time.time()
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Multi-Risk Derivatives Valuation
A specialty of DX Analytics is the valuation of derivatives instruments defined on multiple risk factors and portfolios composed of such derivatives. This section of the documentation illustrates the usage of the dedicated multi-risk valuation classes.
End of explanation
r = constant_short_rate('r', 0.06)
me1 = market_environment('me1', dt.datetime(2015, 1, 1))
me2 = market_environment('me2', dt.datetime(2015, 1, 1))
me1.add_constant('initial_value', 36.)
me1.add_constant('volatility', 0.1) # low volatility
me1.add_constant('currency', 'EUR')
me1.add_constant('model', 'gbm')
me2.add_environment(me1)
me2.add_constant('initial_value', 36.)
me2.add_constant('volatility', 0.5) # high volatility
Explanation: There are the following multiple risk factor valuation classes available:
valuation_mcs_european_multi for the valuation of multi-risk derivatives with European exercise
valuation_mcs_american_multi for the valuation of multi-risk derivatives with American exercise
The handling of these classes is similar to building a portfolio of single-risk derivatives positions.
Market Environments
Market environments for the risk factors are the starting point.
End of explanation
risk_factors = {'gbm1' : me1, 'gbm2' : me2}
correlations = [['gbm1', 'gbm2', 0.5]]
Explanation: We assume a positive correlation between the two risk factors.
End of explanation
val_env = market_environment('val_env', dt.datetime(2015, 1, 1))
val_env.add_constant('starting_date', val_env.pricing_date)
val_env.add_constant('final_date', dt.datetime(2015, 12, 31))
val_env.add_constant('frequency', 'W')
val_env.add_constant('paths', 5000)
val_env.add_curve('discount_curve', r)
val_env.add_constant('maturity', dt.datetime(2015, 12, 31))
val_env.add_constant('currency', 'EUR')
Explanation: Valuation Environment
Similar to the instantiation of a derivatives_portfolio object, a valuation environment is needed (unifying certain parameters/assumptions for all relevant risk factors of a derivative).
End of explanation
# European maximum call option
payoff_func = "np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']) - 38, 0)"
vc = valuation_mcs_european_multi(
name='European maximum call', # name
val_env=val_env, # valuation environment
risk_factors=risk_factors, # the relevant risk factors
correlations=correlations, # correlations between risk factors
payoff_func=payoff_func) # payoff function
vc.risk_factors
Explanation: valuation_mcs_european_multi
As an example for a multi-risk derivative with European exercise, consider a maximum call option. With multiple risk factors, payoff functions are defined by adding keys (the name strings) to the maturity_value array object. As with the portfolio valuation class, the multi-risk factor valuation classes get passed market_environment objects only and not the risk factor model objects themselves.
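Other payoffs follow the same convention of indexing maturity_value by the risk factor names; for instance, a hypothetical basket call on the average of the two underlyings could be sketched as:
basket_payoff = "np.maximum(0.5 * (maturity_value['gbm1'] + maturity_value['gbm2']) - 38, 0)"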
End of explanation
vc.underlying_objects
Explanation: At instantiation, the respective risk factor model objects are instantiated as well.
End of explanation
vc.correlations
vc.correlation_matrix
vc.val_env.get_list('cholesky_matrix')
Explanation: Correlations are stored as well and the resulting correlation and Cholesky matrices are generated.
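An added quick check: multiplying the Cholesky factor by its transpose should recover the correlation matrix, up to rounding.
chol = vc.val_env.get_list('cholesky_matrix')
print(np.dot(chol, chol.T))  # approximately [[1.0, 0.5], [0.5, 1.0]]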
End of explanation
np.shape(vc.generate_payoff())
Explanation: The payoff of a European option is a one-dimensional ndarray object.
End of explanation
vc.present_value()
Explanation: Present value estimations are generated by a call of the present_value method.
End of explanation
vc.update('gbm1', initial_value=50.)
vc.present_value()
vc.update('gbm2', volatility=0.6)
vc.present_value()
Explanation: The update method allows updating of certain parameters.
End of explanation
vc.update('gbm1', initial_value=36., volatility=0.1)
vc.update('gbm2', initial_value=36., volatility=0.5)
Explanation: Let us reset the values to the original parameters.
End of explanation
vc.delta('gbm2', interval=0.5)
vc.vega('gbm1')
Explanation: When calculating Greeks the risk factor now has to be specified by providing its name.
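For illustration (added), each Greek is taken with respect to one named risk factor at a time:
print(vc.delta('gbm1', interval=0.5))
print(vc.vega('gbm2'))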
End of explanation
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm1', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm1', .5))
ve.append(vc.vega('gbm1', 0.2))
vc.update('gbm1', initial_value=36.)
%matplotlib inline
plot_option_stats(s_list, pv, de, ve)
Explanation: Sensitivities Positive Correlation
Almost in complete analogy to the single-risk valuation classes, sensitivities can be estimated for the multi-risk valuation classes.
Sensitivities Risk Factor 1
Consider first the case from before with positive correlation between the two risk factors. The following estimates and plots the sensitivities for the first risk factor gbm1.
End of explanation
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm2', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm2', .5))
ve.append(vc.vega('gbm2', 0.2))
plot_option_stats(s_list, pv, de, ve)
Explanation: Sensitivities Risk Factor 2
Now the sensitivities for the second risk factor.
End of explanation
correlations = [['gbm1', 'gbm2', -0.9]]
# European maximum call option
payoff_func = "np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']) - 38, 0)"
vc = valuation_mcs_european_multi(
name='European maximum call',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_func)
Explanation: Sensitivities with Negative Correlation
The second case is for highly negatively correlated risk factors.
End of explanation
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm1', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm1', .5))
ve.append(vc.vega('gbm1', 0.2))
vc.update('gbm1', initial_value=36.)
plot_option_stats(s_list, pv, de, ve)
Explanation: Sensitivities Risk Factor 1
Again, sensitivities for the first risk factor first.
End of explanation
%%time
s_list = np.arange(28., 46.1, 2.)
pv = []; de = []; ve = []
for s in s_list:
vc.update('gbm2', initial_value=s)
pv.append(vc.present_value())
de.append(vc.delta('gbm2', .5))
ve.append(vc.vega('gbm2', 0.2))
plot_option_stats(s_list, pv, de, ve)
Explanation: Sensitivities Risk Factor 2
Finally, the sensitivities for the second risk factor for this second scenario.
End of explanation
correlations = [['gbm1', 'gbm2', 0.5]]
# European maximum call option
payoff_func = "np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']) - 38, 0)"
vc = valuation_mcs_european_multi(
name='European maximum call',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_func)
Explanation: Surfaces for Positive Correlation Case
Let us return to the case of positive correlation between the two relevant risk factors.
End of explanation
asset_1 = np.arange(28., 46.1, 4.) # range of initial values
asset_2 = asset_1
a_1, a_2 = np.meshgrid(asset_1, asset_2)
# two-dimensional grids out of the value vectors
value = np.zeros_like(a_1)
Explanation: Value Surface
We are now interested in the value surface of the derivative instrument for both different initial values of the first and second risk factor.
End of explanation
%%time
for i in range(np.shape(value)[0]):
for j in range(np.shape(value)[1]):
vc.update('gbm1', initial_value=a_1[i, j])
vc.update('gbm2', initial_value=a_2[i, j])
value[i, j] = vc.present_value()
Explanation: The following estimates the present value of the European maximum call option for all possible combinations of the initial values, given the assumptions from above.
End of explanation
plot_greeks_3d([a_1, a_2, value], ['gbm1', 'gbm2', 'present value'])
Explanation: The resulting plot then looks as follows. Here, a helper plot function of DX Analytics is used.
End of explanation
delta_1 = np.zeros_like(a_1)
delta_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(delta_1)[0]):
for j in range(np.shape(delta_1)[1]):
vc.update('gbm1', initial_value=a_1[i, j])
vc.update('gbm2', initial_value=a_2[i, j])
delta_1[i, j] = vc.delta('gbm1')
delta_2[i, j] = vc.delta('gbm2')
Explanation: Delta Surfaces
Applying a very similar approach, a delta surface for all possible combinations of the initial values is as easily generated.
End of explanation
plot_greeks_3d([a_1, a_2, delta_1], ['gbm1', 'gbm2', 'delta gbm1'])
Explanation: The plot for the delta surface of the first risk factor.
End of explanation
plot_greeks_3d([a_1, a_2, delta_2], ['gbm1', 'gbm2', 'delta gbm2'])
Explanation: And the plot for the delta of the second risk factor.
End of explanation
vega_1 = np.zeros_like(a_1)
vega_2 = np.zeros_like(a_1)
for i in range(np.shape(vega_1)[0]):
for j in range(np.shape(vega_1)[1]):
vc.update('gbm1', initial_value=a_1[i, j])
vc.update('gbm2', initial_value=a_2[i, j])
vega_1[i, j] = vc.vega('gbm1')
vega_2[i, j] = vc.vega('gbm2')
Explanation: Vega Surfaces
The same approach can of course be applied to generate vega surfaces.
End of explanation
plot_greeks_3d([a_1, a_2, vega_1], ['gbm1', 'gbm2', 'vega gbm1'])
Explanation: The surface for the first risk factor.
End of explanation
plot_greeks_3d([a_1, a_2, vega_2], ['gbm1', 'gbm2', 'vega gbm2'])
Explanation: And the one for the second risk factor.
End of explanation
# restore initial values
vc.update('gbm1', initial_value=36., volatility=0.1)
vc.update('gbm2', initial_value=36., volatility=0.5)
Explanation: Finally, we reset the initial values and the volatilities for the two risk factors.
End of explanation
# American put payoff
payoff_am = "np.maximum(34 - np.minimum(instrument_values['gbm1'], instrument_values['gbm2']), 0)"
# finer time grid and more paths
val_env.add_constant('frequency', 'B')
val_env.add_curve('time_grid', None)
# delete existing time grid information
val_env.add_constant('paths', 5000)
# American put option on minimum of two assets
vca = valuation_mcs_american_multi(
name='American minimum put',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_am)
vca.present_value()
for key, obj in vca.instrument_values.items():
    print(np.shape(vca.instrument_values[key]))
Explanation: valuation_mcs_american_multi
In general, the modeling and handling of the valuation classes for American exercise is not too different from those for European exercise. The major difference is in the definition of payoff function.
Present Values
This example models an American minimum put on the two risk factors from before.
End of explanation
asset_1 = np.arange(28., 44.1, 4.)
asset_2 = asset_1
a_1, a_2 = np.meshgrid(asset_1, asset_2)
value = np.zeros_like(a_1)
%%time
for i in range(np.shape(value)[0]):
for j in range(np.shape(value)[1]):
vca.update('gbm1', initial_value=a_1[i, j])
vca.update('gbm2', initial_value=a_2[i, j])
value[i, j] = vca.present_value()
plot_greeks_3d([a_1, a_2, value], ['gbm1', 'gbm2', 'present value'])
Explanation: The present value surface is generated in the same way as before for the European option on the two risk factors. The computational burden is of course much higher for the American option, which is valued by the use of the Least-Squares Monte Carlo approach (LSM) according to Longstaff-Schwartz (2001).
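As an added aside (a toy sketch of the idea, not dx internals), the key LSM step regresses discounted future cash flows on functions of the current state and exercises only where the immediate payoff beats the fitted continuation value:
lsm_state = 26. + 20. * np.random.rand(1000)                                  # toy prices at one exercise date
lsm_future_cf = np.maximum(34. - lsm_state, 0) + 0.5 * np.random.randn(1000)  # noisy discounted future cash flows
lsm_coeff = np.polyfit(lsm_state, lsm_future_cf, 5)                           # regression on a monomial basis
lsm_cont_val = np.polyval(lsm_coeff, lsm_state)                               # estimated continuation value
lsm_exercise = np.maximum(34. - lsm_state, 0) > lsm_cont_val                  # early-exercise decision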
End of explanation
delta_1 = np.zeros_like(a_1)
delta_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(delta_1)[0]):
for j in range(np.shape(delta_1)[1]):
vca.update('gbm1', initial_value=a_1[i, j])
vca.update('gbm2', initial_value=a_2[i, j])
delta_1[i, j] = vca.delta('gbm1')
delta_2[i, j] = vca.delta('gbm2')
plot_greeks_3d([a_1, a_2, delta_1], ['gbm1', 'gbm2', 'delta gbm1'])
plot_greeks_3d([a_1, a_2, delta_2], ['gbm1', 'gbm2', 'delta gbm2'])
Explanation: Delta Surfaces
The same exercise as before for the two delta surfaces.
End of explanation
vega_1 = np.zeros_like(a_1)
vega_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(vega_1)[0]):
for j in range(np.shape(vega_1)[1]):
vca.update('gbm1', initial_value=a_1[i, j])
vca.update('gbm2', initial_value=a_2[i, j])
vega_1[i, j] = vca.vega('gbm1')
vega_2[i, j] = vca.vega('gbm2')
plot_greeks_3d([a_1, a_2, vega_1], ['gbm1', 'gbm2', 'vega gbm1'])
plot_greeks_3d([a_1, a_2, vega_2], ['gbm1', 'gbm2', 'vega gbm2'])
Explanation: Vega Surfaces
And finally for the vega surfaces.
End of explanation
me3 = market_environment('me3', dt.datetime(2015, 1, 1))
me4 = market_environment('me4', dt.datetime(2015, 1, 1))
me3.add_environment(me1)
me4.add_environment(me1)
# for jump-diffusion
me3.add_constant('lambda', 0.5)
me3.add_constant('mu', -0.6)
me3.add_constant('delta', 0.1)
me3.add_constant('model', 'jd')
# for stoch volatility model
me4.add_constant('kappa', 2.0)
me4.add_constant('theta', 0.3)
me4.add_constant('vol_vol', 0.2)
me4.add_constant('rho', -0.75)
me4.add_constant('model', 'sv')
val_env.add_constant('paths', 10000)
val_env.add_constant('frequency', 'W')
val_env.add_curve('time_grid', None)
Explanation: More than Two Risk Factors
The principles of working with multi-risk valuation classes can be illustrated quite well in the two risk factor case. However, there is---in theory---no limitation on the number of risk factors used for derivatives modeling.
Four Asset Basket Option
Consider a maximum basket option on four different risk factors. We add a jump diffusion as well as a stochastic volatility model to the mix
End of explanation
risk_factors = {'gbm1' : me1, 'gbm2' : me2, 'jd' : me3, 'sv' : me4}
correlations = [['gbm1', 'gbm2', 0.5], ['gbm2', 'jd', -0.5], ['gbm1', 'sv', 0.7]]
Explanation: In this case, we need to specify three correlation values.
End of explanation
# European maximum call payoff
payoff_1 = "np.maximum(np.maximum(np.maximum(maturity_value['gbm1'], maturity_value['gbm2']),"
payoff_2 = " np.maximum(maturity_value['jd'], maturity_value['sv'])) - 40, 0)"
payoff = payoff_1 + payoff_2
payoff
Explanation: The payoff function in this case gets a bit more complex.
End of explanation
vc = valuation_mcs_european_multi(
name='European maximum call',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff)
Explanation: However, the instantiation of the valuation class remains the same.
End of explanation
vc.risk_factors
vc.underlying_objects
vc.present_value()
Explanation: Example Output and Calculations
The following just displays some example output and the results from certain calculations.
End of explanation
vc.correlation_matrix
vc.val_env.get_list('cholesky_matrix')
Explanation: The correlation and Cholesky matrices now are of shape 4x4.
End of explanation
vc.delta('jd', interval=0.1)
vc.delta('sv')
vc.vega('jd')
vc.vega('sv')
Explanation: Delta and vega estimates are generated in exactly the same fashion as in the two risk factor case.
End of explanation
delta_1 = np.zeros_like(a_1)
delta_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(delta_1)[0]):
for j in range(np.shape(delta_1)[1]):
vc.update('jd', initial_value=a_1[i, j])
vc.update('sv', initial_value=a_2[i, j])
delta_1[i, j] = vc.delta('jd')
delta_2[i, j] = vc.delta('sv')
plot_greeks_3d([a_1, a_2, delta_1], ['jump diffusion', 'stochastic vol', 'delta jd'])
plot_greeks_3d([a_1, a_2, delta_2], ['jump diffusion', 'stochastic vol', 'delta sv'])
Explanation: Delta for Jump Diffusion and Stochastic Vol Process
Of course, we cannot visualize Greek surfaces dependent on initial values for all four risk factors but still for two. In what follows we generate the delta surfaces with respect to the jump diffusion- and stochastic volatility-based risk factors.
End of explanation
vega_1 = np.zeros_like(a_1)
vega_2 = np.zeros_like(a_1)
%%time
for i in range(np.shape(vega_1)[0]):
for j in range(np.shape(vega_1)[1]):
vc.update('jd', initial_value=a_1[i, j])
vc.update('sv', initial_value=a_2[i, j])
vega_1[i, j] = vc.vega('jd')
vega_2[i, j] = vc.vega('sv')
plot_greeks_3d([a_1, a_2, vega_1], ['jump diffusion', 'stochastic vol', 'vega jd'])
plot_greeks_3d([a_1, a_2, vega_2], ['jump diffusion', 'stochastic vol', 'vega sv'])
Explanation: Vega for Jump Diffusion and Stochastic Vol Process
Now the same exercise for the vega surfaces for the same two risk factors.
End of explanation
# payoff of American minimum put option
payoff_am_1 = "np.maximum(40 - np.minimum(np.minimum(instrument_values['gbm1'], instrument_values['gbm2']),"
payoff_am_2 = "np.minimum(instrument_values['jd'], instrument_values['sv'])), 0)"
payoff_am = payoff_am_1 + payoff_am_2
vca = valuation_mcs_american_multi(
name='American minimum put',
val_env=val_env,
risk_factors=risk_factors,
correlations=correlations,
payoff_func=payoff_am)
Explanation: American Exercise
As a final illustration consider the case of an American minimum put option on the four risk factors. This again is a step that leads to a much increased computational burden due to the necessity to apply the least-squares regression approach.
End of explanation
# restore initial values
vc.update('jd', initial_value=36., volatility=0.1)
vc.update('sv', initial_value=36., volatility=0.1)
%time vc.present_value()
%time vca.present_value()
%time vca.delta('gbm1')
%time vca.delta('gbm2')
%time vca.vega('jd')
%time vca.vega('sv')
print "Duration for whole notebook %.2f in min" % ((time.time() - t0) / 60)
Explanation: However, another illustration that even such a complex instrument can be handled as elegantly as the most simple one (i.e. European option on single risk factor). Let us compare the present value estimates for both the European and American maximum basket options.
End of explanation |
6,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a Histogram
Create a histogram, fill it with random numbers, set its colour to blue, draw it.
Can you
Step1: We now create our histogram
Step2: We now import the gauss generator from Python's random module and fill the histogram
Step3: Time to style the histogram and use JSROOT | Python Code:
import ROOT
Explanation: Create a Histogram
Create a histogram, fill it with random numbers, set its colour to blue, draw it.
Can you:
- Can you use the native Python random number generator for this?
- Can you make your plot interactive using JSROOT?
- Can you document what you did in markdown?
End of explanation
h = ROOT.TH1F("h", "My Notebook Histo;x;#", 64, -4, 4)
Explanation: We now create our histogram
End of explanation
from random import gauss
numbers = [gauss(0., 1.) for _ in range(1000)]
numbers
for i in numbers: h.Fill(i)
Explanation: We now import the gauss generator from Python's random module and fill the histogram
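As an added aside, ROOT can also fill a histogram from its own generator; a common idiom (assuming the predefined "gaus" formula is available, as in a standard ROOT build) is TH1::FillRandom, shown here on a separate comparison histogram:
h_root = ROOT.TH1F("h_root", "ROOT-filled comparison histo;x;#", 64, -4, 4)
h_root.FillRandom("gaus", 1000)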
End of explanation
%jsroot on
h.SetLineColor(ROOT.kBlue)
h.SetFillColor(ROOT.kBlue)
c = ROOT.TCanvas()
h.Draw()
c.Draw()
Explanation: Time to style the histogram and use JSROOT
End of explanation |
6,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selecting Sites By Location
The National Water Information System (NWIS) makes data available for approximately 1.9 Million different locations in the US and Territories. Finding the data you need within this collection can be a challenge!
There are four methods for selecting sites by location
Step1: Requesting data for a site or a list of sites
Most USGS site names are between 8-11 digits long. You can use the draw_map() function
to create an interactive map with 8,000 active stream gages from the Gages-II dataset.
Step2: Select a single site
Step3: Select a list of sites
Step4: Request data by state or territory
Use the two-letter state postal code to retrieve all of the stations
inside of a state. You can only request one state at a time. Lists are not accepted.
Step5: Request data by county or list of counties
Use the five digit FIPS code for each county.
Step6: Request data using a bounding box
The coordinates for the bounding box should be in decimal degrees, with negative values for Western and Southern hemispheres.
Give the coordinates counter clockwise | Python Code:
# First things first
import hydrofunctions as hf
Explanation: Selecting Sites By Location
The National Water Information System (NWIS) makes data available for approximately 1.9 Million different locations in the US and Territories. Finding the data you need within this collection can be a challenge!
There are four methods for selecting sites by location:
Request data for a site or a list of sites
Request data for all sites in a state
Request data for all sites in a county or list of counties
Request data for all sites inside of bounding box
We'll give examples for each of these methods below.
The following examples are requesting sites, but not specifying a time or parameter of interest. When time is not specified, the NWIS will return only the most recent reading for the site, even if that was fifty years ago! If we don't specify a parameter of interest, NWIS will return all of the parameters measured at that site.
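A hedged sketch (check the hydrofunctions documentation for the exact signature) of a more specific request that pins down the service, a date range and a parameter code such as 00060 (discharge):
# example = hf.NWIS('01581960', 'dv', start_date='2019-01-01', end_date='2019-12-31',
#                   parameterCd='00060')
# example.df()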
End of explanation
hf.draw_map()
Explanation: Requesting data for a site or a list of sites
Most USGS site names are between 8-11 digits long. You can use the draw_map() function
to create an interactive map with 8,000 active stream gages from the Gages-II dataset.
End of explanation
Beetree = hf.NWIS('01581960')
Beetree.df()
Explanation: Select a single site
End of explanation
sites = ['01580000', '01585500', '01589330']
Baltimore = hf.NWIS(sites)
Baltimore.df()
Explanation: Select a list of sites
End of explanation
# Request data for all stations in Puerto Rico.
puerto_rico = hf.NWIS(stateCd='PR')
# List the names for all of the sites in PR
puerto_rico
Explanation: Request data by state or territory
Use the two-letter state postal code to retrieve all of the stations
inside of a state. You can only request one state at a time. Lists are not accepted.
End of explanation
# Mills, Iowa: 19129; Maui, Hawaii: 15009
counties = hf.NWIS(countyCd = ['19129', '15009'])
counties
Explanation: Request data by county or list of counties
Use the five digit FIPS code for each county.
End of explanation
# Request multiple sites using a bounding box
test = hf.NWIS(bBox=[-105.430, 39.655, -104, 39.863])
test
Explanation: Request data using a bounding box
The coordinates for the bounding box should be in decimal degrees, with negative values for Western and Southern hemispheres.
Give the coordinates counterclockwise: West, South, East, North
End of explanation |
6,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Least squares regression
Notebook version
Step1: This notebook covers the problem of fitting parametric regression models with a minimum least-squares criterion. The material presented here is based on the first lectures of this <a haref=http
Step2: 1.1. Parametric model
Parametric regression models assume a parametric expression for the regression curve, adjusting the free parameters according to some criterion that measures the quality of the proposed model.
For a unidimensional case like the one in the previous figure, a convenient approach is to recur to polynomial expressions
Step3: Should we choose a polynomial?
What degree should we use for the polynomial?
For a given degree, how do we choose the weights?
For now, we will find the single "best" polynomial. In a future session, we will see how we can design methods that take into account different polynomia simultaneously.
Next, we will explain how to choose optimal weights according to Least-Squares criterion.
2. Least squares regression
2.1. Problem definition
The goal is to learn a (possibly non-linear) regression model from a set of $K$ labeled points, $\{{\bf x}_k,s_k\}_{k=0}^{K-1}$.
We assume a parametric function of the form
Step4: 2.4. Overfitting the training data
It may seem that increasing the degree of the polynomial is always beneficial, as we can implement a more expressive function. A polynomial of degree $M$ would include all polynomials of lower degrees as particular cases. However, if we increase the number of parameters without control, the polynomial would eventually get expressive enough to adjust any given set of training points to arbitrary precision, which does not necessarily mean that we are obtaining a model that can be extrapolated to new data, as we show in the following example
Step5: 2.4.1 Limitations of the LS approach. The need for assumptions
Another way to visualize the effect of overfitting is to analyze the effect of variations of a single sample. Consider a training dataset consisting of 15 points which are given, and depict the regression curves that would be obtained if adding an additional point at a fixed location, depending on the target value of that point | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
# For the student tests (only for python 2)
import sys
if sys.version_info.major==2:
from test_helper import Test
Explanation: Least squares regression
Notebook version: 1.4 (Sep 26, 2019)
Author: Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
v.1.1 - UTAD version
v.1.2 - Minor corrections
v.1.3 - Python 3 compatibility
v.1.4 - Revised notation
Pending changes: *
End of explanation
K = 35
n_grid = 200
frec = 3
std_n = 0.3
# Location of the training points
X_tr = (3 * np.random.random((K, 1)) - 0.5)
# Labels are obtained from a sinusoidal function, and contaminated by noise
S_tr = np.cos(frec*X_tr) + std_n * np.random.randn(K, 1)
# Equally spaced points in the X-axis
X_grid = np.linspace(np.min(X_tr),np.max(X_tr),n_grid)
# Generate random prediction curves
f1 = np.random.random() + np.random.random()*X_grid
f2 = np.random.random() + np.random.random()*X_grid + \
np.random.random()*(X_grid**2)
f3 = np.random.random() + np.random.random()*X_grid + \
np.random.random()*(X_grid**2) + np.random.random()*(X_grid**3)
plt.plot(X_tr,S_tr,'b.')
plt.plot(X_grid,f1.T,'g-',label='Arbitrary Linear function')
plt.plot(X_grid,f2.T,'r-',label='Arbitrary Quadratic function')
plt.plot(X_grid,f3.T,'m-',label='Arbitrary Cubic function')
plt.legend(loc='best')
plt.show()
Explanation: This notebook covers the problem of fitting parametric regression models with a minimum least-squares criterion. The material presented here is based on the first lectures of this <a href=http://mlg.eng.cam.ac.uk/teaching/4f13/1415/>Machine Learning course</a>. In particular, you can refer to the following presentation: <a href=http://mlg.eng.cam.ac.uk/teaching/4f13/1415/lect0102.pdf> Probabilistic Regression</a>.
1. A parametric approach to the regression problem
We have already presented the goal of regression. Given that we have access to a set of training points, $\{{\bf x}_k, s_k\}_{k=0}^{K-1}$, the goal is to learn a function $f({\bf x})$ that we can use to make good predictions for an arbitrary input vector.
The following plot illustrates a regression example for unidimensional input data. We have also generated three different regression curves corresponding to polynomia of degrees 1, 2, and 3 with random coefficients.
End of explanation
## Next, we represent some random polynomial functions for degrees between 0 and 14
max_degree = 15
K = 200
#Values of X to evaluate the function
X_grid = np.linspace(-1.5, 1.5, K)
for idx in range(max_degree):
x1 = plt.subplot(3,5, idx+1)
x1.get_xaxis().set_ticks([])
x1.get_yaxis().set_ticks([])
for kk in range(5):
#Random generation of coefficients for the model
we = np.random.randn(idx+1, 1)
#Evaluate the polynomial with previous coefficients at X_grid values
fout = np.polyval(we, X_grid)
x1.plot(X_grid,fout,'g-')
x1.set_ylim([-5,5])
Explanation: 1.1. Parametric model
Parametric regression models assume a parametric expression for the regression curve, adjusting the free parameters according to some criterion that measures the quality of the proposed model.
For a unidimensional case like the one in the previous figure, a convenient approach is to resort to polynomial expressions:
$${\hat s}(x) = f(x) = w_0 + w_1 x + w_2 x^2 + \dots + w_{m-1} x^{m-1}$$
For multidimensional regression, polynomial expressions can include cross-products of the variables. For instance, for a case with two input variables, the degree 2 polynomial would be given by
$${\hat s}({\bf x}) = f({\bf x}) = w_0 + w_1 x_1 + w_2 x_2 + w_3 x_1^2 + w_4 x_2^2 + w_5 x_1 x_2$$
A linear model for multidimensional regression can be expressed as
$${\hat s}({\bf x}) = f({\bf x}) = w_0 + {\bf w}^\top {\bf x}$$
When we postulate such models, the regression model is reduced to finding the most appropriate values of the parameters ${\bf w} = [w_i]$.
All the previous models have in common the fact that they are linear in the parameters, even though they can implement highly non-linear functions. All the derivations in this notebook are equally valid for other non-linear transformations of the input variables, as long as we keep linear-in-the-parameters models.
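For instance, a minimal sketch of this idea (using a sinusoidal feature map, chosen here only as an illustration) fits the same linear-in-the-parameters model with the standard LS machinery:
import numpy as np
# Sketch: any feature map z(x) can be used, as long as the model remains linear in w
x_demo = np.linspace(0, 3, 50).reshape(-1, 1)
s_demo = np.cos(3 * x_demo) + 0.1 * np.random.randn(50, 1)
# Non-polynomial feature map: z(x) = [1, sin(x), sin(2x), sin(3x)]
Z_demo = np.hstack([np.ones_like(x_demo)] + [np.sin(k * x_demo) for k in range(1, 4)])
w_demo, _, _, _ = np.linalg.lstsq(Z_demo, s_demo, rcond=None)
print(w_demo.shape)   # one weight per feature: (4, 1)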
End of explanation
n_points = 20
n_grid = 200
frec = 3
std_n = 0.2
max_degree = 20
colors = 'brgcmyk'
#Location of the training points
X_tr = (3 * np.random.random((n_points,1)) - 0.5)
#Labels are obtained from a sinusoidal function, and contaminated by noise
S_tr = np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Equally spaced points in the X-axis
X_grid = np.linspace(np.min(X_tr),np.max(X_tr),n_grid)
#We start by building the Z matrix
Z = []
for el in X_tr.tolist():
Z.append([el[0]**k for k in range(max_degree+1)])
Z = np.matrix(Z)
Z_grid = []
for el in X_grid.tolist():
Z_grid.append([el**k for k in range(max_degree+1)])
Z_grid = np.matrix(Z_grid)
plt.plot(X_tr,S_tr,'b.')
for k in [1, 2, n_points]: # range(max_degree+1):
Z_iter = Z[:,:k+1]
# Least square solution
#w_LS = (np.linalg.inv(Z_iter.T.dot(Z_iter))).dot(Z_iter.T).dot(S_tr)
    # Least squares solution, with fewer numerical errors
w_LS, resid, rank, s = np.linalg.lstsq(Z_iter, S_tr, rcond=None)
#estimates at all grid points
fout = Z_grid[:,:k+1].dot(w_LS)
fout = np.array(fout).flatten()
plt.plot(X_grid,fout,colors[k%len(colors)]+'-',label='Degree '+str(k))
plt.legend(loc='best')
plt.ylim(1.2*np.min(S_tr), 1.2*np.max(S_tr))
plt.show()
Explanation: Should we choose a polynomial?
What degree should we use for the polynomial?
For a given degree, how do we choose the weights?
For now, we will find the single "best" polynomial. In a future session, we will see how we can design methods that take into account different polynomials simultaneously.
Next, we will explain how to choose the optimal weights according to the Least-Squares criterion.
2. Least squares regression
2.1. Problem definition
The goal is to learn a (possibly non-linear) regression model from a set of $K$ labeled points, $\{{\bf x}_k,s_k\}_{k=0}^{K-1}$.
We assume a parametric function of the form:
$${\hat s}({\bf x}) = f({\bf x}) = w_0 z_0({\bf x}) + w_1 z_1({\bf x}) + \dots + w_{m-1} z_{m-1}({\bf x})$$
where $z_i({\bf x})$ are particular transformations of the input vector variables.
Some examples are:
If ${\bf z} = {\bf x}$, the model is just a linear combination of the input variables
If ${\bf z} = \left[\begin{array}{c}1\\{\bf x}\end{array}\right]$, we have again a linear combination with the inclusion of a constant term.
For unidimensional input $x$, ${\bf z} = [1, x, x^2, \dots,x^{m-1}]^\top$ would implement a polynomial of degree $m-1$.
Note that the variables of ${\bf z}$ could also be computed combining different variables of ${\bf x}$. E.g., if ${\bf x} = [x_1,x_2]^\top$, a degree-two polynomial would be implemented with
$${\bf z} = \left[\begin{array}{c}1\\x_1\\x_2\\x_1^2\\x_2^2\\x_1 x_2\end{array}\right]$$
The above expression does not assume a polynomial model. For instance, we could consider ${\bf z} = [\log(x_1),\log(x_2)]$
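As a small illustration, these feature vectors could be built explicitly for a single input ${\bf x} = [x_1,x_2]^\top$ (the numerical values below are arbitrary):
import numpy as np
# Sketch: building extended feature vectors z for one sample x = [x1, x2]
x1, x2 = 2.0, 0.5                                       # arbitrary example values
z_linear = np.array([1.0, x1, x2])                      # constant term plus linear terms
z_poly2 = np.array([1.0, x1, x2, x1**2, x2**2, x1*x2])  # degree-two polynomial with cross-product
z_log = np.array([np.log(x1), np.log(x2)])              # non-polynomial transformation
print(z_poly2)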
Least squares (LS) regression finds the coefficients of the model with the aim of minimizing the sum of the squared residuals. If we define ${\bf w} = [w_0,w_1,\dots,w_{m-1}]^\top$, the LS solution is defined as
\begin{equation}
{\bf w}_{LS} = \arg \min_{\bf w} \sum_{k=0}^{K-1} [e_k]^2 = \arg \min_{\bf w} \sum_{k=0}^{K-1} \left[s_k - {\hat s}_k \right]^2
\end{equation}
2.2. Vector Notation
In order to solve the LS problem it is convenient to define the following vectors and matrices:
We can group together all available target values to form the following vector
$${\bf s} = \left[s_0, s_1, \dots, s_{K-1} \right]^\top$$
The estimation of the model for a single input vector ${\bf z}_k$ (which would be computed from ${\bf x}_k$), can be expressed as the following inner product
$${\hat s}_k = {\bf z}_k^\top {\bf w}$$
If we now group all input vectors into a matrix ${\bf Z}$, so that each row of ${\bf Z}$ contains the transpose of the corresponding ${\bf z}_k$, we can express
$$\hat{{\bf s}} = \left[{\hat s}_0, {\hat s}_1, \dots, {\hat s}_{K-1} \right]^\top =
{\bf Z} {\bf w}, \;\;\;\;
\text{with} \;\;
{\bf Z} = \left[\begin{array}{c} {\bf z}_0^\top \\ {\bf z}_1^\top \\ \vdots \\ {\bf z}_{K-1}^\top
\end{array}\right]$$
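In code, obtaining all predictions at once is just a matrix-vector product; a small sketch with arbitrary data:
import numpy as np
# Sketch: stacked predictions s_hat = Z w, with s_hat[k] = z_k^T w
Z_demo = np.random.randn(5, 3)            # 5 samples, 3 features (each row is z_k^T)
w_demo = np.random.randn(3, 1)
s_hat = Z_demo.dot(w_demo)                # shape (5, 1)
print(np.allclose(s_hat[0], Z_demo[0, :].dot(w_demo)))   # True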
2.3. Least-squares solution
Using the previous notation, the cost minimized by the LS model can be expressed as
$$
C({\bf w}) = \sum_{k=0}^{K-1} \left[s_k - {\hat s}_k \right]^2
= \|{\bf s} - {\hat{\bf s}}\|^2 = \|{\bf s} - {\bf Z}{\bf w}\|^2
$$
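This equivalence between the sum of squared residuals and the squared Euclidean norm of the error vector can be checked numerically with a small sketch:
import numpy as np
# Sketch: sum_k (s_k - s_hat_k)^2 equals ||s - Zw||^2
s_demo = np.random.randn(6, 1)
Z_demo = np.random.randn(6, 3)
w_demo = np.random.randn(3, 1)
e_demo = s_demo - Z_demo.dot(w_demo)
print(np.allclose(np.sum(e_demo**2), np.linalg.norm(e_demo)**2))   # True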
Since the above expression depends quadratically on ${\bf w}$ and is non-negative, we know that there is only one point where the derivative of $C({\bf w})$ becomes zero, and that point is necessarily a minimum of the cost
$$\nabla_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2\Bigg|_{{\bf w} = {\bf w}_{LS}} = {\bf 0}$$
<b>Exercise:</b>
Solve the previous problem to show that
$${\bf w}_{LS} = \left( {\bf Z}^\top{\bf Z} \right)^{-1} {\bf Z}^\top{\bf s}$$
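One possible way to obtain this result, assuming ${\bf Z}^\top{\bf Z}$ is invertible, is to expand the cost and set its gradient to zero:
$$\nabla_{\bf w} \|{\bf s} - {\bf Z}{\bf w}\|^2 = \nabla_{\bf w} \left({\bf s}^\top{\bf s} - 2\,{\bf w}^\top{\bf Z}^\top{\bf s} + {\bf w}^\top{\bf Z}^\top{\bf Z}\,{\bf w}\right) = -2\,{\bf Z}^\top{\bf s} + 2\,{\bf Z}^\top{\bf Z}\,{\bf w} = {\bf 0}$$
which yields the normal equations ${\bf Z}^\top{\bf Z}\,{\bf w}_{LS} = {\bf Z}^\top{\bf s}$, and hence the expression above.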
The next fragment of code adjusts polynomials of increasing order to randomly generated training data. To illustrate the composition of matrix ${\bf Z}$, we will avoid using functions $\mbox{np.polyfit}$ and $\mbox{np.polyval}$.
End of explanation
n_points = 35
n_test = 200
n_grid = 200
frec = 3
std_n = 0.7
max_degree = 25
colors = 'brgcmyk'
#Location of the training points
X_tr = (3 * np.random.random((n_points,1)) - 0.5)
#Labels are obtained from a sinusoidal function, and contaminated by noise
S_tr = np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
#Test points to validate the generalization of the solution
X_tst = (3 * np.random.random((n_test,1)) - 0.5)
S_tst = np.cos(frec*X_tst) + std_n * np.random.randn(n_test,1)
#Equally spaced points in the X-axis
X_grid = np.linspace(np.min(X_tr),np.max(X_tr),n_grid)
#We start by building the Z matrix
def extend_matrix(X,max_degree):
Z = []
X = X.reshape((X.shape[0],1))
for el in X.tolist():
Z.append([el[0]**k for k in range(max_degree+1)])
return np.matrix(Z)
Z = extend_matrix(X_tr,max_degree)
Z_grid = extend_matrix(X_grid,max_degree)
Z_test = extend_matrix(X_tst,max_degree)
#Variables to store the train and test errors
tr_error = []
tst_error = []
for k in range(max_degree):
Z_iter = Z[:,:k+1]
#Least square solution
#w_LS = (np.linalg.inv(Z_iter.T.dot(Z_iter))).dot(Z_iter.T).dot(S_tr)
    # Least squares solution, with fewer numerical errors
    w_LS, resid, rank, s = np.linalg.lstsq(Z_iter, S_tr, rcond=None)
#estimates at traint and test points
f_tr = Z_iter.dot(w_LS)
f_tst = Z_test[:,:k+1].dot(w_LS)
tr_error.append(np.array((S_tr-f_tr).T.dot(S_tr-f_tr)/len(S_tr))[0,0])
tst_error.append(np.array((S_tst-f_tst).T.dot(S_tst-f_tst)/len(S_tst))[0,0])
plt.stem(range(max_degree),tr_error,'b-',label='Train error')
plt.stem(range(max_degree),tst_error,'r-o',label='Test error')
plt.legend(loc='best')
plt.show()
Explanation: 2.4. Overfitting the training data
It may seem that increasing the degree of the polynomial is always beneficial, as we can implement a more expressive function. A polynomial of degree $M$ includes all polynomials of lower degrees as particular cases. However, if we increase the number of parameters without control, the polynomial eventually becomes expressive enough to adjust any given set of training points to arbitrary precision, which does not necessarily mean that it yields a model that can be extrapolated to new data, as we show in the following example:
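As a quick numerical sketch of this effect, a polynomial of degree $K-1$ can pass exactly through $K$ generic training points, driving the training error to (numerically) zero while saying nothing about new data:
import numpy as np
# Sketch: a degree K-1 polynomial interpolates K points, so the training error vanishes
K_demo = 8
x_demo = np.sort(np.random.random(K_demo))
s_demo = np.cos(3 * x_demo) + 0.3 * np.random.randn(K_demo)
w_interp = np.polyfit(x_demo, s_demo, K_demo - 1)       # degree 7 fit to 8 points
residuals = s_demo - np.polyval(w_interp, x_demo)
print(np.max(np.abs(residuals)))                        # close to 0, up to numerical precision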
End of explanation
n_points = 15
n_grid = 200
frec = 3
std_n = 0.2
n_val_16 = 5
degree = 18
X_tr = 3 * np.random.random((n_points,1)) - 0.5
S_tr = - np.cos(frec*X_tr) + std_n * np.random.randn(n_points,1)
X_grid = np.linspace(-.5,2.5,n_grid)
S_grid = - np.cos(frec*X_grid) #Noise free for the true model
X_16 = .3 * np.ones((n_val_16,))
S_16 = np.linspace(np.min(S_tr),np.max(S_tr),n_val_16)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(X_tr,S_tr,'b.',markersize=10)
ax.plot(X_16,S_16,'ro',markersize=6)
ax.plot(X_grid,S_grid,'r-',label='True model')
for el in zip(X_16,S_16):
#Add point to the training set
X_tr_iter = np.append(X_tr,el[0])
S_tr_iter = np.append(S_tr,el[1])
#Obtain LS regression coefficients and evaluate it at X_grid
w_LS = np.polyfit(X_tr_iter, S_tr_iter, degree)
S_grid_iter = np.polyval(w_LS,X_grid)
ax.plot(X_grid,S_grid_iter,'g-')
ax.set_xlim(-.5,2.5)
ax.set_ylim(S_16[0]-2,S_16[-1]+2)
ax.legend(loc='best')
plt.show()
Explanation: 2.4.1 Limitations of the LS approach. The need for assumptions
Another way to visualize the effect of overfitting is to analyze the effect of variations of a single sample. Consider a training dataset consisting of 15 given points, and depict the regression curves that would be obtained if an additional point were added at a fixed location, depending on the target value of that point:
(You can run this code fragment several times, to check how the regression curves change between executions, depending also on the location of the training points)
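The sensitivity to a single sample also grows sharply with the polynomial degree; the following self-contained sketch (with arbitrary illustrative values) compares how much the prediction at a probe location moves, for a low and a high degree, when only the target of the added point changes:
import numpy as np
# Sketch: spread of the prediction at a probe location when one target value is varied
np.random.seed(0)
x_base = 3 * np.random.random(15) - 0.5
s_base = -np.cos(3 * x_base) + 0.2 * np.random.randn(15)
x_new, x_probe = 0.3, 1.0      # location of the extra point and of the probe (arbitrary choices)
for deg in [1, 18]:
    preds = []
    for s_new in np.linspace(s_base.min(), s_base.max(), 5):
        w = np.polyfit(np.append(x_base, x_new), np.append(s_base, s_new), deg)
        preds.append(np.polyval(w, x_probe))
    print(deg, np.ptp(preds))  # spread of the predictions at the probe for each degree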
End of explanation |
6,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water convervation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt convervation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum convervation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Troposheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
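For a free-text (STRING) property such as this one, the cell above would typically be completed with a single call, e.g. (the text below is only a placeholder, not an actual model description):
# Hypothetical example of completing a STRING property (placeholder text only)
# DOC.set_value("Coupled atmosphere-ocean-land-sea-ice model; see institute documentation.")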
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
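For ENUM properties such as this one, the string passed to DOC.set_value should match one of the choices listed in the cell above, e.g. (placeholder choice only, not a statement about any particular model):
# Hypothetical example for an ENUM property: the argument must be one of the listed valid choices
# DOC.set_value("N/A")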
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
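# Example (hypothetical - booleans are passed unquoted, as in the comment above):
# DOC.set_value(False)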
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
6,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python OOP 1
Step1: Portable Greymap ( .pgm) Format
You have been provided with some image files i.e. img1, .. img4 in the data directory in portable greymap (.pgm) format. Although admittedly a primitive image format, .pgm files are simple to manipulate as they contain only one pixel grey value per $x, y$ pixel in the image
Step4: Task breakdown
Create a SquareImage class which reads (portable greymap, '.pgm') image data from a file. The class should implement the following
Step5: Extension
Now that we can read in image data from a file, extend your SquareImage class above so that if the filename is None (python keyword), we store the $z$ attribute as np.zeros([Nx, Ny]).
This will require an if statement, e.g.
Step6: Now use the add_image method of the empty image to add on the contents of all other image in the list of imgs | Python Code:
import os
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import sys
%matplotlib inline
Explanation: Python OOP 1: Basics and Initialisation
This exercise is designed to motivate the use of object oriented programming in scientific computation via a simplified case using images. Your task is to program the Python classes which reads in the data from a file, manipulates the data and plots the image.
End of explanation
Nx = 72
Ny = 72
img_x = np.linspace(1, 0, Nx)
img_y = np.linspace(1, 0, Ny)
X, Y = np.meshgrid(img_x, img_y)
# Generate the gradient image - this could be stored in .pgm format!
img_z = (X+Y) * 255*0.5
print(img_z)
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ax.contourf(img_x, img_y, img_z, 20, cmap=cm.Greys_r)
ax.set_xlabel('x')
ax.set_ylabel('y')
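# Added aside (illustrative sketch only): img_z could be written out in a simple ASCII
# .pgm-style file with a four-line header (magic number, comment, dimensions, max grey value),
# consistent with the np.loadtxt(..., skiprows=4) used to read the provided images below.
# The output filename here is hypothetical.
np.savetxt('data/gradient.pgm', img_z, fmt='%d', comments='',
           header='P2\n# simple gradient demo\n72 72\n255')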
Explanation: Portable Greymap ( .pgm) Format
You have been provided with some image files i.e. img1, .. img4 in the data directory in portable greymap (.pgm) format. Although admittedly a primitive image format, .pgm files are simple to manipulate as they contain only one pixel grey value per $x, y$ pixel in the image: the scale runs from 0 (black) to 255 (white). This represents a common task in scientific computing where you must read in some field data on a grid of points. You are provided with the code to read and reshape this data from a file!
Here's a demonstration of a greymap image that might be stored in .pgm format using a simple gradient of white to black - it is displayed here using a contour plot:
End of explanation
# Implement the class here:
class SquareImage(object):
def __init__(self, filename=None):
# To simplify this exercise, set the size of the image to a constant
# (Each image was specifically written to have size 72 by 72)
self.Nx = self.Ny = 72
self.x = np.linspace(1, 0, self.Nx)
self.y = np.linspace(1, 0, self.Ny)
if filename is None:
self.z = np.zeros([self.Nx, self.Ny])
else:
self.z = np.loadtxt(filename, skiprows=4).reshape(self.Nx, self.Ny)
def add_image(self, image):
Add the z values of another 72 by 72 image to this instance
# Could add a defensive check here
assert(np.shape(image.z) == (self.Nx, self.Ny)), 'Image sizes are not equal!'
# Add the image z value to self:
self.z += image.z
def plot(self):
Plots the contour z against x and y coordinates
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ax.contourf(self.x, self.y, self.z, cmap=cm.Greys_r)
plt.show()
# The image file names
names = ['img1.pgm', 'img2.pgm', 'img3.pgm', 'img4.pgm']
files = [os.path.join('data', name) for name in names]
# Instantiate the class and plot each picture.
imgs = []
for f in files:
image = SquareImage(f)
print(image)
imgs.append(image) # objects are first class instances: add to a list
image.plot()
Explanation: Task breakdown
Create a SquareImage class which reads (portable greymap, '.pgm') image data from a file. The class should implement the following:
The Initialiser method __init__(self, ...), which takes a string filename as an argument and
stores $Nx$ and $Ny$ as instance attributes, both equal to 72 (this isn't robust, but simplifies the exercise)
calculates and stores $x$ and $y$ as instance attrbutes. These are again the regularly spaced pixel $x$ and $y$ values from 1 to 0 (image colour values in pgm files start from top right pixel) - use linspace from 1 to 0 with $Nx$ and $Ny$ points respectively
Read and store image grey levels in filename as an instance attribute, $z$. The line for extracting this data from the files is the same as before,
np.loadtxt(filename, skiprows=4).reshape(self.Nx, self.Ny)
An add_image method which takes an image argument, and adds the z component of image to self.z
don't forget to add self as the first argument! Instance methods require us to be specific
A plot method which takes no extra arguments, and plots the current instance attributes $z$ as a contour vs $x$ and $y$ (also instance attributes). As this exercise is not testing your matplotlib, we provide the code for the function here:
fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ax.contourf(self.x, self.y, self.z, cmap=cm.Greys_r)
plt.show()
End of explanation
# Create an 'empty' SquareImage
combined = SquareImage()
print(combined.z)
Explanation: Extension
Now that we can read in image data from a file, extend your SquareImage class above so that if the filename is None (python keyword), we store the $z$ attribute as np.zeros([Nx, Ny]).
This will require an if statement, e.g.:
> if filename is None
> store z as zeros
> else
> read and store z data
The default filename argument should be None, so that SquareImage() produces an 'empty' image.
End of explanation
# Loop over the list of images
for image in imgs:
combined.add_image(image)
# Plot
combined.plot()
Explanation: Now use the add_image method of the empty image to add on the contents of all other images in the list of imgs
End of explanation |
6,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Practical Guide to the Machine Learning Workflow
Step1: Problem 1) Obtain and Examine Training Data
As a reminder, for supervised-learning problems we use a training set, sources with known labels, i.e. they have been confirmed as normal stars, QSOs, or galaxies, to build a model to classify new observations where we do not know the source label.
The training set for this exercise uses Sloan Digital Sky Survey (SDSS) data. For features, we will start with each $r$-band magnitude measurement made by SDSS. This yields 8 features (twice that of the Iris data set, but significantly fewer than the 454 properties measured for each source in SDSS).
Step 1 in the ML workflow is data preparation - we must curate the training set. As a reminder
Step3: While it is possible to look up each of the names of the $r$-band magnitudes in the SDSS PhotoObjAll schema, the schema list is long, and thus difficult to parse by eye. Fortunately, we can identify the desired columns using the database itself
Step4: To reiterate a point from above
Step5: Solution 1b
Write your answer here
Finally, to finish off our preparation of the data - we need to create an independent test that will be used to evaluate the accuracy/generalization properties of the model after everything has been tuned. Often, independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$. For this problem we will adopt 0.3.
sklearn.model_selection has a handy function train_test_split, which will simplify this process.
Problem 1c Split the 10k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called
Step8: Problem 2) An Aside on the Importance of Feature Engineering
It has been said that all machine learning is an exercise in feature engineering.
Feature engineering - the process of creating new features, combining features, removing features, collecting new data to supplement existing features, etc. is essential in the machine learning workflow. As part of the data preparation stage, it is useful to apply domain knowledge to engineer features prior to model construction. [Though it is important to know that feature engineering may be needed at any point in the ML workflow if the model does not provide desired results.]
Due to a peculiarity of our SDSS training set, we need to briefly craft a separate problem to demonstrate the importance of feature engineering.
For this aside, we will train the model on bright ($r' < 18.5$ mag) sources and test the model on faint ($r' > 19.5$ mag) sources. As you might guess the model will not perform well. Following some clever feature engineering, we will be able to improve this.
aside-to-the-aside
This exact situation happens in astronomy all the time, and it is known as sample selection bias. In brief, any time a larger aperture telescope is built, or instrumentation is greatly improved, a large swath of sources that were previously undetectable can now be observed. These fainter sources, however, may contain entirely different populations than their brighter counterparts, and thus any models trained on the bright sources will be biased when making predictions on the faint sources.
We train and test the model with 10000 sources using an identical query to the one employed above, with the added condition restricting the training set to bright sources and the test set to faint sources.
Step9: Problem 2a
Train a $k$ Nearest Neighbors model with $k = 11$ neighbors on the 10k source training set. Note - for this particular problem, the number of neighbors does not matter much.
Step10: Problem 2b
Evaluate the accuracy of the model when applied to the sources in the faint test set.
Does the model perform well?
Hint - you may find sklearn.metrics.accuracy_score useful for this exercise.
Step11: Solution 2b
Write your answer here
Leveraging the same domain knowledge discussed above, namely that galaxies cannot be modeled with a PSF, we can "normalize" the magnitude measurements by taking their difference relative to psfMag_r. This normalization has the added advantage of removing any knowledge of the apparent brightness of the sources, which should help when comparing independent bright and faint sets.
Problem 2c
Normalize the feature vector relative to psfMag_r, and refit the $k$NN model to the 7 newly engineered features.
Does the accuracy improve when predicting the class of sources in the faint test set?
Hint - be sure you apply the exact same normalization to both the training and test set
Step12: Solution 2c
Wow! Normalizing the features produces a huge ($\sim{35}\%$) increase in accuracy. Clearly, we should be using normalized magnitude features moving forward.
In addition to demonstrating the importance of feature engineering, this exercise teaches another important lesson
Step13: scikit-learn really makes it easy to build ML models.
Another nice property of RF is that it naturally provides an estimate of the most important features in the model.
[Once again - feature engineering comes into play, as it may be necessary to remove correlated features or unimportant features during the model construction in order to reduce run time or allow the model to fit in the available memory.]
In this case we don't need to remove any features [RF is relatively immune to correlated or unimportant features], but for completeness we measure the importance of each feature in the model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() class. The higher the value, the more important the feature.
Problem 3b
Calculate the relative importance of each feature.
Which feature is most important? Can you make sense of the feature ordering?
Hint - do not dwell too long on the final ordering of the features.
Step14: Solution 3b
psfMag_r - deVMag_r is the most important feature. This makes sense based on the separation of stars and galaxies in the psfMag_r-deVMag_r plane (see the visualization results above).
Note - the precise ordering of the features can change due to their strong correlation with each other, though the fiberMag features are always the least important.
Problem 4) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. This in essence is the essential "engineering" step of machine learning [and why I (AAM) often caution against ML for scientific measurements and advocate for engineering-like problems instead].
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data
Step15: The simple SDSS model sets a high standard! A $\sim{96}\%$ accuracy following a single hard cut is a phenomenal performance.
Problem 4b Using 10-fold cross validation, estimate the accuracy of the RF model.
Step16: Phew! Our hard work to build a machine learning model has been rewarded, by creating an improved model
Step17: While (in this case) the effect is small, it is clear that $N_\mathrm{tree}$ affects the model output.
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters?
Brute force.
This data set and the number of tuning parameters is small, so brute force is appropriate (alternatives exist when this isn't the case). We can optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb
Step18: Now that the model is fully optimized - we are ready for the moment of truth!
Problem 5c
Using the optimized model parameters, train a RF model and estimate the model's generalization error using the test set.
How does this compare to the baseline model?
Step19: Solution 5c
Write your answer here
We will now examine the performance of the model using some alternative metrics.
Note - if these metrics are essential for judging the model performance, then they should be incorporated into the workflow in the evaluation stage, prior to examination of the test set.
Problem 5d
Calculate the confusion matrix for the model, as determined by the test set.
Is there symmetry to the misclassifications?
Step20: Solution 5d
Write your answer here
Problem 5e
Calculate and plot the ROC curves for both stars and galaxies.
Hint - you'll need probabilities in order to calculate the ROC curve.
Step21: Problem 5f
Suppose you want a model that only misclassifies 1% of stars as galaxies.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model miss?
Can you think of a reason to adopt such a threshold?
Step22: Solution 5f
When building galaxy 2-point correlation functions it is very important to avoid including stars in the statistics as they will bias the final measurement.
Finally - always remember
Step23: Challenge 2
Can you think of any reasons why the performance would be so much worse for the QSOs than it is for the stars?
Can you obtain a ~.97 accuracy when classifying QSOs?
Step24: Challenge 3
Perform an actual test of the model using "field" sources. The SDSS photometric classifier is nearly perfect for sources brighter than $r = 21$ mag. Download a random sample of $r < 21$ mag photometric sources, and classify them using the optimized RF model. Adopting the photometric classifications as ground truth, what is the accuracy of the RF model?
Hint - you'll need to look up the parameter describing photometric classification in SDSS | Python Code:
import numpy as np
from astropy.table import Table
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: A Practical Guide to the Machine Learning Workflow:
Separating Stars and Galaxies from SDSS
Version 0.1
By AA Miller 2017 Jan 22
We will now follow the steps from the machine learning workflow lecture to develop an end-to-end machine learning model using actual astronomical data. As a reminder the workflow is as follows:
Data Preparation
Model Building
Model Evaluation
Model Optimization
Model Predictions
Some of these steps will be streamlined to allow us to fully build a model within the allotted time.
Science background: Many (nearly all?) of the science applications for LSST data will rely on the accurate separation of stars and galaxies in the LSST imaging data. As an example, imagine measuring galaxy clustering without knowing which sources are galaxies and which are stars.
During this exercise, we will utilize supervised machine-learning methods to separate extended (galaxies) and point sources (stars, QSOs) in imaging data. These methods are highly flexible, and as a result can classify sources at higher fidelity than methods that simply make cuts in a low-dimensional space.
End of explanation
from astroquery.sdss import SDSS # enables direct queries to the SDSS database
Explanation: Problem 1) Obtain and Examine Training Data
As a reminder, for supervised-learning problems we use a training set, sources with known labels, i.e. they have been confirmed as normal stars, QSOs, or galaxies, to build a model to classify new observations where we do not know the source label.
The training set for this exercise uses Sloan Digital Sky Survey (SDSS) data. For features, we will start with each $r$-band magnitude measurement made by SDSS. This yields 8 features (twice that of the Iris data set, but significantly fewer than the 454 properties measured for each source in SDSS).
Step 1 in the ML workflow is data preparation - we must curate the training set. As a reminder:
A machine-learning model is only as good as its training set.
This point cannot be emphasized enough. Machine-learning models are data-driven, they do not capture any physical theory, and thus it is essential that the training set satisfy several criteria.
Two of the most important criteria for a good training set are:
the training set should be unbiased [this is actually really hard to achieve in astronomy since most surveys are magnitude limited]
the training set should be representative of the (unobserved or field) population of sources [a training set with no stars will yield a model incapable of discovering point sources]
So, step 1 (this is a must), we are going to examine the training set to see if anything suspicious is going on. We will use astroquery to directly access the SDSS database, and store the results in an astropy Table.
Note The SDSS API for astroquery is not standard for the package, which leads to a warning. This is not, however, a problem for our purposes.
End of explanation
sdss_query = SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
ORDER BY p.objid ASC
sdss_set = SDSS.query_sql(sdss_query)
sdss_set
Explanation: While it is possible to look up each of the names of the $r$-band magnitudes in the SDSS PhotoObjAll schema, the schema list is long, and thus difficult to parse by eye. Fortunately, we can identify the desired columns using the database itself:
select COLUMN_NAME
from INFORMATION_SCHEMA.Columns
where table_name = 'PhotoObjAll' AND
COLUMN_NAME like '%Mag/_r' escape '/'
which returns the following list of columns: psfMag_r, fiberMag_r, fiber2Mag_r, petroMag_r, deVMag_r, expMag_r, modelMag_r, cModelMag_r.
We now select these magnitude measurements for 10000 stars and galaxies from SDSS. Additionally, we join these photometric measurements with the SpecObjAll table to obtain their spectroscopic classifications, which will serve as labels for the machine-learning model.
Note - the SDSS database contains duplicate observations, flagged observations, and non-detections, which we condition the query to exclude (as explained further below). We also exclude quasars, as the precise photometric classification of these objects is ambiguous: low-$z$ AGN have resolvable host galaxies, while high-$z$ QSOs are point-sources. Query conditions:
p.mode = 1 select only the primary photometric detection of a source
s.sciencePrimary = 1 select only the primary spectroscopic detection of a source (together with above, prevents duplicates)
p.clean = 1 the SDSS clean flag excludes flagged observations and sources with non-detections
s.class != 'QSO' removes potentially ambiguous QSOs from the training set
End of explanation
# complete
Explanation: To reiterate a point from above: data-driven models are only as good as the training set. Now that we have a potential training set, it is essential to inspect the data for any peculiarities.
Problem 1a
Can you easily identify any important properties of the data from the above table?
If not - is there a better way to examine the data?
Hint - emphasis on easy.
Solution 1a
Write your answer here
Problem 1b
Visualize the 8 dimensional feature set [this is intentionally open-ended...]
Does this visualization reveal anything that is not obvious from the table?
Can you identify any biases in the training set?
Remember - always worry about the data
Hint astropy Tables can be converted to pandas DataFrames with the .to_pandas() operator.
End of explanation
from sklearn.model_selection import train_test_split
rs = 2 # we are in second biggest metropolitan area in the US
# complete
X = np.array( # complete
y = np.array( # complete
train_X, test_X, train_y, test_y = train_test_split( X, y, # complete
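# One possible completion of the placeholders above (sketch; 70-30 split using the rs defined here):
# X = np.array(sdss_set['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r',
#                       'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r'].to_pandas())
# y = np.array(sdss_set['class'])
# train_X, test_X, train_y, test_y = train_test_split(X, y, test_size=0.3, random_state=rs)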
Explanation: Solution 1b
Write your answer here
Finally, to finish off our preparation of the data - we need to create an independent test that will be used to evaluate the accuracy/generalization properties of the model after everything has been tuned. Often, independent test sets are generated by withholding a fraction of the training set. No hard and fast rules apply for the fraction to be withheld, though typical choices vary between $\sim{0.2}-0.5$. For this problem we will adopt 0.3.
sklearn.model_selection has a handy function train_test_split, which will simplify this process.
Problem 1c Split the 10k spectroscopic sources 70-30 into training and test sets. Save the results in arrays called: train_X, train_y, test_X, test_y, respectively. Use rs for the random_state in train_test_split.
Hint - recall that sklearn utilizes X, a 2D np.array(), and y as the features and labels arrays, respectively.
End of explanation
bright_query = SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
AND p.cModelMag_r < 18.5
ORDER BY p.objid ASC
bright_set = SDSS.query_sql(bright_query)
bright_set
faint_query = SELECT TOP 10000
p.psfMag_r, p.fiberMag_r, p.fiber2Mag_r, p.petroMag_r,
p.deVMag_r, p.expMag_r, p.modelMag_r, p.cModelMag_r,
s.class
FROM PhotoObjAll AS p JOIN specObjAll s ON s.bestobjid = p.objid
WHERE p.mode = 1 AND s.sciencePrimary = 1 AND p.clean = 1 AND s.class != 'QSO'
AND p.cModelMag_r > 19.5
ORDER BY p.objid ASC
faint_set = SDSS.query_sql(faint_query)
faint_set
Explanation: Problem 2) An Aside on the Importance of Feature Engineering
It has been said that all machine learning is an exercise in feature engineering.
Feature engineering - the process of creating new features, combining features, removing features, collecting new data to supplement existing features, etc. is essential in the machine learning workflow. As part of the data preparation stage, it is useful to apply domain knowledge to engineer features prior to model construction. [Though it is important to know that feature engineering may be needed at any point in the ML workflow if the model does not provide desired results.]
Due to a peculiarity of our SDSS training set, we need to briefly craft a separate problem to demonstrate the importance of feature engineering.
For this aside, we will train the model on bright ($r' < 18.5$ mag) sources and test the model on faint ($r' > 19.5$ mag) sources. As you might guess the model will not perform well. Following some clever feature engineering, we will be able to improve this.
aside-to-the-aside
This exact situation happens in astronomy all the time, and it is known as sample selection bias. In brief, any time a larger aperture telescope is built, or instrumentation is greatly improved, a large swath of sources that were previously undetectable can now be observed. These fainter sources, however, may contain entirely different populations than their brighter counterparts, and thus any models trained on the bright sources will be biased when making predictions on the faint sources.
We train and test the model with 10000 sources using an identical query to the one employed above, with the added condition restricting the training set to bright sources and the test set to faint sources.
End of explanation
from sklearn.neighbors import KNeighborsClassifier
feats = # complete
bright_X = # complete
bright_y = # complete
KNNclf = # complete
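# Possible completion (sketch; uses the same 8 magnitude features as the query above):
# feats = ['psfMag_r', 'fiberMag_r', 'fiber2Mag_r', 'petroMag_r',
#          'deVMag_r', 'expMag_r', 'modelMag_r', 'cModelMag_r']
# bright_X = np.array(bright_set[feats].to_pandas())
# bright_y = np.array(bright_set['class'])
# KNNclf = KNeighborsClassifier(n_neighbors=11).fit(bright_X, bright_y)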
Explanation: Problem 2a
Train a $k$ Nearest Neighbors model with $k = 11$ neighbors on the 10k source training set. Note - for this particular problem, the number of neighbors does not matter much.
End of explanation
from sklearn.metrics import accuracy_score
faint_X = # complete
faint_y = # complete
faint_preds = # complete
print("The raw features produce a KNN model with accuracy ~{:.4f}".format( # complete
Explanation: Problem 2b
Evaluate the accuracy of the model when applied to the sources in the faint test set.
Does the model perform well?
Hint - you may find sklearn.metrics.accuracy_score useful for this exercise.
End of explanation
bright_Xnorm = # complete
KNNclf = # complete
faint_predsNorm = # complete
print("The normalized features produce an accuracy ~{:.4f}".format( # complete
Explanation: Solution 2b
Write your answer here
Leveraging the same domain knowledge discussed above, namely that galaxies cannot be modeled with a PSF, we can "normalize" the magnitude measurements by taking their difference relative to psfMag_r. This normalization has the added advantage of removing any knowledge of the apparent brightness of the sources, which should help when comparing independent bright and faint sets.
Problem 2c
Normalize the feature vector relative to psfMag_r, and refit the $k$NN model to the 7 newly engineered features.
Does the accuracy improve when predicting the class of sources in the faint test set?
Hint - be sure you apply the exact same normalization to both the training and test set
End of explanation
import # complete
rs = 626 # area code for Pasadena
train_Xnorm = # complete
RFclf = # complete
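# Possible completion (sketch):
# from sklearn.ensemble import RandomForestClassifier
# train_Xnorm = train_X[:, 1:] - train_X[:, 0][:, np.newaxis]
# RFclf = RandomForestClassifier(n_estimators=25, random_state=rs).fit(train_Xnorm, train_y)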
Explanation: Solution 2c
Wow! Normalizing the features produces a huge ($\sim{35}\%$) increase in accuracy. Clearly, we should be using normalized magnitude features moving forward.
In addition to demonstrating the importance of feature engineering, this exercise teaches another important lesson: contextual features can be dangerous.
Contextual astronomical features can provide very strong priors: stars are more likely close to the galactic plane, supernovae occur next to/on top of galaxies, bluer stars have have lower metallicity, etc. Thus, including contextual information may improve overall model performance.
However, all astronomical training sets are heavily biased. Thus, the strong priors associated with contextual features can lead to severely biased model predictions.
Generally, I (AAM) remove all contextual features from my ML models for this reason. If you are building ML models, consider contextual information as it may help overall performance, but... be wary.
Worry about the data
Problem 3) Model Building
After the data have been properly curated, the next important choice in the ML workflow is the selection of ML algorithm. With experience, it is possible to develop intuition for the best ML algorithm given a specific problem.
Short of that? Try three (or four, or five) different models and choose whichever works the best.
For the star-galaxy problem, we will use the Random Forest (RF) algorithm (Breiman 2001) as implemented by scikit-learn.
RandomForestClassifier is part of the sklearn.ensemble module.
RF has a number of nice properties for working with astronomical data:
relative insensitivity to noisy or useless features
invariant response to highly non-gaussian feature distributions
fast, flexible and scales well to large data sets
which is why we will adopt it here.
Problem 3a
Build a RF model using the normalized features from the training set.
Include 25 trees in the forest using the n_estimators parameter in RandomForestClassifier.
End of explanation
# complete
print("The relative importance of the features is: \n{:s}".format( # complete
Explanation: scikit-learn really makes it easy to build ML models.
Another nice property of RF is that it naturally provides an estimate of the most important features in the model.
[Once again - feature engineering comes into play, as it may be necessary to remove correlated features or unimportant features during the model construction in order to reduce run time or allow the model to fit in the available memory.]
In this case we don't need to remove any features [RF is relatively immune to correlated or unimportant features], but for completeness we measure the importance of each feature in the model.
RF feature importance is measured by randomly shuffling the values of a particular feature, and measuring the decrease in the model's overall accuracy. The relative feature importances can be accessed using the .feature_importances_ attribute associated with the RandomForestClassifier() class. The higher the value, the more important the feature.
Problem 3b
Calculate the relative importance of each feature.
Which feature is most important? Can you make sense of the feature ordering?
Hint - do not dwell too long on the final ordering of the features.
End of explanation
# complete
print("The SDSS phot model produces an accuracy ~{:.4f}".format( # complete
Explanation: Solution 3b
psfMag_r - deVMag_r is the most important feature. This makes sense based on the separation of stars and galaxies in the psfMag_r-deVMag_r plane (see the visualization results above).
Note - the precise ordering of the features can change due to their strong correlation with each other, though the fiberMag features are always the least important.
Problem 4) Model Evaluation
To evaluate the performance of the model we establish a baseline (or figure of merit) that we would like to exceed. This in essence is the essential "engineering" step of machine learning [and why I (AAM) often caution against ML for scientific measurements and advocate for engineering-like problems instead].
If the model does not improve upon the baseline (or reach the desired figure of merit) then one must iterate on previous steps (feature engineering, algorithm selection, etc) to accomplish the desired goal.
The SDSS photometric pipeline uses a simple parametric model to classify sources as either stars or galaxies. If we are going to the trouble of building a complex ML model, then it stands to reason that its performance should exceed that of the simple model. Thus, we adopt the SDSS photometric classifier as our baseline.
The SDSS photometric classifier uses a single hard cut to separate stars and galaxies in imaging data:
$$\mathtt{psfMag} - \mathtt{cmodelMag} > 0.145.$$
Sources that satisfy this criterion are considered galaxies.
Problem 4a
Determine the baseline for the ML model by measuring the accuracy of the SDSS photometric classifier on the training set.
Hint - you may need to play around with array values to get accuracy_score to work.
End of explanation
from sklearn.model_selection import # complete
RFpreds = # complete
print("The CV accuracy for the training set is {:.4f}".format( # complete
Explanation: The simple SDSS model sets a high standard! A $\sim{96}\%$ accuracy following a single hard cut is a phenomenal performance.
Problem 4b Using 10-fold cross validation, estimate the accuracy of the RF model.
End of explanation
rs = 1936 # year JPL was founded
CVpreds1 = # complete
# complete
# complete
print("The CV accuracy for 1, 10, 100 trees is {:.4f}, {:.4f}, {:.4f}".format( # complete
Explanation: Phew! Our hard work to build a machine learning model has been rewarded, by creating an improved model: $\sim{96.9}\%$ accuracy vs. $\sim{96.4}\%$.
[But - was our effort worth only a $0.5\%$ improvement in the model?]
Problem 5) Model Optimization
While the "off-the-shelf" model provides an improvement over the SDSS photometric classifier, we can further refine and improve the performance of the machine learning model by adjusting the model tuning parameters. A process known as model optimization.
All machine-learning models have tuning parameters. In brief, these parameters capture the smoothness of the model in the multidimensional-feature space. Whether the model is smooth or coarse is application dependent -- be wary of over-fitting or under-fitting the data. Generally speaking, RF (and most tree-based methods) have 3 flavors of tuning parameter:
$N_\mathrm{tree}$ - the number of trees in the forest n_estimators (default: 10) in sklearn
$m_\mathrm{try}$ - the number of (random) features to explore as splitting criteria at each node max_features (default: sqrt(n_features)) in sklearn
Pruning criteria - defined stopping criteria for ending continued growth of the tree, there are many choices for this in sklearn (My preference is min_samples_leaf (default: 1) which sets the minimum number of sources allowed in a terminal node, or leaf, of the tree)
Just as we previously evaluated the model using CV, we must optimize the tuning parameters via CV. Until we "finalize" the model by fixing all the input parameters, we cannot evaluate the accuracy of the model with the test set as that would be "snooping."
On Tuesday we were introduced to GridSearchCV, which is an excellent tool for optimizing model parameters.
Before we get to that, let's try to develop some intuition for how the tuning parameters affect the final model predictions.
Problem 5a
Determine the 5-fold CV accuracy for models with $N_\mathrm{tree}$ = 1, 10, 100.
How do you expect changing the number of trees to affect the results?
End of explanation
rs = 64 # average temperature in Los Angeles
from sklearn.model_selection import GridSearchCV
grid_results = # complete
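# Possible completion (sketch; a coarse 3-fold grid over the three RF tuning parameters,
# assuming RandomForestClassifier was imported in an earlier cell):
# grid_results = GridSearchCV(RandomForestClassifier(random_state=rs),
#                             param_grid={'n_estimators': [30, 100, 300],
#                                         'max_features': [1, 3, 7],
#                                         'min_samples_leaf': [1, 10, 30]},
#                             cv=3)
# grid_results.fit(train_Xnorm, train_y)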
print("The optimal parameters are:")
for key, item in grid_results.best_params_.items(): # warning - slightly different meanings in Py2 & Py3
print("{}: {}".format(key, item))
Explanation: While (in this case) the effect is small, it is clear that $N_\mathrm{tree}$ affects the model output.
Now we will optimize the model over all tuning parameters. How does one actually determine the optimal set of tuning parameters?
Brute force.
This data set and the number of tuning parameters is small, so brute force is appropriate (alternatives exist when this isn't the case). We can optimize the model via a grid search that performs CV at each point in the 3D grid. The final model will adopt the point with the highest accuracy.
It is important to remember two general rules of thumb: (i) if the model is optimized at the edge of the grid, refit a new grid centered on that point, and (ii) the results should be stable in the vicinity of the grid maximum. If this is not the case the model is likely overfit.
Problem 5b
Use GridSearchCV to perform a 3-fold CV grid search to optimize the RF star-galaxy model. Remember the rules of thumb.
What are the optimal tuning parameters for the model?
Hint 1 - think about the computational runtime based on the number of points in the grid. Do not start with a very dense or large grid.
Hint 2 - if the runtime is long, don't repeat the grid search even if the optimal model is on an edge of the grid
End of explanation
RFopt_clf = # complete
test_preds = # complete
print('The optimized model produces a generalization error of {:.4f}'.format( # complete
Explanation: Now that the model is fully optimized - we are ready for the moment of truth!
Problem 5c
Using the optimized model parameters, train a RF model and estimate the model's generalization error using the test set.
How does this compare to the baseline model?
End of explanation
from sklearn.metrics import # complete
# complete
Explanation: Solution 5c
Write your answer here
We will now examine the performance of the model using some alternative metrics.
Note - if these metrics are essential for judging the model performance, then they should be incorporated into the workflow in the evaluation stage, prior to examination of the test set.
Problem 5d
Calculate the confusion matrix for the model, as determined by the test set.
Is there symmetry to the misclassifications?
End of explanation
from sklearn.metrics import roc_curve
test_preds_proba = # complete
# complete
fpr, tpr, thresholds = roc_curve( # complete
plt.plot( # complete
plt.legend()
from sklearn.metrics import roc_curve, roc_auc_score
test_preds_proba = RFopt_clf.predict_proba(test_Xnorm)
test_y_stars = np.zeros(len(test_y), dtype = int)
test_y_stars[np.where(test_y == "STAR")] = 1
test_y_galaxies = test_y_stars*-1. + 1
fpr, tpr, thresholds = roc_curve(test_y_stars, test_preds_proba[:,1])
plt.plot(fpr, tpr, label = r'$\mathrm{STAR}$', color = "MediumAquaMarine")
fpr, tpr, thresholds = roc_curve(test_y_galaxies, test_preds_proba[:,0])
plt.plot(fpr, tpr, label = r'$\mathrm{GALAXY}$', color = "Tomato")
plt.legend()
Explanation: Solution 5d
Write your answer here
Problem 5e
Calculate and plot the ROC curves for both stars and galaxies.
Hint - you'll need probabilities in order to calculate the ROC curve.
End of explanation
# complete
Explanation: Problem 5f
Suppose you want a model that only misclassifies 1% of stars as galaxies.
What classification threshold should be adopted for this model?
What fraction of galaxies does this model miss?
Can you think of a reason to adopt such a threshold?
End of explanation
# complete
Explanation: Solution 5f
When building galaxy 2-point correlation functions it is very important to avoid including stars in the statistics as they will bias the final measurement.
Finally - always remember:
worry about the data
Challenge Problem) Taking the Plunge
Applying the model to field data
QSOs are unresolved sources that look like stars in optical imaging data. We will now download photometric measurements for 10k QSOs from SDSS and see how accurate the RF model performs for these sources.
Challenge 1
Calculate the accuracy with which the model classifies QSOs based on the 10k QSOs selected with the above command. How does that accuracy compare to that estimated by the test set?
End of explanation
# complete
Explanation: Challenge 2
Can you think of any reasons why the performance would be so much worse for the QSOs than it is for the stars?
Can you obtain a ~.97 accuracy when classifying QSOs?
End of explanation
# complete
Explanation: Challenge 3
Perform an actual test of the model using "field" sources. The SDSS photometric classifier is nearly perfect for sources brighter than $r = 21$ mag. Download a random sample of $r < 21$ mag photometric sources, and classify them using the optimized RF model. Adopting the photometric classifications as ground truth, what is the accuracy of the RF model?
Hint - you'll need to look up the parameter describing photometric classification in SDSS
End of explanation |
6,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using Dataflow </h1>
This notebook illustrates
Step1: Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Step2: NOTE
Step3: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step5: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
Step7: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options
Step8: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
Please re-run the above cell if you get a <b>failed status</b> of the job in the dataflow UI console. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: <h1> Preprocessing using Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](https://github.com/GoogleCloudPlatform/training-data-analyst/tree/master/courses/machine_learning/deepdive2/end_to_end_ml/solutions/preproc.ipynb).
End of explanation
!pip install --user apache-beam[interactive]==2.24.0
Explanation: Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
import apache_beam as beam
print(beam.__version__)
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
Explanation: NOTE: In the output of the above cell you can safely ignore any WARNINGS (in Yellow text) related to: "hdfscli", "hdfscli-avro", "pbr", "fastavro", "gen_client" and ERRORS (in Red text) related to the related to: "witwidget-gpu", "fairing" etc.
If you get any related errors or warnings mentioned above please rerun the above cell.
Note: Restart your kernel to use updated packages.
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
End of explanation
# Create SQL query using natality data after the year 2000
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
# Call BigQuery and examine in dataframe
from google.cloud import bigquery
df = bigquery.Client().query(query + " LIMIT 100").to_dataframe()
df.head()
Explanation: <h2> Save the query from earlier </h2>
The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that.
End of explanation
# TODO 1
# TODO -- Your code here.
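# A possible sketch of what this TODO asks for (hypothetical - the official solution notebook
# may differ). It defines the CSV conversion used by beam.FlatMap(to_csv) below and opens the
# preprocess() function that the remainder of this cell belongs to.
# import datetime, os, shutil, subprocess
# def to_csv(rowdict):
#     import hashlib
#     CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
#     data = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
#     key = hashlib.sha224(data.encode('utf-8')).hexdigest()
#     yield str('{},{}'.format(data, key))
# def preprocess(in_test_mode):
#     job_name = 'preprocess-babyweight-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')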
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
| '{}_read'.format(step) >> beam.io.Read(beam.io.BigQuerySource(query = selquery, use_standard_sql = True))
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
preprocess(in_test_mode = False)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
Instead of using Beam/Dataflow, I had three other options:
Use Cloud Dataprep to visually author a Dataflow pipeline. Cloud Dataprep also allows me to explore the data, so we could have avoided much of the handcoding of Python/Seaborn calls above as well!
Read from BigQuery directly using TensorFlow.
Use the BigQuery console (http://bigquery.cloud.google.com) to run a Query and save the result as a CSV file. For larger datasets, you may have to select the option to "allow large results" and save the result into a CSV file on Google Cloud Storage.
<p>
However, in this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed. If I didn't need preprocessing, I could have used the web console. Also, I prefer to script it out rather than run queries on the user interface, so I am using Cloud Dataflow for the preprocessing.
Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP web console to the Dataflow section and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
**Lab Task #1:** Creating datasets for ML using Dataflow
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
Please re-run the above cell if you get a <b>failed status</b> of the job in the dataflow UI console.
End of explanation |
6,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'icon-esm-lr', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MPI-M
Source ID: ICON-ESM-LR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
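For illustration, a filled-in author entry might look like the commented call below; the name and email are placeholder values rather than actual document metadata, so substitute the real author details before running anything.
# Placeholder example only -- replace with the real author before executing:
# DOC.set_author("Jane Doe", "jane.doe@example.org")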
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
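To illustrate the expected usage (the entries below are hypothetical placeholders, not the documented ICON-ESM-LR configuration), a typical dynamic-thermodynamic sea ice model would presumably record several strings from the valid-choices list above, one DOC.set_value call per prognostic variable given the 1.N cardinality:
# Hypothetical illustration only -- not the actual ICON-ESM-LR answer:
# DOC.set_value("Sea ice concentration")
# DOC.set_value("Sea ice thickness")
# DOC.set_value("Sea ice u-velocity")
# DOC.set_value("Sea ice v-velocity")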
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
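As a purely illustrative example (the number is roughly the freezing point of seawater at typical surface salinity, not the documented ICON-ESM-LR setting), a constant freezing point entry could be recorded as follows; it is only relevant if "Constant" was chosen above.
# Illustrative placeholder value in deg C -- adapt to the actual model configuration:
# DOC.set_value(-1.8)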
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50 km or 0.1 degrees, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
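For example, a model carrying an explicit ice thickness distribution would answer this with a boolean as sketched below; the value shown is illustrative only, not a statement about ICON-ESM-LR.
# Illustrative only -- set according to the actual model configuration:
# DOC.set_value(True)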
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but an assumed distribution is used and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
6,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VQ-VAE training example
Demonstration of how to train the model specified in https
Step1: Download Cifar10 data
This requires a connection to the internet and will download ~160MB.
Step3: Load the data into Numpy
We compute the variance of the whole training set to normalise the Mean Squared Error below.
Step4: Encoder & Decoder Architecture
Step5: Build Model and train
Step6: Plot loss
Step7: View reconstructions | Python Code:
!pip install dm-sonnet dm-tree
import matplotlib.pyplot as plt
import numpy as np
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
import tree
try:
import sonnet.v2 as snt
tf.enable_v2_behavior()
except ImportError:
import sonnet as snt
print("TensorFlow version {}".format(tf.__version__))
print("Sonnet version {}".format(snt.__version__))
Explanation: VQ-VAE training example
Demonstration of how to train the model specified in https://arxiv.org/abs/1711.00937, using TF 2 / Sonnet 2.
On Mac and Linux, simply execute each cell in turn.
End of explanation
cifar10 = tfds.as_numpy(tfds.load("cifar10:3.0.2", split="train+test", batch_size=-1))
cifar10.pop("id", None)
cifar10.pop("label")
tree.map_structure(lambda x: f'{x.dtype.name}{list(x.shape)}', cifar10)
Explanation: Download Cifar10 data
This requires a connection to the internet and will download ~160MB.
End of explanation
train_data_dict = tree.map_structure(lambda x: x[:40000], cifar10)
valid_data_dict = tree.map_structure(lambda x: x[40000:50000], cifar10)
test_data_dict = tree.map_structure(lambda x: x[50000:], cifar10)
def cast_and_normalise_images(data_dict):
  """Convert images to floating point with the range [-0.5, 0.5]"""
images = data_dict['image']
data_dict['image'] = (tf.cast(images, tf.float32) / 255.0) - 0.5
return data_dict
train_data_variance = np.var(train_data_dict['image'] / 255.0)
print('train data variance: %s' % train_data_variance)
Explanation: Load the data into Numpy
We compute the variance of the whole training set to normalise the Mean Squared Error below.
End of explanation
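As a quick illustration of why this matters: the model defined below reports reconstruction error as the mean squared error divided by this variance, so a value of 1.0 corresponds to a reconstruction that is no better than predicting the per-pixel dataset mean. The sketch below computes that quantity on dummy arrays (placeholders, not real data) purely to show the formula.
# Minimal sketch of the normalised MSE used by the model below (dummy arrays only).
x_dummy = np.zeros((4, 32, 32, 3), dtype=np.float32)
x_recon_dummy = np.full((4, 32, 32, 3), 0.1, dtype=np.float32)
nmse = np.mean((x_recon_dummy - x_dummy) ** 2) / train_data_variance
print('dummy normalised MSE: %s' % nmse)  # 1.0 would mean "no better than predicting the mean"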
class ResidualStack(snt.Module):
def __init__(self, num_hiddens, num_residual_layers, num_residual_hiddens,
name=None):
super(ResidualStack, self).__init__(name=name)
self._num_hiddens = num_hiddens
self._num_residual_layers = num_residual_layers
self._num_residual_hiddens = num_residual_hiddens
self._layers = []
for i in range(num_residual_layers):
conv3 = snt.Conv2D(
output_channels=num_residual_hiddens,
kernel_shape=(3, 3),
stride=(1, 1),
name="res3x3_%d" % i)
conv1 = snt.Conv2D(
output_channels=num_hiddens,
kernel_shape=(1, 1),
stride=(1, 1),
name="res1x1_%d" % i)
self._layers.append((conv3, conv1))
def __call__(self, inputs):
h = inputs
for conv3, conv1 in self._layers:
conv3_out = conv3(tf.nn.relu(h))
conv1_out = conv1(tf.nn.relu(conv3_out))
h += conv1_out
return tf.nn.relu(h) # Resnet V1 style
class Encoder(snt.Module):
def __init__(self, num_hiddens, num_residual_layers, num_residual_hiddens,
name=None):
super(Encoder, self).__init__(name=name)
self._num_hiddens = num_hiddens
self._num_residual_layers = num_residual_layers
self._num_residual_hiddens = num_residual_hiddens
self._enc_1 = snt.Conv2D(
output_channels=self._num_hiddens // 2,
kernel_shape=(4, 4),
stride=(2, 2),
name="enc_1")
self._enc_2 = snt.Conv2D(
output_channels=self._num_hiddens,
kernel_shape=(4, 4),
stride=(2, 2),
name="enc_2")
self._enc_3 = snt.Conv2D(
output_channels=self._num_hiddens,
kernel_shape=(3, 3),
stride=(1, 1),
name="enc_3")
self._residual_stack = ResidualStack(
self._num_hiddens,
self._num_residual_layers,
self._num_residual_hiddens)
def __call__(self, x):
h = tf.nn.relu(self._enc_1(x))
h = tf.nn.relu(self._enc_2(h))
h = tf.nn.relu(self._enc_3(h))
return self._residual_stack(h)
class Decoder(snt.Module):
def __init__(self, num_hiddens, num_residual_layers, num_residual_hiddens,
name=None):
super(Decoder, self).__init__(name=name)
self._num_hiddens = num_hiddens
self._num_residual_layers = num_residual_layers
self._num_residual_hiddens = num_residual_hiddens
self._dec_1 = snt.Conv2D(
output_channels=self._num_hiddens,
kernel_shape=(3, 3),
stride=(1, 1),
name="dec_1")
self._residual_stack = ResidualStack(
self._num_hiddens,
self._num_residual_layers,
self._num_residual_hiddens)
self._dec_2 = snt.Conv2DTranspose(
output_channels=self._num_hiddens // 2,
output_shape=None,
kernel_shape=(4, 4),
stride=(2, 2),
name="dec_2")
self._dec_3 = snt.Conv2DTranspose(
output_channels=3,
output_shape=None,
kernel_shape=(4, 4),
stride=(2, 2),
name="dec_3")
def __call__(self, x):
h = self._dec_1(x)
h = self._residual_stack(h)
h = tf.nn.relu(self._dec_2(h))
x_recon = self._dec_3(h)
return x_recon
class VQVAEModel(snt.Module):
def __init__(self, encoder, decoder, vqvae, pre_vq_conv1,
data_variance, name=None):
super(VQVAEModel, self).__init__(name=name)
self._encoder = encoder
self._decoder = decoder
self._vqvae = vqvae
self._pre_vq_conv1 = pre_vq_conv1
self._data_variance = data_variance
def __call__(self, inputs, is_training):
z = self._pre_vq_conv1(self._encoder(inputs))
vq_output = self._vqvae(z, is_training=is_training)
x_recon = self._decoder(vq_output['quantize'])
recon_error = tf.reduce_mean((x_recon - inputs) ** 2) / self._data_variance
loss = recon_error + vq_output['loss']
return {
'z': z,
'x_recon': x_recon,
'loss': loss,
'recon_error': recon_error,
'vq_output': vq_output,
}
Explanation: Encoder & Decoder Architecture
End of explanation
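One property of this architecture worth noting: the two stride-2 convolutions in the encoder downsample the 32x32 CIFAR-10 images to an 8x8 grid of latents (the 3x3 convolution and the residual stack preserve spatial size). The optional sanity check below verifies this with a throwaway encoder instance; it assumes the cells above have been run.
# Optional sanity check: two stride-2 convs halve 32 -> 16 -> 8 in each spatial dimension.
dummy_images = tf.zeros([1, 32, 32, 3])
dummy_encoder = Encoder(num_hiddens=128, num_residual_layers=2, num_residual_hiddens=32)
print(dummy_encoder(dummy_images).shape)  # expected: (1, 8, 8, 128)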
%%time
# Set hyper-parameters.
batch_size = 32
image_size = 32
# 100k steps should take < 30 minutes on a modern (>= 2017) GPU.
# 10k steps gives reasonable accuracy with VQVAE on Cifar10.
num_training_updates = 10000
num_hiddens = 128
num_residual_hiddens = 32
num_residual_layers = 2
# These hyper-parameters define the size of the model (number of parameters and layers).
# The hyper-parameters in the paper were (For ImageNet):
# batch_size = 128
# image_size = 128
# num_hiddens = 128
# num_residual_hiddens = 32
# num_residual_layers = 2
# This value is not that important, usually 64 works.
# This will not change the capacity in the information-bottleneck.
embedding_dim = 64
# The higher this value, the higher the capacity in the information bottleneck.
num_embeddings = 512
# commitment_cost should be set appropriately. It's often useful to try a couple
# of values. It mostly depends on the scale of the reconstruction cost
# (log p(x|z)). So if the reconstruction cost is 100x higher, the
# commitment_cost should also be multiplied with the same amount.
commitment_cost = 0.25
# Use EMA updates for the codebook (instead of the Adam optimizer).
# This typically converges faster, and makes the model less dependent on choice
# of the optimizer. In the VQ-VAE paper EMA updates were not used (but were
# developed afterwards). See Appendix of the paper for more details.
vq_use_ema = True
# This is only used for EMA updates.
decay = 0.99
learning_rate = 3e-4
# # Data Loading.
train_dataset = (
tf.data.Dataset.from_tensor_slices(train_data_dict)
.map(cast_and_normalise_images)
.shuffle(10000)
.repeat(-1) # repeat indefinitely
.batch(batch_size, drop_remainder=True)
.prefetch(-1))
valid_dataset = (
tf.data.Dataset.from_tensor_slices(valid_data_dict)
.map(cast_and_normalise_images)
.repeat(1) # 1 epoch
.batch(batch_size)
.prefetch(-1))
# # Build modules.
encoder = Encoder(num_hiddens, num_residual_layers, num_residual_hiddens)
decoder = Decoder(num_hiddens, num_residual_layers, num_residual_hiddens)
pre_vq_conv1 = snt.Conv2D(output_channels=embedding_dim,
kernel_shape=(1, 1),
stride=(1, 1),
name="to_vq")
if vq_use_ema:
vq_vae = snt.nets.VectorQuantizerEMA(
embedding_dim=embedding_dim,
num_embeddings=num_embeddings,
commitment_cost=commitment_cost,
decay=decay)
else:
vq_vae = snt.nets.VectorQuantizer(
embedding_dim=embedding_dim,
num_embeddings=num_embeddings,
commitment_cost=commitment_cost)
model = VQVAEModel(encoder, decoder, vq_vae, pre_vq_conv1,
data_variance=train_data_variance)
optimizer = snt.optimizers.Adam(learning_rate=learning_rate)
@tf.function
def train_step(data):
with tf.GradientTape() as tape:
model_output = model(data['image'], is_training=True)
trainable_variables = model.trainable_variables
grads = tape.gradient(model_output['loss'], trainable_variables)
optimizer.apply(grads, trainable_variables)
return model_output
train_losses = []
train_recon_errors = []
train_perplexities = []
train_vqvae_loss = []
for step_index, data in enumerate(train_dataset):
train_results = train_step(data)
train_losses.append(train_results['loss'])
train_recon_errors.append(train_results['recon_error'])
train_perplexities.append(train_results['vq_output']['perplexity'])
train_vqvae_loss.append(train_results['vq_output']['loss'])
if (step_index + 1) % 100 == 0:
print('%d train loss: %f ' % (step_index + 1,
np.mean(train_losses[-100:])) +
('recon_error: %.3f ' % np.mean(train_recon_errors[-100:])) +
('perplexity: %.3f ' % np.mean(train_perplexities[-100:])) +
('vqvae loss: %.3f' % np.mean(train_vqvae_loss[-100:])))
if step_index == num_training_updates:
break
Explanation: Build Model and train
End of explanation
f = plt.figure(figsize=(16,8))
ax = f.add_subplot(1,2,1)
ax.plot(train_recon_errors)
ax.set_yscale('log')
ax.set_title('NMSE.')
ax = f.add_subplot(1,2,2)
ax.plot(train_perplexities)
ax.set_title('Average codebook usage (perplexity).')
Explanation: Plot loss
End of explanation
# Reconstructions
train_batch = next(iter(train_dataset))
valid_batch = next(iter(valid_dataset))
# Put data through the model with is_training=False, so that in the case of
# using EMA the codebook is not updated.
train_reconstructions = model(train_batch['image'],
is_training=False)['x_recon'].numpy()
valid_reconstructions = model(valid_batch['image'],
is_training=False)['x_recon'].numpy()
def convert_batch_to_image_grid(image_batch):
reshaped = (image_batch.reshape(4, 8, 32, 32, 3)
.transpose(0, 2, 1, 3, 4)
.reshape(4 * 32, 8 * 32, 3))
return reshaped + 0.5
f = plt.figure(figsize=(16,8))
ax = f.add_subplot(2,2,1)
ax.imshow(convert_batch_to_image_grid(train_batch['image'].numpy()),
interpolation='nearest')
ax.set_title('training data originals')
plt.axis('off')
ax = f.add_subplot(2,2,2)
ax.imshow(convert_batch_to_image_grid(train_reconstructions),
interpolation='nearest')
ax.set_title('training data reconstructions')
plt.axis('off')
ax = f.add_subplot(2,2,3)
ax.imshow(convert_batch_to_image_grid(valid_batch['image'].numpy()),
interpolation='nearest')
ax.set_title('validation data originals')
plt.axis('off')
ax = f.add_subplot(2,2,4)
ax.imshow(convert_batch_to_image_grid(valid_reconstructions),
interpolation='nearest')
ax.set_title('validation data reconstructions')
plt.axis('off')
Explanation: View reconstructions
End of explanation |
6,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic Validation
We import the original train.csv and test.csv files and use PassengerId as the index column.
The clean_data function then performs the following
Step1: Random Forest Model
Step2: Support Vector Machine
Step3: Final Logistic Regression Model
Import the cleaned Titanic data from cl_train.csv and cl_test.csv.
Normalize features by mean and standard deviation.
Create polynomial features.
Save predicted data.
Submission Notes and History
Format
Step4: Final Random Forest Model
Import the cleaned Titanic data from cl_train.csv and cl_test.csv.
Create encoders for categorical variables.
Save predicted data.
Submission Notes and History
Format
Step5: Final Support Vector Machine
Import the cleaned Titanic data from cl_train.csv and cl_test.csv.
Normalize features by mean and standard deviation.
Save predicted data.
Submission Notes and History
Format | Python Code:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
import pandas as pd
train = pd.read_csv('cl_train.csv', index_col='PassengerId')
# create dummy variables
train = pd.get_dummies(train, columns=['Sex', 'Pclass', 'Embarked'])
# create cross validation set
X = train.drop('Survived', axis=1)
y = train['Survived']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=53)
# feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# logistic regression
polynomial_features = PolynomialFeatures(degree=3, include_bias=True)
logistic_regression = LogisticRegression(C=0.005)
pipeline = Pipeline([('polynomial_features', polynomial_features),
('logistic_regression', logistic_regression)])
# prediction score
pipeline.fit(X_train, y_train)
print('Logistic Regression Train Score: %s' % pipeline.score(X_train, y_train))
print('Logistic Regression CV Score: %s' % pipeline.score(X_test, y_test))
Explanation: Titanic Validation
We import the original train.csv and test.csv files and use PassengerId as the index column.
The clean_data function then performs the following:
Drops the Name, Ticket and Cabin columns, which we are not currently using.
Modifies the Fare column to indicate the difference from the median fare paid by class.
Imputes median values (based on sex and passenger class) to null values in the Age column.
Creates a family size feature by adding the values in the SibSp and Parch columns.
The cleaned data is saved to cl_train.csv and cl_test.csv.
Logistic Regression Model
End of explanation
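The clean_data function itself is not included in this notebook, so the sketch below is a reconstruction from the description above rather than the exact code that produced cl_train.csv and cl_test.csv; column names follow the standard Kaggle Titanic schema, and details such as whether SibSp and Parch are dropped afterwards are assumptions.
def clean_data(df):
    # Drop columns we are not currently using.
    df = df.drop(['Name', 'Ticket', 'Cabin'], axis=1)
    # Express Fare as the difference from the median fare paid within each passenger class.
    df['Fare'] = df['Fare'] - df.groupby('Pclass')['Fare'].transform('median')
    # Impute missing ages with the median age of each sex/class group.
    df['Age'] = df['Age'].fillna(df.groupby(['Sex', 'Pclass'])['Age'].transform('median'))
    # Family size = siblings/spouses aboard + parents/children aboard.
    df['FamilySize'] = df['SibSp'] + df['Parch']
    return df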
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
train = pd.read_csv('cl_train.csv', index_col='PassengerId')
# impute missing 'Embarked' values with 'S' (most common)
train['Embarked'].fillna(value='S', inplace=True)
# encode categorical variables
le = LabelEncoder()
train['Sex'] = le.fit_transform(train['Sex'])
train['Embarked'] = le.fit_transform(train['Embarked'])
# create cross validation set
X = train.drop('Survived', axis=1)
y = train['Survived']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=134)
# random forest
clf = RandomForestClassifier(n_estimators=300, max_depth=6)
# prediction score
clf.fit(X_train, y_train)
print('Random Forest Train Score: %s' % clf.score(X_train, y_train))
print('Random Forest CV Score: %s' % clf.score(X_test, y_test))
print('Feature Importance:\n%s' % pd.Series(clf.feature_importances_,
index=X_train.columns))
Explanation: Random Forest Model
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
import pandas as pd
train = pd.read_csv('cl_train.csv', index_col='PassengerId')
# create dummy variables
train = pd.get_dummies(train, columns=['Sex', 'Pclass', 'Embarked'])
# create cross validation set
X = train.drop('Survived', axis=1)
y = train['Survived']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=116)
# feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
# support vector machine
clf = SVC(C=5, gamma='auto')
# prediction score
clf.fit(X_train, y_train)
print('SVC Train Score: %s' % clf.score(X_train, y_train))
print('SVC CV Score: %s' % clf.score(X_test, y_test))
Explanation: Support Vector Machine
End of explanation
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LogisticRegression
import pandas as pd
import numpy as np
train = pd.read_csv('cl_train.csv', index_col='PassengerId')
test = pd.read_csv('cl_test.csv', index_col='PassengerId')
# create training set X and y
X_train = train.drop('Survived', axis=1)
y_train = train['Survived']
# combine X train and test for preprocessing
tr_len = len(X_train)
df = pd.concat(objs=[X_train, test], axis=0)
# create dummy variables on train/test
df = pd.get_dummies(df, columns=['Sex', 'Pclass', 'Embarked'])
# split X train and test
X_train = df[:tr_len]
test = df[tr_len:]
# feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(test)
# L2 logistic regression on degree-3 polynomial features with C = 0.005
polynomial_features = PolynomialFeatures(degree=3, include_bias=True)
logistic_regression = LogisticRegression(C=0.005)
pipeline = Pipeline([('polynomial_features', polynomial_features),
('logistic_regression', logistic_regression)])
# fit and predict
pipeline.fit(X_train, y_train)
prediction = pipeline.predict(X_test)
# save survival predictions to a CSV file
predicted = np.column_stack((test.index.values, prediction))
np.savetxt("pr_logistic.csv", predicted.astype(int), fmt='%d', delimiter=",",
header="PassengerId,Survived", comments='')
Explanation: Final Logistic Regression Model
Import the cleaned Titanic data from cl_train.csv and cl_test.csv.
Normalize features by mean and standard deviation.
Create polynomial features.
Save predicted data.
Submission Notes and History
Format: degree / C
* 6/25: R1 features; polynomial degree of 3, regularization constant 0.005 attained a leaderboard score of 0.77512.
End of explanation
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
train = pd.read_csv('cl_train.csv', index_col='PassengerId')
test = pd.read_csv('cl_test.csv', index_col='PassengerId')
# create training set X and y
X_train = train.drop('Survived', axis=1)
y_train = train['Survived']
# combine X train and test for preprocessing
tr_len = len(X_train)
df = pd.concat(objs=[X_train, test], axis=0)
# impute missing 'Embarked' values with 'S' (most common)
df['Embarked'].fillna(value='S', inplace=True)
# encode categorical variables
le = LabelEncoder()
df['Sex'] = le.fit_transform(df['Sex'])
df['Embarked'] = le.fit_transform(df['Embarked'])
# split X train and test
X_train = df[:tr_len]
test = df[tr_len:]
# random forest with 300 estimators, max depth 6
clf = RandomForestClassifier(n_estimators=300, max_depth=6)
# fit and predict
clf.fit(X_train, y_train)
prediction = clf.predict(test)
# save survival predictions to a CSV file
predicted = np.column_stack((test.index.values, prediction))
np.savetxt("pr_forest.csv", predicted.astype(int), fmt='%d', delimiter=",",
header="PassengerId,Survived", comments='')
Explanation: Final Random Forest Model
Import the cleaned Titanic data from cl_train.csv and cl_test.csv.
Create encoders for categorical variables.
Save predicted data.
Submission Notes and History
Format: n_estimators / max_depth
* 6/25: R1 features; 300 estimators, max tree depth of 6 attained a leaderboard score of 0.79904.
End of explanation
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
import pandas as pd
import numpy as np
train = pd.read_csv('cl_train.csv', index_col='PassengerId')
test = pd.read_csv('cl_test.csv', index_col='PassengerId')
# create training set X and y
X_train = train.drop('Survived', axis=1)
y_train = train['Survived']
# combine X train and test for preprocessing
tr_len = len(X_train)
df = pd.concat(objs=[X_train, test], axis=0)
# create dummy variables on train/test
df = pd.get_dummies(df, columns=['Sex', 'Pclass', 'Embarked'])
# split X train and test
X_train = df[:tr_len]
test = df[tr_len:]
# feature scaling
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(test)
# support vector machine
clf = SVC(C=3, gamma='auto')
# fit and predict
clf.fit(X_train, y_train)
prediction = clf.predict(X_test)
# save survival predictions to a CSV file
predicted = np.column_stack((test.index.values, prediction))
np.savetxt("pr_SVM.csv", predicted.astype(int), fmt='%d', delimiter=",",
header="PassengerId,Survived", comments='')
Explanation: Final Support Vector Machine
Import the cleaned Titanic data from cl_train.csv and cl_test.csv.
Normalize features by mean and standard deviation.
Fit an RBF-kernel support vector classifier on the scaled features.
Save predicted data.
Submission Notes and History
Format: gamma / C
* 6/25: R1 features; automatic gamma, regularization constant 5 attained a leaderboard score of 0.77033.
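The gamma / C pair could also be tuned with a small grid search on the training set before a future submission; a rough sketch:
from sklearn.model_selection import GridSearchCV
# try a handful of C / gamma values with 5-fold cross-validation
param_grid = {'C': [1, 3, 5, 10], 'gamma': ['auto', 0.01, 0.1]}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)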
End of explanation |
6,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 8 - pre-class assignment SOLUTIONS
Goals for today's pre-class assignment
Use complex if statements and loops to make decisions in a computer program
Assignment instructions
Watch the videos below, read through Sections 4.1, 4.4, and 4.5 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 8.
Step1: Question 1
Step2: Question 2
Step3: Question 3
Step4: Question 4
Step6: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | Python Code:
# Imports the functionality that we need to display YouTube videos in a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("8_wSb927nH0",width=640,height=360) # Complex 'if' statements
Explanation: Day 8 - pre-class assignment SOLUTIONS
Goals for today's pre-class assignment
Use complex if statements and loops to make decisions in a computer program
Assignment instructions
Watch the videos below, read through Sections 4.1, 4.4, and 4.5 of the Python Tutorial, and complete the programming problems assigned below.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 8. Submission instructions can be found at the end of the notebook.
End of explanation
# put your code here.
import numpy as np
my_array = np.arange(1,11)
for val in my_array:
if val%2 == 0:
print(val, "is even")
else:
print(val, "is odd")
if val%3 == 0:
print(val, "is divisible by 3")
elif val%5 == 0:
print(val, "is divisible by 5")
else:
print(val, "wow, that's disappointing")
# WATCH THE VIDEO IN FULL-SCREEN MODE
YouTubeVideo("MzZCeHB0CbE",width=640,height=360) # Complex loops
Explanation: Question 1: In the cell below, use numpy's 'arange' method to create an array filled with all of the integers between 1 and 10 (inclusive). Loop through the array, and use if/elif/else to:
Print out if the number is even or odd.
Print out if the number is divisible by 3.
Print out if the number is divisible by 5.
If the number is not divisible by either 3 or 5, print out "wow, that's disappointing."
Note 1: You may need more than one if/elif/else statement to do this!
Note 2: If you have a numpy array named my_numpy_array, you don't necessarily have to use the numpy nditer method. You can loop using the standard python syntax as well. In other words:
for val in my_numpy_array:
print(val)
will work just fine.
End of explanation
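For comparison, the nditer approach mentioned in Note 2 looks like this (a small sketch; my_numpy_array is just the stand-in name used in the note):
import numpy as np
my_numpy_array = np.arange(1, 11)
for val in np.nditer(my_numpy_array):   # element-wise iteration with nditer
    print(val)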
# put your code here.
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
sum = 0
for val in my_list:
if val < 0:
break
print(val)
sum += val
print("the sum after the loop is:", sum)
Explanation: Question 2: In the space below, loop through the given array, breaking when you get to the first negative number. Print out the value you're examining after you check for negative numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for negative numbers. What is that variable equal to after the loop?
End of explanation
# put your code here
# put your code here.
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
sum = 0
for val in my_list:
if val % 2 == 0:
continue
print(val)
sum += val
print("the sum after the loop is:", sum)
Explanation: Question 3: In the space below, loop through the array given above, skipping every even number with the continue statement. Print out the value you're examining after you check for even numbers. Create a variable and set it to zero before the loop, and add each number in the list to it after the check for even numbers. What is that variable equal to after the loop?
End of explanation
# put your code here!
my_list = [1,3,17,23,9,-4,2,2,11,4,-7]
sum = 0
for val in my_list:
if val > 99: # should never be called, because the values are too small!
break
print(val)
sum += val
else:
print("yay, success!")
print("the sum after the loop is:", sum)
Explanation: Question 4: Copy and paste your code from question #2 above and change it in two ways:
Modify the numbers in the array so the if/break statement is never called.
There is an else clause after the end of the loop (not the end of the if statement!) that prints out "yay, success!" if the loop completes successfully, but not if it breaks.
Verify that if you use the original array, the print statement in the else clause doesn't work!
End of explanation
from IPython.display import HTML
HTML(
"""
<iframe
src="?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
6,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Naive Implementation of the Union-Find Algorithm
Given a set $M$ and a binary relation $R \subseteq M \times M$, the function $\texttt{union_find}$ returns a partition $\mathcal{P}$ of $M$ such that we have
$$ \forall \langle x, y \rangle \in R: \exists S \in \mathcal{P}: \bigl(x \in S \wedge y \in S\bigr) $$
Step1: Given a partition $\mathcal{P}$ of a set $M$ and an element $x$ from $M$, the function $\texttt{find}(x, \mathcal{P})$
returns the set $S \in \mathcal{P}$ such that $x \in S$. | Python Code:
def union_find(M, R):
print(f'R = {R}')
P = [ {x} for x in M ] # the trivial partition of M
print(f'P = {P}')
for x, y in R:
A = find(x, P) # find A
B = find(y, P) # find B
if A != B:
print(f'{x} ≅ {y}: combining {set(A)} and {set(B)}')
P.remove(A)
P.remove(B)
P.append(A | B)
print(f'P = {P}')
return P
Explanation: A Naive Implementation of the Union-Find Algorithm
Given a set $M$ and a binary relation $R \subseteq M \times M$, the function $\texttt{union_find}$ returns a partition $\mathcal{P}$ of $M$ such that we have
$$ \forall \langle x, y \rangle \in R: \exists S \in \mathcal{P}: \bigl(x \in S \wedge y \in S\bigr) $$
It works by starting with the trivial partition
$$\mathcal{P} := \bigl\{ \{ x \} \mid x \in M \bigr\}. $$
As long as there is a pair $\langle x, y \rangle \in R$ such that:
* $x \in A$, $y \in B$, where $A \in \mathcal{P}$ and $B \in \mathcal{P}$,
* but $A \not= B$
we update the partition $\mathcal{P}$ as follows:
$$ \mathcal{P} := \mathcal{P} \backslash \{ A, B \} \cup \bigl\{ A \cup B \bigr\}. $$
In order to avoid dealing with frozen sets, the partition $\mathcal{P}$ is implemented as a list rather than a set.
Note that this is quite inefficient.
End of explanation
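For reference, the standard disjoint-set forest avoids most of this inefficiency by storing parent pointers and using path compression with union by size; a self-contained sketch:
def union_find_fast(M, R):
    parent = {x: x for x in M}            # every element starts as its own root
    size = {x: 1 for x in M}
    def root(x):                          # follow parent pointers, halving the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for x, y in R:
        rx, ry = root(x), root(y)
        if rx != ry:                      # union by size: attach the smaller tree to the larger
            if size[rx] < size[ry]:
                rx, ry = ry, rx
            parent[ry] = rx
            size[rx] += size[ry]
    classes = {}                          # rebuild the partition as a list of sets
    for x in M:
        classes.setdefault(root(x), set()).add(x)
    return list(classes.values())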
def find(x, P):
for S in P:
if x in S:
return S
def demo():
M = set(range(1, 9+1))
R = { (1, 4), (7, 9), (3, 5), (2, 6), (5, 8), (1, 9), (4, 7) }
P = union_find(M, R)
demo()
Explanation: Given a partition $\mathcal{P}$ of a set $M$ and an element $x$ from $M$, the function $\texttt{find}(x, \mathcal{P})$
returns the set $S \in \mathcal{P}$ such that $x \in S$.
End of explanation |
6,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#1.-Weigh-in-Motion-Storage-Raw-Data" data-toc-modified-id="1.-Weigh-in-Motion-Storage-Raw-Data-1"><span class="toc-item-num">1 </span>1. Weigh-in-Motion Storage Raw Data</a></div><div class="lev2 toc-item"><a href="#1.1-Standards" data-toc-modified-id="1.1-Standards-11"><span class="toc-item-num">1.1 </span>1.1 Standards</a></div><div class="lev3 toc-item"><a href="#1.1.1-File-and-dataset-names" data-toc-modified-id="1.1.1-File-and-dataset-names-111"><span class="toc-item-num">1.1.1 </span>1.1.1 File and dataset names</a></div><div class="lev3 toc-item"><a href="#1.1.2-Fields-name-and-extra-information" data-toc-modified-id="1.1.2-Fields-name-and-extra-information-112"><span class="toc-item-num">1.1.2 </span>1.1.2 Fields name and extra information</a></div><div class="lev2 toc-item"><a href="#1.2-Algorithms" data-toc-modified-id="1.2-Algorithms-12"><span class="toc-item-num">1.2 </span>1.2 Algorithms</a></div><div class="lev3 toc-item"><a href="#1.2.1-Start-up" data-toc-modified-id="1.2.1-Start-up-121"><span class="toc-item-num">1.2.1 </span>1.2.1 Start up</a></div><div class="lev3 toc-item"><a href="#1.2.2-Creating-the-file" data-toc-modified-id="1.2.2-Creating-the-file-122"><span class="toc-item-num">1.2.2 </span>1.2.2 Creating the file</a></div><div class="lev3 toc-item"><a href="#1.2.3-Reading-the-file" data-toc-modified-id="1.2.3-Reading-the-file-123"><span class="toc-item-num">1.2.3 </span>1.2.3 Reading the file</a></div><div class="lev1 toc-item"><a href="#References" data-toc-modified-id="References-2"><span class="toc-item-num">2 </span>References</a></div>
# 1. Weigh-in-Motion Storage Raw Data
Basically, the first main input is the raw sensor data. These data can be acquired using a data acquisition device (DAQ) through analog channels (e.g. weigh sensors, temperature sensors, etc.) and/or digital channels (e.g. inductive loops).
The three most widely used piezo-electric weigh sensors are piezo-ceramic, piezo-polymer and piezo-electric <cite data-cite="jiang2009improvements">(Jiang, 2009)</cite>.
Storing the raw sensor data allows studying the input signals and validating weighing methods. In COST 323 <cite data-cite="tech:cost-323">(Jacob et al., 2009)</cite>, no description of a raw data layout file was found.
Step1: 1.2.1 Start up
Step2: 1.2.2 Creating the file
Step3: 1.2.3 Reading the file | Python Code:
from IPython.display import display
from datetime import datetime
from matplotlib import pyplot as plt
from scipy import misc
import h5py
import json
import numpy as np
import os
import pandas as pd
import sys
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#1.-Weigh-in-Motion-Storage-Raw-Data" data-toc-modified-id="1.-Weigh-in-Motion-Storage-Raw-Data-1"><span class="toc-item-num">1 </span>1. Weigh-in-Motion Storage Raw Data</a></div><div class="lev2 toc-item"><a href="#1.1-Standards" data-toc-modified-id="1.1-Standards-11"><span class="toc-item-num">1.1 </span>1.1 Standards</a></div><div class="lev3 toc-item"><a href="#1.1.1-File-and-dataset-names" data-toc-modified-id="1.1.1-File-and-dataset-names-111"><span class="toc-item-num">1.1.1 </span>1.1.1 File and dataset names</a></div><div class="lev3 toc-item"><a href="#1.1.2-Fields-name-and-extra-information" data-toc-modified-id="1.1.2-Fields-name-and-extra-information-112"><span class="toc-item-num">1.1.2 </span>1.1.2 Fields name and extra information</a></div><div class="lev2 toc-item"><a href="#1.2-Algorithms" data-toc-modified-id="1.2-Algorithms-12"><span class="toc-item-num">1.2 </span>1.2 Algorithms</a></div><div class="lev3 toc-item"><a href="#1.2.1-Start-up" data-toc-modified-id="1.2.1-Start-up-121"><span class="toc-item-num">1.2.1 </span>1.2.1 Start up</a></div><div class="lev3 toc-item"><a href="#1.2.2-Creating-the-file" data-toc-modified-id="1.2.2-Creating-the-file-122"><span class="toc-item-num">1.2.2 </span>1.2.2 Creating the file</a></div><div class="lev3 toc-item"><a href="#1.2.3-Reading-the-file" data-toc-modified-id="1.2.3-Reading-the-file-123"><span class="toc-item-num">1.2.3 </span>1.2.3 Reading the file</a></div><div class="lev1 toc-item"><a href="#References" data-toc-modified-id="References-2"><span class="toc-item-num">2 </span>References</a></div>
# 1. Weigh-in-Motion Storage Raw Data
Basically, the first main input is the raw sensor data. These data can be acquired using a data acquisition device (DAQ) through analog channels (e.g. weigh sensors, temperature sensors, etc.) and/or digital channels (e.g. inductive loops).
The three most widely used piezo-electric weigh sensors are piezo-ceramic, piezo-polymer and piezo-electric <cite data-cite="jiang2009improvements">(Jiang, 2009)</cite>.
Storing the raw sensor data allows studying the input signals and validating weighing methods. In COST 323 <cite data-cite="tech:cost-323">(Jacob et al., 2009)</cite>, no description of a raw data layout file was found. This data can, however, be represented as a matrix whose first column is a time index (the time instant, e.g. in microseconds, stored as floating point), followed by one column per sensor.
## 1.1 Standards
A single file can store any number of vehicle-run measurements; e.g. the researcher can create one file per day and store in it all vehicle runs matching the date of the file. The main idea of these standards is to promote a best practice for storing and sharing weigh-in-motion data.
### 1.1.1 File and dataset names
The filename should be informative, reflecting the date, the site and lane, and the organization type of the dataset. If the file contains measurements from more than one site, the site identification number should be **000**; the same idea applies to the lane identification number. The date field of the filename should contain the initial date of the period and, if necessary, the initial time as well (optional). The standard structure proposed is:
```
wim_t_sss_ll_yyyymmdd[_hhMMSS]
```
E.g. **wim_day_001_01_20174904_004936**. When:
* ***wim* is a fixed text;
* **t** means the organization type of the datasets (i.e. **day** means one file per day, **week** means one file per week, **month** means one file per month, **year** means one file per year and **full** means a full file with a complete data);
* **sss** means site identification number (e.g. 001);
* **ll** means lane identification number (e.g. 02);
* **yyyy** means the year (e.g. 2012);
* **mm** means the month (e.g. 12);
* **dd** means the day (e.g. 30);
* **hh** means the hour (e.g. 23);
* **MM** means the minute (e.g. 59);
* **SS** means the second (e.g. 30).
For each vehicle run, a new dataset should be created. The dataset name should contain the site identification number, lane identification number, date and time. The standard structure proposed is:
```
run_sss_ll_yyyymmdd_hhMMSS
```
E.g. **run_001_01_20174904_004936**. When **run** is a fixed text. The other parts in dataset name can be explained as in file name standard.
### 1.1.2 Fields name and extra information
Each dataset contains information from signal data. The dataset should contain some extra information to allow data post-processing. The columns on the dataset should be **index** and data from analog channels and digital channels. The standard for column names should be:
```
{t}{n}
```
Where {t} means the channel type (i.e. can be set as **a** for analog, or **d** for digital) and {n} means the number of the channel (e.g. **a1**).
The main extra information that should be saved on the dataset is:
* sample rate (e.g. 5000 [points per second]);
* date time (e.g. 2017-49-04 00:49:36);
* site id (e.g. 001);
* lane id (e.g. 01);
* temperature (e.g. 28.5);
* license_plate (e.g. AAA9999);
* sensor calibration constant (e.g. [0.98, 0.99, 0.75]);
* distance between sensors (e.g. [1.0, 1.5, 2.0]);
* sensor type (e.g. quartz, polymer, ceramic, etc or mixed);
* sensors layout (e.g. |/|\\|<|>|=|)
* channel configuration (this is an optional attribute, required only when the sensor type is mixed, e.g. {'a0': 'polymer', 'a1': 'ceramic'})
## 1.2 Algorithms
The algorithms presented here were written in Python. If another language is needed, it would be easy to convert or rewrite this code.
The storage data module should be able to write and read data from an HDF5 file with a simple approach; in other words, it should be easy for anybody to manipulate and understand this data using other languages.
End of explanation
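To illustrate the file naming standard above, a hypothetical helper could validate names with a regular expression; a minimal sketch:
import re
# wim_t_sss_ll_yyyymmdd[_hhMMSS], with t in {day, week, month, year, full}
WIM_FILE_RE = re.compile(r'^wim_(day|week|month|year|full)_\d{3}_\d{2}_\d{8}(_\d{6})?$')
def is_valid_wim_filename(name):
    return WIM_FILE_RE.match(name) is not None
print(is_valid_wim_filename('wim_day_001_01_20170404'))   # True
print(is_valid_wim_filename('wim_day_001_1_20170404'))    # False: lane needs two digits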
# local
sys.path.insert(0, os.path.dirname(os.getcwd()))
from pywim.utils.dsp.synthetic_data.sensor_data import gen_truck_raw_data
# generates a synthetic data
sample_rate = 2000
sensors_distance = [1, 2]
data = gen_truck_raw_data(
sample_rate=sample_rate, speed=20, vehicle_layout='O--O------O-',
sensors_distance=sensors_distance, p_signal_noise=100.0
)
data.plot()
plt.show()
data.head()
Explanation: 1.2.1 Start up
End of explanation
date_time = datetime.now()
site_id = '001'
lane_id = '01'
collection_type = 'day' # stored per day
f_id = 'wim_{}_{}_{}_{}'.format(
collection_type, site_id, lane_id,
date_time.strftime('%Y%m%d')
)
f = h5py.File('/tmp/{}.h5'.format(f_id), 'w')
print(f_id)
dset_id = 'run_{}_{}_{}'.format(
site_id, lane_id, date_time.strftime('%Y%m%d_%H%M%S')  # %m = month, matching the yyyymmdd standard
)
print(dset_id)
dset = f.create_dataset(
dset_id, shape=(data.shape[0],),
dtype=np.dtype([
(k, float) for k in ['index'] + list(data.keys())
])
)
dset['index'] = data.index
for k in data.keys():
dset[k] = data[k]
# check if all values are the same
df = pd.DataFrame(dset[tuple(data.keys())], index=dset['index'])
np.all(df == data)
dset.attrs['sample_rate'] = sample_rate
dset.attrs['date_time'] = date_time.strftime('%Y-%m-%d %H:%M:%S')
dset.attrs['site_id'] = site_id
dset.attrs['lane_id'] = lane_id
dset.attrs['temperature'] = 28.5
dset.attrs['license_plate'] = 'AAA9999' # license plate number
dset.attrs['calibration_constant'] = [0.98, 0.99, 0.75]
dset.attrs['sensors_distance'] = sensors_distance
dset.attrs['sensor_type'] = 'mixed'
dset.attrs['sensors_layout'] = '|||'
dset.attrs['channel_configuration'] = json.dumps({
'a0': 'polymer', 'a1': 'ceramic', 'a2': 'polymer'
})
# flush its data to disk and close
f.flush()
f.close()
Explanation: 1.2.2 Creating the file
End of explanation
print('/tmp/{}.h5'.format(f_id))
f = h5py.File('/tmp/{}.h5'.format(f_id), 'r')
for dset_id in f.keys():
dset = f[dset_id]
paddle = len(max(dset.attrs, key=lambda v: len(v)))
print('')
print('='*80)
print(dset_id)
print('='*80)
for k in dset.attrs:
print('{}:'.format(k).ljust(paddle, ' '), dset.attrs[k], sep='\t')
pd.DataFrame(dset[dset.dtype.names[1:]], index=dset['index']).plot()
plt.show()
# f.__delitem__(dset_id)
f.flush()
f.close()
Explanation: 1.2.3 Reading the file
End of explanation |
6,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook covers the use of 'holidays' in the Prophet forecasting library. In this notebook, we will extend the previous example (http
Step1: Read in the data
Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file.
Step2: Prepare for Prophet
As explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'.
Step3: Let's rename the columns as required by fbprophet. Additionally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differently than the integer index.
Step4: Now's a good time to take a look at your data. Plot the data using pandas' plot function
Step5: Reviewing the Data
We can see from this data that there is a spike in the same month each year. While the spike could be due to many different reasons, let's assume it's because there's a major promotion that this company runs every year at that time, which is in December for this dataset.
Because we know this promotion occurs every December, we want to use this knowledge to help Prophet better forecast those months, so we'll use Prophet's holiday construct (explained here https
Step6: To continue, we need to log-transform our data
Step7: Running Prophet
Now, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast
Note
Step8: We've instantiated the model, now we need to build some future dates to forecast into.
Step9: To forecast this future data, we need to run it through Prophet's model.
Step10: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe
Step11: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with
Step12: Plotting Prophet results
Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area).
Step13: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here
Step14: Comparing holidays vs no-holidays forecasts
Let's re-run our prophet model without our promotions/holidays for comparison.
Step15: Let's compare the two forecasts now. Note
Step16: We are only really interested in the yhat values, so let's remove all the rest and convert the logged values back to their original scale.
Step17: Now, let's take the percentage difference and the average difference for the model with holidays vs that without. | Python Code:
import pandas as pd
import numpy as np
from fbprophet import Prophet
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')
Explanation: This notebook covers the use of 'holidays' in the Prophet forecasting library. In this notebook, we will extend the previous example (http://pythondata.com/forecasting-time-series-data-prophet-jupyter-notebook/) to use holidays in the forecasting.
Import necessary libraries
End of explanation
sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True)
sales_df.head()
Explanation: Read in the data
Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file.
End of explanation
df = sales_df.reset_index()
df.head()
Explanation: Prepare for Prophet
As explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'.
End of explanation
df=df.rename(columns={'date':'ds', 'sales':'y'})
df.head()
Explanation: Let's rename the columns as required by fbprophet. Additionally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differently than the integer index.
End of explanation
df.set_index('ds').y.plot()
Explanation: Now's a good time to take a look at your data. Plot the data using pandas' plot function
End of explanation
promotions = pd.DataFrame({
'holiday': 'december_promotion',
'ds': pd.to_datetime(['2009-12-01', '2010-12-01', '2011-12-01', '2012-12-01',
'2013-12-01', '2014-12-01', '2015-12-01']),
'lower_window': 0,
'upper_window': 0,
})
promotions
Explanation: Reviewing the Data
We can see from this data that there is a spike in the same month each year. While the spike could be due to many different reasons, let's assume it's because there's a major promotion that this company runs every year at that time, which is in December for this dataset.
Because we know this promotion occurs every December, we want to use this knowledge to help Prophet better forecast those months, so we'll use Prophet's holiday construct (explained here https://facebookincubator.github.io/prophet/docs/holiday_effects.html).
The holiday construct is a pandas dataframe with the holiday and date of the holiday. For this example, the construct would look like this:
promotions = pd.DataFrame({
'holiday': 'december_promotion',
'ds': pd.to_datetime(['2009-12-01', '2010-12-01', '2011-12-01', '2012-12-01',
'2013-12-01', '2014-12-01', '2015-12-01']),
'lower_window': 0,
'upper_window': 0,
})
This promotions dataframe consists of promotion dates for December in 2009 through 2015. The lower_window and upper_window values are set to zero to indicate that we don't want Prophet to consider any other months than the ones listed.
End of explanation
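If the promotion effect were believed to spill into the following days, the same construct accepts wider windows; a hypothetical variant (not used in this post):
promotions_wide = pd.DataFrame({
    'holiday': 'december_promotion',
    'ds': pd.to_datetime(['2009-12-01', '2010-12-01', '2011-12-01', '2012-12-01',
                          '2013-12-01', '2014-12-01', '2015-12-01']),
    'lower_window': 0,
    'upper_window': 31,   # also treat the 31 days after each date as affected
})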
df['y'] = np.log(df['y'])
df.tail()
Explanation: To continue, we need to log-transform our data:
End of explanation
model = Prophet(holidays=promotions)
model.fit(df);
Explanation: Running Prophet
Now, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast
Note: Since we are using monthly data, you'll see a message from Prophet saying Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this. This is OK since we are working with monthly data, but you can override it by passing weekly_seasonality=True when instantiating Prophet.
End of explanation
future = model.make_future_dataframe(periods=24, freq = 'm')
future.tail()
Explanation: We've instantiated the model, now we need to build some future dates to forecast into.
End of explanation
forecast = model.predict(future)
Explanation: To forecast this future data, we need to run it through Prophet's model.
End of explanation
forecast.tail()
Explanation: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe:
End of explanation
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
Explanation: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with:
End of explanation
model.plot(forecast);
Explanation: Plotting Prophet results
Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area).
End of explanation
model.plot_components(forecast);
Explanation: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here:
https://github.com/urgedata/pythondata/blob/master/fbprophet/fbprophet_part_one.ipynb.
Additionally, Prophet lets us take a look at the components of our model, including the holidays. This component plot is an important plot as it lets you see the components of your model including the trend and seasonality (identified in the yearly pane).
End of explanation
model_no_holiday = Prophet()
model_no_holiday.fit(df);
future_no_holiday = model_no_holiday.make_future_dataframe(periods=24, freq = 'm')
future_no_holiday.tail()
forecast_no_holiday = model_no_holiday.predict(future)
Explanation: Comparing holidays vs no-holidays forecasts
Let's re-run our prophet model without our promotions/holidays for comparison.
End of explanation
forecast.set_index('ds', inplace=True)
forecast_no_holiday.set_index('ds', inplace=True)
compared_df = forecast.join(forecast_no_holiday, rsuffix="_no_holiday")
Explanation: Let's compare the two forecasts now. Note: I doubt there will be much difference in these models due to the small amount of data, but it's a good example to see the process. We'll set the indexes and then join the forecast dataframes into a new dataframe called 'compared_df'.
End of explanation
compared_df= np.exp(compared_df[['yhat', 'yhat_no_holiday']])
Explanation: We are only really interested in the yhat values, so let's remove all the rest and convert the logged values back to their original scale.
End of explanation
compared_df['diff_per'] = 100*(compared_df['yhat'] - compared_df['yhat_no_holiday']) / compared_df['yhat_no_holiday']
compared_df.tail()
compared_df['diff_per'].mean()
Explanation: Now, let's take the percentage difference and the average difference for the model with holidays vs that without.
End of explanation |
6,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Consistent models in DisMod-MR from Vivarium artifact draw
Take i, r, f, p from a Vivarium artifact, and make a consistent version of them. See how it compares to the original.
Step1: Consistent fit with all data
Let's start with a consistent fit of the simulated PD data. This includes data on prevalence, incidence, and SMR, and the assumption that remission rate is zero. All together this counts as four different data types in the DisMod-II accounting. | Python Code:
import numpy as np
import pandas as pd
np.random.seed(123456)
# if dismod_mr is not installed, it should be possible to use
# !conda install --yes pymc
# !pip install dismod_mr
import dismod_mr
# you also need one more pip installable package
# !pip install vivarium_public_health
import vivarium_public_health
Explanation: Consistent models in DisMod-MR from Vivarium artifact draw
Take i, r, f, p from a Vivarium artifact, and make a consistent version of them. See how it compares to the original.
End of explanation
from vivarium_public_health.dataset_manager import Artifact
art = Artifact('/share/costeffectiveness/artifacts/obesity/obesity.hdf')
art.keys
def format_for_dismod(df, data_type):
df = df.query('draw==0 and sex=="Female" and year_start==2017').copy()
df['data_type'] = data_type
df['area'] = 'all'
df['standard_error'] = 0.001
df['upper_ci'] = np.nan
df['lower_ci'] = np.nan
df['effective_sample_size'] = 10_000
df['sex'] = 'total'
df = df.rename({'age_group_start': 'age_start',
'age_group_end': 'age_end',}, axis=1)
return df
p = format_for_dismod(art.load('cause.ischemic_heart_disease.prevalence'), 'p')
i = format_for_dismod(art.load('cause.ischemic_heart_disease.incidence'), 'i')
f = format_for_dismod(art.load('cause.ischemic_heart_disease.excess_mortality'), 'f')
m_all = format_for_dismod(art.load('cause.all_causes.cause_specific_mortality'), 'm_all')
csmr = format_for_dismod(art.load('cause.ischemic_heart_disease.cause_specific_mortality'), 'csmr') # could also try 'pf'
dm = dismod_mr.data.ModelData()
dm.input_data = pd.concat([p, i, f, m_all,
csmr
], ignore_index=True)
for rate_type in 'ifr':
dm.set_knots(rate_type, [0,40,60,80,90,100])
dm.set_level_value('i', age_before=30, age_after=101, value=0)
dm.set_increasing('i', age_start=50, age_end=100)
dm.set_level_value('p', value=0, age_before=30, age_after=101)
dm.set_level_value('r', value=0, age_before=101, age_after=101)
dm.input_data.data_type.value_counts()
dm.setup_model(rate_model='normal', include_covariates=False)
import pymc as pm
m = pm.MAP(dm.vars)
%%time
m.fit(verbose=1)
from IPython.core.pylabtools import figsize
figsize(11, 5.5)
dm.plot()
!date
Explanation: Consistent fit with all data
Let's start with a consistent fit of the simulated PD data. This includes data on prevalence, incidence, and SMR, and the assumption that remission rate is zero. All together this counts as four different data types in the DisMod-II accounting.
End of explanation |
6,770 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Does scikit-learn provide facility to use SVM for regression, using a polynomial kernel (degree=2)? I looked at the APIs and I don't see any. Has anyone built a package on top of scikit-learn that does this? | Problem:
import numpy as np
import pandas as pd
import sklearn
X, y = load_data()
assert type(X) == np.ndarray
assert type(y) == np.ndarray
# fit, then predict X
from sklearn.svm import SVR
svr_poly = SVR(kernel='poly', degree=2)
svr_poly.fit(X, y)
predict = svr_poly.predict(X) |
6,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gippsland Basin Uncertainty Study
Step1: The Gippsland Basin Model
In this example we will apply the UncertaintyAnalysis class we have been playing with in the previous example to a 'realistic' (though highly simplified) geological model of the Gippsland Basin, a petroleum field south of Victoria, Australia. The model has been included as part of the PyNoddy directory, and can be found at pynoddy/examples/GBasin_Ve1_V4.his
Step2: While we could hard-code parameter variations here, it is much easier to store our statistical information in a csv file, so we load that instead. This file accompanies the GBasin_Ve1_V4 model in the pynoddy directory.
Step3: Generate randomised model realisations
Now we have all the information required to perform a Monte-Carlo based uncertainty analysis. In this example we will generate 100 model realisations and use them to estimate the information entropy of each voxel in the model, and hence visualise uncertainty. It is worth noting that in reality we would need to produce several thousand model realisations in order to adequately sample the model space; however, for convenience we only generate a small number of models here.
Step4: A few utility functions for visualising uncertainty have been included in the UncertaintyAnalysis class, and can be used to gain an understanding of the most uncertain parts of the Gippsland Basin. The probability voxets for each lithology can also be accessed using ua.p_block[lithology_id], and the information entropy voxset accessed using ua.e_block.
Note that the Gippsland Basin model has been computed with a vertical exaggeration of 3, in order to highlight vertical structure.
Step5: It is immediately apparent (and not particularly surprising) that uncertainty in the Gippsland Basin model is concentrated around the thin (but economically interesting) formations comprising the La Trobe and Strzelecki Groups. The faults in the model also contribute to this uncertainty, though not by a huge amount.
Exporting results to VTK for visualisation
It is also possible (and useful!) to export the uncertainty information to .vtk format for 3D analysis in software such as ParaView. This can be done as follows | Python Code:
from IPython.core.display import HTML
css_file = 'pynoddy.css'
HTML(open(css_file, "r").read())
%matplotlib inline
#import the usual libraries + the pynoddy UncertaintyAnalysis class
import sys, os, pynoddy
# from pynoddy.experiment.UncertaintyAnalysis import UncertaintyAnalysis
# adjust some settings for matplotlib
from matplotlib import rcParams
# print rcParams
rcParams['font.size'] = 15
# determine path of repository to set paths correctly below
repo_path = os.path.realpath('../..')
import pynoddy.history
import pynoddy.experiment.uncertainty_analysis
rcParams.update({'font.size': 20})
Explanation: Gippsland Basin Uncertainty Study
End of explanation
reload(pynoddy.history)
reload(pynoddy.output)
reload(pynoddy.experiment.uncertainty_analysis)
reload(pynoddy)
# the model itself is now part of the repository, in the examples directory:
history_file = os.path.join(repo_path, "examples/GBasin_Ve1_V4.his")
Explanation: The Gippsland Basin Model
In this example we will apply the UncertaintyAnalysis class we have been playing with in the previous example to a 'realistic' (though highly simplified) geological model of the Gippsland Basin, a petroleum field south of Victoria, Australia. The model has been included as part of the PyNoddy directory, and can be found at pynoddy/examples/GBasin_Ve1_V4.his
End of explanation
params = os.path.join(repo_path,"examples/gipps_params.csv")
Explanation: While we could hard-code parameter variations here, it is much easier to store our statistical information in a csv file, so we load that instead. This file accompanies the GBasin_Ve1_V4 model in the pynoddy directory.
End of explanation
# %%timeit # Uncomment to test execution time
ua = pynoddy.experiment.uncertainty_analysis.UncertaintyAnalysis(history_file, params)
ua.estimate_uncertainty(100,verbose=False)
Explanation: Generate randomised model realisations
Now we have all the information required to perform a Monte-Carlo based uncertainty analysis. In this example we will generate 100 model realisations and use them to estimate the information entropy of each voxel in the model, and hence visualise uncertainty. It is worth noting that in reality we would need to produce several thousand model realisations in order to adequately sample the model space; however, for convenience we only generate a small number of models here.
End of explanation
ua.plot_section(direction='x',data=ua.block)
ua.plot_entropy(direction='x')
Explanation: A few utility functions for visualising uncertainty have been included in the UncertaintyAnalysis class, and can be used to gain an understanding of the most uncertain parts of the Gippsland Basin. The probability voxets for each lithology can also be accessed using ua.p_block[lithology_id], and the information entropy voxset accessed using ua.e_block.
Note that the Gippsland Basin model has been computed with a vertical exaggeration of 3, in order to highlight vertical structure.
End of explanation
ua.extent_x = 29000
ua.extent_y = 21600
ua.extent_z = 4500
output_path = os.path.join(repo_path,"sandbox/GBasin_Uncertainty")
ua.export_to_vtk(vtk_filename=output_path,data=ua.e_block)
Explanation: It is immediately apparent (and not particularly surprising) that uncertainty in the Gippsland Basin model is concentrated around the thin (but economically interesting) formations comprising the La Trobe and Strzelecki Groups. The faults in the model also contribute to this uncertainty, though not by a huge amount.
Exporting results to VTK for visualisation
It is also possible (and useful!) to export the uncertainty information to .vtk format for 3D analysis in software such as ParaView. This can be done as follows:
End of explanation |
6,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spence/Garcia, What Were the Odds of That?
post @ endlesspint.com
Step1: Coming into the Fight
source
Step2: Fight Night
source
Step3: A tale of two jabs
Step4: What about contact? "Everyone's gotta plan until they get hit."
Step5: Both fighters had their rates affected but while Mikey fell off the chart, almost literally and in a bad sense, Spence's superior activity with the jab allowed him to set up and deliver the power punches, especially in volume over the second half of the bout
Step6: One more look at thrown punches | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
from scipy.stats import binom, poisson, zscore
Explanation: Spence/Garcia, What Were the Odds of That?
post @ endlesspint.com
End of explanation
np.random.seed(8)
sim_cnt_poi = 10000
spence_tot_poi, spence_jab_poi, spence_pow_poi = np.random.poisson(64, sim_cnt_poi), \
np.random.poisson(31, sim_cnt_poi), \
np.random.poisson(33, sim_cnt_poi)
garcia_tot_poi, garcia_jab_poi, garcia_pow_poi = np.random.poisson(53, sim_cnt_poi), \
np.random.poisson(32, sim_cnt_poi), \
np.random.poisson(21, sim_cnt_poi)
plt.figure(figsize=(16,6))
# SPENCE
ax1 = plt.subplot(231)
plt.hist(spence_tot_poi, 14, density=True, color='green')
plt.title("Spence Total Punches/Rd")
ax2 = plt.subplot(232, sharey=ax1)
plt.hist(spence_jab_poi, 14, density=True, color='green')
plt.title("Spence Jabs/Rd")
ax3 = plt.subplot(233, sharey=ax1)
plt.hist(spence_pow_poi, 14, density=True, color='green')
plt.title("Spence Power Punches/Rd")
# GARCIA
plt.subplot(234, sharex=ax1, sharey=ax1)
plt.hist(garcia_tot_poi, 14, density=True)
plt.title("Garcia Total Punches/Rd")
plt.subplot(235, sharex=ax2, sharey=ax1)
plt.hist(garcia_jab_poi, 14, density=True)
plt.title("Garcia Jabs/Rd")
plt.subplot(236, sharex=ax3, sharey=ax1)
plt.hist(garcia_pow_poi, 14, density=True)
plt.title("Garcia Power Punches/Rd")
plt.tight_layout()
Explanation: Coming into the Fight
source: https://www.boxingscene.com/errol-spence-vs-mikey-garcia-compubox-historical-review--137148
(mis)Using Poisson to Get an Idea of Expected Fighter Output
Spence could be expected to be busier, but by how much and where?
End of explanation
spence_total_thrown = [46, 55, 75, 89, 86, 95, 70, 102, 121, 125, 118, 100]
spence_jabs_thrown = [39, 43, 53, 61, 62, 56, 50, 60, 56, 58, 45, 35]
spence_power_thrown = [7, 12, 22, 28, 24, 39, 20, 42, 65, 67, 73, 65]
garcia_total_thrown = [12, 28, 31, 35, 41, 43, 26, 41, 32, 62, 15, 40]
garcia_jabs_thrown = [9, 13, 18, 20, 17, 19, 10, 22, 17, 25, 2, 16]
garcia_power_thrown = [3, 15, 13, 15, 24, 24, 16, 19, 15, 37, 13, 24]
Explanation: Fight Night
source: https://www.boxingscene.com/errol-spence-vs-mikey-garcia-compubox-punch-stats--137161
A look at actual thrown performance v expectations
End of explanation
def chance_of_throwing_GE(sim, act):
return [np.sum(sim >= x)/len(sim) for x in act]
spence_comp_thrown = [(spence_tot_poi, spence_total_thrown), (spence_jab_poi, spence_jabs_thrown), (spence_pow_poi, spence_power_thrown)]
garcia_comp_thrown = [(garcia_tot_poi, garcia_total_thrown), (garcia_jab_poi, garcia_jabs_thrown), (garcia_pow_poi, garcia_power_thrown)]
print("SPENCE")
for spence in spence_comp_thrown:
spence_perf = chance_of_throwing_GE(spence[0], spence[1])
print(spence_perf, np.mean(spence_perf))
print("\nGARCIA")
for garcia in garcia_comp_thrown:
garcia_perf = chance_of_throwing_GE(garcia[0], garcia[1])
print(garcia_perf, np.mean(garcia_perf))
Explanation: A tale of two jabs: Spence pumped out that lead left at a ridiculous rate, neutralizing Garcia while setting-up the rest of his own offense
End of explanation
s_plot = 1
fig = plt.figure(figsize=(16,6))
for stat in ((406, .298, "Garcia total", 75), (188, .209, "Garcia jabs", 21), (218, .431, "Garcia power", 54)):
n, p, category, N = stat[0], stat[1], stat[2], stat[3]
x = np.arange(binom.ppf(0.025, n, p),
binom.ppf(0.975, n, p))
sim_bouts = 10000
print("95 percent range for %s: \t" % category, int(np.min(x)), " - ", int(np.max(x)))
print("\t %s actually landed: \t" % category, N)
print("\t prob of landing this %s count or less:" % category,
"{0:.0f}%".format(np.sum(np.random.binomial(n, p, sim_bouts) < N)/float(sim_bouts) * 100))
ax = fig.add_subplot(1,3,s_plot)
ax.plot(x, binom.pmf(x, n, p), 'ro', ms=5, label='binom pmf')
plt.title(category)
ax.vlines(x, 0, binom.pmf(x, n, p), colors='r', lw=5, alpha=0.5)
ax.vlines(N, 0, np.max(binom.pmf(x, n, p)), lw=3, linestyles=":")
s_plot += 1
plt.tight_layout()
plt.show()
Explanation: What about contact? "Everyone's gotta plan until they get hit."
End of explanation
s_plot = 1
fig = plt.figure(figsize=(16,6))
for stat in ((1082, .343, "Spence total", 345), (618, .198, "Spence jabs", 108), (464, .480, "Spence power", 237)):
n, p, category, N = stat[0], stat[1], stat[2], stat[3]
x = np.arange(binom.ppf(0.025, n, p),
binom.ppf(0.975, n, p))
sim_bouts = 10000
print("95 percent range for %s: \t" % category, int(np.min(x)), " - ", int(np.max(x)))
print("\t %s actually landed: \t" % category, N)
print("\t prob of landing this %s count or less:" % category,
"{0:.0f}%".format(np.sum(np.random.binomial(n, p, sim_bouts) < N)/float(sim_bouts) * 100))
ax = fig.add_subplot(1,3,s_plot)
ax.plot(x, binom.pmf(x, n, p), 'go', ms=5, label='binom pmf')
plt.title(category)
ax.vlines(x, 0, binom.pmf(x, n, p), colors='g', lw=5, alpha=0.5)
ax.vlines(N, 0, np.max(binom.pmf(x, n, p)), lw=3, linestyles=":")
s_plot += 1
plt.tight_layout()
plt.show()
Explanation: Both fighters had their rates affected but while Mikey fell off the chart, almost literally and in a bad sense, Spence's superior activity with the jab allowed him to set up and deliver the power punches, especially in volume over the second half of the bout
End of explanation
def standardized_fight_night(sim_rds, poi_lambda, n_times, act):
act_rds = len(act)
fight_night_zscore = np.zeros(act_rds)
for i in range(n_times):
fight_night_zscore += zscore(np.append(np.random.poisson(poi_lambda, sim_rds), act))[-act_rds:]
return fight_night_zscore/n_times
spence_total_zscore = standardized_fight_night(60, 64, 100, spence_total_thrown)
spence_total_zscore
garcia_total_zscore = standardized_fight_night(60, 53, 100, garcia_total_thrown)
garcia_total_zscore
plt.plot(spence_total_zscore, garcia_total_zscore, 'go')
Explanation: One more look at thrown punches
End of explanation |
6,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GWAS Tutorial
This notebook is designed to provide a broad overview of Hail's functionality, with emphasis on the functionality to manipulate and query a genetic dataset. We walk through a genome-wide SNP association test, and demonstrate the need to control for confounding caused by population stratification.
Step1: If the above cell ran without error, we're ready to go!
Before using Hail, we import some standard Python libraries for use throughout the notebook.
Step2: Download public 1000 Genomes data
We use a small chunk of the public 1000 Genomes dataset, created by downsampling the genotyped SNPs in the full VCF to about 20 MB. We will also integrate sample and variant metadata from separate text files.
These files are hosted by the Hail team in a public Google Storage bucket; the following cell downloads that data locally.
Step3: Importing data from VCF
The data in a VCF file is naturally represented as a Hail MatrixTable. By first importing the VCF file and then writing the resulting MatrixTable in Hail's native file format, all downstream operations on the VCF's data will be MUCH faster.
Step4: Next we read the written file, assigning the variable mt (for matrix table).
Step5: Getting to know our data
It's important to have easy ways to slice, dice, query, and summarize a dataset. Some of this functionality is demonstrated below.
The rows method can be used to get a table with all the row fields in our MatrixTable.
We can use rows along with select to pull out 5 variants. The select method takes either a string referring to a field name in the table, or a Hail Expression. Here, we leave the arguments blank to keep only the row key fields, locus and alleles.
Use the show method to display the variants.
Step6: Alternatively
Step7: Here is how to peek at the first few sample IDs
Step8: To look at the first few genotype calls, we can use entries along with select and take. The take method collects the first n rows into a list. Alternatively, we can use the show method, which prints the first n rows to the console in a table format.
Try changing take to show in the cell below.
Step9: Adding column fields
A Hail MatrixTable can have any number of row fields and column fields for storing data associated with each row and column. Annotations are usually a critical part of any genetic study. Column fields are where you'll store information about sample phenotypes, ancestry, sex, and covariates. Row fields can be used to store information like gene membership and functional impact for use in QC or analysis.
In this tutorial, we demonstrate how to take a text file and use it to annotate the columns in a MatrixTable.
The file provided contains the sample ID, the population and "super-population" designations, the sample sex, and two simulated phenotypes (one binary, one discrete).
This file can be imported into Hail with import_table. This function produces a Table object. Think of this as a Pandas or R dataframe that isn't limited by the memory on your machine -- behind the scenes, it's distributed with Spark.
Step10: A good way to peek at the structure of a Table is to look at its schema.
Step11: To peek at the first few values, use the show method
Step12: Now we'll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we'll print the existing column schema
Step13: We use the annotate_cols method to join the table with the MatrixTable containing our dataset.
Step14: Query functions and the Hail Expression Language
Hail has a number of useful query functions that can be used for gathering statistics on our dataset. These query functions take Hail Expressions as arguments.
We will start by looking at some statistics of the information in our table. The aggregate method can be used to aggregate over rows of the table.
counter is an aggregation function that counts the number of occurrences of each unique element. We can use this to pull out the population distribution by passing in a Hail Expression for the field that we want to count by.
Step15: stats is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.
Step16: However, these metrics aren't perfectly representative of the samples in our dataset. Here's why
Step17: Since there are fewer samples in our dataset than in the full thousand genomes cohort, we need to look at annotations on the dataset. We can use aggregate_cols to get the metrics for only the samples in our dataset.
Step18: The functionality demonstrated in the last few cells isn't anything especially new
Step19: We can list the counts in descending order using Python's Counter class.
Step20: It's nice to see that we can actually uncover something biological from this small dataset
Step21: Quality Control
QC is where analysts spend most of their time with sequencing datasets. QC is an iterative process, and is different for every project
Step22: Plotting the QC metrics is a good place to start.
Step23: Often, these metrics are correlated.
Step24: Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter
Step25: Next is genotype QC. It's a good idea to filter out genotypes where the reads aren't where they should be
Step26: Variant QC is a bit more of the same
Step27: These statistics actually look pretty good
Step28: These filters removed about 15% of sites (we started with a bit over 10,000). This is NOT representative of most sequencing datasets! We have already downsampled the full thousand genomes dataset to include more common variants than we'd expect by chance.
In Hail, the association tests accept column fields for the sample phenotype and covariates. Since we've already got our phenotype of interest (caffeine consumption) in the dataset, we are good to go
Step29: Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.
Hail makes it easy to visualize results! Let's make a Manhattan plot
Step30: This doesn't look like much of a skyline. Let's check whether our GWAS was well controlled using a Q-Q (quantile-quantile) plot.
Step31: Confounded!
The observed p-values drift away from the expectation immediately. Either every SNP in our dataset is causally linked to caffeine consumption (unlikely), or there's a confounder.
We didn't tell you, but sample ancestry was actually used to simulate this phenotype. This leads to a stratified distribution of the phenotype. The solution is to include ancestry as a covariate in our regression.
The linear_regression_rows function can also take column fields to use as covariates. We already annotated our samples with reported ancestry, but it is good to be skeptical of these labels due to human error. Genomes don't have that problem! Instead of using reported ancestry, we will use genetic ancestry by including computed principal components in our model.
The pca function produces eigenvalues as a list and sample PCs as a Table, and can also produce variant loadings when asked. The hwe_normalized_pca function does the same, using HWE-normalized genotypes for the PCA.
Step32: Now that we've got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a 50MB sequencing dataset, we can recover the major human populations.
Step33: Now we can rerun our linear regression, controlling for sample sex and the first few principal components. We'll do this with input variable the number of alternate alleles as before, and again with input variable the genotype dosage derived from the PL field.
Step34: We'll first make a Q-Q plot to assess inflation...
Step35: That's more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot
Step36: We have found a caffeine consumption locus! Now simply apply Hail's Nature paper function to publish the result.
Just kidding, that function won't land until Hail 1.0!
Rare variant analysis
Here we'll demonstrate how one can use the expression language to group and count by any arbitrary properties in row and column fields. Hail also implements the sequence kernel association test (SKAT).
Step37: What if we want to group by minor allele frequency bin and hair color, and calculate the mean GQ?
Step38: We've shown that it's easy to aggregate by a couple of arbitrary statistics. These specific examples may not provide especially useful pieces of information, but the same pattern can be used to detect effects of rare variation | Python Code:
import hail as hl
hl.init()
Explanation: GWAS Tutorial
This notebook is designed to provide a broad overview of Hail's functionality, with emphasis on the functionality to manipulate and query a genetic dataset. We walk through a genome-wide SNP association test, and demonstrate the need to control for confounding caused by population stratification.
End of explanation
from hail.plot import show
from pprint import pprint
hl.plot.output_notebook()
Explanation: If the above cell ran without error, we're ready to go!
Before using Hail, we import some standard Python libraries for use throughout the notebook.
End of explanation
hl.utils.get_1kg('data/')
Explanation: Download public 1000 Genomes data
We use a small chunk of the public 1000 Genomes dataset, created by downsampling the genotyped SNPs in the full VCF to about 20 MB. We will also integrate sample and variant metadata from separate text files.
These files are hosted by the Hail team in a public Google Storage bucket; the following cell downloads that data locally.
End of explanation
hl.import_vcf('data/1kg.vcf.bgz').write('data/1kg.mt', overwrite=True)
Explanation: Importing data from VCF
The data in a VCF file is naturally represented as a Hail MatrixTable. By first importing the VCF file and then writing the resulting MatrixTable in Hail's native file format, all downstream operations on the VCF's data will be MUCH faster.
End of explanation
mt = hl.read_matrix_table('data/1kg.mt')
Explanation: Next we read the written file, assigning the variable mt (for matrix table).
End of explanation
mt.rows().select().show(5)
Explanation: Getting to know our data
It's important to have easy ways to slice, dice, query, and summarize a dataset. Some of this functionality is demonstrated below.
The rows method can be used to get a table with all the row fields in our MatrixTable.
We can use rows along with select to pull out 5 variants. The select method takes either a string referring to a field name in the table, or a Hail Expression. Here, we leave the arguments blank to keep only the row key fields, locus and alleles.
Use the show method to display the variants.
End of explanation
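For example, passing a field name keeps that field alongside the key; assuming the standard qual row field produced by the VCF import:
mt.rows().select('qual').show(5)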
mt.row_key.show(5)
Explanation: Alternatively:
End of explanation
mt.s.show(5)
Explanation: Here is how to peek at the first few sample IDs:
End of explanation
mt.entry.take(5)
Explanation: To look at the first few genotype calls, we can use entries along with select and take. The take method collects the first n rows into a list. Alternatively, we can use the show method, which prints the first n rows to the console in a table format.
Try changing take to show in the cell below.
End of explanation
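Following that suggestion, the show variant of the same peek is a one-line change:
mt.entry.show(5)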
table = (hl.import_table('data/1kg_annotations.txt', impute=True)
.key_by('Sample'))
Explanation: Adding column fields
A Hail MatrixTable can have any number of row fields and column fields for storing data associated with each row and column. Annotations are usually a critical part of any genetic study. Column fields are where you'll store information about sample phenotypes, ancestry, sex, and covariates. Row fields can be used to store information like gene membership and functional impact for use in QC or analysis.
In this tutorial, we demonstrate how to take a text file and use it to annotate the columns in a MatrixTable.
The file provided contains the sample ID, the population and "super-population" designations, the sample sex, and two simulated phenotypes (one binary, one discrete).
This file can be imported into Hail with import_table. This function produces a Table object. Think of this as a Pandas or R dataframe that isn't limited by the memory on your machine -- behind the scenes, it's distributed with Spark.
End of explanation
table.describe()
Explanation: A good way to peek at the structure of a Table is to look at its schema.
End of explanation
table.show(width=100)
Explanation: To peek at the first few values, use the show method:
End of explanation
print(mt.col.dtype)
Explanation: Now we'll use this table to add sample annotations to our dataset, storing the annotations in column fields in our MatrixTable. First, we'll print the existing column schema:
End of explanation
mt = mt.annotate_cols(pheno = table[mt.s])
mt.col.describe()
Explanation: We use the annotate_cols method to join the table with the MatrixTable containing our dataset.
End of explanation
pprint(table.aggregate(hl.agg.counter(table.SuperPopulation)))
Explanation: Query functions and the Hail Expression Language
Hail has a number of useful query functions that can be used for gathering statistics on our dataset. These query functions take Hail Expressions as arguments.
We will start by looking at some statistics of the information in our table. The aggregate method can be used to aggregate over rows of the table.
counter is an aggregation function that counts the number of occurrences of each unique element. We can use this to pull out the population distribution by passing in a Hail Expression for the field that we want to count by.
End of explanation
pprint(table.aggregate(hl.agg.stats(table.CaffeineConsumption)))
Explanation: stats is an aggregation function that produces some useful statistics about numeric collections. We can use this to see the distribution of the CaffeineConsumption phenotype.
End of explanation
table.count()
mt.count_cols()
Explanation: However, these metrics aren't perfectly representative of the samples in our dataset. Here's why:
End of explanation
mt.aggregate_cols(hl.agg.counter(mt.pheno.SuperPopulation))
pprint(mt.aggregate_cols(hl.agg.stats(mt.pheno.CaffeineConsumption)))
Explanation: Since there are fewer samples in our dataset than in the full thousand genomes cohort, we need to look at annotations on the dataset. We can use aggregate_cols to get the metrics for only the samples in our dataset.
End of explanation
snp_counts = mt.aggregate_rows(hl.agg.counter(hl.Struct(ref=mt.alleles[0], alt=mt.alleles[1])))
pprint(snp_counts)
Explanation: The functionality demonstrated in the last few cells isn't anything especially new: it's certainly not difficult to ask these questions with Pandas or R dataframes, or even Unix tools like awk. But Hail can use the same interfaces and query language to analyze collections that are much larger, like the set of variants.
Here we calculate the counts of each of the 12 possible unique SNPs (4 choices for the reference base * 3 choices for the alternate base).
To do this, we need to get the alternate allele of each variant and then count the occurrences of each unique ref/alt pair. This can be done with Hail's counter function.
End of explanation
from collections import Counter
counts = Counter(snp_counts)
counts.most_common()
Explanation: We can list the counts in descending order using Python's Counter class.
End of explanation
p = hl.plot.histogram(mt.DP, range=(0,30), bins=30, title='DP Histogram', legend='DP')
show(p)
Explanation: It's nice to see that we can actually uncover something biological from this small dataset: we see that these frequencies come in pairs. C/T and G/A are actually the same mutation, just viewed from opposite strands. Likewise, T/A and A/T are the same mutation on opposite strands. There's a 30x difference between the frequency of C/T and A/T SNPs. Why?
The same Python, R, and Unix tools could do this work as well, but we're starting to hit a wall - the latest gnomAD release publishes about 250 million variants, and that won't fit in memory on a single computer.
What about genotypes? Hail can query the collection of all genotypes in the dataset, and this is getting large even for our tiny dataset. Our 284 samples and 10,000 variants produce about 3 million unique genotypes. The gnomAD dataset has about 5 trillion unique genotypes.
Hail plotting functions allow Hail fields as arguments, so we can pass in the DP field directly here. If the range and bins arguments are not set, this function will compute the range based on minimum and maximum values of the field and use the default 50 bins.
End of explanation
mt.col.describe()
mt = hl.sample_qc(mt)
mt.col.describe()
Explanation: Quality Control
QC is where analysts spend most of their time with sequencing datasets. QC is an iterative process, and is different for every project: there is no "push-button" solution for QC. Each time the Broad collects a new group of samples, it finds new batch effects. However, by practicing open science and discussing the QC process and decisions with others, we can establish a set of best practices as a community.
QC is entirely based on the ability to understand the properties of a dataset. Hail attempts to make this easier by providing the sample_qc function, which produces a set of useful metrics and stores them in a column field.
End of explanation
p = hl.plot.histogram(mt.sample_qc.call_rate, range=(.88,1), legend='Call Rate')
show(p)
p = hl.plot.histogram(mt.sample_qc.gq_stats.mean, range=(10,70), legend='Mean Sample GQ')
show(p)
Explanation: Plotting the QC metrics is a good place to start.
End of explanation
p = hl.plot.scatter(mt.sample_qc.dp_stats.mean, mt.sample_qc.call_rate, xlabel='Mean DP', ylabel='Call Rate')
show(p)
Explanation: Often, these metrics are correlated.
End of explanation
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
print('After filter, %d/284 samples remain.' % mt.count_cols())
Explanation: Removing outliers from the dataset will generally improve association results. We can make arbitrary cutoffs and use them to filter:
End of explanation
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
fraction_filtered = mt.aggregate_entries(hl.agg.fraction(~filter_condition_ab))
print(f'Filtering {fraction_filtered * 100:.2f}% entries out of downstream analysis.')
mt = mt.filter_entries(filter_condition_ab)
Explanation: Next is genotype QC. It's a good idea to filter out genotypes where the reads aren't where they should be: if we find a genotype called homozygous reference with >10% alternate reads, a genotype called homozygous alternate with >10% reference reads, or a genotype called heterozygote without a ref / alt balance near 1:1, it is likely to be an error.
In a low-depth dataset like 1KG, it is hard to detect bad genotypes using this metric, since a read ratio of 1 alt to 10 reference can easily be explained by binomial sampling. However, in a high-depth dataset, a read ratio of 10:100 is a sure cause for concern!
End of explanation
mt = hl.variant_qc(mt)
mt.row.describe()
Explanation: Variant QC is a bit more of the same: we can use the variant_qc function to produce a variety of useful statistics, plot them, and filter.
End of explanation
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
mt = mt.filter_rows(mt.variant_qc.p_value_hwe > 1e-6)
print('Samples: %d Variants: %d' % (mt.count_cols(), mt.count_rows()))
Explanation: These statistics actually look pretty good: we don't need to filter this dataset. Most datasets require thoughtful quality control, though. The filter_rows method can help!
Let's do a GWAS!
First, we need to restrict to variants that are :
common (we'll use a cutoff of 1%)
not so far from Hardy-Weinberg equilibrium as to suggest sequencing error
End of explanation
gwas = hl.linear_regression_rows(y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0])
gwas.row.describe()
Explanation: These filters removed about 15% of sites (we started with a bit over 10,000). This is NOT representative of most sequencing datasets! We have already downsampled the full thousand genomes dataset to include more common variants than we'd expect by chance.
In Hail, the association tests accept column fields for the sample phenotype and covariates. Since we've already got our phenotype of interest (caffeine consumption) in the dataset, we are good to go:
End of explanation
p = hl.plot.manhattan(gwas.p_value)
show(p)
Explanation: Looking at the bottom of the above printout, you can see the linear regression adds new row fields for the beta, standard error, t-statistic, and p-value.
Hail makes it easy to visualize results! Let's make a Manhattan plot:
End of explanation
p = hl.plot.qq(gwas.p_value)
show(p)
Explanation: This doesn't look like much of a skyline. Let's check whether our GWAS was well controlled using a Q-Q (quantile-quantile) plot.
End of explanation
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
pprint(eigenvalues)
pcs.show(5, width=100)
Explanation: Confounded!
The observed p-values drift away from the expectation immediately. Either every SNP in our dataset is causally linked to caffeine consumption (unlikely), or there's a confounder.
We didn't tell you, but sample ancestry was actually used to simulate this phenotype. This leads to a stratified distribution of the phenotype. The solution is to include ancestry as a covariate in our regression.
The linear_regression_rows function can also take column fields to use as covariates. We already annotated our samples with reported ancestry, but it is good to be skeptical of these labels due to human error. Genomes don't have that problem! Instead of using reported ancestry, we will use genetic ancestry by including computed principal components in our model.
The pca function produces eigenvalues as a list and sample PCs as a Table, and can also produce variant loadings when asked. The hwe_normalized_pca function does the same, using HWE-normalized genotypes for the PCA.
End of explanation
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
p = hl.plot.scatter(mt.scores[0],
mt.scores[1],
label=mt.pheno.SuperPopulation,
title='PCA', xlabel='PC1', ylabel='PC2')
show(p)
Explanation: Now that we've got principal components per sample, we may as well plot them! Human history exerts a strong effect in genetic datasets. Even with a 50MB sequencing dataset, we can recover the major human populations.
End of explanation
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
Explanation: Now we can rerun our linear regression, controlling for sample sex and the first few principal components. We'll do this with the number of alternate alleles as the input variable, as before, and again with the genotype dosage derived from the PL field as the input variable.
End of explanation
p = hl.plot.qq(gwas.p_value)
show(p)
Explanation: We'll first make a Q-Q plot to assess inflation...
End of explanation
p = hl.plot.manhattan(gwas.p_value)
show(p)
Explanation: That's more like it! This shape is indicative of a well-controlled (but not especially well-powered) study. And now for the Manhattan plot:
End of explanation
entries = mt.entries()
results = (entries.group_by(pop = entries.pheno.SuperPopulation, chromosome = entries.locus.contig)
.aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))
results.show()
Explanation: We have found a caffeine consumption locus! Now simply apply Hail's Nature paper function to publish the result.
Just kidding, that function won't land until Hail 1.0!
Rare variant analysis
Here we'll demonstrate how one can use the expression language to group and count by any arbitrary properties in row and column fields. Hail also implements the sequence kernel association test (SKAT).
End of explanation
entries = entries.annotate(maf_bin = hl.cond(entries.info.AF[0]<0.01, "< 1%",
hl.cond(entries.info.AF[0]<0.05, "1%-5%", ">5%")))
results2 = (entries.group_by(af_bin = entries.maf_bin, purple_hair = entries.pheno.PurpleHair)
.aggregate(mean_gq = hl.agg.stats(entries.GQ).mean,
mean_dp = hl.agg.stats(entries.DP).mean))
results2.show()
Explanation: What if we want to group by minor allele frequency bin and hair color, and calculate the mean GQ?
End of explanation
table = hl.import_table('data/1kg_annotations.txt', impute=True).key_by('Sample')
mt = hl.read_matrix_table('data/1kg.mt')
mt = mt.annotate_cols(pheno = table[mt.s])
mt = hl.sample_qc(mt)
mt = mt.filter_cols((mt.sample_qc.dp_stats.mean >= 4) & (mt.sample_qc.call_rate >= 0.97))
ab = mt.AD[1] / hl.sum(mt.AD)
filter_condition_ab = ((mt.GT.is_hom_ref() & (ab <= 0.1)) |
(mt.GT.is_het() & (ab >= 0.25) & (ab <= 0.75)) |
(mt.GT.is_hom_var() & (ab >= 0.9)))
mt = mt.filter_entries(filter_condition_ab)
mt = hl.variant_qc(mt)
mt = mt.filter_rows(mt.variant_qc.AF[1] > 0.01)
eigenvalues, pcs, _ = hl.hwe_normalized_pca(mt.GT)
mt = mt.annotate_cols(scores = pcs[mt.s].scores)
gwas = hl.linear_regression_rows(
y=mt.pheno.CaffeineConsumption,
x=mt.GT.n_alt_alleles(),
covariates=[1.0, mt.pheno.isFemale, mt.scores[0], mt.scores[1], mt.scores[2]])
Explanation: We've shown that it's easy to aggregate by a couple of arbitrary statistics. These specific examples may not provide especially useful pieces of information, but this same pattern can be used to detect effects of rare variation (a small sketch follows the list below):
Count the number of heterozygous genotypes per gene by functional category (synonymous, missense, or loss-of-function) to estimate per-gene functional constraint
Count the number of singleton loss-of-function mutations per gene in cases and controls to detect genes involved in disease
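A minimal sketch of the first pattern, assuming hypothetical row annotations gene and consequence that are not present in this tutorial dataset:
# Hypothetical sketch: count het genotypes per gene and functional category.
# `gene` and `consequence` are assumed annotations, not fields of this dataset.
per_gene = (entries.group_by(gene = entries.gene, csq = entries.consequence)
            .aggregate(n_het = hl.agg.count_where(entries.GT.is_het())))
per_gene.show()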
Epilogue
Congrats! You've reached the end of the first tutorial. To learn more about Hail's API and functionality, take a look at the other tutorials. You can check out the Python API for documentation on additional Hail functions. If you use Hail for your own science, we'd love to hear from you on Zulip chat or the discussion forum.
For reference, here's the full workflow to all tutorial endpoints combined into one cell.
End of explanation |
6,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple Oscillator Example
This example shows the simplest way of using a solver.
We solve free vibration of a simple oscillator
Step2: We need a first order system, so convert the second order system
$$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$
into
$$\left\{ \begin{array}{l}
\dot u = v\\
\dot v = \ddot u = -\frac{ku}{m}
\end{array} \right.$$
You need to define a function that computes the right hand side of above equation
Step3: To solve the ODE you define an ode object, specify the solver to use, here cvode, and pass the right hand side function. You request the solution at specific timepoints by passing an array of times to the solve member.
Step4: You can continue the solver by passing further times. Calling the solve routine reinits the solver, so you can restart at whatever time. To continue from the last computed solution, pass the last obtained time and solution.
Note: The solver performs better if it can take into account history information, so avoid calling solve to continue computation! In general, you must check for errors using the errors output of solve.
Step5: The solution fails at a time around 24 seconds. Errors can be due to many things. Here, however, the reason is simple
Step6: To plot the simple oscillator, we show a (t,x) plot of the solution. Doing this over 60 seconds can be done as follows
Step7: Simple Oscillator Example
Step8: The solver interpolates solutions to return the solution at the required output times
Step9: Simple Oscillator Example
Step10: By inspection of the returned times you can see how efficiently the solver can solve this problem | Python Code:
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
from scikits.odes import ode
#data of the oscillator
k = 4.0
m = 1.0
#initial position and speed data on t=0, x[0] = u, x[1] = \dot{u}, xp = \dot{x}
initx = [1, 0.1]
Explanation: Simple Oscillator Example
This example shows the simplest way of using a solver.
We solve free vibration of a simple oscillator:
$$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$
using the CVODE solver. An analytical solution exists, given by
$$u(t) = u_0 \cos\left(\sqrt{\frac{k}{m}} t\right)+\frac{\dot{u}_0}{\sqrt{\frac{k}{m}}} \sin\left(\sqrt{\frac{k}{m}} t\right)$$
End of explanation
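For convenience, the analytic solution can be wrapped in a small helper (a sketch; it only reuses the k, m and initx values defined above) so the numeric output below is easy to cross-check:
def exact_u(t):
    # Analytic solution u(t) for the chosen k, m and initial conditions.
    w = np.sqrt(k/m)
    return initx[0]*np.cos(w*t) + initx[1]*np.sin(w*t)/w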
def rhseqn(t, x, xdot):
    """we create rhs equations for the problem"""
xdot[0] = x[1]
xdot[1] = - k/m * x[0]
Explanation: We need a first order system, so convert the second order system
$$m \ddot{u} + k u = 0,\quad u(0) = u_0,\quad \dot{u}(0) = \dot{u}_0$$
into
$$\left\{ \begin{array}{l}
\dot u = v\\
\dot v = \ddot u = -\frac{ku}{m}
\end{array} \right.$$
You need to define a function that computes the right hand side of above equation:
End of explanation
solver = ode('cvode', rhseqn, old_api=False)
solution = solver.solve([0., 1., 2.], initx)
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
Explanation: To solve the ODE you define an ode object, specify the solver to use, here cvode, and pass the right hand side function. You request the solution at specific timepoints by passing an array of times to the solve member.
End of explanation
#Solve over the next hour by continuation
times = np.linspace(0, 3600, 61)
times[0] = solution.values.t[-1]
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
Explanation: You can continue the solver by passing further times. Calling the solve routine reinits the solver, so you can restart at whatever time. To continue from the last computed solution, pass the last obtained time and solution.
Note: The solver performs better if it can take into account history information, so avoid calling solve to continue computation!
In general, you must check for errors using the errors output of solve.
End of explanation
solver = ode('cvode', rhseqn, old_api=False, max_steps=5000)
solution = solver.solve(times, solution.values.y[-1])
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
print ('Computed Solutions:')
print('\n t Solution Exact')
print('------------------------------------')
for t, u in zip(solution.values.t, solution.values.y):
print('{0:>4.0f} {1:15.6g} {2:15.6g}'.format(t, u[0],
initx[0]*np.cos(np.sqrt(k/m)*t)+initx[1]*np.sin(np.sqrt(k/m)*t)/np.sqrt(k/m)))
Explanation: The solution fails at a time around 24 seconds. Errors can be due to many things. Here, however, the reason is simple: we try to make too large a jump between requested output times. Increasing the number of internal steps the solver is allowed to take will fix this. This is the max_steps option of cvode:
End of explanation
# plot of the oscillator
solver = ode('cvode', rhseqn, old_api=False)
times = np.linspace(0,60,600)
solution = solver.solve(times, initx)
plt.plot(solution.values.t,[x[0] for x in solution.values.y])
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
Explanation: To plot the simple oscillator, we show a (t,x) plot of the solution. Doing this over 60 seconds can be done as follows:
End of explanation
solver = ode('cvode', rhseqn, old_api=False)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
time += 0.1
# fix roundoff error at end
if time > 60: time = 60
solution = solver.step(time)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if time >= 60:
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
Explanation: Simple Oscillator Example: Stepwise running
When using the solve method, you solve over a period of time that you decided on beforehand. In some problems you might want to advance the solution and decide, based on the output, when to stop. For that you use the step method. The same example as above can be solved with the step method as follows.
You define the ode object selecting the cvode solver. You initialize the solver with the begin time and initial conditions using init_step. You compute solutions going forward with the step method.
End of explanation
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
Explanation: The solver interpolates solutions to return the solution at the required output times:
End of explanation
solver = ode('cvode', rhseqn, old_api=False, one_step_compute=True)
time = 0.
solver.init_step(time, initx)
plott = []
plotx = []
while True:
solution = solver.step(60)
if solution.errors.t:
print ('Error: ', solution.message, 'Error at time', solution.errors.t)
break
#we store output for plotting
plott.append(solution.values.t)
plotx.append(solution.values.y[0])
if solution.values.t >= 60:
#back up to 60
solver.set_options(one_step_compute=False)
solution = solver.step(60)
plott[-1] = solution.values.t
plotx[-1] = solution.values.y[0]
break
plt.plot(plott,plotx)
plt.xlabel('Time [s]')
plt.ylabel('Position [m]')
plt.show()
Explanation: Simple Oscillator Example: Internal Solver Stepwise running
When using the solve method, you solve over a period of time you decided before. With the step method you solve by default towards a desired output time after which you can continue solving the problem.
For full control, you can also compute problems using the solver's internal steps. This is not advised, as the number of returned steps can be very large, slowing down the computation enormously. If you want this nevertheless, you can achieve it with the one_step_compute option. Like this:
End of explanation
print ('plott length:', len(plott), ', last computation times:', plott[-15:]);
Explanation: By inspection of the returned times you can see how efficiently the solver can solve this problem:
End of explanation |
6,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Low-latency item-to-item recommendation system - Orchestrating with TFX
Overview
This notebook is a part of the series that describes the process of implementing a Low-latency item-to-item recommendation system.
This notebook demonstrates how to use TFX and AI Platform Pipelines (Unified) to operationalize the workflow that creates embeddings and builds and deploys an ANN Service index.
In the notebook you go through the following steps.
Creating TFX custom components that encapsulate operations on BQ, BQML and ANN Service.
Creating a TFX pipeline that automates the processes of creating embeddings and deploying an ANN Index
Testing the pipeline locally using Beam runner.
Compiling the pipeline to the TFX IR format for execution on AI Platform Pipelines (Unified).
Submitting pipeline runs.
This notebook was designed to run on AI Platform Notebooks. Before running the notebook make sure that you have completed the setup steps as described in the README file.
TFX Pipeline Design
The diagram below depicts the TFX pipeline that you will implement in this notebook. Each step of the pipeline is implemented as a TFX Custom Python function component. The components track the relevant metadata in AI Platform (Unified) ML Metadata using both standard and custom metadata types.
The first step of the pipeline is to compute item co-occurrence. This is done by calling the sp_ComputePMI stored procedure created in the preceding notebooks.
Next, the BQML Matrix Factorization model is created. The model training code is encapsulated in the sp_TrainItemMatchingModel stored procedure.
Item embeddings are extracted from the trained model weights and stored in a BQ table. The component calls the sp_ExtractEmbeddings stored procedure that implements the extraction logic.
The embeddings are exported in the JSONL format to the GCS location using the BigQuery extract job.
The embeddings in the JSONL format are used to create an ANN index by calling the ANN Service Control Plane REST API.
Finally, the ANN index is deployed to an ANN endpoint.
All steps and their inputs and outputs are tracked in the AI Platform (Unified) ML Metadata service.
Step1: Setting up the notebook's environment
Install AI Platform Pipelines client library
For AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.
Step2: Restart the kernel.
Step3: Import notebook dependencies
Step4: Configure GCP environment
If you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section, by running
sh
gcloud auth login
in the Terminal window (which you can open via File > New in the menu). You only need to do this once per notebook instance.
Set the following constants to the values reflecting your environment
Step5: Defining custom components
In this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML and ANN Service calls. The components are TFX Custom Python function components.
Each component is created as a separate Python module. You also create a couple of helper modules that encapsulate Python functions and classes used across the custom components.
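For orientation, the custom Python function component pattern used throughout this section looks roughly like this (a minimal sketch only; the real components follow):
from tfx.dsl.component.experimental.annotations import OutputArtifact, Parameter
from tfx.dsl.component.experimental.decorators import component
from tfx.types.experimental.simple_artifacts import Dataset

@component
def example_component(
    greeting: Parameter[str],                # runtime parameter
    output_data: OutputArtifact[Dataset]):   # output tracked in ML Metadata
    # The component body runs arbitrary Python and records custom metadata.
    output_data.set_string_custom_property('note', greeting)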
Remove files created in the previous executions of the notebook
Step7: Define custom types for ANN service artifacts
This module defines a couple of custom TFX artifacts to track ANN Service indexes and index deployments.
Step22: Create a wrapper around ANN Service REST API
This module provides a convenience wrapper around the ANN Service REST API. In the experimental stage, the ANN Service does not have an "official" Python client SDK, nor is it supported by the Google Discovery API.
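The wrapper can also be used interactively, e.g. to inspect or clean up indexes (a sketch using only methods defined in this module; the id value is a placeholder):
# Sketch: ad-hoc use of the wrapper defined in ann_service.py.
index_client = IndexClient(PROJECT_ID, PROJECT_NUMBER, REGION)
print(index_client.list_indexes())                  # all indexes in the project
# index_client.delete_index(index_id='1234567890')  # clean up an old index by id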
Step24: Create Compute PMI component
This component encapsulates a call to the BigQuery stored procedure that calculates item co-occurrence. Refer to the preceding notebooks for more details about item co-occurrence calculations.
The component tracks the output item_cooc table created by the stored procedure using the TFX (simple) Dataset artifact.
Step26: Create Train Item Matching Model component
This component encapsulates a call to the BigQuery stored procedure that trains the BQML Matrix Factorization model. Refer to the preceding notebooks for more details about model training.
The component tracks the output item_matching_model BQML model created by the stored procedure using the TFX (simple) Model artifact.
Step28: Create Extract Embeddings component
This component encapsulates a call to the BigQuery stored procedure that extracts embeddings from the model to the staging table. Refer to the preceding notebooks for more details about embeddings extraction.
The component tracks the output item_embeddings table created by the stored procedure using the TFX (simple) Dataset artifact.
Step30: Create Export Embeddings component
This component encapsulates a BigQuery table extraction job that extracts the item_embeddings table to a GCS location as files in the JSONL format. The format of the extracted files is compatible with the ingestion schema for the ANN Service.
The component tracks the output files location in the TFX (simple) Dataset artifact.
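After a run, you can sanity-check the exported files directly (a sketch; adjust the GCS prefix to the embeddings_gcs_location you pass to the pipeline):
import json
import tensorflow as tf

# Peek at the first exported JSONL record; the filename pattern matches the component.
files = tf.io.gfile.glob(f'gs://{BUCKET_NAME}/embeddings/embedding-*.json')
with tf.io.gfile.GFile(files[0]) as f:
    print(json.loads(f.readline()))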
Step32: Create ANN index component
This component encapsulates the calls to the ANN Service to create an ANN Index.
The component tracks the created index in the TFX custom ANNIndex artifact.
Step34: Deploy ANN index component
This component deploys an ANN index to an ANN Endpoint.
The component tracks the deployed index in the TFX custom DeployedANNIndex artifact.
Step36: Creating a TFX pipeline
The pipeline automates the process of preparing item embeddings (in BigQuery), training a Matrix Factorization model (in BQML), and creating and deploying an ANN Service index.
The pipeline has a simple sequential flow. The pipeline accepts a set of runtime parameters that define GCP environment settings and embeddings and index assembly parameters.
Step37: Testing the pipeline locally
You will first run the pipeline locally using the Beam runner.
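In essence, local execution hands the assembled pipeline object to the Beam runner (a sketch; it assumes the object returned by ann_pipeline is bound to a variable named pipeline):
# Runs every component in-process, in topological order.
BeamDagRunner().run(pipeline)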
Clean the metadata and artifacts from the previous runs
Step38: Set pipeline parameters and create the pipeline
Step39: Start the run
Step40: Inspect produced metadata
During the execution of the pipeline, the inputs and outputs of each component have been tracked in ML Metadata.
Step41: NOTICE. The following code does not work with ANN Service Experimental. It will be finalized when the service moves to the Preview stage.
Running the pipeline on AI Platform Pipelines
You will now run the pipeline on AI Platform Pipelines (Unified)
Package custom components into a container
The modules containing custom components must first be packaged as a Docker container image, which is a derivative of the standard TFX image.
Create a Dockerfile
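A minimal Dockerfile for this purpose typically derives from the public TFX image and puts the component modules on the Python path (a sketch; pin the base image tag to the TFX version you installed and adjust paths as needed):
%%writefile Dockerfile
FROM gcr.io/tfx-oss-public/tfx:0.25.0
WORKDIR /pipeline
COPY ./ ./
ENV PYTHONPATH="/pipeline:${PYTHONPATH}"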
Step42: Build and push the docker image to Container Registry
Step43: Create AI Platform Pipelines client
Step44: Set the parameters for AIPP execution and create the pipeline
Step45: Compile the pipeline
Step46: Submit the pipeline run | Python Code:
%load_ext autoreload
%autoreload 2
Explanation: Low-latency item-to-item recommendation system - Orchestrating with TFX
Overview
This notebook is a part of the series that describes the process of implementing a Low-latency item-to-item recommendation system.
This notebook demonstrates how to use TFX and AI Platform Pipelines (Unified) to operationalize the workflow that creates embeddings and builds and deploys an ANN Service index.
In the notebook you go through the following steps.
Creating TFX custom components that encapsulate operations on BQ, BQML and ANN Service.
Creating a TFX pipeline that automates the processes of creating embeddings and deploying an ANN Index
Testing the pipeline locally using Beam runner.
Compiling the pipeline to the TFX IR format for execution on AI Platform Pipelines (Unified).
Submitting pipeline runs.
This notebook was designed to run on AI Platform Notebooks. Before running the notebook make sure that you have completed the setup steps as described in the README file.
TFX Pipeline Design
The diagram below depicts the TFX pipeline that you will implement in this notebook. Each step of the pipeline is implemented as a TFX Custom Python function component. The components track the relevant metadata in AI Platform (Unified) ML Metadata using both standard and custom metadata types.
The first step of the pipeline is to compute item co-occurrence. This is done by calling the sp_ComputePMI stored procedure created in the preceding notebooks.
Next, the BQML Matrix Factorization model is created. The model training code is encapsulated in the sp_TrainItemMatchingModel stored procedure.
Item embeddings are extracted from the trained model weights and stored in a BQ table. The component calls the sp_ExtractEmbeddings stored procedure that implements the extraction logic.
The embeddings are exported in the JSONL format to the GCS location using the BigQuery extract job.
The embeddings in the JSONL format are used to create an ANN index by calling the ANN Service Control Plane REST API.
Finally, the ANN index is deployed to an ANN endpoint.
All steps and their inputs and outputs are tracked in the AI Platform (Unified) ML Metadata service.
End of explanation
AIP_CLIENT_WHEEL = "aiplatform_pipelines_client-0.1.0.caip20201123-py3-none-any.whl"
AIP_CLIENT_WHEEL_GCS_LOCATION = (
f"gs://cloud-aiplatform-pipelines/releases/20201123/{AIP_CLIENT_WHEEL}"
)
!gsutil cp {AIP_CLIENT_WHEEL_GCS_LOCATION} {AIP_CLIENT_WHEEL}
%pip install {AIP_CLIENT_WHEEL}
Explanation: Setting up the notebook's environment
Install AI Platform Pipelines client library
For AI Platform Pipelines (Unified), which is in the Experimental stage, you need to download and install the AI Platform client library on top of the KFP and TFX SDKs that were installed as part of the initial environment setup.
End of explanation
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel.
End of explanation
import logging
import tensorflow as tf
import tfx
from aiplatform.pipelines import client
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
print("TFX Version: ", tfx.__version__)
Explanation: Import notebook dependencies
End of explanation
PROJECT_ID = "jk-mlops-dev" # <---CHANGE THIS
PROJECT_NUMBER = "895222332033" # <---CHANGE THIS
API_KEY = "AIzaSyBS_RiaK3liaVthTUD91XuPDKIbiwDFlV8" # <---CHANGE THIS
USER = "user" # <---CHANGE THIS
BUCKET_NAME = "jk-ann-staging" # <---CHANGE THIS
VPC_NAME = "default" # <---CHANGE THIS IF USING A DIFFERENT VPC
REGION = "us-central1"
PIPELINE_NAME = "ann-pipeline-{}".format(USER)
PIPELINE_ROOT = "gs://{}/pipeline_root/{}".format(BUCKET_NAME, PIPELINE_NAME)
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
print("PIPELINE_ROOT: {}".format(PIPELINE_ROOT))
Explanation: Configure GCP environment
If you're on AI Platform Notebooks, authenticate with Google Cloud before running the next section, by running
sh
gcloud auth login
in the Terminal window (which you can open via File > New in the menu). You only need to do this once per notebook instance.
Set the following constants to the values reflecting your environment:
PROJECT_ID - your GCP project ID
PROJECT_NUMBER - your GCP project number
BUCKET_NAME - a name of the GCS bucket that will be used to host artifacts created by the pipeline
PIPELINE_NAME_SUFFIX - a suffix appended to the standard pipeline name. You can change to differentiate between pipelines from different users in a classroom environment
API_KEY - a GCP API key
VPC_NAME - a name of the GCP VPC to use for the index deployments.
REGION - a compute region. Don't change the default - us-central - while the ANN Service is in the experimental stage
End of explanation
component_folder = "bq_components"
if tf.io.gfile.exists(component_folder):
print("Removing older file")
tf.io.gfile.rmtree(component_folder)
print("Creating component folder")
tf.io.gfile.mkdir(component_folder)
%cd {component_folder}
Explanation: Defining custom components
In this section of the notebook you define a set of custom TFX components that encapsulate BQ, BQML and ANN Service calls. The components are TFX Custom Python function components.
Each component is created as a separate Python module. You also create a couple of helper modules that encapsulate Python functions and classes used across the custom components.
Remove files created in the previous executions of the notebook
End of explanation
%%writefile ann_types.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Custom types for managing ANN artifacts."""
from tfx.types import artifact
class ANNIndex(artifact.Artifact):
TYPE_NAME = 'ANNIndex'
class DeployedANNIndex(artifact.Artifact):
TYPE_NAME = 'DeployedANNIndex'
Explanation: Define custom types for ANN service artifacts
This module defines a couple of custom TFX artifacts to track ANN Service indexes and index deployments.
End of explanation
%%writefile ann_service.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helper classes encapsulating ANN Service REST API."""
import datetime
import logging
import json
import time
import google.auth
class ANNClient(object):
    """Base ANN Service client."""
def __init__(self, project_id, project_number, region):
credentials, _ = google.auth.default()
self.authed_session = google.auth.transport.requests.AuthorizedSession(credentials)
self.ann_endpoint = f'{region}-aiplatform.googleapis.com'
self.ann_parent = f'https://{self.ann_endpoint}/v1alpha1/projects/{project_id}/locations/{region}'
self.project_id = project_id
self.project_number = project_number
self.region = region
def wait_for_completion(self, operation_id, message, sleep_time):
        """Waits for a completion of a long running operation."""
api_url = f'{self.ann_parent}/operations/{operation_id}'
start_time = datetime.datetime.utcnow()
while True:
response = self.authed_session.get(api_url)
if response.status_code != 200:
raise RuntimeError(response.json())
if 'done' in response.json().keys():
logging.info('Operation completed!')
break
elapsed_time = datetime.datetime.utcnow() - start_time
logging.info('{}. Elapsed time since start: {}.'.format(
message, str(elapsed_time)))
time.sleep(sleep_time)
return response.json()['response']
class IndexClient(ANNClient):
    """Encapsulates a subset of control plane APIs
    that manage ANN indexes."""
def __init__(self, project_id, project_number, region):
super().__init__(project_id, project_number, region)
def create_index(self, display_name, description, metadata):
        """Creates an ANN Index."""
api_url = f'{self.ann_parent}/indexes'
request_body = {
'display_name': display_name,
'description': description,
'metadata': metadata
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def list_indexes(self, display_name=None):
        """Lists all indexes with a given display name or
        all indexes if the display_name is not provided."""
if display_name:
api_url = f'{self.ann_parent}/indexes?filter=display_name="{display_name}"'
else:
api_url = f'{self.ann_parent}/indexes'
response = self.authed_session.get(api_url).json()
return response['indexes'] if response else []
def delete_index(self, index_id):
        """Deletes an ANN index."""
api_url = f'{self.ann_parent}/indexes/{index_id}'
response = self.authed_session.delete(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
class IndexDeploymentClient(ANNClient):
    """Encapsulates a subset of control plane APIs
    that manage ANN endpoints and deployments."""
def __init__(self, project_id, project_number, region):
super().__init__(project_id, project_number, region)
def create_endpoint(self, display_name, vpc_name):
        """Creates an ANN endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints'
network_name = f'projects/{self.project_number}/global/networks/{vpc_name}'
request_body = {
'display_name': display_name,
'network': network_name
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def list_endpoints(self, display_name=None):
        """Lists all ANN endpoints with a given display name or
        all endpoints in the project if the display_name is not provided."""
if display_name:
api_url = f'{self.ann_parent}/indexEndpoints?filter=display_name="{display_name}"'
else:
api_url = f'{self.ann_parent}/indexEndpoints'
response = self.authed_session.get(api_url).json()
return response['indexEndpoints'] if response else []
def delete_endpoint(self, endpoint_id):
        """Deletes an ANN endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
response = self.authed_session.delete(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
return response.json()
def create_deployment(self, display_name, deployment_id, endpoint_id, index_id):
        """Deploys an ANN index to an endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:deployIndex'
index_name = f'projects/{self.project_number}/locations/{self.region}/indexes/{index_id}'
request_body = {
'deployed_index': {
'id': deployment_id,
'index': index_name,
'display_name': display_name
}
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
operation_id = response.json()['name'].split('/')[-1]
return operation_id
def get_deployment_grpc_ip(self, endpoint_id, deployment_id):
        """Returns a private IP address for a gRPC interface to
        an Index deployment."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}'
response = self.authed_session.get(api_url)
if response.status_code != 200:
raise RuntimeError(response.text)
endpoint_ip = None
if 'deployedIndexes' in response.json().keys():
for deployment in response.json()['deployedIndexes']:
if deployment['id'] == deployment_id:
endpoint_ip = deployment['privateEndpoints']['matchGrpcAddress']
return endpoint_ip
def delete_deployment(self, endpoint_id, deployment_id):
        """Undeploys an index from an endpoint."""
api_url = f'{self.ann_parent}/indexEndpoints/{endpoint_id}:undeployIndex'
request_body = {
'deployed_index_id': deployment_id
}
response = self.authed_session.post(api_url, data=json.dumps(request_body))
if response.status_code != 200:
raise RuntimeError(response.text)
return response
Explanation: Create a wrapper around ANN Service REST API
This module provides a convenience wrapper around the ANN Service REST API. In the experimental stage, the ANN Service does not have an "official" Python client SDK, nor is it supported by the Google Discovery API.
End of explanation
%%writefile compute_pmi.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BigQuery compute PMI component."""
import logging
from google.cloud import bigquery
import tfx
import tensorflow as tf
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset as BQDataset
@component
def compute_pmi(
project_id: Parameter[str],
bq_dataset: Parameter[str],
min_item_frequency: Parameter[int],
max_group_size: Parameter[int],
item_cooc: OutputArtifact[BQDataset]):
stored_proc = f'{bq_dataset}.sp_ComputePMI'
query = f'''
DECLARE min_item_frequency INT64;
DECLARE max_group_size INT64;
SET min_item_frequency = {min_item_frequency};
SET max_group_size = {max_group_size};
CALL {stored_proc}(min_item_frequency, max_group_size);
'''
result_table = 'item_cooc'
logging.info(f'Starting computing PMI...')
client = bigquery.Client(project=project_id)
query_job = client.query(query)
query_job.result() # Wait for the job to complete
logging.info(f'Items PMI computation completed. Output in {bq_dataset}.{result_table}.')
# Write the location of the output table to metadata.
item_cooc.set_string_custom_property('table_name',
f'{project_id}:{bq_dataset}.{result_table}')
Explanation: Create Compute PMI component
This component encapsulates a call to the BigQuery stored procedure that calculates item co-occurrence. Refer to the preceding notebooks for more details about item co-occurrence calculations.
The component tracks the output item_cooc table created by the stored procedure using the TFX (simple) Dataset artifact.
End of explanation
%%writefile train_item_matching.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BigQuery train item matching model component."""
import logging
from google.cloud import bigquery
import tfx
import tensorflow as tf
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset as BQDataset
from tfx.types.standard_artifacts import Model as BQModel
@component
def train_item_matching_model(
project_id: Parameter[str],
bq_dataset: Parameter[str],
dimensions: Parameter[int],
item_cooc: InputArtifact[BQDataset],
bq_model: OutputArtifact[BQModel]):
item_cooc_table = item_cooc.get_string_custom_property('table_name')
stored_proc = f'{bq_dataset}.sp_TrainItemMatchingModel'
query = f'''
DECLARE dimensions INT64 DEFAULT {dimensions};
CALL {stored_proc}(dimensions);
'''
model_name = 'item_matching_model'
    logging.info(f'Using item co-occurrence table: {item_cooc_table}')
logging.info(f'Starting training of the model...')
client = bigquery.Client(project=project_id)
query_job = client.query(query)
query_job.result()
logging.info(f'Model training completed. Output in {bq_dataset}.{model_name}.')
# Write the location of the model to metadata.
bq_model.set_string_custom_property('model_name',
f'{project_id}:{bq_dataset}.{model_name}')
Explanation: Create Train Item Matching Model component
This component encapsulates a call to the BigQuery stored procedure that trains the BQML Matrix Factorization model. Refer to the preceding notebooks for more details about model training.
The component tracks the output item_matching_model BQML model created by the stored procedure using the TFX (simple) Model artifact.
End of explanation
%%writefile extract_embeddings.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Extracts embeddings to a BQ table."""
import logging
from google.cloud import bigquery
import tfx
import tensorflow as tf
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset as BQDataset
from tfx.types.standard_artifacts import Model as BQModel
@component
def extract_embeddings(
project_id: Parameter[str],
bq_dataset: Parameter[str],
bq_model: InputArtifact[BQModel],
item_embeddings: OutputArtifact[BQDataset]):
embedding_model_name = bq_model.get_string_custom_property('model_name')
stored_proc = f'{bq_dataset}.sp_ExractEmbeddings'
query = f'''
CALL {stored_proc}();
'''
embeddings_table = 'item_embeddings'
logging.info(f'Extracting item embeddings from: {embedding_model_name}')
client = bigquery.Client(project=project_id)
query_job = client.query(query)
query_job.result() # Wait for the job to complete
logging.info(f'Embeddings extraction completed. Output in {bq_dataset}.{embeddings_table}')
# Write the location of the output table to metadata.
item_embeddings.set_string_custom_property('table_name',
f'{project_id}:{bq_dataset}.{embeddings_table}')
Explanation: Create Extract Embeddings component
This component encapsulates a call to the BigQuery stored procedure that extracts embeddings from the model to the staging table. Refer to the preceding notebooks for more details about embeddings extraction.
The component tracks the output item_embeddings table created by the stored procedure using the TFX (simple) Dataset artifact.
End of explanation
%%writefile export_embeddings.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Exports embeddings from a BQ table to a GCS location."""
import logging
from google.cloud import bigquery
import tfx
import tensorflow as tf
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset
BQDataset = Dataset
@component
def export_embeddings(
project_id: Parameter[str],
gcs_location: Parameter[str],
item_embeddings_bq: InputArtifact[BQDataset],
item_embeddings_gcs: OutputArtifact[Dataset]):
filename_pattern = 'embedding-*.json'
gcs_location = gcs_location.rstrip('/')
destination_uri = f'{gcs_location}/{filename_pattern}'
_, table_name = item_embeddings_bq.get_string_custom_property('table_name').split(':')
logging.info(f'Exporting item embeddings from: {table_name}')
bq_dataset, table_id = table_name.split('.')
client = bigquery.Client(project=project_id)
dataset_ref = bigquery.DatasetReference(project_id, bq_dataset)
table_ref = dataset_ref.table(table_id)
job_config = bigquery.job.ExtractJobConfig()
job_config.destination_format = bigquery.DestinationFormat.NEWLINE_DELIMITED_JSON
extract_job = client.extract_table(
table_ref,
destination_uris=destination_uri,
job_config=job_config
)
    extract_job.result() # Wait for results
logging.info(f'Embeddings export completed. Output in {gcs_location}')
# Write the location of the embeddings to metadata.
item_embeddings_gcs.uri = gcs_location
Explanation: Create Export Embeddings component
This component encapsulates a BigQuery table extraction job that extracts the item_embeddings table to a GCS location as files in the JSONL format. The format of the extracted files is compatible with the ingestion schema for the ANN Service.
The component tracks the output files location in the TFX (simple) Dataset artifact.
End of explanation
%%writefile create_index.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Creates an ANN index."""
import logging
import google.auth
import numpy as np
import tfx
import tensorflow as tf
from google.cloud import bigquery
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset
from ann_service import IndexClient
from ann_types import ANNIndex
NUM_NEIGHBOURS = 10
MAX_LEAVES_TO_SEARCH = 200
METRIC = 'DOT_PRODUCT_DISTANCE'
FEATURE_NORM_TYPE = 'UNIT_L2_NORM'
CHILD_NODE_COUNT = 1000
APPROXIMATE_NEIGHBORS_COUNT = 50
@component
def create_index(
project_id: Parameter[str],
project_number: Parameter[str],
region: Parameter[str],
display_name: Parameter[str],
dimensions: Parameter[int],
item_embeddings: InputArtifact[Dataset],
ann_index: OutputArtifact[ANNIndex]):
index_client = IndexClient(project_id, project_number, region)
logging.info('Creating index:')
logging.info(f' Index display name: {display_name}')
logging.info(f' Embeddings location: {item_embeddings.uri}')
index_description = display_name
index_metadata = {
'contents_delta_uri': item_embeddings.uri,
'config': {
'dimensions': dimensions,
'approximate_neighbors_count': APPROXIMATE_NEIGHBORS_COUNT,
'distance_measure_type': METRIC,
'feature_norm_type': FEATURE_NORM_TYPE,
'tree_ah_config': {
'child_node_count': CHILD_NODE_COUNT,
'max_leaves_to_search': MAX_LEAVES_TO_SEARCH
}
}
}
operation_id = index_client.create_index(display_name,
index_description,
index_metadata)
response = index_client.wait_for_completion(operation_id, 'Waiting for ANN index', 45)
index_name = response['name']
logging.info('Index {} created.'.format(index_name))
# Write the index name to metadata.
ann_index.set_string_custom_property('index_name',
index_name)
ann_index.set_string_custom_property('index_display_name',
display_name)
Explanation: Create ANN index component
This component encapsulates the calls to the ANN Service to create an ANN Index.
The component tracks the created index in the TFX custom ANNIndex artifact.
End of explanation
%%writefile deploy_index.py
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Deploys an ANN index."""
import logging
import numpy as np
import uuid
import tfx
import tensorflow as tf
from google.cloud import bigquery
from tfx.dsl.component.experimental.decorators import component
from tfx.dsl.component.experimental.annotations import InputArtifact, OutputArtifact, Parameter
from tfx.types.experimental.simple_artifacts import Dataset
from ann_service import IndexDeploymentClient
from ann_types import ANNIndex
from ann_types import DeployedANNIndex
@component
def deploy_index(
project_id: Parameter[str],
project_number: Parameter[str],
region: Parameter[str],
vpc_name: Parameter[str],
deployed_index_id_prefix: Parameter[str],
ann_index: InputArtifact[ANNIndex],
deployed_ann_index: OutputArtifact[DeployedANNIndex]
):
deployment_client = IndexDeploymentClient(project_id,
project_number,
region)
index_name = ann_index.get_string_custom_property('index_name')
index_display_name = ann_index.get_string_custom_property('index_display_name')
endpoint_display_name = f'Endpoint for {index_display_name}'
logging.info(f'Creating endpoint: {endpoint_display_name}')
operation_id = deployment_client.create_endpoint(endpoint_display_name, vpc_name)
response = deployment_client.wait_for_completion(operation_id, 'Waiting for endpoint', 30)
endpoint_name = response['name']
logging.info(f'Endpoint created: {endpoint_name}')
endpoint_id = endpoint_name.split('/')[-1]
index_id = index_name.split('/')[-1]
deployed_index_display_name = f'Deployed {index_display_name}'
deployed_index_id = deployed_index_id_prefix + str(uuid.uuid4())
logging.info(f'Creating deployed index: {deployed_index_id}')
logging.info(f' from: {index_name}')
operation_id = deployment_client.create_deployment(
deployed_index_display_name,
deployed_index_id,
endpoint_id,
index_id)
response = deployment_client.wait_for_completion(operation_id, 'Waiting for deployment', 60)
logging.info('Index deployed!')
deployed_index_ip = deployment_client.get_deployment_grpc_ip(
endpoint_id, deployed_index_id
)
# Write the deployed index properties to metadata.
deployed_ann_index.set_string_custom_property('endpoint_name',
endpoint_name)
deployed_ann_index.set_string_custom_property('deployed_index_id',
deployed_index_id)
deployed_ann_index.set_string_custom_property('index_name',
index_name)
deployed_ann_index.set_string_custom_property('deployed_index_grpc_ip',
deployed_index_ip)
Explanation: Deploy ANN index component
This component deploys an ANN index to an ANN Endpoint.
The component tracks the deployed index in the TFX custom DeployedANNIndex artifact.
End of explanation
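For reference, a downstream step that declares deployed_ann_index as an InputArtifact[DeployedANNIndex] could read the connection details back from the custom properties written above. This is an illustrative sketch; the property names are the ones set by deploy_index, while the surrounding component is assumed:
```
import logging

endpoint_name = deployed_ann_index.get_string_custom_property('endpoint_name')
deployed_index_id = deployed_ann_index.get_string_custom_property('deployed_index_id')
grpc_ip = deployed_ann_index.get_string_custom_property('deployed_index_grpc_ip')
logging.info(f'Query deployed index {deployed_index_id} at {grpc_ip} '
             f'via endpoint {endpoint_name}')
```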
import os
from compute_pmi import compute_pmi
from create_index import create_index
from deploy_index import deploy_index
from export_embeddings import export_embeddings
from extract_embeddings import extract_embeddings
from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner
# Only required for local run.
from tfx.orchestration.beam.beam_dag_runner import BeamDagRunner
from tfx.orchestration.metadata import sqlite_metadata_connection_config
from tfx.orchestration.pipeline import Pipeline
from train_item_matching import train_item_matching_model
def ann_pipeline(
pipeline_name,
pipeline_root,
metadata_connection_config,
project_id,
project_number,
region,
vpc_name,
bq_dataset_name,
min_item_frequency,
max_group_size,
dimensions,
embeddings_gcs_location,
index_display_name,
deployed_index_id_prefix,
) -> Pipeline:
"""Implements the SCANN training pipeline."""
pmi_computer = compute_pmi(
project_id=project_id,
bq_dataset=bq_dataset_name,
min_item_frequency=min_item_frequency,
max_group_size=max_group_size,
)
bqml_trainer = train_item_matching_model(
project_id=project_id,
bq_dataset=bq_dataset_name,
item_cooc=pmi_computer.outputs.item_cooc,
dimensions=dimensions,
)
embeddings_extractor = extract_embeddings(
project_id=project_id,
bq_dataset=bq_dataset_name,
bq_model=bqml_trainer.outputs.bq_model,
)
embeddings_exporter = export_embeddings(
project_id=project_id,
gcs_location=embeddings_gcs_location,
item_embeddings_bq=embeddings_extractor.outputs.item_embeddings,
)
index_constructor = create_index(
project_id=project_id,
project_number=project_number,
region=region,
display_name=index_display_name,
dimensions=dimensions,
item_embeddings=embeddings_exporter.outputs.item_embeddings_gcs,
)
index_deployer = deploy_index(
project_id=project_id,
project_number=project_number,
region=region,
vpc_name=vpc_name,
deployed_index_id_prefix=deployed_index_id_prefix,
ann_index=index_constructor.outputs.ann_index,
)
components = [
pmi_computer,
bqml_trainer,
embeddings_extractor,
embeddings_exporter,
index_constructor,
index_deployer,
]
return Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
# Only needed for local runs.
metadata_connection_config=metadata_connection_config,
components=components,
)
Explanation: Creating a TFX pipeline
The pipeline automates the process of preparing item embeddings (in BigQuery), training a Matrix Factorization model (in BQML), and creating and deploying an ANN Service index.
The pipeline has a simple sequential flow. It accepts a set of runtime parameters that define GCP environment settings, as well as embeddings and index assembly parameters.
End of explanation
pipeline_root = f"/tmp/{PIPELINE_NAME}"
local_mlmd_folder = "/tmp/mlmd"
if tf.io.gfile.exists(pipeline_root):
print("Removing previous artifacts...")
tf.io.gfile.rmtree(pipeline_root)
if tf.io.gfile.exists(local_mlmd_folder):
print("Removing local mlmd SQLite...")
tf.io.gfile.rmtree(local_mlmd_folder)
print("Creating mlmd directory: ", local_mlmd_folder)
tf.io.gfile.mkdir(local_mlmd_folder)
print("Creating pipeline root folder: ", pipeline_root)
tf.io.gfile.mkdir(pipeline_root)
Explanation: Testing the pipeline locally
You will first run the pipeline locally using the Beam runner.
Clean the metadata and artifacts from the previous runs
End of explanation
bq_dataset_name = "song_embeddings"
index_display_name = "Song embeddings"
deployed_index_id_prefix = "deployed_song_embeddings_"
min_item_frequency = 15
max_group_size = 100
dimensions = 50
embeddings_gcs_location = f"gs://{BUCKET_NAME}/embeddings"
metadata_connection_config = sqlite_metadata_connection_config(
os.path.join(local_mlmd_folder, "metadata.sqlite")
)
pipeline = ann_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=pipeline_root,
metadata_connection_config=metadata_connection_config,
project_id=PROJECT_ID,
project_number=PROJECT_NUMBER,
region=REGION,
vpc_name=VPC_NAME,
bq_dataset_name=bq_dataset_name,
index_display_name=index_display_name,
deployed_index_id_prefix=deployed_index_id_prefix,
min_item_frequency=min_item_frequency,
max_group_size=max_group_size,
dimensions=dimensions,
embeddings_gcs_location=embeddings_gcs_location,
)
Explanation: Set pipeline parameters and create the pipeline
End of explanation
logging.getLogger().setLevel(logging.INFO)
BeamDagRunner().run(pipeline)
Explanation: Start the run
End of explanation
from ml_metadata import metadata_store
from ml_metadata.proto import metadata_store_pb2
connection_config = metadata_store_pb2.ConnectionConfig()
connection_config.sqlite.filename_uri = os.path.join(
local_mlmd_folder, "metadata.sqlite"
)
connection_config.sqlite.connection_mode = 3 # READWRITE_OPENCREATE
store = metadata_store.MetadataStore(connection_config)
store.get_artifacts()
Explanation: Inspect produced metadata
During the execution of the pipeline, the inputs and outputs of each component have been tracked in ML Metadata.
End of explanation
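A few other queries on the same store are often useful, for example listing the registered artifact types and the recorded executions. This is a sketch; the 'ANNIndex' type name assumes the custom artifact type shown earlier in this notebook:
```
# List every artifact type registered during the run.
for artifact_type in store.get_artifact_types():
    print(artifact_type.name)

# Executions correspond to the tracked component runs.
print('Executions recorded:', len(store.get_executions()))

# Artifacts of a specific custom type can be fetched by type name.
ann_index_artifacts = store.get_artifacts_by_type('ANNIndex')
```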
%%writefile Dockerfile
FROM gcr.io/tfx-oss-public/tfx:0.25.0
WORKDIR /pipeline
COPY ./ ./
ENV PYTHONPATH="/pipeline:${PYTHONPATH}"
Explanation: NOTICE. The following code does not work with ANN Service Experimental. It will be finalized when the service moves to the Preview stage.
Running the pipeline on AI Platform Pipelines
You will now run the pipeline on AI Platform Pipelines (Unified)
Package custom components into a container
The modules containing custom components must first be packaged as a Docker container image, which is a derivative of the standard TFX image.
Create a Dockerfile
End of explanation
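The COPY ./ ./ step only works if the component modules imported by the pipeline are present in the build context. A small sanity check, with the file list inferred from the imports used in this notebook:
```
import pathlib

expected = [
    'ann_service.py', 'ann_types.py', 'compute_pmi.py', 'create_index.py',
    'deploy_index.py', 'export_embeddings.py', 'extract_embeddings.py',
    'train_item_matching.py',
]
missing = [name for name in expected if not pathlib.Path(name).exists()]
print('Missing modules:', missing or 'none')
```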
!gcloud builds submit --tag gcr.io/{PROJECT_ID}/caip-tfx-custom:{USER} .
Explanation: Build and push the docker image to Container Registry
End of explanation
from aiplatform.pipelines import client
aipp_client = client.Client(project_id=PROJECT_ID, region=REGION, api_key=API_KEY)
Explanation: Create AI Platform Pipelines client
End of explanation
metadata_connection_config = None
pipeline_root = PIPELINE_ROOT
pipeline = ann_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=pipeline_root,
metadata_connection_config=metadata_connection_config,
project_id=PROJECT_ID,
project_number=PROJECT_NUMBER,
region=REGION,
vpc_name=VPC_NAME,
bq_dataset_name=bq_dataset_name,
index_display_name=index_display_name,
deployed_index_id_prefix=deployed_index_id_prefix,
min_item_frequency=min_item_frequency,
max_group_size=max_group_size,
dimensions=dimensions,
embeddings_gcs_location=embeddings_gcs_location,
)
Explanation: Set the parameters for AIPP execution and create the pipeline
End of explanation
config = kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(
project_id=PROJECT_ID,
display_name=PIPELINE_NAME,
default_image="gcr.io/{}/caip-tfx-custom:{}".format(PROJECT_ID, USER),
)
runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(
config=config, output_filename="pipeline.json"
)
runner.compile(pipeline, write_out=True)
Explanation: Compile the pipeline
End of explanation
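Before submitting, you can optionally peek at the generated job spec. The exact schema is defined by the KFP v2 pipeline IR, so this sketch only prints the top-level keys:
```
import json

with open('pipeline.json') as f:
    job_spec = json.load(f)

print(list(job_spec.keys()))
```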
aipp_client.create_run_from_job_spec("pipeline.json")
Explanation: Submit the pipeline run
End of explanation |
6,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparison of bowtie, bowtie2, and kallisto
Step1: Data preparation
We will use some of Ben's Sjögren's data for this. We will generate a random sample of 1 million reads from the full data set.
Prepare data with Snakemake
bash
snakemake -s aligners.snakefile
It appears that kallisto needs at least 51 bases of the reference to successfully align most of the reads. Must be some kind of off-by-one issue with the data structures.
Load alignments
Step2: Bowtie2 vs kallisto
Step3: How many reads do bowtie2 and kallisto agree on?
Step4: For the minority of reads they disagree on, what do they look like
Step5: Mostly lower sensitivity of kallisto due to indels in the read. Specifically, out of
Step6: discordant reads, the number where kallisto failed to map is
Step7: or as a fraction
Step8: Are there any cases where bowtie2 fails to align
Step9: Which means there are no cases where bowtie2 and kallisto align to different peptides.
Step10: What do examples look like of kallisto aligning and bowtie2 not?
Step11: Looks like there is a perfect match to a prefix and the latter part of the read doesn't match
```
read AAATCCACCATTGTGAAGCAGATGAAGATCATTCATGGTTACTCAGAGCA
ref AAATCCACCATTGTGAAGCAGATGAAGATCATTCATAAAAATGGTTACTCA
read GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTACAATCCGCTTTCCA
ref GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTCCTCCCACACCAGACT
```
Bowtie vs kallisto
Step12: How many reads do bowtie and kallisto agree on?
Step13: For the minority of reads they disagree on, what do they look like
Step14: Looks like many disagreements, but probably still few disagreements on a positive mapping.
Step15: discordant reads, the number where kallisto failed to map is
Step16: and the number where bowtie failed is
Step17: which means there are no disagreements on mapping. kallisto appears to be somewhat higher sensitivity.
Quantitation | Python Code:
import pandas as pd
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
Explanation: Comparison of bowtie, bowtie2, and kallisto
End of explanation
names = ['QNAME', 'FLAG', 'RNAME', 'POS', 'MAPQ', 'CIGAR', 'RNEXT', 'PNEXT', 'TLEN', 'SEQ', 'QUAL']
bowtie_alns = pd.read_csv('alns/bowtie-51mer.aln', sep='\t', header=None, usecols=list(range(11)), names=names)
bowtie2_alns = pd.read_csv('alns/bowtie2-51mer.aln', sep='\t', header=None, usecols=list(range(11)), names=names)
kallisto_alns = pd.read_csv('alns/kallisto-51mer.sam', sep='\t', header=None, usecols=list(range(11)), names=names, comment='@')
(bowtie_alns.RNAME != '*').sum() / len(bowtie_alns)
(bowtie2_alns.RNAME != '*').sum() / len(bowtie2_alns)
(kallisto_alns.RNAME != '*').sum() / len(kallisto_alns)
Explanation: Data preparation
We will use some of Ben's Sjögren's data for this. We will generate a random sample of 1 million reads from the full data set.
Prepare data with Snakemake
bash
snakemake -s aligners.snakefile
It appears that kallisto needs at least 51 bases of the reference to successfully align most of the reads. Must be some kind of off-by-one issue with the data structures.
Load alignments
End of explanation
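The three mapping rates computed above are easier to compare side by side; a small convenience summary built from the frames already loaded:
```
mapping_rates = pd.DataFrame({
    'aligner': ['bowtie', 'bowtie2', 'kallisto'],
    'mapped_fraction': [
        (bowtie_alns.RNAME != '*').mean(),
        (bowtie2_alns.RNAME != '*').mean(),
        (kallisto_alns.RNAME != '*').mean(),
    ],
})
mapping_rates
```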
bt2_k_joined = pd.merge(bowtie2_alns, kallisto_alns, how='inner', on='QNAME', suffixes=['_bt2', '_k'])
Explanation: Bowtie2 vs kallisto
End of explanation
(bt2_k_joined.RNAME_bt2 == bt2_k_joined.RNAME_k).sum()
Explanation: How many reads do bowtie2 and kallisto agree on?
End of explanation
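The same agreement expressed as a fraction of all joined reads:
```
(bt2_k_joined.RNAME_bt2 == bt2_k_joined.RNAME_k).mean()
```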
bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k
Explanation: For the minority of reads they disagree on, what do they look like
End of explanation
(bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k).sum()
Explanation: Mostly lower sensitivity of kallisto due to indels in the read. Specifically, out of
End of explanation
(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k == '*').sum()
Explanation: discordant reads, the number where kallisto failed to map is
End of explanation
(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k == '*').sum() / (bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k).sum()
Explanation: or as a fraction
End of explanation
(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_bt2 == '*').sum()
Explanation: Are there any cases where bowtie2 fails to align
End of explanation
((bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k) & (bt2_k_joined.RNAME_bt2 != '*') & (bt2_k_joined.RNAME_k != '*')).sum()
Explanation: Which means there are no cases where bowtie2 and kallisto align to different peptides.
End of explanation
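A compact way to summarise the same information is a cross-tabulation of the mapped/unmapped status of each read under the two aligners:
```
pd.crosstab(
    bt2_k_joined.RNAME_bt2 == '*',
    bt2_k_joined.RNAME_k == '*',
    rownames=['bowtie2 unmapped'],
    colnames=['kallisto unmapped'],
)
```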
bt2_k_joined[(bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k) & (bt2_k_joined.RNAME_bt2 == '*')]
Explanation: What do examples look like of kallisto aligning and bowtie2 not?
End of explanation
bt_k_joined = pd.merge(bowtie_alns, kallisto_alns, how='inner', on='QNAME', suffixes=['_bt', '_k'])
Explanation: Looks like there is a perfect match to a prefix and the latter part of the read doesn't match
```
read AAATCCACCATTGTGAAGCAGATGAAGATCATTCATGGTTACTCAGAGCA
ref AAATCCACCATTGTGAAGCAGATGAAGATCATTCATAAAAATGGTTACTCA
read GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTACAATCCGCTTTCCA
ref GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTCCTCCCACACCAGACT
```
Bowtie vs kallisto
End of explanation
(bt_k_joined.RNAME_bt == bt_k_joined.RNAME_k).sum()
Explanation: How many reads do bowtie and kallisto agree on?
End of explanation
bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k][['RNAME_bt', 'RNAME_k']]
Explanation: For the minority of reads they disagree on, what do they look like
End of explanation
(bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k).sum()
Explanation: Looks like many disagreements, but probably still few disagreements on a positive mapping.
End of explanation
(bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k].RNAME_k == '*').sum()
Explanation: discordant reads, the number where kallisto failed to map is
End of explanation
(bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k].RNAME_bt == '*').sum()
Explanation: and the number where bowtie failed is
End of explanation
bowtie_counts = pd.read_csv('counts/bowtie-51mer.tsv', sep='\t', header=0, names=['id', 'input', 'output'])
bowtie2_counts = pd.read_csv('counts/bowtie2-51mer.tsv', sep='\t', header=0, names=['id', 'input', 'output'])
kallisto_counts = pd.read_csv('counts/kallisto-51mer.tsv', sep='\t', header=0)
fig, ax = plt.subplots()
_ = ax.hist(bowtie_counts.output, bins=100, log=True)
_ = ax.set(title='bowtie')
fig, ax = plt.subplots()
_ = ax.hist(bowtie2_counts.output, bins=100, log=True)
_ = ax.set(title='bowtie2')
fig, ax = plt.subplots()
_ = ax.hist(kallisto_counts.est_counts, bins=100, log=True)
_ = ax.set(title='kallisto')
bt2_k_counts = pd.merge(bowtie2_counts, kallisto_counts, how='inner', left_on='id', right_on='target_id')
fig, ax = plt.subplots()
ax.scatter(bt2_k_counts.output, bt2_k_counts.est_counts)
sp.stats.pearsonr(bt2_k_counts.output, bt2_k_counts.est_counts)
sp.stats.spearmanr(bt2_k_counts.output, bt2_k_counts.est_counts)
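# For completeness (sketch): repeat the same comparison for bowtie vs kallisto
# using the count tables loaded above.
bt_k_counts = pd.merge(bowtie_counts, kallisto_counts, how='inner', left_on='id', right_on='target_id')
sp.stats.spearmanr(bt_k_counts.output, bt_k_counts.est_counts)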
Explanation: which means there are no disagreements on mapping. kallisto appears to be somewhat higher sensitivity.
Quantitation
End of explanation |
6,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-2', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-2
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
6,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
None of this is being used. Most of it was abandoned. Just some brainstorming on how to approach presenting the data and what metrics to use.
Change the csv file to have a column label for the various obstacles and points so it can be indexed into more easily.
Change index_col to correspond.
Finish graphs.
Step1: 9 defenses
Low Bar
ALLIANCE selected
Audience selected
ALLIANCE selected
ALLIANCE selected
Data structure choices include
Step2: Analysis Functions | Python Code:
def test():
print "testing to see if migrated repository works"
def dash():
print "-" * 20
# import libraries
import pandas as pd
import matplotlib.pyplot as plot
import matplotlib
matplotlib.style.use('ggplot')
import random as rng
import numpy as np
%matplotlib inline
# take a url of the csv or can read the csv locally into a pandas data frame
data = pd.read_csv("robodummy.csv", index_col=0)
Explanation: None of this is being used. Most of it was abandoned. Just some brainstorming on how to approach presenting the data and what metrics to use.
Change the csv file to have a column label for the various obstacles and points so it can be indexed into more easily.
Change index_col to correspond.
Finish graphs.
End of explanation
# Object oriented approach, would have to feed csv data into objects
# maybe get rid of this and just use library analysis tools
class Robot(object):
def __init__(self, name, alliance, auto_points, points):
self.name = name
self.alliance = alliance
self.auto_points = auto_points
self.points = points
def points_per_sec(self):
return self.points / 150
def auto_points_per_sec(self):
return self.auto_points / 15
def get_name(self):
return self.name
def get_alliance(self):
return self.alliance
data
# needs to be changed to align with new csv formatting
def analyze(dataframe, team):
total_points = dataframe[team]['Points'] + dataframe[team]['Auto Points']
cumulative_success_rate = 4
pps = dataframe[team]['Points'] / 150
auto_pps = dataframe[team]['Auto Points'] / 15
# return a data frame instead
return(total_points, pps, auto_pps)
stuff = analyze(data, 'Cougar Tech')
print stuff
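# Hypothetical refinement addressing the "return a data frame instead" TODO above:
# same inputs and row labels as analyze(), but the metrics come back as a labelled pandas Series.
def analyze_series(dataframe, team):
    return pd.Series({
        'total_points': dataframe[team]['Points'] + dataframe[team]['Auto Points'],
        'points_per_sec': dataframe[team]['Points'] / 150,
        'auto_points_per_sec': dataframe[team]['Auto Points'] / 15,
    }, name=team)
# Example usage (assumes the same robodummy.csv layout): analyze_series(data, 'Cougar Tech')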
Explanation: 9 defenses
Low Bar
ALLIANCE selected
Audience selected
ALLIANCE selected
ALLIANCE selected
Data structure choices include:
- Pandas dataframes
- Numpy Arrays
- Object oriented
- Dictionary
End of explanation
data.plot.bar()
print data.shape[0]
data.iloc[1, 1]  # positional peek at a single cell (the original data[1][1] assumed an integer column label)
n_groups = data.shape[0]
fig, ax = plot.subplots()
index = np.arange(n_groups)
bar_width = 0.35
opacity = 0.4
error_config = {'ecolor': '0.3'}
rects1 = plot.bar(index, data["Cougar Tech"], bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Men')
plot.xlabel('Group')
plot.ylabel('Scores')
plot.title('Scores by group and gender')
plot.xticks(index + bar_width, data.index)  # label each bar with its row label from the CSV
plot.legend()
plot.tight_layout()
plot.show()
Explanation: Analysis Functions:
End of explanation |
6,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Dataset
In this section we inspect the dataset, split it into a training and a test set, and prepare it for easy consumption with PyTorch-based data loaders. Model construction and training will be done in the next section.
Step2: Divide into train and test
We use the DataLoader object from PyTorch to build batches from the test data set.
However, we first need to specify how much history to use in creating a forecast of a given length
Step3: We've also found that it is not necessary to train on the full dataset, so here we select a 10% random sample of time series for training. We will evaluate on the full dataset later.
Step4: We follow Torchvision in processing examples using Transforms chained together by Compose.
ToTensor creates a tensor of the example.
LogTransform takes the natural logarithm of the targets after adding the offset (similar to torch.log1p).
RemoveLast subtracts the final value in the lookback from both lookback and horizon.
Target specifies which index in the array to forecast.
We need to perform these transformations to have input features that are of unit scale. If the input features are not of unit scale (i.e., of O(1)) for all features, the optimizer won't be able to find an optimum due to blow-ups in the gradient calculations.
Step5: TimeSeriesDataset inherits from Torch Datasets for use with Torch DataLoader. It handles the creation of the examples used to train the network using lookback and horizon to partition the time series.
The parameter 'step' controls how far apart consecutive windowed samples from a time series are spaced. For example, for a time series of length 100 and a setup with lookback 24 and horizon 12, we split the original time series into smaller training examples of length 24+12=36. How much these examples overlap is controlled by the parameter step in TimeSeriesDataset.
Step6: Modeling and Forecasting
Temporal Convolutions
The network architecture used here is based on ideas related to WaveNet. We employ the same architecture with a few modifications (e.g., a fully connected output layer for vector forecasts). It turns out that we do not need many layers in this example to achieve state-of-the-art results, most likely because of the simple autoregressive nature of the data.
In many ways, a temporal convolutional architecture is among the simplest possible architectures that we could employ using neural networks. In our approach, every layer has the same number of convolutional filters and uses residual connections.
When it comes to loss functions, we use the log-likelihood of probability distributions from the torch.distributions module. This means that if one supplies a normal distribution, the likelihood of the transformed data is modeled as coming from a normal distribution.
Step7: Evaluation
Before any evaluation score can be calculated, we load the held out test data.
Step8: We need to transform the output forecasts. The output from the forecaster is of the form (n_samples, n_time_series, n_variables, n_timesteps).
This means that a point forecast needs to be calculated from the samples, for example, by taking the mean or the median.
Step9: We calculate the symmetric MAPE. | Python Code:
import numpy as np
import os
import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
import torch
from torch.utils.data import DataLoader
from deep4cast.forecasters import Forecaster
from deep4cast.models import WaveNet
from deep4cast.datasets import TimeSeriesDataset
import deep4cast.transforms as transforms
import deep4cast.metrics as metrics
# Make RNG predictable
np.random.seed(0)
torch.manual_seed(0)
# Use a gpu if available, otherwise use cpu
device = ('cuda' if torch.cuda.is_available() else 'cpu')
%matplotlib inline
Explanation: Tutorial: M4 Daily
This notebook is designed to give a simple introduction to forecasting using the Deep4Cast package. The time series data is taken from the M4 dataset, specifically, the Daily subset of the data.
End of explanation
if not os.path.exists('data/Daily-train.csv'):
!wget https://raw.githubusercontent.com/M4Competition/M4-methods/master/Dataset/Train/Daily-train.csv -P data/
if not os.path.exists('data/Daily-test.csv'):
!wget https://raw.githubusercontent.com/M4Competition/M4-methods/master/Dataset/Test/Daily-test.csv -P data/
data_arr = pd.read_csv('data/Daily-train.csv')
data_arr = data_arr.iloc[:, 1:].values
data_arr = list(data_arr)
for i, ts in enumerate(data_arr):
data_arr[i] = ts[~np.isnan(ts)][None, :]
Explanation: Dataset
In this section we inspect the dataset, split it into a training and a test set, and prepare it for easy consumption with PyTorch-based data loaders. Model construction and training will be done in the next section.
End of explanation
horizon = 14
lookback = 128
Explanation: Divide into train and test
We use the DataLoader object from PyTorch to build batches from the test data set.
However, we first need to specify how much history to use in creating a forecast of a given length:
- horizon = time steps to forecast
- lookback = time steps leading up to the period to be forecast
End of explanation
import random
data_train = []
for time_series in data_arr:
data_train.append(time_series[:, :-horizon],)
data_train = random.sample(data_train, int(len(data_train) * 0.1))
Explanation: We've also found that it is not necessary to train on the full dataset, so here we select a 10% random sample of time series for training. We will evaluate on the full dataset later.
End of explanation
transform = transforms.Compose([
transforms.ToTensor(),
transforms.LogTransform(targets=[0], offset=1.0),
transforms.RemoveLast(targets=[0]),
transforms.Target(targets=[0]),
])
Explanation: We follow Torchvision in processing examples using Transforms chained together by Compose.
ToTensor creates a tensor of the example.
LogTransform takes the natural logarithm of the targets after adding the offset (similar to torch.log1p).
RemoveLast subtracts the final value in the lookback from both lookback and horizon.
Target specifies which index in the array to forecast.
We need to perform these transformations to have input features that are of unit scale. If the input features are not of unit scale (i.e., of O(1)) for all features, the optimizer won't be able to find an optimum due to blow-ups in the gradient calculations.
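For intuition, here is a tiny standalone illustration of the combined effect (it mimics the transforms above but does not use the library itself):
import numpy as np
window = np.array([100., 110., 121.])   # toy lookback values
z = np.log(window + 1.0)                # LogTransform with offset=1.0
z = z - z[-1]                           # RemoveLast: subtract the final lookback value
# z is now roughly [-0.19, -0.09, 0.0], i.e. O(1) and centred on the last observation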
End of explanation
data_train = TimeSeriesDataset(
data_train,
lookback,
horizon,
step=1,
transform=transform
)
# Create mini-batch data loader
dataloader_train = DataLoader(
data_train,
batch_size=512,
shuffle=True,
pin_memory=True,
num_workers=1
)
Explanation: TimeSeriesDataset inherits from Torch Datasets for use with Torch DataLoader. It handles the creation of the examples used to train the network using lookback and horizon to partition the time series.
The parameter 'step' controls how far apart consecutive windowed samples from a time series are spaced. For example, for a time series of length 100 and a setup with lookback 24 and horizon 12, we split the original time series into smaller training examples of length 24+12=36. How much these examples overlap is controlled by the parameter step in TimeSeriesDataset.
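As a rough guide (assuming a simple sliding window; the library may treat the boundary slightly differently), the number of examples produced from a series of length T is (T - (lookback + horizon)) // step + 1, so for T=100 with lookback 24 and horizon 12 this gives 65 heavily overlapping examples at step=1 but only 2 non-overlapping ones at step=36.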
End of explanation
# Define the model architecture
model = WaveNet(input_channels=1,
output_channels=1,
horizon=horizon,
hidden_channels=89,
skip_channels=199,
n_layers=7)
print('Number of model parameters: {}.'.format(model.n_parameters))
print('Receptive field size: {}.'.format(model.receptive_field_size))
# Enable multi-gpu if available
if torch.cuda.device_count() > 1:
print('Using {} GPUs.'.format(torch.cuda.device_count()))
model = torch.nn.DataParallel(model)
# .. and the optimizer
optim = torch.optim.Adam(model.parameters(), lr=0.0008097436666349985)
# .. and the loss
loss = torch.distributions.StudentT
# Fit the forecaster
forecaster = Forecaster(model, loss, optim, n_epochs=5, device=device)
forecaster.fit(dataloader_train, eval_model=True)
Explanation: Modeling and Forecasting
Temporal Convolutions
The network architecture used here is based on ideas related to WaveNet. We employ the same architecture with a few modifications (e.g., a fully connected output layer for vector forecasts). It turns out that we do not need many layers in this example to achieve state-of-the-art results, most likely because of the simple autoregressive nature of the data.
In many ways, a temporal convolutional architecture is among the simplest possible architectures that we could employ using neural networks. In our approach, every layer has the same number of convolutional filters and uses residual connections.
When it comes to loss functions, we use the log-likelihood of probability distributions from the torch.distributions module. This means that if one supplies a normal distribution, the likelihood of the transformed data is modeled as coming from a normal distribution.
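For reference, if the layers follow the standard WaveNet dilation pattern (kernel size 2 with the dilation doubling at each layer), the receptive field is $(k-1)(2^{L}-1)+1$ for kernel size $k$ and $L$ layers, i.e. $2^{L}$ here; for the 7-layer model used in this notebook that would be 128 time steps, matching the lookback chosen earlier. This is an assumption about the implementation, and the receptive_field_size reported by the model itself is the authoritative value.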
End of explanation
data_train = pd.read_csv('data/Daily-train.csv')
data_test = pd.read_csv('data/Daily-test.csv')
data_train = data_train.iloc[:, 1:].values
data_test = data_test.iloc[:, 1:].values
data_arr = []
for ts_train, ts_test in zip(data_train, data_test):
ts_a = ts_train[~np.isnan(ts_train)]
ts_b = ts_test
ts = np.concatenate([ts_a, ts_b])[None, :]
data_arr.append(ts)
# Sequentialize the training and testing dataset
data_test = []
for time_series in data_arr:
data_test.append(time_series[:, -horizon-lookback:])
data_test = TimeSeriesDataset(
data_test,
lookback,
horizon,
step=1,
transform=transform
)
dataloader_test = DataLoader(
data_test,
batch_size=1024,
shuffle=False,
num_workers=2
)
Explanation: Evaluation
Before any evaluation score can be calculated, we load the held out test data.
End of explanation
# Get time series of actuals for the testing period
y_test = []
for example in dataloader_test:
example = dataloader_test.dataset.transform.untransform(example)
y_test.append(example['y'])
y_test = np.concatenate(y_test)
# Get corresponding predictions
y_samples = forecaster.predict(dataloader_test, n_samples=100)
Explanation: We need to transform the output forecasts. The output from the forecaster is of the form (n_samples, n_time_series, n_variables, n_timesteps).
This means that a point forecast needs to be calculated from the samples, for example, by taking the mean or the median.
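A minimal sketch (assuming y_samples is a NumPy array; the sample dimension is axis 0 in the shape above):
y_point = np.median(y_samples, axis=0)   # or y_samples.mean(axis=0) for a mean forecast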
End of explanation
# Evaluate forecasts
test_smape = metrics.smape(y_samples, y_test)
print('SMAPE: {}%'.format(test_smape.mean()))
Explanation: We calculate the symmetric MAPE.
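For reference, the M4 competition defines it as $\mathrm{sMAPE} = \frac{200}{h}\sum_{t=1}^{h}\frac{|y_t-\hat{y}_t|}{|y_t|+|\hat{y}_t|}$ (in percent); the exact constant and axis handling inside metrics.smape may differ slightly, so treat this only as the general definition.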
End of explanation |
6,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Minimum-length least squares
This notebook shows how to solve a minimum-length least squares problem, which finds a minimum-length vector $x \in \mathbf{R}^n$ achieving small mean-square error (MSE) for a particular least squares problem
Step1: The below cell constructs the problem data.
Step2: And the next cell constructs and solves the QCP. | Python Code:
!pip install --upgrade cvxpy
import cvxpy as cp
import numpy as np
Explanation: Minimum-length least squares
This notebook shows how to solve a minimum-length least squares problem, which finds a minimum-length vector $x \in \mathbf{R}^n$ achieving small mean-square error (MSE) for a particular least squares problem:
\begin{equation}
\begin{array}{ll}
\mbox{minimize} & \mathrm{len}(x) \\
\mbox{subject to} & \frac{1}{n}\|Ax - b\|_2^2 \leq \epsilon,
\end{array}
\end{equation}
where the variable is $x$ and the problem data are $n$, $A$, $b$, and $\epsilon$.
This is a quasiconvex program (QCP). It can be specified using disciplined quasiconvex programming (DQCP), and it can therefore be solved using CVXPY.
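Roughly speaking, $\mathrm{len}(x)$ here is the number of leading entries of $x$ up to and including its last nonzero entry, so minimising it drives the trailing entries of $x$ to zero; its sublevel sets are subspaces, which is why the objective is quasiconvex rather than convex.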
End of explanation
n = 10
np.random.seed(1)
A = np.random.randn(n, n)
x_star = np.random.randn(n)
b = A @ x_star
epsilon = 1e-2
Explanation: The below cell constructs the problem data.
End of explanation
x = cp.Variable(n)
mse = cp.sum_squares(A @ x - b)/n
problem = cp.Problem(cp.Minimize(cp.length(x)), [mse <= epsilon])
print("Is problem DQCP?: ", problem.is_dqcp())
problem.solve(qcp=True)
print("Found a solution, with length: ", problem.value)
print("MSE: ", mse.value)
print("x: ", x.value)
print("x_star: ", x_star)
Explanation: And the next cell constructs and solves the QCP.
End of explanation |
6,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Application 2
Step1: Labelling data
Based on its trust value, we categorise the data entity into two sets
Step2: Having used the trust value to label all the data entities, we remove the trust_value column from the data frame.
Step3: Filtering data
We split the dataset into three
Step4: Balancing Data
This section explores the balance of each of the three datasets and balances them using the SMOTE oversampling method.
Step5: Buildings
Step6: Balancing the building dataset
Step7: Routes
Step8: Balancing the route dataset
Step9: Route Sets
Step10: Balancing the route set dataset
Step11: Cross Validation
We now run the cross validation tests on the three balanced datasets (df_buildings, df_routes, and df_routesets) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code.
Step12: Building Classification
We test the classification of buildings, collecting the individual accuracy scores in results and the importance of every feature in each test in importances (both are Pandas Dataframes). These two tables will also be used to collect data from testing the classification of routes and route sets later.
Step13: Route Classification
Step14: Route Set Classification
Step15: Saving experiments' results (optional)
Optionally, we can save the test results to save time the next time we want to re-explore them
Step16: Next time, we can reload the results as follows
Step17: Charting the accuracy scores
Step18: Converting the accuracy score from [0, 1] to percentage, i.e. [0, 100]
Step19: Saving the chart above to Fig4.eps to be included in the paper
Step20: Charting the importance of features
In this section, we explore the relevance of each feature in classifying the data quality of CollabMap buildings, routes, and route sets. To do so, we analyse the feature importance values provided by the decision tree training done above - the importances data frame.
Step21: The charts above show us the relevance of each feature in classifying the quality of CollabMap buildings, routes, and route sets. Next, we find the three most relevant features for each data type to report in the paper.
Step22: The table above shows the most important metrics as reported by the decision tree classifiers during their training for each dataset.
Retest the classifications using minimal sets of features (Extra)
Armed with the knowledge of the three most important features in each experiment, we re-run the experiments using only those. | Python Code:
import pandas as pd
df = pd.read_csv("collabmap/depgraphs.csv", index_col='id')
df.head()
df.describe()
Explanation: Application 2: CollabMap Data Quality
Assessing the quality of crowdsourced data in CollabMap from their provenance
Goal: To determine if the provenance network analytics method can identify trustworthy data (i.e. buildings, routes, and route sets) contributed by crowd workers in CollabMap.
Classification labels: $\mathcal{L} = \left\{ \textit{trusted}, \textit{uncertain} \right\}$.
Training data:
Buildings: 5175
Routes: 4710
Route sets: 4997
Reading data
The CollabMap dataset is provided in the collabmap/depgraphs.csv file; each row corresponds to a building, route, or route set created in the application:
* id: the identifier of the data entity (i.e. building/route/route set).
* trust_value: the beta trust value calculated from the votes for the data entity.
* The remaining columns provide the provenance network metrics calculated from the dependency provenance graph of the entity.
End of explanation
trust_threshold = 0.75
df['label'] = df.apply(lambda row: 'Trusted' if row.trust_value >= trust_threshold else 'Uncertain', axis=1)
df.head() # The new label column is the last column below
Explanation: Labelling data
Based on its trust value, we categorise the data entity into two sets: trusted and uncertain. Here, the threshold for the trust value, whose range is [0, 1], is chosen to be 0.75.
End of explanation
# We will not use trust value from now on
df.drop('trust_value', axis=1, inplace=True)
df.shape # the dataframe now has 23 columns (22 metrics + label)
Explanation: Having used the trust value to label all the data entities, we remove the trust_value column from the data frame.
End of explanation
df_buildings = df.filter(like="Building", axis=0)
df_routes = df.filter(regex="^Route\d", axis=0)
df_routesets = df.filter(like="RouteSet", axis=0)
df_buildings.shape, df_routes.shape, df_routesets.shape # The number of data points in each dataset
Explanation: Filtering data
We split the dataset into three: buildings, routes, and route sets.
End of explanation
from analytics import balance_smote
Explanation: Balancing Data
This section explores the balance of each of the three datasets and balances them using the SMOTE oversampling method.
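balance_smote comes from the repository's own analytics module; purely as an illustration of what such a helper typically does (the names and details below are assumptions, not the project's actual code), a minimal equivalent using the imbalanced-learn package could look like this:
import pandas as pd
from imblearn.over_sampling import SMOTE   # third-party package, not part of this repository

def balance_smote_sketch(frame, label_col='label'):
    X, y = frame.drop(columns=[label_col]), frame[label_col]
    X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # oversample the minority class
    balanced = pd.DataFrame(X_res, columns=X.columns)
    balanced[label_col] = y_res
    return balanced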
End of explanation
df_buildings.label.value_counts()
Explanation: Buildings
End of explanation
df_buildings = balance_smote(df_buildings)
Explanation: Balancing the building dataset:
End of explanation
df_routes.label.value_counts()
Explanation: Routes
End of explanation
df_routes = balance_smote(df_routes)
Explanation: Balancing the route dataset:
End of explanation
df_routesets.label.value_counts()
Explanation: Route Sets
End of explanation
df_routesets = balance_smote(df_routesets)
Explanation: Balancing the route set dataset:
End of explanation
from analytics import test_classification
Explanation: Cross Validation
We now run the cross validation tests on the three balanced datasets (df_buildings, df_routes, and df_routesets) using all the features (combined), only the generic network metrics (generic), and only the provenance-specific network metrics (provenance). Please refer to Cross Validation Code.ipynb for the detailed description of the cross validation code.
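test_classification itself is defined in the accompanying Cross Validation Code.ipynb; as a rough, hypothetical sketch of the kind of test it performs (a k-fold accuracy estimate with a decision tree classifier), one could write:
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def cv_accuracy_sketch(frame, label_col='label', folds=10):
    # illustrative only - not the project's actual test_classification implementation
    X, y = frame.drop(columns=[label_col]), frame[label_col]
    scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=folds, scoring='accuracy')
    return pd.Series(scores, name='Accuracy')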
End of explanation
# Cross validation test on building classification
res, imps = test_classification(df_buildings)
# adding the Data Type column
res['Data Type'] = 'Building'
imps['Data Type'] = 'Building'
# storing the results and importance of features
results = res
importances = imps
# showing a few newest rows
results.tail()
Explanation: Building Classification
We test the classification of buildings, collecting the individual accuracy scores in results and the importance of every feature in each test in importances (both are Pandas Dataframes). These two tables will also be used to collect data from testing the classification of routes and route sets later.
End of explanation
# Cross validation test on route classification
res, imps = test_classification(df_routes)
# adding the Data Type column
res['Data Type'] = 'Route'
imps['Data Type'] = 'Route'
# storing the results and importance of features
results = results.append(res, ignore_index=True)
importances = importances.append(imps, ignore_index=True)
# showing a few newest rows
results.tail()
Explanation: Route Classification
End of explanation
# Cross validation test on route classification
res, imps = test_classification(df_routesets)
# adding the Data Type column
res['Data Type'] = 'Route Set'
imps['Data Type'] = 'Route Set'
# storing the results and importance of features
results = results.append(res, ignore_index=True)
importances = importances.append(imps, ignore_index=True)
# showing a few newest rows
results.tail()
Explanation: Route Set Classification
End of explanation
results.to_pickle("collabmap/results.pkl")
importances.to_pickle("collabmap/importances.pkl")
Explanation: Saving experiments' results (optional)
Optionally, we can save the test results to save time the next time we want to re-explore them:
End of explanation
import pandas as pd
results = pd.read_pickle("collabmap/results.pkl")
importances = pd.read_pickle("collabmap/importances.pkl")
results.shape, importances.shape # showing the shape of the data (for checking)
Explanation: Next time, we can reload the results as follows:
End of explanation
%matplotlib inline
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("paper", font_scale=1.4)
Explanation: Charting the accuracy scores
End of explanation
results.Accuracy = results.Accuracy * 100
results.head()
from matplotlib.font_manager import FontProperties
fontP = FontProperties()
fontP.set_size(12)
pal = sns.light_palette("seagreen", n_colors=3, reverse=True)
plot = sns.barplot(x="Data Type", y="Accuracy", hue='Metrics', palette=pal, errwidth=1, capsize=0.02, data=results)
plot.set_ylim(80, 100)
plot.legend(loc='upper center', bbox_to_anchor=(0.5, 1.0), ncol=3)
plot.set_ylabel('Accuracy (%)')
Explanation: Converting the accuracy score from [0, 1] to percentage, i.e. [0, 100]:
End of explanation
plot.figure.savefig("figures/Fig4.eps")
Explanation: Saving the chart above to Fig4.eps to be included in the paper:
End of explanation
import numpy as np
# Rename the columns with Math notation for consistency with the metrics symbols in the paper
feature_name_maths_mapping = {
"entities": "$n_e$", "agents": "$n_{ag}$", "activities": "$n_a$", "nodes": "$n$", "edges": "$e$",
"diameter": "$d$", "assortativity": "$r$", "acc": "$\\mathsf{ACC}$",
"acc_e": "$\\mathsf{ACC}_e$", "acc_a": "$\\mathsf{ACC}_a$", "acc_ag": "$\\mathsf{ACC}_{ag}$",
"mfd_e_e": "$\\mathrm{mfd}_{e \\rightarrow e}$", "mfd_e_a": "$\\mathrm{mfd}_{e \\rightarrow a}$",
"mfd_e_ag": "$\\mathrm{mfd}_{e \\rightarrow ag}$", "mfd_a_e": "$\\mathrm{mfd}_{a \\rightarrow e}$",
"mfd_a_a": "$\\mathrm{mfd}_{a \\rightarrow a}$", "mfd_a_ag": "$\\mathrm{mfd}_{a \\rightarrow ag}$",
"mfd_ag_e": "$\\mathrm{mfd}_{ag \\rightarrow e}$", "mfd_ag_a": "$\\mathrm{mfd}_{ag \\rightarrow a}$",
"mfd_ag_ag": "$\\mathrm{mfd}_{ag \\rightarrow ag}$", "mfd_der": "$\\mathrm{mfd}_\\mathit{der}$", "powerlaw_alpha": "$\\alpha$"
}
importances.rename(columns=feature_name_maths_mapping, inplace=True)
grouped = importances.groupby("Data Type") # Grouping the importance values by data type
sns.set_context("talk")
grouped.boxplot(figsize=(16, 5), layout=(1, 3), rot=90)
Explanation: Charting the importance of features
In this section, we explore the relevance of each feature in classifying the data quality of CollabMap buildings, routes, and route sets. To do so, we analyse the feature importance values provided by the decision tree training done above - the importances data frame.
End of explanation
# Calculate the mean importance of each feature for each data type
imp_means = grouped.mean()
pd.DataFrame(
{row_name: row.sort_values(ascending=False)[:3].index.get_values()
for row_name, row in imp_means.iterrows()
}
)
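# Note: Index.get_values() has been deprecated in later pandas releases;
# row.sort_values(ascending=False)[:3].index.to_numpy() is the forward-compatible equivalent.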
Explanation: The charts above show us the relevance of each feature in classifying the quality of CollabMap buildings, routes, and route sets. Next, we find the three most relevant features for each data type to report in the paper.
End of explanation
from analytics import cv_test
res, imps = cv_test(df_buildings[['assortativity', 'acc', 'activities']], df_buildings.label, test_id="Building")
res, imps = cv_test(df_routes[['acc', 'diameter', 'mfd_der']], df_routes.label, test_id="Route")
res, imps = cv_test(df_routesets[['assortativity', 'acc_e', 'entities']], df_routesets.label, test_id="Routeset")
Explanation: The table above shows the most important metrics as reported by the decision tree classifiers during their training for each dataset.
Retest the classifications using minimal sets of features (Extra)
Armed with the knowledge of the three most important features in each experiment, we re-run the experiments using only those.
End of explanation |
6,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First Last - SymPy
Step1: $$ \Large {\displaystyle f(x)=3e^{-{\frac {x^{2}}{8}}}} \sin(x/3)$$
Find the first four terms of the Taylor expansion of the above equation
Make a plot of the function
Plot size 10 in x 4 in
X limits -5, 5
Y limits -2, 2
Over-plot the 1-term Taylor expansion using a different color
Over-plot the 2-term Taylor expansion using a different color
Over-plot the 3-term Taylor expansion using a different color
Over-plot the 4-term Taylor expansion using a different color | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
Explanation: First Last - SymPy
End of explanation
sp.init_printing()
x = sp.symbols('x')
my_x = np.linspace(-10,10,100)
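# A possible continuation (illustrative sketch only, not the assignment's official solution):
# build f(x) and over-plot the 1- to 4-term Taylor expansions about x = 0.
f = 3 * sp.exp(-x**2 / 8) * sp.sin(x / 3)

fig, ax = plt.subplots(figsize=(10, 4))
ax.set_xlim(-5, 5)
ax.set_ylim(-2, 2)
ax.plot(my_x, sp.lambdify(x, f, 'numpy')(my_x), color='k', label='f(x)')

# f is odd, so its nonzero Taylor terms sit at x, x^3, x^5, x^7; expanding to order 2*n
# and dropping the O() term keeps the first n nonzero terms.
for n in range(1, 5):
    approx = sp.series(f, x, 0, 2 * n).removeO()
    ax.plot(my_x, sp.lambdify(x, approx, 'numpy')(my_x), label='%d-term Taylor' % n)

ax.legend(loc='lower right')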
Explanation: $$ \Large {\displaystyle f(x)=3e^{-{\frac {x^{2}}{8}}}} \sin(x/3)$$
Find the first four terms of the Taylor expansion of the above equation
Make a plot of the function
Plot size 10 in x 4 in
X limits -5, 5
Y limits -2, 2
Over-plot the 1-term Taylor expansion using a different color
Over-plot the 2-term Taylor expansion using a different color
Over-plot the 3-term Taylor expansion using a different color
Over-plot the 4-term Taylor expansion using a different color
End of explanation |
6,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'canesm5', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CCCMA
Source ID: CANESM5
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:46
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario simulations (as listed in Table 12.1 of IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are the radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are the radiative effects of aerosols on ice clouds represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the radiative forcing from aerosol-cloud interactions (RFaci) computed from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
6,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to Week 3. This document provides a running example of completing the Week 3 assignment (clustering game users with Spark MLlib KMeans). Import the required libraries.
Step1: Step 1 - Attribute Selection. Import Data - read the contents of ad-clicks.csv into a table and remove extra whitespace from the headers.
Step2: Display the first 5 lines of adclicksDF.
Step3: Add an extra column 'adCount' to the ad-clicks table and set it to 1, so that each row records one ad-click.
Step4: Display the first 5 lines of adclicksDF to check that the new column has been added.
Step5: Read the contents of buy-clicks.csv into a table and remove extra whitespace from the headers.
Step6: Display the first 5 lines of buyclicksDF.
Step7: Feature Selection - from buyclicksDF keep only 'userId' and 'price' to capture each user's purchasing behavior.
Step8: Similarly, from adclicksDF keep only 'userId' and 'adCount' to capture each user's inclination to click on ads.
Step9: Step 2 - Training Data Set Creation. Create the first aggregate feature for clustering - total ad clicks per user.
Step10: Display the first 5 lines of adsPerUser to check the 'totalAdClicks' column.
Step11: Create the second aggregate feature for clustering - total money spent (revenue) per user.
Step12: Merge the two tables so that each row contains both attributes for one user.
Step13: Display the first 5 lines of the merged table.
Step14: Create the final training dataset by keeping only the two feature columns (dropping 'userId').
Step15: Display the dimensions of the training dataset.
Step16: Convert the training table into the RDD format expected by KMeans.train.
Step17: Step 3 - Train to Create Cluster Centers. Train a KMeans model with two clusters.
Step18: Display the centers of the two clusters formed.
import pandas as pd
from pyspark.mllib.clustering import KMeans, KMeansModel
from numpy import array
Explanation: <br><br><br><br><br><h1 style="font-size:4em;color:#2467C0">Welcome to Week 3</h1><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">This document provides a running example of completing the Week 3 assignment : </p>
<ul class="simple">
<li style="line-height:31px;">A shorter version with fewer comments is available as script: sparkMLlibClustering.py</li>
<li style="line-height:31px;">To run these commands in Cloudera VM: first run the setup script: setupWeek3.sh</li>
<li style="line-height:31px;">You can then copy paste these commands in pySpark. </li>
<li style="line-height:31px;">To open pySpark, refer to : <a class="reference external" href="https://www.coursera.org/learn/machinelearningwithbigdata/supplement/GTFQ0/slides-module-2-lesson-3">Week 2</a> and <a class="reference external" href="https://www.coursera.org/learn/machinelearningwithbigdata/supplement/RH1zz/download-lesson-2-slides-spark-mllib-clustering">Week 4</a> of the Machine Learning course</li>
<li style="line-height:31px;">Note that your dataset may be different from what is used here, so your results may not match with those shown here</li>
</ul></div>
End of explanation
adclicksDF = pd.read_csv('./ad-clicks.csv')
adclicksDF = adclicksDF.rename(columns=lambda x: x.strip()) #remove whitespaces from headers
Explanation: <br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Step 1: Attribute Selection</h1>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Import Data</h1><br><br>
<p style="line-height:31px;">First let us read the contents of the file ad-clicks.csv. The following commands read in the CSV file in a table format and removes any extra whitespaces. So, if the CSV contained ' userid ' it becomes 'userid'. <br><br>
Note that you must change the path to ad-clicks.csv to the location on your machine, if you want to run this command on your machine.
</p>
</div>
<br><br><br><br>
End of explanation
adclicksDF.head(n=5)
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Let us display the first 5 lines of adclicksDF:</p>
</div>
<br><br><br><br>
End of explanation
adclicksDF['adCount'] = 1
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Next, We are going to add an extra column to the ad-clicks table and make it equal to 1. We do so to record the fact that each ROW is 1 ad-click.
You will see how this will become useful when we sum up this column to find how many ads
did a user click.</p>
</div>
<br><br><br><br>
End of explanation
adclicksDF.head(n=5)
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Let us display the first 5 lines of adclicksDF and see if
a new column has been added:</p>
</div>
<br><br><br><br>
End of explanation
buyclicksDF = pd.read_csv('./buy-clicks.csv')
buyclicksDF = buyclicksDF.rename(columns=lambda x: x.strip()) #removes whitespaces from headers
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Next, let us read the contents of the file buy-clicks.csv. As before, the following commands read in the CSV file in a table format and removes any extra whitespaces. So, if the CSV contained ' userid ' it becomes 'userid'. <br><br>
Note that you must change the path to buy-clicks.csv to the location on your machine, if you want to run this command on your machine.
</p>
</div>
<br><br><br><br>
End of explanation
buyclicksDF.head(n=5)
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Let us display the first 5 lines of buyclicksDF:</p>
</div>
<br><br><br><br>
End of explanation
userPurchases = buyclicksDF[['userId','price']] #select only userid and price
userPurchases.head(n=5)
Explanation: <br><br>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Feature Selection</h1><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">For this exercise, we can choose from buyclicksDF, the 'price' of each app that a user purchases as an attribute that captures user's purchasing behavior. The following command selects 'userid' and 'price' and drops all other columns that we do not want to use at this stage.</p>
</div>
<br><br><br><br>
End of explanation
useradClicks = adclicksDF[['userId','adCount']]
useradClicks.head(n=5) #as we saw before, this line displays first five lines
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Similarly, from the adclicksDF, we will use the 'adCount' as an attribute that captures user's inclination to click on ads. The following command selects 'userid' and 'adCount' and drops all other columns that we do not want to use at this stage.</p>
</div>
<br><br><br><br>
End of explanation
adsPerUser = useradClicks.groupby('userId').sum()
adsPerUser = adsPerUser.reset_index()
adsPerUser.columns = ['userId', 'totalAdClicks'] #rename the columns
Explanation: <br><br>
<h1 style="font-family: Arial; font-size:1.5em;color:#2462C0; font-style:bold">Step 2: Training Data Set Creation</h1>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Create the first aggregate feature for clustering</h1><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">From each of these single ad-clicks per row, we can now generate total ad clicks per user. Let's pick a user with userid = 3. To find out how many ads this user has clicked overall, we have to find each row that contains userid = 3, and report the total number of such rows.
The following commands sum the total number of ads per user and rename the columns to be called 'userid' and 'totalAdClicks'. <b> Note that you may not need to aggregate (e.g. sum over many rows) if you choose a different feature and your data set already provides the necessary information. </b> In the end, we want to get one row per user, if we are performing clustering over users.
</div>
<br><br><br><br>
End of explanation
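# Optional illustration (not part of the original walkthrough): the aggregate can be
# sanity-checked for a single user, e.g. the userid = 3 mentioned above, by filtering
# the per-click rows and summing 'adCount'. The result should match that user's
# 'totalAdClicks' entry in adsPerUser (it is 0 if that user id is absent from your data).
print(useradClicks[useradClicks['userId'] == 3]['adCount'].sum())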
adsPerUser.head(n=5)
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Let us display the first 5 lines of 'adsPerUser' to see if there
is a column named 'totalAdClicks' containing total adclicks per user.</p>
</div>
<br><br><br><br>
End of explanation
revenuePerUser = userPurchases.groupby('userId').sum()
revenuePerUser = revenuePerUser.reset_index()
revenuePerUser.columns = ['userId', 'revenue'] #rename the columns
revenuePerUser.head(n=5)
Explanation: <br><br>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Create the second aggregate feature for clustering</h1><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Similar to what we did for adclicks, here we find out how much money in total did each user spend on buying in-app purchases. As an example, let's pick a user with userid = 9. To find out the total money spent by this user, we have to find each row that contains userid = 9, and report the sum of the column'price' of each product they purchased.
The following commands sum the total money spent by each user and rename the columns to be called 'userid' and 'revenue'.
<br><br>
<p style="line-height:31px;"> <b> Note: </b> that you can also use other aggregates, such as sum of money spent on a specific ad category by a user or on a set of ad categories by each user, game clicks per hour by each user etc. You are free to use any mathematical operations on the fields provided in the CSV files when creating features. </p>
</div>
<br><br><br><br>
End of explanation
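# Hedged sketch of the note above: other aggregates can be built the same way. For
# example, assuming ad-clicks.csv also carries an ad category column (called
# 'adCategory' here purely as an illustration; check your own file's headers),
# ad clicks could be summed per user and per category:
# categoryClicks = adclicksDF[['userId', 'adCategory', 'adCount']].groupby(['userId', 'adCategory']).sum().reset_index()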
combinedDF = adsPerUser.merge(revenuePerUser, on='userId') #userId, adCount, price
Explanation: <br><br>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Merge the two tables</h1><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Lets see what we have so far. We have a table called revenuePerUser, where each row contains total money a user (with that 'userid') has spent. We also have another table called adsPerUser where each row contains total number of ads a user has clicked. We will use revenuePerUser and adsPerUser as features / attributes to capture our users' behavior.<br><br>
Let us combine these two attributes (features) so that each row contains both attributes per user. Let's merge these two tables to get one single table we can use for K-Means clustering.
</div>
<br><br><br><br>
End of explanation
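# Note (alternative, not used below): merge() performs an inner join by default, so a
# user that appears in only one of the two tables is dropped from combinedDF. If you
# prefer to keep such users with a 0 for the missing feature, an outer join could be
# used instead:
# combinedDF = adsPerUser.merge(revenuePerUser, on='userId', how='outer').fillna(0)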
combinedDF.head(n=5) #display how the merged table looks
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Let us display the first 5 lines of the merged table. <b> Note: Depending on what attributes you choose, you may not need to merge tables. You may get all your attributes from a single table. </b></p>
</div>
<br><br><br><br>
End of explanation
trainingDF = combinedDF[['totalAdClicks','revenue']]
trainingDF.head(n=5)
Explanation: <br><br>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Create the final training dataset</h1><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Our training data set is almost ready. At this stage we can remove the 'userid' from each row, since 'userid' is a computer generated random number assigned to each user. It does not capture any behavioral aspect of a user. One way to drop the 'userid', is to select the other two columns. </p>
</div>
<br><br><br><br>
End of explanation
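# Equivalent alternative (illustration only): drop the column by name instead of
# selecting the two feature columns.
# trainingDF = combinedDF.drop('userId', axis=1)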
trainingDF.shape
Explanation: <br><br>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Display the dimensions of the training dataset</h1><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Display the dimension of the training data set. To display the dimensions of the trainingDF, simply add .shape as a suffix and hit enter.</p>
</div>
<br><br><br><br>
End of explanation
from pyspark.sql import SQLContext # import needed if SQLContext is not already available in your pySpark session
sqlContext = SQLContext(sc)
pDF = sqlContext.createDataFrame(trainingDF)
parsedData = pDF.rdd.map(lambda line: array([line[0], line[1]])) #totalAdClicks, revenue
Explanation: <br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">The following two commands convert the tables we created into a format that can be understood by the KMeans.train function. <br><br>
line[0] refers to the first column. line[1] refers to the second column. If you have more than 2 columns in your training table, modify this command by adding line[2], line[3], line[4] ...</p>
</div>
<br><br><br><br>
End of explanation
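# Sketch only: with more than two feature columns the mapping is extended with
# line[2], line[3], ... as described above. This variant is commented out because
# the training table in this example has just two columns.
# parsedData = pDF.rdd.map(lambda line: array([line[0], line[1], line[2]]))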
my_kmmodel = KMeans.train(parsedData, 2, maxIterations=10, runs=10, initializationMode="random")
Explanation: <br>
<h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Step 3: Train to Create Cluster Centers</h1>
<br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Train KMeans model</h1><br><br>
<br><br><br><br>
<div style="color:black;font-family: Arial; font-size:1.1em;line-height:65%">
<p style="line-height:31px;">Here we are creating two clusters as denoted in the second argument.</p>
</div>
<br><br><br><br>
End of explanation
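# Optional illustration (not required by the assignment): the trained model can assign
# an observation to one of the two clusters. The point below is hypothetical
# (10 ad clicks, 50.0 revenue), chosen only to show the call.
print(my_kmmodel.predict(array([10.0, 50.0])))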
print(my_kmmodel.centers)
Explanation: <br><br><h1 style="font-family: Arial; font-size:1.5em;color:#2462C0">Display the centers of two clusters formed</h1><br><br>
End of explanation |
6,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Excel Extractor
ETK's Excel Extractor is a cell-based extractor for extracting data from compatible spreadsheets.
Source spreadsheet
The example spreadsheet file is named alabama.xls and has a sheet named 16tbl08al, in which rows 1 to 5 and rows 60 to 62 are metadata, and A6 to M59 is a table (which has row and column headers). For this example, I'm going to extract data from C7 to M33 (see the picture attached below).
Define where and how to extract data
Excel Extractor will scan cell-by-cell within a region that you specified and populate variables that you defined.
Define variable
In this particular example, I want to extract the value of every cell in the region (C7, M33), so I defined a variable called value. Its value will be extracted from the cell located at $col,$row, where $col and $row mean the column id and row id that the scanner is currently traversing. The return value is a list of objects, each containing the user-defined variables.
Step1: Coordinate variable
Excel Extractor allows you to define multiple variables. This is useful if you want to extract data from other cells that are associated with the current cell. In this example, I also need the column header (category) and the county name for every cell in the region. It supports constant coordinates like ($B,$1) (which means the cell at column B, row 1) or using + and - to calculate relative coordinates like ($B-1,$row+1) (which means the cell in column A whose row id is the current row id + 1).
Step2: Single variable
Besides a coordinate, the value of a variable can also be a builtin variable (only $row and $col exist right now). This can be used for tracking the provenance of extractions. Both the row and column ids here are presented in numeric form (0-based).
Step5: Wrap them up in ETK module and post processing
The example below shows how to use this extractor in an ETK module. The extractor's variable syntax only supports a single builtin variable or a coordinate, so all post-processing needs to be done after extraction.
import pprint
from etk.extractors.excel_extractor import ExcelExtractor
ee = ExcelExtractor()
variables = {
'value': '$col,$row'
}
raw_extractions = ee.extract('alabama.xls', '16tbl08al', ['C,7', 'M,33'], variables)
pprint.pprint(raw_extractions[:10]) # print first 10
Explanation: Excel Extractor
ETK's Excel Extractor is a cell-based extractor for extracting data from compatible spreadsheets.
Source spreadsheet
The example spreadsheet file is named alabama.xls and has a sheet named 16tbl08al, in which rows 1 to 5 and rows 60 to 62 are metadata, and A6 to M59 is a table (which has row and column headers). For this example, I'm going to extract data from C7 to M33 (see the picture attached below).
Define where and how to extract data
Excel Extractor will scan cell-by-cell within a region that you specified and populate variables that you defined.
Define variable
In this particular example, I want to extract the value of every cell in the region (C7, M33), so I defined a variable called value. Its value will be extracted from the cell located at $col,$row, where $col and $row mean the column id and row id that the scanner is currently traversing. The return value is a list of objects, each containing the user-defined variables.
End of explanation
variables = {
'value': '$col,$row',
'county': '$B,$row',
'category': '$col,$6'
}
raw_extractions = ee.extract('alabama.xls', '16tbl08al', ['C,7', 'M,33'], variables)
pprint.pprint(raw_extractions[:10]) # print first 10
Explanation: Coordinate variable
Excel Extractor allows you to define multiple variables. This is useful if you want to extract data from other cells that are associated with the current cell. In this example, I also need the column header (category) and the county name for every cell in the region. It supports constant coordinates like ($B,$1) (which means the cell at column B, row 1) or using + and - to calculate relative coordinates like ($B-1,$row+1) (which means the cell in column A whose row id is the current row id + 1).
End of explanation
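# Hedged sketch (not run here): relative coordinates built with + and -, as described
# above, can be mixed into the same variables dictionary. The variable name and offsets
# below are purely illustrative.
# variables = {
#     'value': '$col,$row',
#     'left_neighbour_next_row': '$B-1,$row+1'
# }
# raw_extractions = ee.extract('alabama.xls', '16tbl08al', ['C,7', 'M,33'], variables)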
variables = {
'value': '$col,$row',
'county': '$B,$row',
'category': '$col,$6',
'from_row': '$row',
'from_col': '$col'
}
raw_extractions = ee.extract('alabama.xls', '16tbl08al', ['C,7', 'M,33'], variables)
pprint.pprint(raw_extractions[:10]) # print first 10
Explanation: Single variable
Besides a coordinate, the value of a variable can also be a builtin variable (only $row and $col exist right now). This can be used for tracking the provenance of extractions. Both the row and column ids here are presented in numeric form (0-based).
End of explanation
import os, sys
from etk.etk import ETK
from etk.etk_module import ETKModule
from etk.extractors.excel_extractor import ExcelExtractor
from etk.utilities import Utility
class ExampleETKModule(ETKModule):
"""Abstract class for extraction module"""
def __init__(self, etk):
ETKModule.__init__(self, etk)
self.ee = ExcelExtractor()
def document_selector(self, doc):
return 'file_path' in doc.cdr_document
def process_document(self, doc):
"""Add your code for processing the document"""
variables = {
'value': '$col,$row',
'county': '$B,$row',
'category': '$col,$6',
'from_row': '$row',
'from_col': '$col'
}
raw_extractions = self.ee.extract(doc.cdr_document['file_path'], '16tbl08al', ['C,7', 'M,33'], variables)
extracted_docs = []
for d in raw_extractions:
# post processing
d['category'] = d['category'].replace('\n', ' ').strip()
d['county'] = d['county'].replace('\n', ' ').strip()
d['from_row'] = int(d['from_row'])
d['from_col'] = int(d['from_col'])
# create sub document
d['doc_id'] = Utility.create_doc_id_from_json(d)
extracted_docs.append(etk.create_document(d))
return extracted_docs
# if __name__ == "__main__":
etk = ETK(modules=ExampleETKModule)
doc = etk.create_document({'file_path': 'alabama.xls'})
docs = etk.process_ems(doc)
for d in docs[1:11]: # print first 10
print(d.value)
Explanation: Wrap them up in ETK module and post processing
The example below shows how to use this extractor in an ETK module. The extractor's variable syntax only supports a single builtin variable or a coordinate, so all post-processing needs to be done after extraction.
End of explanation |
6,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
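# Purely illustrative (hypothetical selection): an ENUM property takes one of
# the valid choices listed above as a quoted string, e.g.
#     DOC.set_value("Fixed")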
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
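# Purely illustrative (hypothetical answer): BOOLEAN properties take a bare
# Python bool rather than a quoted string, e.g.
#     DOC.set_value(True)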
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
6,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoML text sentiment analysis
Installation
Install the latest version of AutoML SDK.
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
Step11: AutoML constants
Set up the following constants for AutoML
Step12: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Request
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
Step25: Response
Step26: Example output
Step27: Response
Step28: Example output
Step29: Example output
Step30: Example output
Step31: Response
Step32: Example output
Step33: Example output
Step34: Response
Step35: Example output
Step36: Example output
Step37: Response
Step38: Example output | Python Code:
! pip3 install google-cloud-automl
Explanation: AutoML text sentiment analysis
Installation
Install the latest version of AutoML SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend when possible, to choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json
import os
import sys
import time
from google.cloud import automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.struct_pb2 import Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
End of explanation
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: AutoML constants
Set up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
def automl_client():
return automl.AutoMlClient()
def predictions_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["predictions"] = predictions_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
import tensorflow as tf
IMPORT_FILE = "gs://cloud-samples-data/language/claritin.csv"
with tf.io.gfile.GFile(IMPORT_FILE, "r") as f:
content = f.readlines()
IMPORT_FILE = "gs://" + BUCKET_NAME + "/claritin.csv"
with tf.io.gfile.GFile(IMPORT_FILE, "w") as f:
for line in content:
f.write(",".join(line.split(",")[0:-1]) + "\n")
! gsutil cat $IMPORT_FILE | head -n 10
Explanation: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
End of explanation
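# Optional sanity check (sketch, not part of the original flow): look at the
# sentiment label distribution of the prepared CSV. In the rewritten file the
# label is the last comma-separated field on each line.
from collections import Counter

with tf.io.gfile.GFile(IMPORT_FILE, "r") as f:
    label_counts = Counter(line.strip().split(",")[-1] for line in f.readlines())
print(label_counts)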
dataset = {
"display_name": "claritin_" + TIMESTAMP,
"text_sentiment_dataset_metadata": {"sentiment_max": 4},
}
print(
MessageToJson(
automl.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
Explanation: Example output:
@freewrytin God is way too good for Claritin,2
I need Claritin. So bad. When did I become cursed with allergies?,3
Thank god for Claritin.,4
"And what's worse is that I reached my 3-day limit on the nose spray yesterday, which means I have to rely on Claritin.",2
Time to take some Claritin or Allegra or something. I need my voice,3
Oh my RT @imsydneycharles: I just want it to be on record somewhere that I took Claritin and Benadryl together...just in case I pass out,2
Bouta take a Claritin _ÛªÛ_Ûª_ÛªÌâ FML !!,3
Commander Loratadine Generic A Sarcelles: Commander Loratadine Generic A Sarcelles Claritin =Ûª_Ûª__ http://t.co/mOleL8AM,2
"Zyrtec, Claritin, Suddafed, Nasal Spray.. I feel like a drug addict taking these Allergy medicine. Please Allergy season.. DISAPPEAR!!",1
"Ûª_Ûª_ÛªÕ@SheLovesThatD: If she has allergies, give her the Claritin D.Ûª_Ûª_Ì_å @Sweeno_thakid41 @B_Original16 @luke_CYwalker14",3
Create a dataset
projects.locations.datasets.create
Request
End of explanation
request = clients["automl"].create_dataset(parent=PARENT, dataset=dataset)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "claritin_20210304132912",
"textSentimentDatasetMetadata": {
"sentimentMax": 4
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/TST1994716952680988672"
}
End of explanation
input_config = {"gcs_source": {"input_uris": [IMPORT_FILE]}}
print(
MessageToJson(
automl.ImportDataRequest(name=dataset_id, input_config=input_config).__dict__[
"_pb"
]
)
)
Explanation: projects.locations.datasets.importData
Request
End of explanation
request = clients["automl"].import_data(name=dataset_id, input_config=input_config)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/TST1994716952680988672",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210304132912/claritin.csv"
]
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
model = {
"display_name": "claritin_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"text_sentiment_model_metadata": {},
}
print(
MessageToJson(automl.CreateModelRequest(parent=PARENT, model=model).__dict__["_pb"])
)
Explanation: Example output:
{}
Train a model
projects.locations.models.create
Request
End of explanation
request = clients["automl"].create_model(parent=PARENT, model=model)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "claritin_20210304132912",
"datasetId": "TST1994716952680988672",
"textSentimentModelMetadata": {}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split("/")[-1]
print(model_short_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272"
}
End of explanation
request = clients["automl"].list_model_evaluations(parent=model_id, filter="")
Explanation: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
End of explanation
model_evaluations = [json.loads(MessageToJson(me.__dict__["_pb"])) for me in request]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
print(json.dumps(model_evaluations, indent=2))
Explanation: Response
End of explanation
request = clients["automl"].get_model_evaluation(name=evaluation_slice)
Explanation: Example output:
[
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/54870628009945864",
"annotationSpecId": "8301667931964571648",
"createTime": "2021-03-04T17:15:51.851420Z",
"textSentimentEvaluationMetrics": {
"precision": 0.33333334,
"recall": 0.16666667,
"f1Score": 0.22222222
},
"displayName": "4"
},
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/1597159550285093673",
"annotationSpecId": "1384138904323489792",
"createTime": "2021-03-04T17:15:51.851420Z",
"textSentimentEvaluationMetrics": {
"precision": 0.5,
"recall": 0.296875,
"f1Score": 0.37254903
},
"displayName": "1"
},
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/3521790980763365687",
"createTime": "2021-03-04T17:15:51.851420Z",
"evaluatedExampleCount": 452,
"textSentimentEvaluationMetrics": {
"precision": 0.6238938,
"recall": 0.6238938,
"f1Score": 0.6238938,
"meanAbsoluteError": 0.47566372,
"meanSquaredError": 0.69690263,
"linearKappa": 0.41007927,
"quadraticKappa": 0.45938763,
"confusionMatrix": {
"annotationSpecId": [
"7148746427357724672",
"1384138904323489792",
"5995824922750877696",
"3689981913537183744",
"8301667931964571648"
],
"row": [
{
"exampleCount": [
2,
4,
1,
1,
1
]
},
{
"exampleCount": [
3,
19,
14,
28,
0
]
},
{
"exampleCount": [
0,
7,
67,
63,
1
]
},
{
"exampleCount": [
1,
8,
19,
191,
4
]
},
{
"exampleCount": [
0,
0,
0,
15,
3
]
}
],
"displayName": [
"0",
"1",
"2",
"3",
"4"
]
}
}
},
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/3727703410992997127",
"annotationSpecId": "3689981913537183744",
"createTime": "2021-03-04T17:15:51.851420Z",
"textSentimentEvaluationMetrics": {
"precision": 0.6409396,
"recall": 0.85650223,
"f1Score": 0.7332054
},
"displayName": "3"
},
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/4692810493650008310",
"annotationSpecId": "7148746427357724672",
"createTime": "2021-03-04T17:15:51.851420Z",
"textSentimentEvaluationMetrics": {
"precision": 0.33333334,
"recall": 0.22222222,
"f1Score": 0.26666668
},
"displayName": "0"
},
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/8390011688796741170",
"annotationSpecId": "5995824922750877696",
"createTime": "2021-03-04T17:15:51.851420Z",
"textSentimentEvaluationMetrics": {
"precision": 0.6633663,
"recall": 0.48550725,
"f1Score": 0.5606694
},
"displayName": "2"
}
]
projects.locations.models.modelEvaluations.get
Call
End of explanation
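# Sketch (based on the example output above): the aggregate evaluation slice is
# the one carrying a confusion matrix; pull it out of the parsed evaluations
# and print its headline metrics.
overall = next(
    me
    for me in model_evaluations
    if "confusionMatrix" in me.get("textSentimentEvaluationMetrics", {})
)
metrics = overall["textSentimentEvaluationMetrics"]
print("precision:", metrics.get("precision"))
print("recall:", metrics.get("recall"))
print("f1Score:", metrics.get("f1Score"))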
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
import tensorflow as tf
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.csv"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
item_1 = "gs://cloud-samples-data/language/sentiment-positive.txt"
! gsutil cp $item_1 gs://$BUCKET_NAME
f.write("gs://" + BUCKET_NAME + "/sentiment-positive.txt" + "\n")
item_2 = "gs://cloud-samples-data/language/sentiment-negative.txt"
! gsutil cp $item_2 gs://$BUCKET_NAME
f.write("gs://" + BUCKET_NAME + "/sentiment-negative.txt")
! gsutil cat $gcs_input_uri
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272/modelEvaluations/54870628009945864",
"annotationSpecId": "8301667931964571648",
"createTime": "2021-03-04T17:15:51.851420Z",
"textSentimentEvaluationMetrics": {
"precision": 0.33333334,
"recall": 0.16666667,
"f1Score": 0.22222222
},
"displayName": "4"
}
Make batch predictions
Make the batch input file
To request a batch of predictions from the trained sentiment model, create a CSV file that lists the Cloud Storage paths to the text documents you want to analyze. The code above copies two sample text files into your Cloud Storage bucket and writes their paths into test.csv, which is then used as the batch prediction input.
End of explanation
input_config = {"gcs_source": {"input_uris": [gcs_input_uri]}}
output_config = {
"gcs_destination": {"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"}
}
print(
MessageToJson(
automl.BatchPredictRequest(
name=model_id, input_config=input_config, output_config=output_config
).__dict__["_pb"]
)
)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210304132912/sentiment-positive.txt
gs://migration-ucaip-trainingaip-20210304132912/sentiment-negative.txt
projects.locations.models.batchPredict
Request
End of explanation
request = clients["predictions"].batch_predict(
name=model_id, input_config=input_config, output_config=output_config
)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210304132912/test.csv"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210304132912/batch_output/"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
test_data = ! gsutil cat $IMPORT_FILE | head -n1
test_item = str(test_data[0]).split(",")[0]
test_label = str(test_data[0]).split(",")[1]
print((test_item, test_label))
Explanation: Example output:
{}
Make online predictions
Prepare data item for online prediction
End of explanation
request = clients["automl"].deploy_model(name=model_id)
Explanation: Example output:
('@freewrytin God is way too good for Claritin', '2')
projects.locations.models.deploy
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
payload = {"text_snippet": {"content": test_item, "mime_type": "text/plain"}}
prediction_request = automl.PredictRequest(
name=model_id,
payload=payload,
)
print(MessageToJson(prediction_request.__dict__["_pb"]))
Explanation: Example output:
{}
projects.locations.models.predict
End of explanation
request = clients["predictions"].predict(request=prediction_request)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TST4078882474816438272",
"payload": {
"textSnippet": {
"content": "@freewrytin God is way too good for Claritin",
"mimeType": "text/plain"
}
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
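# Sketch: read the predicted sentiment class and the score reported in the
# response metadata. `request` here is the PredictResponse returned by the
# predict call above; the class and score correspond to the values in the
# example output shown further below.
predicted_sentiment = request.payload[0].text_sentiment.sentiment
print("predicted sentiment:", predicted_sentiment)
print("sentiment score:", dict(request.metadata).get("sentiment_score"))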
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients["automl"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients["automl"].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
{
"payload": [
{
"textSentiment": {
"sentiment": 3
}
}
],
"metadata": {
"sentiment_score": "0.30955505"
}
}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |
6,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID
Step1: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http
Step5: Data visualization methods.
Step6: <a id="section2" href="http
Step7: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are based on two classes which are <a href="http
Step8: <a id ="section2c" href="http
Step9: <a id ="section2d" href="http
Step10: <a id ="section3" href="http
Step11: <a id ="section4" href="http
Step12: <a id ="section5" href="http
Step13: <a id ="section6" href="http
Step14: <a id ="section7" href="http
Step15: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http
Step16: <a id="section8">Putting It All Together</a> | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from modshogun import *
#Needed lists for the final plot
classifiers_linear = []*10
classifiers_non_linear = []*10
classifiers_names = []*10
fadings = []*10
Explanation: Visual Comparison Between Different Classification Methods in Shogun
Notebook by Youssef Emad El-Din (Github ID: <a href="https://github.com/youssef-emad/">youssef-emad</a>)
This notebook demonstrates different classification methods in Shogun. The point is to compare and visualize the decision boundaries of different classifiers on two different datasets, where one is linearly separable and one is not.
<a href ="#section1">Data Generation and Visualization</a>
<a href ="#section2">Support Vector Machine</a>
<a href ="#section2a">Linear SVM</a>
<a href ="#section2b">Gaussian Kernel</a>
<a href ="#section2c">Sigmoid Kernel</a>
<a href ="#section2d">Polynomial Kernel</a>
<a href ="#section3">Naive Bayes</a>
<a href ="#section4">Nearest Neighbors</a>
<a href ="#section5">Linear Discriminant Analysis</a>
<a href ="#section6">Quadratic Discriminat Analysis</a>
<a href ="#section7">Gaussian Process</a>
<a href ="#section7a">Logit Likelihood model</a>
<a href ="#section7b">Probit Likelihood model</a>
<a href ="#section8">Putting It All Together</a>
End of explanation
shogun_feats_linear = RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_features_train.dat')))
shogun_labels_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_linear_labels_train.dat')))
shogun_feats_non_linear = RealFeatures(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_features_train.dat')))
shogun_labels_non_linear = BinaryLabels(CSVFile(os.path.join(SHOGUN_DATA_DIR, 'toy/classifier_binary_2d_nonlinear_labels_train.dat')))
feats_linear = shogun_feats_linear.get_feature_matrix()
labels_linear = shogun_labels_linear.get_labels()
feats_non_linear = shogun_feats_non_linear.get_feature_matrix()
labels_non_linear = shogun_labels_non_linear.get_labels()
Explanation: <a id = "section1">Data Generation and Visualization</a>
Transformation of features to Shogun format using <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CDenseFeatures.html">RealFeatures</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CBinaryLabels.html">BinaryLabels</a> classes.
End of explanation
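# Quick look at the data sizes (sketch): number of dimensions and examples in
# the Shogun feature containers created above.
print("linear set: %d features x %d examples"
      % (shogun_feats_linear.get_num_features(), shogun_feats_linear.get_num_vectors()))
print("non-linear set: %d features x %d examples"
      % (shogun_feats_non_linear.get_num_features(), shogun_feats_non_linear.get_num_vectors()))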
def plot_binary_data(plot,X_train, y_train):
    """This function plots 2D binary data with different colors for different labels."""
plot.xlabel(r"$x$")
plot.ylabel(r"$y$")
plot.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plot.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
def compute_plot_isolines(classifier,features,size=200,fading=True):
    """This function computes the classification of points on the grid
    to get the decision boundaries used in plotting.
    """
x1 = np.linspace(1.2*min(features[0]), 1.2*max(features[0]), size)
x2 = np.linspace(1.2*min(features[1]), 1.2*max(features[1]), size)
x, y = np.meshgrid(x1, x2)
plot_features=RealFeatures(np.array((np.ravel(x), np.ravel(y))))
if fading == True:
plot_labels = classifier.apply(plot_features).get_values()
else:
plot_labels = classifier.apply(plot_features).get_labels()
z = plot_labels.reshape((size, size))
return x,y,z
def plot_model(plot,classifier,features,labels,fading=True):
    """This function plots an input classification model."""
x,y,z = compute_plot_isolines(classifier,features,fading=fading)
plot.pcolor(x,y,z,cmap='RdBu_r')
plot.contour(x, y, z, linewidths=1, colors='black')
plot_binary_data(plot,features, labels)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Linear Features")
plot_binary_data(plt,feats_linear, labels_linear)
plt.subplot(122)
plt.title("Non Linear Features")
plot_binary_data(plt,feats_non_linear, labels_non_linear)
Explanation: Data visualization methods.
End of explanation
plt.figure(figsize=(15,5))
c = 0.5
epsilon =1e-3
svm_linear = LibLinear(c,shogun_feats_linear,shogun_labels_linear)
svm_linear.set_liblinear_solver_type(L2R_L2LOSS_SVC)
svm_linear.set_epsilon(epsilon)
svm_linear.train()
classifiers_linear.append(svm_linear)
classifiers_names.append("SVM Linear")
fadings.append(True)
plt.subplot(121)
plt.title("Linear SVM - Linear Features")
plot_model(plt,svm_linear,feats_linear,labels_linear)
svm_non_linear = LibLinear(c,shogun_feats_non_linear,shogun_labels_non_linear)
svm_non_linear.set_liblinear_solver_type(L2R_L2LOSS_SVC)
svm_non_linear.set_epsilon(epsilon)
svm_non_linear.train()
classifiers_non_linear.append(svm_non_linear)
plt.subplot(122)
plt.title("Linear SVM - Non Linear Features")
plot_model(plt,svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id="section2" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSVM.html">Support Vector Machine</a>
<a id="section2a" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibLinear.html">Linear SVM</a>
Shogun provides <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLibLinear.html">Liblinear</a>, a library for large-scale linear learning focusing on SVMs, which is used here for classification.
End of explanation
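# Optional sanity check (sketch): training-set accuracy of the two linear SVMs,
# computed with Shogun's AccuracyMeasure on predictions for the training features.
acc = AccuracyMeasure()
print("linear features accuracy:",
      acc.evaluate(svm_linear.apply(shogun_feats_linear), shogun_labels_linear))
print("non-linear features accuracy:",
      acc.evaluate(svm_non_linear.apply(shogun_feats_non_linear), shogun_labels_non_linear))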
gaussian_c=0.7
gaussian_kernel_linear=GaussianKernel(shogun_feats_linear, shogun_feats_linear, 100)
gaussian_svm_linear=LibSVM(gaussian_c, gaussian_kernel_linear, shogun_labels_linear)
gaussian_svm_linear.train()
classifiers_linear.append(gaussian_svm_linear)
fadings.append(True)
gaussian_kernel_non_linear=GaussianKernel(shogun_feats_non_linear, shogun_feats_non_linear, 100)
gaussian_svm_non_linear=LibSVM(gaussian_c, gaussian_kernel_non_linear, shogun_labels_non_linear)
gaussian_svm_non_linear.train()
classifiers_non_linear.append(gaussian_svm_non_linear)
classifiers_names.append("SVM Gaussian Kernel")
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Gaussian Kernel - Linear Features")
plot_model(plt,gaussian_svm_linear,feats_linear,labels_linear)
plt.subplot(122)
plt.title("SVM Gaussian Kernel - Non Linear Features")
plot_model(plt,gaussian_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: SVM - Kernels
Shogun provides many options for using kernel functions. Kernels in Shogun are based on the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernel.html">CKernel</a> and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKernelMachine.html">CKernelMachine</a> base classes.
<a id ="section2b" href = "http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">Gaussian Kernel</a>
End of explanation
sigmoid_c = 0.9
sigmoid_kernel_linear = SigmoidKernel(shogun_feats_linear,shogun_feats_linear,200,1,0.5)
sigmoid_svm_linear = LibSVM(sigmoid_c, sigmoid_kernel_linear, shogun_labels_linear)
sigmoid_svm_linear.train()
classifiers_linear.append(sigmoid_svm_linear)
classifiers_names.append("SVM Sigmoid Kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Sigmoid Kernel - Linear Features")
plot_model(plt,sigmoid_svm_linear,feats_linear,labels_linear)
sigmoid_kernel_non_linear = SigmoidKernel(shogun_feats_non_linear,shogun_feats_non_linear,400,2.5,2)
sigmoid_svm_non_linear = LibSVM(sigmoid_c, sigmoid_kernel_non_linear, shogun_labels_non_linear)
sigmoid_svm_non_linear.train()
classifiers_non_linear.append(sigmoid_svm_non_linear)
plt.subplot(122)
plt.title("SVM Sigmoid Kernel - Non Linear Features")
plot_model(plt,sigmoid_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section2c" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSigmoidKernel.html">Sigmoid Kernel</a>
End of explanation
poly_c = 0.5
degree = 4
poly_kernel_linear = PolyKernel(shogun_feats_linear, shogun_feats_linear, degree, True)
poly_svm_linear = LibSVM(poly_c, poly_kernel_linear, shogun_labels_linear)
poly_svm_linear.train()
classifiers_linear.append(poly_svm_linear)
classifiers_names.append("SVM Polynomial kernel")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("SVM Polynomial Kernel - Linear Features")
plot_model(plt,poly_svm_linear,feats_linear,labels_linear)
poly_kernel_non_linear=PolyKernel(shogun_feats_non_linear, shogun_feats_non_linear, degree, True)
poly_svm_non_linear = LibSVM(poly_c, poly_kernel_non_linear, shogun_labels_non_linear)
poly_svm_non_linear.train()
classifiers_non_linear.append(poly_svm_non_linear)
plt.subplot(122)
plt.title("SVM Polynomial Kernel - Non Linear Features")
plot_model(plt,poly_svm_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section2d" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CPolyKernel.html">Polynomial Kernel</a>
End of explanation
multiclass_labels_linear = shogun_labels_linear.get_labels()
for i in range(0,len(multiclass_labels_linear)):
if multiclass_labels_linear[i] == -1:
multiclass_labels_linear[i] = 0
multiclass_labels_non_linear = shogun_labels_non_linear.get_labels()
for i in range(0,len(multiclass_labels_non_linear)):
if multiclass_labels_non_linear[i] == -1:
multiclass_labels_non_linear[i] = 0
shogun_multiclass_labels_linear = MulticlassLabels(multiclass_labels_linear)
shogun_multiclass_labels_non_linear = MulticlassLabels(multiclass_labels_non_linear)
naive_bayes_linear = GaussianNaiveBayes()
naive_bayes_linear.set_features(shogun_feats_linear)
naive_bayes_linear.set_labels(shogun_multiclass_labels_linear)
naive_bayes_linear.train()
classifiers_linear.append(naive_bayes_linear)
classifiers_names.append("Naive Bayes")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Naive Bayes - Linear Features")
plot_model(plt,naive_bayes_linear,feats_linear,labels_linear,fading=False)
naive_bayes_non_linear = GaussianNaiveBayes()
naive_bayes_non_linear.set_features(shogun_feats_non_linear)
naive_bayes_non_linear.set_labels(shogun_multiclass_labels_non_linear)
naive_bayes_non_linear.train()
classifiers_non_linear.append(naive_bayes_non_linear)
plt.subplot(122)
plt.title("Naive Bayes - Non Linear Features")
plot_model(plt,naive_bayes_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section3" href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianNaiveBayes.html">Naive Bayes</a>
End of explanation
number_of_neighbors = 10
distances_linear = EuclideanDistance(shogun_feats_linear, shogun_feats_linear)
knn_linear = KNN(number_of_neighbors,distances_linear,shogun_labels_linear)
knn_linear.train()
classifiers_linear.append(knn_linear)
classifiers_names.append("Nearest Neighbors")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Nearest Neighbors - Linear Features")
plot_model(plt,knn_linear,feats_linear,labels_linear,fading=False)
distances_non_linear = EuclideanDistance(shogun_feats_non_linear, shogun_feats_non_linear)
knn_non_linear = KNN(number_of_neighbors,distances_non_linear,shogun_labels_non_linear)
knn_non_linear.train()
classifiers_non_linear.append(knn_non_linear)
plt.subplot(122)
plt.title("Nearest Neighbors - Non Linear Features")
plot_model(plt,knn_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section4" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CKNN.html">Nearest Neighbors</a>
End of explanation
gamma = 0.1
lda_linear = LDA(gamma, shogun_feats_linear, shogun_labels_linear)
lda_linear.train()
classifiers_linear.append(lda_linear)
classifiers_names.append("LDA")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("LDA - Linear Features")
plot_model(plt,lda_linear,feats_linear,labels_linear)
lda_non_linear = LDA(gamma, shogun_feats_non_linear, shogun_labels_non_linear)
lda_non_linear.train()
classifiers_non_linear.append(lda_non_linear)
plt.subplot(122)
plt.title("LDA - Non Linear Features")
plot_model(plt,lda_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section5" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLDA.html">Linear Discriminant Analysis</a>
End of explanation
qda_linear = QDA(shogun_feats_linear, shogun_multiclass_labels_linear)
qda_linear.train()
classifiers_linear.append(qda_linear)
classifiers_names.append("QDA")
fadings.append(False)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("QDA - Linear Features")
plot_model(plt,qda_linear,feats_linear,labels_linear,fading=False)
qda_non_linear = QDA(shogun_feats_non_linear, shogun_multiclass_labels_non_linear)
qda_non_linear.train()
classifiers_non_linear.append(qda_non_linear)
plt.subplot(122)
plt.title("QDA - Non Linear Features")
plot_model(plt,qda_non_linear,feats_non_linear,labels_non_linear,fading=False)
Explanation: <a id ="section6" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CQDA.html">Quadratic Discriminant Analysis</a>
End of explanation
# create Gaussian kernel with width = 2.0
kernel = GaussianKernel(10, 2)
# create zero mean function
zero_mean = ZeroMean()
# create logit likelihood model
likelihood = LogitLikelihood()
# specify EP approximation inference method
inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood)
# create and train GP classifier, which uses the EP approximation specified above
gaussian_logit_linear = GaussianProcessClassification(inference_model_linear)
gaussian_logit_linear.train()
classifiers_linear.append(gaussian_logit_linear)
classifiers_names.append("Gaussian Process Logit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Logit - Linear Features")
plot_model(plt,gaussian_logit_linear,feats_linear,labels_linear)
inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear, zero_mean,
shogun_labels_non_linear, likelihood)
gaussian_logit_non_linear = GaussianProcessClassification(inference_model_non_linear)
gaussian_logit_non_linear.train()
classifiers_non_linear.append(gaussian_logit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Logit - Non Linear Features")
plot_model(plt,gaussian_logit_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section7" href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CGaussianProcessBinaryClassification.html">Gaussian Process</a>
<a id ="section7a">Logit Likelihood model</a>
Shogun's <a href= "http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CLogitLikelihood.html">CLogitLikelihood</a> and <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CEPInferenceMethod.html">CEPInferenceMethod</a> classes are used.
End of explanation
likelihood = ProbitLikelihood()
inference_model_linear = EPInferenceMethod(kernel, shogun_feats_linear, zero_mean, shogun_labels_linear, likelihood)
gaussian_probit_linear = GaussianProcessClassification(inference_model_linear)
gaussian_probit_linear.train()
classifiers_linear.append(gaussian_probit_linear)
classifiers_names.append("Gaussian Process Probit")
fadings.append(True)
plt.figure(figsize=(15,5))
plt.subplot(121)
plt.title("Gaussian Process - Probit - Linear Features")
plot_model(plt,gaussian_probit_linear,feats_linear,labels_linear)
inference_model_non_linear = EPInferenceMethod(kernel, shogun_feats_non_linear,
zero_mean, shogun_labels_non_linear, likelihood)
gaussian_probit_non_linear = GaussianProcessClassification(inference_model_non_linear)
gaussian_probit_non_linear.train()
classifiers_non_linear.append(gaussian_probit_non_linear)
plt.subplot(122)
plt.title("Gaussian Process - Probit - Non Linear Features")
plot_model(plt,gaussian_probit_non_linear,feats_non_linear,labels_non_linear)
Explanation: <a id ="section7b">Probit Likelihood model</a>
Shogun's <a href="http://www.shogun-toolbox.org/doc/en/current/classshogun_1_1CProbitLikelihood.html">CProbitLikelihood</a> class is used.
End of explanation
figure = plt.figure(figsize=(30,9))
plt.subplot(2,11,1)
plot_binary_data(plt,feats_linear, labels_linear)
for i in range(0,10):
plt.subplot(2,11,i+2)
plt.title(classifiers_names[i])
plot_model(plt,classifiers_linear[i],feats_linear,labels_linear,fading=fadings[i])
plt.subplot(2,11,12)
plot_binary_data(plt,feats_non_linear, labels_non_linear)
for i in range(0,10):
plt.subplot(2,11,13+i)
plot_model(plt,classifiers_non_linear[i],feats_non_linear,labels_non_linear,fading=fadings[i])
Explanation: <a id="section8">Putting It All Together</a>
End of explanation |
6,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>
Download
Step1: The if conditional
Step2: Nested conditionals
Step3: Elif
Step4: Logical operators | Python Code:
# Python language version
from platform import python_version
print('Versão da Linguagem Python Usada Neste Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 3</font>
Download: http://github.com/dsacademybr
End of explanation
# The if conditional
if 5 > 2:
print("Python funciona!")
# Statement If...Else
if 5 < 2:
print("Python funciona!")
else:
print("Algo está errado!")
6 > 3
3 > 7
4 < 8
4 >= 4
if 5 == 5:
print("Testando Python!")
if True:
print('Parece que Python funciona!')
# Careful with the syntax: the colon is missing, so this line raises a SyntaxError
if 4 > 3
print("Tudo funciona!")
# Careful with the syntax: corrected version, with the colon
if 4 > 3:
print("Tudo funciona!")
Explanation: The if conditional
End of explanation
idade = 18
if idade > 17:
print("Você pode dirigir!")
Nome = "Bob"
if idade > 13:
if Nome == "Bob":
print("Ok Bob, você está autorizado a entrar!")
else:
print("Desculpe, mas você não pode entrar!")
idade = 13
Nome = "Bob"
if idade >= 13 and Nome == "Bob":
print("Ok Bob, você está autorizado a entrar!")
idade = 12
Nome = "Bob"
if (idade >= 13) or (Nome == "Bob"):
print("Ok Bob, você está autorizado a entrar!")
Explanation: Nested conditionals
End of explanation
dia = "Terça"
if dia == "Segunda":
print("Hoje fará sol!")
else:
print("Hoje vai chover!")
if dia == "Segunda":
print("Hoje fará sol!")
elif dia == "Terça":
print("Hoje vai chover!")
else:
print("Sem previsão do tempo para o dia selecionado")
Explanation: Elif
End of explanation
idade = 18
nome = "Bob"
if idade > 17:
print("Você pode dirigir!")
idade = 18
if idade > 17 and nome == "Bob":
print("Autorizado!")
# Using more than one condition in the if clause
disciplina = input('Digite o nome da disciplina: ')
nota_final = input('Digite a nota final (entre 0 e 100): ')
if disciplina == 'Geografia' and int(nota_final) >= 70:
print('Você foi aprovado!')
else:
print('Lamento, acho que você precisa estudar mais!')
# Using more than one condition in the if clause and introducing placeholders
disciplina = input('Digite o nome da disciplina: ')
nota_final = input('Digite a nota final (entre 0 e 100): ')
semestre = input('Digite o semestre (1 a 4): ')
if disciplina == 'Geografia' and int(nota_final) >= 50 and int(semestre) != 1:
print('Você foi aprovado em %s com média final %r!' %(disciplina, nota_final))
else:
print('Lamento, acho que você precisa estudar mais!')
Explanation: Logical operators
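A small supplementary sketch (my own addition, not part of the original chapter) combining and, or and not in a single condition; the license flag is a hypothetical example variable:
# supplementary example: combining and / or / not
idade = 20
nome = "Bob"
tem_carteira = False
if (idade >= 18 and nome == "Bob") and not tem_carteira:
    print("Bob is old enough but still needs a license!")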
End of explanation |
6,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Write your post here.
Step1: Next, I'll make a new dataframe with just SeaWiFS-wavelength Rrs and chl and delete this superset to save space
Step2: What is this '0-chl' business?
Step3: So the data where chl is 0 looks funky to me as this does not seem to pop up in the Rrs data. So for now I'll drop that part of the data so as to avoid giving the algorithm a hard time.
Step4: No cleanup necessary but we need to bring stuff to a similar scale.
I'll try two ways
Step5: Above is best the formula that the GP came up with in 30 generations from a pool of 5000. Remember when you are looking at this, that you're in log space.
Reference, X{0
Step6: Now this is very interesting. The plot on the left suggest that the GP algorithm converged to a solution that fit the most frequently observed data. As a result it mimicks the field data quite well at values it is most frequently observed. As such it seem to deal with central values better than the OC4. A quick check is to compare an rmse for the two algorithms relative to the field data, with the entire data sets, and then with just some central values. | Python Code:
from gplearn.genetic import SymbolicRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.utils.random import check_random_state
import numpy as np
import pandas as pd
import seaborn as sb
from matplotlib import pyplot as pl
from matplotlib import rcParams
from IPython.display import Image
from scipy.io import loadmat
import pydotplus
from sklearn.metrics import mean_squared_error
from math import sqrt
% matplotlib inline
rcParams['axes.formatter.limits'] = (-2, 3)
rcParams['xtick.labelsize'] = 14
rcParams['ytick.labelsize'] = 14
rcParams['font.size'] = 16
fpath ='/accounts/ekarakoy/DATA/OWT/nomad_rrs4clustering.mat'
matlabData = loadmat(fpath)
wavelength = matlabData['wl'][0]
rrsBelow = matlabData['rrs_below']
chl = matlabData['chl_nomad'] # I might use this as an additional feature in clustering
labels = list(wavelength) + ['chl']
df = pd.DataFrame(data=np.hstack((rrsBelow, chl)), columns=labels)
df.head()
Explanation: Write your post here.
End of explanation
swfWvl = [411, 443, 489, 510, 555, 670]
dfSwf = df.loc[:, swfWvl + ['chl']]
del df
dfSwf.head()
dfSwf.describe()
Explanation: Next, I'll make a new dataframe with just SeaWiFS-wavelength Rrs and chl and delete this superset to save space
End of explanation
f,axs = pl.subplots(nrows=2, ncols=2, figsize=(12,6), sharex=True)
for band, ax in zip([443, 489, 510, 555], axs.ravel()):
ax.plot(dfSwf.loc[dfSwf.chl==0, 'chl'], dfSwf.loc[dfSwf.chl==0, band],
'ko',alpha=0.5, label=r'$chl=0$')
ax.plot(dfSwf.loc[dfSwf.chl!=0, 'chl'], dfSwf.loc[dfSwf.chl!=0, band],
'go', alpha=0.3,label=r'$chl \neq 0$')
ax.legend(loc='best', fontsize=14);
ax.set_xlabel('$chl_a$', fontsize=16)
ax.set_ylabel('$Rrs(%d)$' % band, fontsize=16)
Explanation: What is this '0-chl' business?
End of explanation
dfSwf['chl'].replace(0,np.NaN, inplace=True)
dfSwf.info()
dfSwf.dropna(inplace=True)
dfSwf.info()
dfSwf['maxBlue'] = dfSwf.loc[:,[443, 489, 510]].max(axis=1)
dfSwf.head()
dfSwf.info()
dfSwf.describe()
Explanation: So the data where chl is 0 looks funky to me as this does not seem to pop up in the Rrs data. So for now I'll drop that part of the data so as to avoid giving the algorithm a hard time.
End of explanation
Xlog = np.log10(dfSwf.loc[:,swfWvl ])
ylog = np.log10(dfSwf.loc[:,'chl']+1e-7)
X_train_log, X_test_log, y_train_log, y_test_log = train_test_split(Xlog, ylog,
test_size=0.33)
def CI(df):
# assumes df has rrs data at specific wavelengths
blue, green, red=443, 555, 670
ci = df[green] - (df[blue] + (green - blue) / (red - blue) * (df[red] - df[blue]))
return ci
def OC4(rrsMaxBlue, rrsGreen, log=True):
# maxblue is last column of rrsData
# note the log option to specify whether the data has already been log transformed
a=[0.3272, -2.9940, 2.7218, -1.2259, -0.5683]
if log:
poly = np.sum([a[i]*np.power((rrsMaxBlue - rrsGreen),i)
for i in range(1,5) ], axis=0)
else:
poly = np.sum([a[i]*np.power(np.log10(rrsMaxBlue/rrsGreen),i )
for i in range(1,5) ], axis=0)
poly+=a[0]
chl = np.power(10,poly)
return chl
# making sure the log option in oc4 works
logchlMdl = OC4(np.log10(dfSwf.maxBlue.values), np.log10(dfSwf[green].values),
log=True)
chlMdl = OC4(dfSwf.maxBlue.values, dfSwf[green].values,log=False)
pl.plot(logchlMdl, chlMdl);
#X_train comes from X, which was log transformed, so...
green = 555
maxBlueTrainLog = X_train_log[[443, 489, 510]].max(axis=1)
chlOC4_train = OC4(maxBlueTrainLog.values, X_train_log[green].values, log=True)
pl.figure(figsize=(6,6))
pl.plot(chlOC4_train,np.power(10,y_train_log),'ko',alpha=0.5)
pl.yscale('log')
pl.ylim(1e-2,1e2)
pl.xlim(1e-2,1e2)
pl.xscale('log')
pl.plot([1e-2,1e2],[1e-2,1e2],'k')
pl.xlabel('chl estimated by OC4', fontsize=16)
pl.ylabel('original chl', fontsize=16);
est_gp = SymbolicRegressor(population_size=5000,const_range=(-5,5),
generations=30,transformer=True,
trigonometric=False,
p_crossover=0.6, p_subtree_mutation=0.1,
p_hoist_mutation=0.05, p_point_mutation=0.2,
max_samples=0.9, verbose=1, comparison=True,
parsimony_coefficient=0.01, n_jobs=3)
est_gp.fit(X_train_log, y_train_log)
chlGP_train_log = est_gp.predict(X_train_log)
graph = pydotplus.graphviz.graph_from_dot_data(est_gp._program.export_graphviz())
Image(graph.create_png())
Explanation: No cleanup necessary but we need to bring stuff to a similar scale.
I'll try two ways: taking logs (chl is roughly lognormal, e.g. Campbell et al. '94, and this agrees with the OC4 formulation anyway); the other is to standardize the data.
I'll try both and see what I come up with.
First, I'll do the log transform and run the model.
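For the second way, a minimal sketch (my addition here; StandardScaler is imported above but otherwise unused) of standardizing the reflectances instead of log-transforming them:
# hedged sketch of the standardization route (not pursued further below)
scaler = StandardScaler()
Xstd = scaler.fit_transform(dfSwf.loc[:, swfWvl])
print(Xstd.mean(axis=0).round(3), Xstd.std(axis=0).round(3)) # ~0 mean and ~1 std per band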
End of explanation
def PlotCompRegs(fieldChl,oc4Chl,gpChl):
oc4Chl = np.log10(oc4Chl)
rmse0 = sqrt(mean_squared_error(fieldChl, gpChl))
rmse1 = sqrt(mean_squared_error(fieldChl, oc4Chl))
f,axs = pl.subplots(ncols=2, figsize=(10,6),sharey=True)
sb.regplot(gpChl, fieldChl, ax=axs[0])
axs[0].set_xlim((-2,2)), axs[0].set_ylim((-2,2))
axs[0].set_xlabel('log(chl) from GP fit', fontsize=16)
axs[0].set_ylabel('log(chl) from data',fontsize=16)
axs[0].plot([-2,2],[-2,2], 'k--', linewidth=2)
axs[0].plot(0, 0, alpha=0, label='rmse=%.2f' % rmse0)
axs[0].legend(loc='best', fontsize=16)
sb.regplot(oc4Chl, fieldChl, ax=axs[1])
axs[1].set_ylim((-2,2)), axs[1].set_xlim((-2,2))
axs[1].set_ylabel('')
axs[1].set_xlabel('log(chl) from OC4', fontsize=16)
axs[1].plot([-2,2],[-2,2],'k--', linewidth=2);
axs[1].plot(0, 0, alpha=0, label='rmse=%.2f' % rmse1)
axs[1].legend(loc='best', fontsize=16)
def PlotCompHists(fieldChl, oc4Chl, gpChl):
f,axs=pl.subplots(ncols=2,figsize=(12,6), sharey=True)
axs[0].hist(fieldChl, bins=50, range=(-2,2), label='field chl',color='gray',
normed=True);
axs[0].hist(np.log10(oc4Chl), bins=50, range=(-2,2), label='OC4 chl',
color='green', normed=True, alpha=0.3);
axs[0].legend(fontsize=16)
axs[1].hist(fieldChl, bins=50, range=(-2,2), label='field chl',color='gray',
normed=True);
axs[1].hist(gpChl, bins=50, range=(-2,2), label='GP chl', color='orange', alpha=0.3,
normed=True);
axs[1].legend(fontsize=16);
axs[0].set_xlabel('log(chl)', fontsize=16)
axs[0].set_ylabel('freq.')
axs[1].set_xlabel('log(chl)', fontsize=16);
PlotCompRegs(fieldChl=y_train_log, oc4Chl=chlOC4_train, gpChl=chlGP_train_log)
PlotCompHists(fieldChl=y_train_log, oc4Chl=chlOC4_train, gpChl=chlGP_train_log)
Explanation: Above is the best formula that the GP came up with in 30 generations from a population of 5000. Remember, when you are looking at this, that you're in log space.
Reference, X{0:412, 1:443, 2:489, 3:510, 4:555, 5:670}.
Here the algorithm is tuned in to the fact that some kind of blue-green ratio matters. Interestingly, it is the inverse of the OCx formulation.
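To read the evolved expression as text rather than as a graph, gplearn keeps the winning program on the fitted regressor, so a quick check is simply:
# the best program found by the GP run, printed as a plain expression
print(est_gp._program)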
End of explanation
oc4Msk = np.where((np.log10(chlOC4_train) <= 0.5) & (np.log10(chlOC4_train) >= -0.5))
est_gp.score(X_test_log, y_test_log)
#X_train comes from X, which was log transformed, so...
green = 555
maxBlueTestLog = X_test_log[[443, 489, 510]].max(axis=1)
chlOC4_test = OC4(maxBlueTestLog.values, X_test_log[green].values, log=True)
chlGP_test_log = est_gp.predict(X_test_log)
PlotCompRegs(fieldChl=y_test_log, oc4Chl=chlOC4_test, gpChl=chlGP_test_log)
PlotCompHists(fieldChl=y_test_log, oc4Chl=chlOC4_test, gpChl=chlGP_test_log)
Explanation: Now this is very interesting. The plot on the left suggests that the GP algorithm converged to a solution that fits the most frequently observed data. As a result it mimics the field data quite well at the values where it is most frequently observed. As such it seems to deal with central values better than the OC4. A quick check is to compare an RMSE for the two algorithms relative to the field data, first with the entire data sets and then with just some central values.
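A minimal sketch of that check on the test set (my addition; I take log10(chl) between -0.5 and 0.5 as a rough notion of "central" values):
central = (y_test_log >= -0.5) & (y_test_log <= 0.5)
rmse_gp_central = sqrt(mean_squared_error(y_test_log[central], chlGP_test_log[central]))
rmse_oc4_central = sqrt(mean_squared_error(y_test_log[central], np.log10(chlOC4_test[central])))
print('central-value RMSE - GP: %.2f, OC4: %.2f' % (rmse_gp_central, rmse_oc4_central))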
End of explanation |
6,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
After looking back at my previous notebook, I felt a strong urge to redo and restructure everything.
The previous version was essentially just making the building blocks out of which I will now assemble a complete parser, as a function rather than a sequence of cells.
I moved everything I need here and also added a few new things.
The end result I want is a cianParser() function that returns a DataFrame.
Step1: Here I will collect all the apartment links for each okrug and store them in the links dictionary.
I want to keep the links in a dictionary: the keys will be the okrugs and the values all the links found for that okrug.
Step2: Cian shows at most 30 result pages per query. That gives at most 30 pages x 25 apartments per page x 9 okrugs = 6750 listings, which seems fine. In practice, though, dropping duplicates left only about 900 listings. Why might that be? I noticed that some listings appear in more than one okrug (I even saw a building on Biryulyovskaya Street assigned not only to its own district but also to the Tverskoy district of the Central Okrug), and some listings show up at the top of the results on more than just the first page of a query. The exact reasons are not that important here, and there are surely more of them; what matters is that in the end I wanted a larger dataset.
So I decided to make my queries a bit more specific and walk through not the 9 okrugs but all of their districts (each okrug has quite a few, around 15-20). In total that meant crawling the results of 120 search queries.
Step3: Next comes the block of functions developed and explained in the previous notebook; here I will only state what each one is for.
Price
Compared with the previous notebook, this version also handles prices quoted in dollars, which I ran into for the first time here
Step4: Distance to the city center (helpers
Step5: Number of rooms
Here I changed the mult value to 6, since I decided that multi-room apartments will later get this number in this field.
Step6: Distance to the metro
Step7: Metro reachable on foot / by car
Step8: Building type
Step9: Floor and total number of floors
Step10: Total, living and kitchen area (+ the helper converter strToFloat() for numbers where cian, unlike python, uses a comma instead of a dot as the decimal separator)
I slightly fixed the converter, which failed on data that never actually occurs on Cian, but still
Step11: Added handling of the dash ('–') case
Step12: Absence/number of balconies, presence of a telephone
Step13: Now the new functions
These are split out purely for readability of the larger functions that contain these pieces of code
Step14: getInfo() returns the full information for an apartment, calling the functions listed above to compute the feature values
Step15: A two-function scheme seems convenient to me
Step16: The scheme is ready. Now let's deal with the links
The part where I saved the collected per-okrug links to the corresponding csv files was cut above. Now I will load that data back in.
Step17: I will do the same for all the remaining okrugs and stack everything vertically, since I will be dropping duplicates next.
Step18: NaN was the zeroth element of each okrug's link list. Let's drop them (the first dimension should shrink by 9)
Step19: Now let's drop the duplicates
Step20: So the data shrank a bit, but it will be enough.
Let's tidy up the links
Step21: Now the parser can be run
Step22: The data is in; let's save it and move on to cleaning and visualization. | Python Code:
import requests
import re
from bs4 import BeautifulSoup
import pandas as pd
import time
import numpy as np
def html_stripper(text):
return re.sub('<[^<]+?>', '', str(text))
Explanation: After looking back at my previous notebook, I felt a strong urge to redo and restructure everything.
The previous version was essentially just making the building blocks out of which I will now assemble a complete parser, as a function rather than a sequence of cells.
I moved everything I need here and also added a few new things.
The end result I want is a cianParser() function that returns a DataFrame.
End of explanation
links = dict([('NW' ,[np.nan]), ('C',[np.nan]), ('N',[np.nan]), ('NE',[np.nan]), ('E',[np.nan]), ('SE',[np.nan]), ('S',[np.nan]), ('SW',[np.nan]), ('W',[np.nan])])
Explanation: Here I will collect all the apartment links for each okrug and store them in the links dictionary.
I want to keep the links in a dictionary: the keys will be the okrugs and the values all the links found for that okrug.
End of explanation
districts = {1: 'NW', 4: 'C', 5:'N', 6:'NE', 7:'E', 8:'SE', 9:'S', 10:'SW', 11:'W'}
zone_bounds = {1: (125, 133), 4: (13, 23), 5: (23, 39), 6:(39, 56), 7:(56, 72), 8:(72, 84), 9:(84, 100), 10:(100, 112), 11:(112, 125)}
for i in districts.keys():
left = zone_bounds[i][0]
right = zone_bounds[i][1]
for j in range(left, right):
zone = 'http://www.cian.ru/cat.php?deal_type=sale&district%5B0%5D=' + str(j) + '&engine_version=2&offer_type=flat&room1=1&room2=1&room3=1&room4=1&room5=1&room6=1'
for page in range(1, 31):
page_url = zone.format(page)
search_page = requests.get(page_url)
search_page = search_page.content
search_page = BeautifulSoup(search_page, 'lxml')
flat_urls = search_page.findAll('div', attrs = {'ng-class':"{'serp-item_removed': offer.remove.state, 'serp-item_popup-opened': isPopupOpen}"})
flat_urls = re.split('http://www.cian.ru/sale/flat/|/" ng-class="', str(flat_urls))
for link in flat_urls:
if link.isdigit():
links[districts.get(i)].append(link)
Explanation: Cian shows at most 30 result pages per query. That gives at most 30 pages x 25 apartments per page x 9 okrugs = 6750 listings, which seems fine. In practice, though, dropping duplicates left only about 900 listings. Why might that be? I noticed that some listings appear in more than one okrug (I even saw a building on Biryulyovskaya Street assigned not only to its own district but also to the Tverskoy district of the Central Okrug), and some listings show up at the top of the results on more than just the first page of a query. The exact reasons are not that important here, and there are surely more of them; what matters is that in the end I wanted a larger dataset.
So I decided to make my queries a bit more specific and walk through not the 9 okrugs but all of their districts (each okrug has quite a few, around 15-20). In total that meant crawling the results of 120 search queries.
End of explanation
def getPrice(flat_page):
price = flat_page.find('div', attrs={'class':'object_descr_price'})
price = re.split('<div>|руб|\W', str(price))
price = "".join([i for i in price if i.isdigit()][-4:])
dollar = '808080'
if dollar in price:
price = price[6:]
return int(price)
Explanation: Next comes the block of functions developed and explained in the previous notebook; here I will only state what each one is for.
Price
Compared with the previous notebook, this version also handles prices quoted in dollars, which I ran into for the first time here
End of explanation
from math import radians, cos, sin, asin, sqrt
AVG_EARTH_RADIUS = 6371
def haversine(point1, point2):
# unpack the latitude and longitude of each point
lat1, lng1 = point1
lat2, lng2 = point2
# convert all of these values to radians
lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
# compute the distance with the haversine formula
lat = lat2 - lat1
lng = lng2 - lng1
d = sin(lat * 0.5) ** 2 + cos(lat1) * cos(lat2) * sin(lng * 0.5) ** 2
h = 2 * AVG_EARTH_RADIUS * asin(sqrt(d))
return h
def getCoords(flat_page):
coords = flat_page.find('div', attrs={'class':'map_info_button_extend'}).contents[1]
coords = re.split('&|center=|%2C', str(coords))
coords_list = []
for item in coords:
if item[0].isdigit():
coords_list.append(item)
lat = float(coords_list[0])
lon = float(coords_list[1])
return lat, lon
def getDistance(coords):
MSC_POINT_ZERO = (55.755831, 37.617673)
return haversine(MSC_POINT_ZERO, coords)
Explanation: Distance to the city center (helpers: the haversine formula for the distance between two points on a sphere, and coordinate extraction)
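A tiny sanity check of the haversine helper (my addition; the second coordinate pair is approximate): two points about a kilometre apart in central Moscow should give roughly 1.
# quick check: Kremlin zero point vs. a nearby point (~1 km away)
print(haversine((55.755831, 37.617673), (55.746, 37.615)))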
End of explanation
def getRoom(flat_page):
rooms_n = flat_page.find('div', attrs={'class':'object_descr_title'})
rooms_n = html_stripper(rooms_n)
room_number = ''
flag = 0
for i in re.split('-|\n', rooms_n):
if 'много' in i:
flag = 1
break
elif 'комн' in i:
break
else:
room_number += i
if (flag):
room_number = '6'
room_number = "".join(room_number.split())
return int(room_number)
Explanation: Number of rooms
Here I changed the mult value to 6, since I decided that multi-room apartments will later get this number in this field.
End of explanation
def getMetroDistance(flat_page):
metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
metro = re.split('metro_name|мин', str(metro))
if (len(metro) > 2):
metro_dist = 0
power = 0
flag = 0
for i in range(0, len(metro[1])):
if metro[1][-i-1].isdigit():
flag = 1
metro_dist += int(metro[1][-i-1]) * 10 ** power
power += 1
elif (flag == 1):
break
else:
metro_dist = np.nan
return metro_dist
Explanation: Distance to the metro
End of explanation
def getMetroWalking(flat_page):
metro = flat_page.find('div', attrs={'class':'object_descr_metro'})
metro = re.split('metro_name|мин', str(metro))
if (len(metro) > 2):
if 'пешк' in metro[2]:
walking = 1
elif 'машин' in metro[2]:
walking = 0
else:
walking = np.nan
else:
walking = np.nan
return walking
Explanation: Metro reachable on foot / by car
End of explanation
def getBrick(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
brick = np.nan
building_block = re.split('Этаж|Тип продажи', table)[1]
if 'Тип дом' in building_block:
if (('кирпич' in building_block) | ('монолит' in building_block)):
brick = 1
elif (('панельн' in building_block) | ('деревян' in building_block) | ('сталин' in building_block) |
('блочн' in building_block)):
brick = 0
return brick
def getNew(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
new = np.nan
building_block = re.split('Этаж|Тип продажи', table)[1]
if 'Тип дом' in building_block:
if 'новостр' in building_block:
new = 1
elif 'втор' in building_block:
new = 0
return new
Explanation: Building type: construction material, new build / resale
End of explanation
def getFloor(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
floor_is = 0
building_block = re.split('Этаж|Тип продажи', table)[1]
floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
for i in range(1, len(floor_block[2]) + 1):
if(floor_block[2][-i].isdigit()):
floor_is += int(floor_block[2][-i]) * 10**(i - 1)
return floor_is
def getNFloor(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
floors_count = np.nan
building_block = re.split('Этаж|Тип продажи', table)[1]
floor_block = re.split('\xa0/\xa0|\n|\xa0', building_block)
if floor_block[3].isdigit():
floors_count = int(floor_block[3])
return floors_count
Explanation: Floor and total number of floors
End of explanation
def myStrToFloat(string):
delimiter = 0
value = 0
for i in range(0, len(string)):
if string[i] == ',':
delimiter = i
for i in range(0, delimiter):
value += int(string[delimiter - i - 1]) * 10 ** i
for i in range(1, len(string) - delimiter):
value += (int(string[delimiter + i]) * (10 ** (-i)))
return value
def getTotsp(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
total = re.split('Площадь комнат', space_block)[0]
total_space = re.split('\n|\xa0', total)[2]
if total_space.isdigit():
total_space = int(total_space)
else:
total_space = myStrToFloat(total_space)
return total_space
Explanation: Total, living and kitchen area (+ the helper converter strToFloat() for numbers where cian, unlike python, uses a comma instead of a dot as the decimal separator)
I slightly fixed the converter, which failed on data that never actually occurs on Cian, but still
End of explanation
def getLivesp(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
living = re.split('Жилая площадь', space_block)[1]
living_space = re.split('\n|\xa0', living)[2]
if living_space.isdigit():
living_space = int(living_space)
elif (living_space == '–'):
living_space = np.nan
else:
living_space = myStrToFloat(living_space)
return living_space
def getKitsp(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
kitchen_space = np.nan
if 'Площадь кухни' in optional_block:
kitchen_block = re.split('Площадь кухни', optional_block)[1]
if re.split('\n|\xa0', kitchen_block)[2] != '–':
if re.split('\n|\xa0', kitchen_block)[2].isdigit():
kitchen_space = int(re.split('\n|\xa0', kitchen_block)[2])
else:
kitchen_space = myStrToFloat(re.split('\n|\xa0', kitchen_block)[2])
return kitchen_space
Explanation: Added handling of the dash ('–') case
End of explanation
def getBal(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
balcony = np.nan
if 'Балкон' in optional_block:
balcony_block = re.split('Балкон', optional_block)[1]
if re.split('\n', balcony_block)[1] != 'нет':
if re.split('\n', balcony_block)[1] != '–':
balcony = int(re.split('\n', balcony_block)[1][0])
else:
balcony = 0
return balcony
def getTel(flat_page):
table = flat_page.find('table', attrs = {'class':'object_descr_props'})
table = html_stripper(table)
space_block = re.split('Общая площадь', table)[1]
optional_block = re.split('Жилая площадь', space_block)[1]
telephone = np.nan
if 'Телефон' in optional_block:
telephone_block = re.split('Телефон', optional_block)[1]
if re.split('\n', telephone_block)[1] == 'да':
telephone = 1
elif re.split('\n', telephone_block)[1] == 'нет':
telephone = 0
return telephone
Explanation: Absence/number of balconies, presence of a telephone
End of explanation
def getFlatPage(link):
flat_url = 'http://www.cian.ru/sale/flat/' + str(link) + '/'
flat_page = requests.get(flat_url)
flat_page = flat_page.content
flat_page = BeautifulSoup(flat_page, 'lxml')
return flat_page
def getFlatUrl(page):
page_url = district.format(page)
search_page = requests.get(page_url)
search_page = search_page.content
search_page = BeautifulSoup(search_page, 'lxml')
flat_url = search_page.findAll('div', attrs = {'ng-class':"{'serp-item_removed': offer.remove.state, 'serp-item_popup-opened': isPopupOpen}"})
flat_url = re.split('http://www.cian.ru/sale/flat/|/" ng-class="', str(flat_url))
return flat_url
Explanation: Now the new functions
These are split out purely for readability of the larger functions that contain these pieces of code
End of explanation
def getInfo(link):
flat_page = getFlatPage(link)
price = getPrice(flat_page)
coords = getCoords(flat_page)
distance = getDistance(coords)
rooms = getRoom(flat_page)
metrdist = getMetroDistance(flat_page)
metro_walking = getMetroWalking(flat_page)
brick = getBrick(flat_page)
new = getNew(flat_page)
floor = getFloor(flat_page)
nfloors = getNFloor(flat_page)
bal = getBal(flat_page)
kitsp = getKitsp(flat_page)
livesp = getLivesp(flat_page)
tel = getTel(flat_page)
totsp = getTotsp(flat_page)
walk = getMetroWalking(flat_page)
info = [bal, brick, distance, floor, kitsp, livesp, metrdist, new, nfloors, price, rooms, tel, totsp, walk]
return info
Explanation: getInfo() returns the full information for an apartment, calling the functions above to compute the feature values
End of explanation
def districtParser(links):
apartments = []
for link in links:
apartment = getInfo(link)
apartment.append(link)
apartments.append(apartment)
return apartments
districts
def cianParser(districts, links):
tmp = dict([(0 ,[np.nan]), (1,[np.nan]), (2,[np.nan]), (3,[np.nan]), (4,[np.nan]), (5,[np.nan]), (6,[np.nan]), (7,[np.nan]), (8,[np.nan]), (9,[np.nan]), (10,[np.nan]), (11,[np.nan]), ('Distr', [np.nan])])
data = pd.DataFrame(tmp)
for i in districts.keys():
district_name = districts.get(i)
tmp_links = links[links['Distr'] == district_name]
tmp_links = tmp_links['link']
data_tmp = pd.DataFrame(districtParser(tmp_links))
data_tmp['Distr'] = district_name
data = data.append(data_tmp)
print('district', districts.get(i), 'is done!')
return data
Explanation: A two-function scheme seems convenient to me: a per-okrug parser and a top-level parser. The top-level parser calls the per-okrug parser for every okrug of interest (there are 9 of them; I am not considering Zelenograd, New Moscow, etc.)
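One design note: DataFrame.append is deprecated in recent pandas, so a hedged equivalent of the accumulation step (assuming a newer pandas) would be:
# same accumulation step written with pd.concat instead of the deprecated DataFrame.append
data = pd.concat([data, data_tmp], ignore_index=True)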
End of explanation
full_links = pd.read_csv('/Users/tatanakuzenko/lbNW.csv')
full_links['Distr'] = 'NW'
full_links.head()
Explanation: The scheme is ready. Now let's deal with the links
The part where I saved the collected per-okrug links to the corresponding csv files was cut above. Now I will load that data back in.
End of explanation
districts_cut = {4: 'C', 5:'N', 6:'NE', 7:'E', 8:'SE', 9:'S', 10:'SW', 11:'W'}
for i in districts_cut.values():
links_append = pd.read_csv('/Users/tatanakuzenko/lb' + i + '.csv')
links_append['Distr'] = i
print(links_append.shape)
full_links = full_links.append(links_append)
full_links.shape
Explanation: I will do the same for all the remaining okrugs and stack everything vertically, since I will be dropping duplicates next.
End of explanation
full_links = full_links.dropna()
full_links.shape
Explanation: NaN was the zeroth element of each okrug's link list. Let's drop them (the first dimension should shrink by 9)
End of explanation
full_links = full_links.drop_duplicates()
full_links.shape
Explanation: Now let's drop the duplicates
End of explanation
full_links.index = [x for x in range(len(full_links.index))]
full_links.rename(columns={'0' : 'link'}, inplace = True)
full_links['link'] = full_links['link'].astype(np.int32)
full_links.head()
Explanation: So the data shrank a bit, but it will be enough.
Let's tidy up the links: set proper indices, rename the 0 column to link, and cast that column's values to int.
End of explanation
data = cianParser(districts, full_links)
data.head()
data.shape
Explanation: Now the parser can be run
End of explanation
data.to_csv('cian_full_data.csv', index = False)
Explanation: The data is in; let's save it and move on to cleaning and visualization.
End of explanation |
6,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
StackOverflow Question Multiclassification (Keras)
This example uses a dataset from StackOverflow, with posts as the input and one of the possible tags as the predicted class. It's an adapted version from the Keras example available at https
Step1: Set Up Verta
Step2: Imports
Step3: Data preparation
Download the dataset, load and reduce the number of examples so that the example runs faster.
Step5: Pre-process the data removing unnecessary characters so that we have only the main text of the post left.
Step6: Now we split the dataset into the training and test sets.
Step7: We use keras for tokenization, with a trained tokenizer on the given corpus. This will learn what words to use based on their frequency.
Step8: Finally, transform the text input into a numeric array and encode the labels as one-hot.
Step9: Model training
Define and train the numeric Keras model.
Step10: Evaluate the model quality.
Step11: Deployment
Specify the requirements for model prediction time, then create the model API.
Step12: Let's verify that the model API looks like what we are expecting
Step13: Now we define the model wrapper for prediction. It's more complicated than a regular wrapper because Keras models can't be serialized with the rest of the class, so we have to save them as hdf5 and load once at prediction time.
Step14: Verify that the predict method behaves as we'd expect, since it will be called by the deployment.
Step15: Finally, save the model information necessary for deployment.
Step16: Now we use the demo library to query the model one example at a time.
Step17: Deploy the model through the Web App, then make predictions through the server. | Python Code:
# Python 3.6
!pip install verta
!pip install wget
!pip install pandas
!pip install tensorflow==1.14.0
!pip install scikit-learn
!pip install lxml
!pip install beautifulsoup4
Explanation: StackOverflow Question Multiclassification (Keras)
This example uses a dataset from StackOverflow, with posts as the input and one of the possible tags as the predicted class. It's adapted from the Keras example available at https://towardsdatascience.com/multi-class-text-classification-model-comparison-and-selection-5eb066197568.
It shows how to use Keras to train a tokenizer and a model, and how to build a custom model wrapper for deploying this hybrid pipeline.
Set Up Environment
This notebook has been tested with the following packages:
(you may need to change pip to pip3, depending on your own Python environment)
End of explanation
HOST = 'app.verta.ai'
PROJECT_NAME = 'Text Classification'
EXPERIMENT_NAME = 'basic-clf'
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST, use_git=False)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
run = client.set_experiment_run()
Explanation: Set Up Verta
End of explanation
from __future__ import absolute_import, division, print_function, unicode_literals
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import os
import re
import wget
from tensorflow import keras
from sklearn.preprocessing import LabelBinarizer, LabelEncoder
from bs4 import BeautifulSoup
import numpy as np
import pandas as pd
Explanation: Imports
End of explanation
if not os.path.exists('stack-overflow-data.csv'):
wget.download('https://storage.googleapis.com/tensorflow-workshop-examples/stack-overflow-data.csv')
df = pd.read_csv('stack-overflow-data.csv')
df = df[pd.notnull(df['tags'])]
df = df[:5000]
print(df.head(10))
Explanation: Data preparation
Download the dataset, load and reduce the number of examples so that the example runs faster.
End of explanation
REPLACE_BY_SPACE_RE = re.compile('[/(){}\[\]\|@,;]')
BAD_SYMBOLS_RE = re.compile('[^0-9a-z #+_]')
def clean_text(text):
# text: a string
# return: modified initial string
text = BeautifulSoup(text, "lxml").text # HTML decoding
text = text.lower() # lowercase text
text = REPLACE_BY_SPACE_RE.sub(' ', text) # replace REPLACE_BY_SPACE_RE symbols by space in text
text = BAD_SYMBOLS_RE.sub('', text) # delete symbols which are in BAD_SYMBOLS_RE from text
return text
df['post'] = df['post'].apply(clean_text)
print(df.head(10))
Explanation: Pre-process the data removing unnecessary characters so that we have only the main text of the post left.
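A quick illustration (the sample post is invented) of what the cleaning step does to one string:
print(clean_text("<p>How do I sort a <b>list</b> of dicts in Python 3?</p>"))
# -> 'how do i sort a list of dicts in python 3'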
End of explanation
train_size = int(len(df) * .7)
train_posts = df['post'][:train_size]
train_tags = df['tags'][:train_size]
test_posts = df['post'][train_size:]
test_tags = df['tags'][train_size:]
Explanation: Now we split the dataset into the training and test sets.
End of explanation
max_words = 1000
tokenize = keras.preprocessing.text.Tokenizer(num_words=max_words, char_level=False)
tokenize.fit_on_texts(train_posts) # only fit on train
Explanation: We use keras for tokenization, with a trained tokenizer on the given corpus. This will learn what words to use based on their frequency.
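A small check of what the fitted tokenizer gives us: every post becomes a fixed-length bag-of-words row over the top max_words tokens.
print(len(tokenize.word_index)) # size of the vocabulary seen while fitting
print(tokenize.texts_to_matrix(["how to sort a list in python"]).shape) # (1, max_words)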
End of explanation
x_train = tokenize.texts_to_matrix(train_posts)
x_test = tokenize.texts_to_matrix(test_posts)
encoder = LabelEncoder()
encoder.fit(train_tags)
y_train = encoder.transform(train_tags)
y_test = encoder.transform(test_tags)
num_classes = np.max(y_train) + 1
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
run.log_attribute('classes', encoder.classes_.tolist())
Explanation: Finally, transform the text input into a numeric array and encode the labels as one-hot.
End of explanation
hyperparams = {
'hidden_size': 512,
'dropout': 0.2,
'batch_size': 1024,
'num_epochs': 2,
'optimizer': "adam",
'loss': "categorical_crossentropy",
'validation_split': 0.1,
}
run.log_hyperparameters(hyperparams)
# Build the model
model = keras.models.Sequential()
model.add(keras.layers.Dense(hyperparams['hidden_size'], input_shape=(max_words,)))
model.add(keras.layers.Activation('relu'))
model.add(keras.layers.Dropout(hyperparams['dropout']))
model.add(keras.layers.Dense(num_classes))
model.add(keras.layers.Activation('softmax'))
model.compile(loss=hyperparams['loss'],
optimizer=hyperparams['optimizer'],
metrics=['accuracy'])
# create a per-epoch callback for logging
def log_validation_callback(epoch, logs): # Keras will call this each epoch
run.log_observation("train_loss", float(logs['loss']))
run.log_observation("train_acc", float(logs['acc']))
run.log_observation("val_loss", float(logs['val_loss']))
run.log_observation("val_acc", float(logs['val_acc']))
history = model.fit(x_train, y_train,
batch_size=hyperparams['batch_size'],
epochs=hyperparams['num_epochs'],
verbose=1,
validation_split=hyperparams['validation_split'],
callbacks=[keras.callbacks.LambdaCallback(on_epoch_end=log_validation_callback)])
Explanation: Model training
Define and train the numeric Keras model.
End of explanation
train_loss, train_acc = model.evaluate(x_train, y_train,
batch_size=hyperparams['batch_size'], verbose=1)
run.log_metric("train_loss", train_loss)
run.log_metric("train_acc", train_acc)
Explanation: Evaluate the model quality.
End of explanation
import six, tensorflow, bs4, sklearn
requirements = [
"numpy",
"tensorflow",
"beautifulsoup4",
"scikit-learn",
]
model_api = ModelAPI(["blah", "blah", "blah"], y_test)
Explanation: Deployment
Specify the requirements for model prediction time, then create the model API.
End of explanation
model_api.to_dict()
Explanation: Let's verify that the model API looks like what we are expecting: a string as the input and multiple numbers as the output.
End of explanation
class ModelWrapper:
def __init__(self, keras_model, tokenizer):
# save Keras model
import six # this comes installed with Verta
self.keras_model_hdf5 = six.BytesIO()
keras_model.save(self.keras_model_hdf5)
self.keras_model_hdf5.seek(0)
self.tokenizer = tokenizer
def __setstate__(self, state):
import tensorflow
# restore instance attributes
self.__dict__.update(state)
# load Keras model
self.graph = tensorflow.Graph()
with self.graph.as_default():
self.session = tensorflow.Session()
with self.session.as_default():
self.keras_model = tensorflow.keras.models.load_model(state['keras_model_hdf5'])
def predict(self, data):
import numpy, tensorflow
tokenized_input = self.tokenizer.texts_to_matrix(data)
if hasattr(self, 'keras_model'):
with self.session.as_default():
with self.graph.as_default():
return self.keras_model.predict(tokenized_input)
else: # not unpickled
model = tensorflow.keras.models.load_model(self.keras_model_hdf5)
return model.predict(tokenized_input)
model_wrapper = ModelWrapper(model, tokenize)
Explanation: Now we define the model wrapper for prediction. It's more complicated than a regular wrapper because Keras models can't be serialized with the rest of the class, so we have to save them as hdf5 and load once at prediction time.
End of explanation
model_wrapper.predict(["foo bar baz"])
Explanation: Verify that the predict method behaves as we'd expect, since it will be called by the deployment.
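To make the raw probability vector easier to read, a short sketch (my addition) maps the arg-max back to a tag name with the fitted LabelEncoder:
probs = model.predict(tokenize.texts_to_matrix(["foo bar baz"]))
print(encoder.classes_[np.argmax(probs[0])]) # most likely tag for this toy input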
End of explanation
run.log_model(model_wrapper, model_api=model_api)
run.log_requirements(requirements)
Explanation: Finally, save the model information necessary for deployment.
End of explanation
from verta._demo_utils import DeployedModel
deployed_model = DeployedModel(HOST, run.id)
run
Explanation: Now we use the demo library to query the model one example at a time.
End of explanation
import itertools, time
for x in itertools.cycle(test_posts.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
Explanation: Deploy the model through the Web App, then make predictions through the server.
End of explanation |
6,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Logistic Regression in TensorFlow 2.0
Learning Objectives
Load a CSV file using Pandas
Create train, validation, and test sets
Define and train a model using Keras (including setting class weights)
Evaluate the model using various metrics (including precision and recall)
Try common techniques for dealing with imbalanced data like
Step1: In the next cell, we're going to customize our Matplotlib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
Step2: Data processing and exploration
Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note
Step3: Now, let's view the statistics of the raw dataframe.
Step4: Examine the class label imbalance
Let's look at the dataset imbalance
Step5: This shows the small fraction of positive samples.
Clean, split and normalize the data
The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.
Step6: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
Step7: Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note
Step8: Caution
Step9: Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent
Step10: Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
False negatives and false positives are samples that were incorrectly classified
True negatives and true positives are samples that were correctly classified
Accuracy is the percentage of examples correctly classified
$\frac{\text{true samples}}{\text{total samples}}$
Precision is the percentage of predicted positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false positives}}$
Recall is the percentage of actual positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note
Step11: Test run the model
Step12: Optional
Step13: The correct bias to set can be derived from
Step14: Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near
Step15: With this initialization the initial loss should be approximately
Step16: This initial loss is about 50 times less than it would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
Step17: Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses
Step18: The above figure makes it clear
Step19: Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this tutorial.
Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
Step20: Note
Step21: Evaluate your model on the test dataset and display the results for the metrics you created above.
Step22: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Plot the ROC
Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
Step23: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Class weights
Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want to have the classifier heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
Step24: Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note
Step25: Check training history
Step26: Evaluate metrics
Step27: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application.
Plot the ROC
Step28: Oversampling
Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
Step29: Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples
Step30: Using tf.data
If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.
Step31: Each dataset provides (feature, label) pairs
Step32: Merge the two together using experimental.sample_from_datasets
Step33: To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once
Step34: Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note
Step35: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal
Step36: Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.
Step37: Re-check training history
Step38: Evaluate metrics
Step39: Plot the ROC | Python Code:
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
import tensorflow as tf
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow import keras
print("TensorFlow version: ", tf.version.VERSION)
Explanation: Advanced Logistic Regression in TensorFlow 2.0
Learning Objectives
Load a CSV file using Pandas
Create train, validation, and test sets
Define and train a model using Keras (including setting class weights)
Evaluate the model using various metrics (including precision and recall)
Try common techniques for dealing with imbalanced data like:
Class weighting and
Oversampling
Introduction
This lab demonstrates how to classify a highly imbalanced dataset in which the number of examples in one class greatly outnumbers the examples in another. You will work with the Credit Card Fraud Detection dataset hosted on Kaggle. The aim is to detect a mere 492 fraudulent transactions from 284,807 transactions in total. You will use Keras to define the model and class weights to help the model learn from the imbalanced data.
PENDING LINK UPDATE: Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Start by importing the necessary libraries for this lab.
End of explanation
mpl.rcParams["figure.figsize"] = (12, 10)
colors = plt.rcParams["axes.prop_cycle"].by_key()["color"]
Explanation: In the next cell, we're going to customize our Matplot lib visualization figure size and colors. Note that each time Matplotlib loads, it defines a runtime configuration (rc) containing the default styles for every plot element we create. This configuration can be adjusted at any time using the plt.rc convenience routine.
End of explanation
file = tf.keras.utils
raw_df = pd.read_csv(
"https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv"
)
raw_df.head()
Explanation: Data processing and exploration
Download the Kaggle Credit Card Fraud data set
Pandas is a Python library with many helpful utilities for loading and working with structured data and can be used to download CSVs into a dataframe.
Note: This dataset has been collected and analysed during a research collaboration of Worldline and the Machine Learning Group of ULB (Université Libre de Bruxelles) on big data mining and fraud detection. More details on current and past projects on related topics are available here and the page of the DefeatFraud project
End of explanation
raw_df[
[
"Time",
"V1",
"V2",
"V3",
"V4",
"V5",
"V26",
"V27",
"V28",
"Amount",
"Class",
]
].describe()
Explanation: Now, let's view the statistics of the raw dataframe.
End of explanation
neg, pos = np.bincount(raw_df["Class"])
total = neg + pos
print(
"Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n".format(
total, pos, 100 * pos / total
)
)
Explanation: Examine the class label imbalance
Let's look at the dataset imbalance:
End of explanation
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop("Time")
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df["Log Amount"] = np.log(cleaned_df.pop("Amount") + eps)
Explanation: This shows the small fraction of positive samples.
Clean, split and normalize the data
The raw data has a few issues. First the Time and Amount columns are too variable to use directly. Drop the Time column (since it's not clear what it means) and take the log of the Amount column to reduce its range.
End of explanation
# TODO 1
# Use a utility from sklearn to split and shuffle our dataset.
train_df, test_df = #TODO: Your code goes here.
train_df, val_df = #TODO: Your code goes here.
# Form np arrays of labels and features.
train_labels = #TODO: Your code goes here.
bool_train_labels = #TODO: Your code goes here.
val_labels = #TODO: Your code goes here.
test_labels = #TODO: Your code goes here.
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
Explanation: Split the dataset into train, validation, and test sets. The validation set is used during the model fitting to evaluate the loss and any metrics, however the model is not fit with this data. The test set is completely unused during the training phase and is only used at the end to evaluate how well the model generalizes to new data. This is especially important with imbalanced datasets where overfitting is a significant concern from the lack of training data.
End of explanation
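One reasonable way to fill in the TODOs above is the standard sklearn pattern sketched below; the split fractions are an assumption (any consistent choice works), and the remaining lines of the cell stay as they are.
# A possible completion of the split/label TODOs (test_size values are a choice, not a requirement).
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
train_labels = np.array(train_df.pop("Class"))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop("Class"))
test_labels = np.array(test_df.pop("Class"))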
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print("Training labels shape:", train_labels.shape)
print("Validation labels shape:", val_labels.shape)
print("Test labels shape:", test_labels.shape)
print("Training features shape:", train_features.shape)
print("Validation features shape:", val_features.shape)
print("Test features shape:", test_features.shape)
Explanation: Normalize the input features using the sklearn StandardScaler.
This will set the mean to 0 and standard deviation to 1.
Note: The StandardScaler is only fit using the train_features to be sure the model is not peeking at the validation or test sets.
End of explanation
pos_df = pd.DataFrame(
train_features[bool_train_labels], columns=train_df.columns
)
neg_df = pd.DataFrame(
train_features[~bool_train_labels], columns=train_df.columns
)
sns.jointplot(
pos_df["V5"], pos_df["V6"], kind="hex", xlim=(-5, 5), ylim=(-5, 5)
)
plt.suptitle("Positive distribution")
sns.jointplot(
neg_df["V5"], neg_df["V6"], kind="hex", xlim=(-5, 5), ylim=(-5, 5)
)
_ = plt.suptitle("Negative distribution")
Explanation: Caution: If you want to deploy a model, it's critical that you preserve the preprocessing calculations. The easiest way is to implement them as layers and attach them to your model before export.
Look at the data distribution
Next compare the distributions of the positive and negative examples over a few features. Good questions to ask yourself at this point are:
Do these distributions make sense?
Yes. You've normalized the input and these are mostly concentrated in the +/- 2 range.
Can you see the difference between the distributions?
Yes the positive examples contain a much higher rate of extreme values.
End of explanation
METRICS = [
keras.metrics.TruePositives(name="tp"),
keras.metrics.FalsePositives(name="fp"),
keras.metrics.TrueNegatives(name="tn"),
keras.metrics.FalseNegatives(name="fn"),
keras.metrics.BinaryAccuracy(name="accuracy"),
keras.metrics.Precision(name="precision"),
keras.metrics.Recall(name="recall"),
keras.metrics.AUC(name="auc"),
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
# TODO 1
model = keras.Sequential(
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
)
model.compile(
optimizer=keras.optimizers.Adam(lr=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics,
)
return model
Explanation: Define the model and metrics
Define a function that creates a simple neural network with a densely connected hidden layer, a dropout layer to reduce overfitting, and an output sigmoid layer that returns the probability of a transaction being fraudulent:
End of explanation
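The Sequential body left as a TODO could look like the sketch below; the layer sizes and dropout rate are one reasonable choice, not a requirement of the lab.
# One possible body for make_model(): a dense hidden layer, dropout, and a sigmoid output.
model = keras.Sequential(
    [
        keras.layers.Dense(
            16, activation="relu", input_shape=(train_features.shape[-1],)
        ),
        keras.layers.Dropout(0.5),
        keras.layers.Dense(
            1, activation="sigmoid", bias_initializer=output_bias
        ),
    ]
)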
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor="val_auc",
verbose=1,
patience=10,
mode="max",
restore_best_weights=True,
)
model = make_model()
model.summary()
Explanation: Understanding useful metrics
Notice that there are a few metrics defined above that can be computed by the model that will be helpful when evaluating the performance.
False negatives and false positives are samples that were incorrectly classified
True negatives and true positives are samples that were correctly classified
Accuracy is the percentage of examples correctly classified
$\frac{\text{true samples}}{\text{total samples}}$
Precision is the percentage of predicted positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false positives}}$
Recall is the percentage of actual positives that were correctly classified
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC refers to the Area Under the Curve of a Receiver Operating Characteristic curve (ROC-AUC). This metric is equal to the probability that a classifier will rank a random positive sample higher than a random negative sample.
Note: Accuracy is not a helpful metric for this task. You can get 99.8%+ accuracy on this task by predicting False all the time.
Read more:
* True vs. False and Positive vs. Negative
* Accuracy
* Precision and Recall
* ROC-AUC
Baseline model
Build the model
Now create and train your model using the function that was defined earlier. Notice that the model is fit using a larger-than-default batch size of 2048; this is important to ensure that each batch has a decent chance of containing a few positive samples. If the batch size were too small, the batches would likely have no fraudulent transactions to learn from.
Note: this model will not handle the class imbalance well. You will improve it later in this tutorial.
End of explanation
model.predict(train_features[:10])
Explanation: Test run the model:
End of explanation
results = model.evaluate(
train_features, train_labels, batch_size=BATCH_SIZE, verbose=0
)
print(f"Loss: {results[0]:0.4f}")
Explanation: Optional: Set the correct initial bias.
These initial guesses are not great. You know the dataset is imbalanced. Set the output layer's bias to reflect that (See: A Recipe for Training Neural Networks: "init well"). This can help with initial convergence.
With the default bias initialization the loss should be about math.log(2) = 0.69314
End of explanation
initial_bias = np.log([pos / neg])
initial_bias
Explanation: The correct bias to set can be derived from:
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$
$$ b_0 = -log_e(1/p_0 - 1) $$
$$ b_0 = log_e(pos/neg)$$
End of explanation
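For intuition, plugging the class counts printed earlier (492 positives out of 284,807 examples, i.e. 284,315 negatives) into the last line gives a concrete value:
# Quick numeric check of the derivation: log_e(pos/neg) with pos=492, neg=284315.
np.log(492 / 284315)  # ≈ -6.36, which is roughly what `initial_bias` above evaluates to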
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
Explanation: Set that as the initial bias, and the model will give much more reasonable initial guesses.
It should be near: pos/total = 0.0018
End of explanation
results = model.evaluate(
train_features, train_labels, batch_size=BATCH_SIZE, verbose=0
)
print(f"Loss: {results[0]:0.4f}")
Explanation: With this initialization the initial loss should be approximately:
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
End of explanation
initial_weights = os.path.join(tempfile.mkdtemp(), "initial_weights")
model.save_weights(initial_weights)
Explanation: This initial loss is about 50 times less than it would have been with naive initialization.
This way the model doesn't need to spend the first few epochs just learning that positive examples are unlikely. This also makes it easier to read plots of the loss during training.
Checkpoint the initial weights
To make the various training runs more comparable, keep this initial model's weights in a checkpoint file, and load them into each model before training.
End of explanation
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0,
)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0,
)
def plot_loss(history, label, n):
# Use a log scale to show the wide range of values.
plt.semilogy(
history.epoch,
history.history["loss"],
color=colors[n],
label="Train " + label,
)
plt.semilogy(
history.epoch,
history.history["val_loss"],
color=colors[n],
label="Val " + label,
linestyle="--",
)
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
Explanation: Confirm that the bias fix helps
Before moving on, quickly confirm that the careful bias initialization actually helped.
Train the model for 20 epochs, with and without this careful initialization, and compare the losses:
End of explanation
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
)
Explanation: The above figure makes it clear: In terms of validation loss, on this problem, this careful initialization gives a clear advantage.
Train the model
End of explanation
def plot_metrics(history):
metrics = ["loss", "auc", "precision", "recall"]
for n, metric in enumerate(metrics):
name = metric.replace("_", " ").capitalize()
plt.subplot(2, 2, n + 1)
plt.plot(
history.epoch,
history.history[metric],
color=colors[0],
label="Train",
)
plt.plot(
history.epoch,
history.history["val_" + metric],
color=colors[0],
linestyle="--",
label="Val",
)
plt.xlabel("Epoch")
plt.ylabel(name)
if metric == "loss":
plt.ylim([0, plt.ylim()[1]])
elif metric == "auc":
plt.ylim([0.8, 1])
else:
plt.ylim([0, 1])
plt.legend()
plot_metrics(baseline_history)
Explanation: Check training history
In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this tutorial.
Additionally, you can produce these plots for any of the metrics you created above; the plot_metrics helper here plots the loss, AUC, precision, and recall.
End of explanation
# TODO 1
train_predictions_baseline = #TODO: Your code goes here.
test_predictions_baseline = #TODO: Your code goes here.
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5, 5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title(f"Confusion matrix @{p:.2f}")
plt.ylabel("Actual label")
plt.xlabel("Predicted label")
print("Legitimate Transactions Detected (True Negatives): ", cm[0][0])
print(
"Legitimate Transactions Incorrectly Detected (False Positives): ",
cm[0][1],
)
print("Fraudulent Transactions Missed (False Negatives): ", cm[1][0])
print("Fraudulent Transactions Detected (True Positives): ", cm[1][1])
print("Total Fraudulent Transactions: ", np.sum(cm[1]))
Explanation: Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.
Evaluate metrics
You can use a confusion matrix to summarize the actual vs. predicted labels where the X axis is the predicted label and the Y axis is the actual label.
End of explanation
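The prediction TODOs here (and the analogous ones for the weighted and resampled models later on) are plain model.predict calls, for example:
# Possible completion: predicted probabilities on the train and test sets.
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)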
baseline_results = model.evaluate(
test_features, test_labels, batch_size=BATCH_SIZE, verbose=0
)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ": ", value)
print()
plot_cm(test_labels, test_predictions_baseline)
Explanation: Evaluate your model on the test dataset and display the results for the metrics you created above.
End of explanation
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100 * fp, 100 * tp, label=name, linewidth=2, **kwargs)
plt.xlabel("False positives [%]")
plt.ylabel("True positives [%]")
plt.xlim([-0.5, 20])
plt.ylim([80, 100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect("equal")
plot_roc(
"Train Baseline", train_labels, train_predictions_baseline, color=colors[0]
)
plot_roc(
"Test Baseline",
test_labels,
test_predictions_baseline,
color=colors[0],
linestyle="--",
)
plt.legend(loc="lower right")
Explanation: If the model had predicted everything perfectly, this would be a diagonal matrix where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that there were relatively few legitimate transactions that were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.
Plot the ROC
Now plot the ROC. This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
End of explanation
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
# TODO 1
weight_for_0 = #TODO: Your code goes here.
weight_for_1 = #TODO: Your code goes here.
class_weight = #TODO: Your code goes here.
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
Explanation: It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.
Class weights
Calculate class weights
The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want the classifier to heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
End of explanation
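Following the scaling described in the comments above (each class weight is inversely proportional to its frequency, scaled by total/2), the TODOs could be filled in like this:
# Inverse-frequency class weights, scaled so the total weight stays roughly the same.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}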
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight,
)
Explanation: Train a model with class weights
Now try re-training and evaluating the model with class weights to see how that affects the predictions.
Note: Using class_weights changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like optimizers.SGD, may fail. The optimizer used here, optimizers.Adam, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
End of explanation
plot_metrics(weighted_history)
Explanation: Check training history
End of explanation
# TODO 1
train_predictions_weighted = #TODO: Your code goes here.
test_predictions_weighted = #TODO: Your code goes here.
weighted_results = weighted_model.evaluate(
test_features, test_labels, batch_size=BATCH_SIZE, verbose=0
)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ": ", value)
print()
plot_cm(test_labels, test_predictions_weighted)
Explanation: Evaluate metrics
End of explanation
plot_roc(
"Train Baseline", train_labels, train_predictions_baseline, color=colors[0]
)
plot_roc(
"Test Baseline",
test_labels,
test_predictions_baseline,
color=colors[0],
linestyle="--",
)
plot_roc(
"Train Weighted", train_labels, train_predictions_weighted, color=colors[1]
)
plot_roc(
"Test Weighted",
test_labels,
test_predictions_weighted,
color=colors[1],
linestyle="--",
)
plt.legend(loc="lower right")
Explanation: Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade offs between these different types of errors for your application.
Plot the ROC
End of explanation
# TODO 1
pos_features = #TODO: Your code goes here.
neg_features = train_features[~bool_train_labels]
pos_labels = #TODO: Your code goes here.
neg_labels = #TODO: Your code goes here.
Explanation: Oversampling
Oversample the minority class
A related approach would be to resample the dataset by oversampling the minority class.
End of explanation
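The remaining TODOs here are just boolean indexing with the label mask, mirroring the neg_features line already given:
# Possible completion: select the positive rows and the matching labels.
pos_features = train_features[bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]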
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
Explanation: Using NumPy
You can balance the dataset manually by choosing the right number of random
indices from the positive examples:
End of explanation
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels)) # .cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
Explanation: Using tf.data
If you're using tf.data the easiest way to produce balanced examples is to start with a positive and a negative dataset, and merge them. See the tf.data guide for more examples.
End of explanation
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
Explanation: Each dataset provides (feature, label) pairs:
End of explanation
resampled_ds = tf.data.experimental.sample_from_datasets(
[pos_ds, neg_ds], weights=[0.5, 0.5]
)
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
Explanation: Merge the two together using experimental.sample_from_datasets:
End of explanation
resampled_steps_per_epoch = np.ceil(2.0 * neg / BATCH_SIZE)
resampled_steps_per_epoch
Explanation: To use this dataset, you'll need the number of steps per epoch.
The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
End of explanation
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds,
)
Explanation: Train on the oversampled data
Now try training the model with the resampled data set instead of using class weights to see how these methods compare.
Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
End of explanation
plot_metrics(resampled_history)
Explanation: If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting.
But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: Instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight.
This smoother gradient signal makes it easier to train the model.
Check training history
Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
End of explanation
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10 * EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds),
)
Explanation: Re-train
Because training is easier on the balanced data, the above training procedure may overfit quickly.
So break up the epochs to give the callbacks.EarlyStopping finer control over when to stop training.
End of explanation
plot_metrics(resampled_history)
Explanation: Re-check training history
End of explanation
# TODO 1
train_predictions_resampled = #TODO: Your code goes here.
test_predictions_resampled = #TODO: Your code goes here.
resampled_results = resampled_model.evaluate(
test_features, test_labels, batch_size=BATCH_SIZE, verbose=0
)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ": ", value)
print()
plot_cm(test_labels, test_predictions_resampled)
Explanation: Evaluate metrics
End of explanation
plot_roc(
"Train Baseline", train_labels, train_predictions_baseline, color=colors[0]
)
plot_roc(
"Test Baseline",
test_labels,
test_predictions_baseline,
color=colors[0],
linestyle="--",
)
plot_roc(
"Train Weighted", train_labels, train_predictions_weighted, color=colors[1]
)
plot_roc(
"Test Weighted",
test_labels,
test_predictions_weighted,
color=colors[1],
linestyle="--",
)
plot_roc(
"Train Resampled",
train_labels,
train_predictions_resampled,
color=colors[2],
)
plot_roc(
"Test Resampled",
test_labels,
test_predictions_resampled,
color=colors[2],
linestyle="--",
)
plt.legend(loc="lower right")
Explanation: Plot the ROC
End of explanation |
6,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vocabulary
In the previous parts, you learned how matplotlib organizes plot-making by figures and axes. We broke down the components of a basic figure and learned how to create them. You also learned how to add one or more axes to a figure, and how to tie them together. You even learned how to change some of the basic appearances of the axes. Finally, we went over some of the many plotting methods that matplotlib has to draw on those axes. With all that knowledge, you should be off making great and wonderful figures.
Of course! While the previous sections may have taught you some of the structure and syntax of matplotlib, they did not describe much of the substance and vocabulary of the library. This section will go over many of the properties that are used throughout the library. Note that while many of the examples in this section may show one way of setting a particular property, that property may be applicable elsewhere in a completely different context. This is the "language" of matplotlib.
Colors
This is, perhaps, the most important piece of vocabulary in matplotlib. Given that matplotlib is a plotting library, colors are associated with everything that is plotted in your figures. Matplotlib supports a very robust language for specifying colors that should be familiar to a wide variety of users.
Colornames
First, colors can be given as strings. For very basic colors, you can even get away with just a single letter
Step1: Markers
Markers are commonly used in plot() and scatter() plots, but also show up elsewhere. There is a wide set of markers available, and custom markers can even be specified.
marker | description ||marker | description ||marker | description ||marker | description
Step2: Exercise 3.2
Try out some different markers and colors
Step3: Linestyles
Line styles are about as commonly used as colors. There are a few predefined linestyles available to use. Note that there are some advanced techniques to specify some custom line styles. Here is an example of a custom dash pattern.
linestyle | description
-------------------|------------------------------
'-' | solid
'--' | dashed
'-.' | dashdot
'
Step4: It is a bit confusing, but the line styles mentioned above are only valid for lines. Whenever you are dealing with the linestyles of the edges of "Patch" objects, you will need to use words instead of the symbols. So "solid" instead of "-", and "dashdot" instead of "-.". This issue is be fixed for the v2.1 release and allow these specifications to be used interchangably.
Step5: Plot attributes
With just about any plot you can make, there are many attributes that can be modified to make the lines and markers suit your needs. Note that for many plotting functions, matplotlib will cycle the colors for each dataset you plot. However, you are free to explicitly state which colors you want used for which plots. For the plt.plot() and plt.scatter() functions, you can mix the specification for the colors, linestyles, and markers in a single string.
Step6: | Property | Value Type
|------------------------|-------------------------------------------------
|alpha | float
|color or c | any matplotlib color
|dash_capstyle | ['butt', 'round' 'projecting']
|dash_joinstyle | ['miter' 'round' 'bevel']
|dashes | sequence of on/off ink in points
|drawstyle | [ ‘default’ ‘steps’ ‘steps-pre’
| | ‘steps-mid’ ‘steps-post’ ]
|linestyle or ls | [ '-' '--' '-.' '
Step8: Colormaps
Another very important property of many figures is the colormap. The job of a colormap is to relate a scalar value to a color. In addition to the regular portion of the colormap, an "over", "under" and "bad" color can be optionally defined as well. NaNs will trigger the "bad" part of the colormap.
As we all know, we create figures in order to convey information visually to our readers. Much care and consideration have gone into the design of these colormaps. Your choice of colormap depends on what you are displaying. In mpl, the "jet" colormap has historically been used by default, but it will often not be the colormap you would want to use.
Step9: When colormaps are created in mpl, they get "registered" with a name. This allows one to specify a colormap to use by name.
Step10: Mathtext
Oftentimes, you simply need that superscript or some other math text in your labels. Matplotlib provides a very easy way to do this for those familiar with LaTeX. Any text that is surrounded by dollar signs will be treated as "mathtext". Do note that because backslashes are prevalent in LaTeX, it is often a good idea to prepend an r to your string literal so that Python will not treat the backslashes as escape characters.
Step11: Annotations and Arrows
There are two ways one can place arbitrary text anywhere they want on a plot. The first is a simple text(). Then there is the fancier annotate() function that can help "point out" what you want to annotate.
Step12: There are all sorts of boxes for your text, and arrows you can use, and there are many different ways to connect the text to the point that you want to annotate. For a complete tutorial on this topic, go to the Annotation Guide. In the meantime, here is a table of the kinds of arrows that can be drawn
Step13: Exercise 3.4
Point out a local minimum with a fancy red arrow.
Step14: Hatches
A Patch object can have a hatching defined for it.
/ - diagonal hatching
\ - back diagonal
| - vertical
- - horizontal
+ - crossed
x - crossed diagonal
o - small circle
O - large circle (upper-case 'o')
. - dots
* - stars
Letters can be combined, in which case all the specified
hatchings are done. If same letter repeats, it increases the
density of hatching of that pattern.
Ugly tie contest! | Python Code:
# %load exercises/3.1-colors.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, , t, t**2, , t, t**3, )
plt.show()
Explanation: Vocabulary
In the previous parts, you learned how matplotlib organizes plot-making by figures and axes. We broke down the components of a basic figure and learned how to create them. You also learned how to add one or more axes to a figure, and how to tie them together. You even learned how to change some of the basic appearances of the axes. Finally, we went over some of the many plotting methods that matplotlib has to draw on those axes. With all that knowledge, you should be off making great and wonderful figures.
Of course! While the previous sections may have taught you some of the structure and syntax of matplotlib, they did not describe much of the substance and vocabulary of the library. This section will go over many of the properties that are used throughout the library. Note that while many of the examples in this section may show one way of setting a particular property, that property may be applicable elsewhere in a completely different context. This is the "language" of matplotlib.
Colors
This is, perhaps, the most important piece of vocabulary in matplotlib. Given that matplotlib is a plotting library, colors are associated with everything that is plotted in your figures. Matplotlib supports a very robust language for specifying colors that should be familiar to a wide variety of users.
Colornames
First, colors can be given as strings. For very basic colors, you can even get away with just a single letter:
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
Other allowed colornames are the HTML/CSS colornames, such as "burlywood" and "chartreuse". See the full list of the 147 colornames. Matplotlib also accepts "grey" wherever "gray" appears in that list of colornames. All of these colornames are case-insensitive.
Hex values
Colors can also be specified by supplying an HTML/CSS hex string, such as '#0000FF' for blue.
256 Shades of Gray
A gray level can be given instead of a color by passing a string representation of a number between 0 and 1, inclusive. '0.0' is black, while '1.0' is white. '0.75' would be a lighter shade of gray.
RGB[A] tuples
You may come upon instances where the previous ways of specifying colors do not work. This can sometimes happen in some of the deeper, stranger levels of the code. When all else fails, the universal language of colors for matplotlib is the RGB[A] tuple. This is the "Red", "Green", "Blue", and sometimes "Alpha" tuple of floats in the range of [0, 1]. One means full saturation of that channel, so a red RGBA tuple would be (1.0, 0.0, 0.0, 1.0), whereas a partly transparent green RGBA tuple would be (0.0, 1.0, 0.0, 0.75). The documentation will usually specify whether it accepts RGB or RGBA tuples. Sometimes, a list of tuples would be required for multiple colors, and you can even supply a Nx3 or Nx4 numpy array in such cases.
In functions such as plot() and scatter(), while it may appear that they can take a color specification, what they really need is a "format specification", which includes color as part of the format. Unfortunately, such specifications are string only and so RGB[A] tuples are not supported for such arguments (but you can still pass an RGB[A] tuple for a "color" argument).
Note that oftentimes there is a separate argument for "alpha" wherever you can specify a color. The value for "alpha" will usually take precedence over the alpha value in the RGBA tuple. There is no easy way around this problem.
End of explanation
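One possible fill-in for Exercise 3.1; any valid color specifications will do, and the specific choices below are only an illustration:
# A possible solution sketch for Exercise 3.1.
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, 'r', t, t**2, 'b', t, t**3, 'g')   # single-letter colors in the format strings
plt.plot(t, -t, color='burlywood')                # an HTML/CSS colorname via the color kwarg
plt.plot(t, -t**2, color=(0.1, 0.2, 0.5, 0.9))    # an RGBA tuple via the color kwarg
plt.show()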
xs, ys = np.mgrid[:4, 9:0:-1]
markers = [".", "+", ",", "x", "o", "D", "d", "", "8", "s", "p", "*", "|", "_", "h", "H", 0, 4, "<", "3",
1, 5, ">", "4", 2, 6, "^", "2", 3, 7, "v", "1", "None", None, " ", ""]
descripts = ["point", "plus", "pixel", "cross", "circle", "diamond", "thin diamond", "",
"octagon", "square", "pentagon", "star", "vertical bar", "horizontal bar", "hexagon 1", "hexagon 2",
"tick left", "caret left", "triangle left", "tri left", "tick right", "caret right", "triangle right", "tri right",
"tick up", "caret up", "triangle up", "tri up", "tick down", "caret down", "triangle down", "tri down",
"Nothing", "Nothing", "Nothing", "Nothing"]
fig, ax = plt.subplots(1, 1, figsize=(14, 4))
for x, y, m, d in zip(xs.T.flat, ys.T.flat, markers, descripts):
ax.scatter(x, y, marker=m, s=100)
ax.text(x + 0.1, y - 0.1, d, size=14)
ax.set_axis_off()
plt.show()
Explanation: Markers
Markers are commonly used in plot() and scatter() plots, but also show up elsewhere. There is a wide set of markers available, and custom markers can even be specified.
marker | description ||marker | description ||marker | description ||marker | description
:----------|:--------------||:---------|:--------------||:---------|:--------------||:---------|:--------------
"." | point ||"+" | plus ||"," | pixel ||"x" | cross
"o" | circle ||"D" | diamond ||"d" | thin_diamond || |
"8" | octagon ||"s" | square ||"p" | pentagon ||"*" | star
"|" | vertical line||"_" | horizontal line ||"h" | hexagon1 ||"H" | hexagon2
0 | tickleft ||4 | caretleft ||"<" | triangle_left ||"3" | tri_left
1 | tickright ||5 | caretright ||">" | triangle_right||"4" | tri_right
2 | tickup ||6 | caretup ||"^" | triangle_up ||"2" | tri_up
3 | tickdown ||7 | caretdown ||"v" | triangle_down ||"1" | tri_down
"None" | nothing ||None | nothing ||" " | nothing ||"" | nothing
End of explanation
# %load exercises/3.2-markers.py
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, , t, t**2, , t, t**3, )
plt.show()
Explanation: Exercise 3.2
Try out some different markers and colors
End of explanation
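One possible fill-in for Exercise 3.2, combining colors and markers in the format strings (pick whichever markers you like):
# A possible solution sketch for Exercise 3.2: red triangles, blue squares, green stars.
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, 'r^', t, t**2, 'bs', t, t**3, 'g*')
plt.show()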
fig = plt.figure()
t = np.arange(0.0, 5.0, 0.2)
plt.plot(t, t, '-', t, t**2, '--', t, t**3, '-.', t, -t, ':')
plt.show()
Explanation: Linestyles
Line styles are about as commonly used as colors. There are a few predefined linestyles available to use. Note that there are some advanced techniques to specify some custom line styles. Here is an example of a custom dash pattern.
linestyle | description
-------------------|------------------------------
'-' | solid
'--' | dashed
'-.' | dashdot
':' | dotted
'None' | draw nothing
' ' | draw nothing
'' | draw nothing
Also, don't mix up ".-" (line with dot markers) and "-." (dash-dot line) when using the plot function!
End of explanation
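The explanation above mentions custom dash patterns without showing one; a minimal sketch (the particular on/off sequence is arbitrary) looks like this:
# A custom on/off dash pattern, in points: 8 on, 4 off, 2 on, 4 off.
fig, ax = plt.subplots(1, 1)
t = np.arange(0.0, 5.0, 0.1)
ax.plot(t, np.sin(t), lw=2, dashes=[8, 4, 2, 4])  # equivalently: line.set_dashes([8, 4, 2, 4])
plt.show()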
fig, ax = plt.subplots(1, 1)
ax.bar([1, 2, 3, 4], [10, 20, 15, 13], ls='dashed', ec='r', lw=5)
plt.show()
Explanation: It is a bit confusing, but the line styles mentioned above are only valid for lines. Whenever you are dealing with the linestyles of the edges of "Patch" objects, you will need to use words instead of the symbols. So "solid" instead of "-", and "dashdot" instead of "-.". This issue will be fixed for the v2.1 release, allowing these specifications to be used interchangeably.
End of explanation
fig = plt.figure()
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
Explanation: Plot attributes
With just about any plot you can make, there are many attributes that can be modified to make the lines and markers suit your needs. Note that for many plotting functions, matplotlib will cycle the colors for each dataset you plot. However, you are free to explicitly state which colors you want used for which plots. For the plt.plot() and plt.scatter() functions, you can mix the specification for the colors, linestyles, and markers in a single string.
End of explanation
# %load exercises/3.3-properties.py
t = np.arange(0.0, 5.0, 0.1)
a = np.exp(-t) * np.cos(2*np.pi*t)
plt.plot(t, a, )
plt.show()
Explanation: | Property | Value Type
|------------------------|-------------------------------------------------
|alpha | float
|color or c | any matplotlib color
|dash_capstyle | ['butt', 'round' 'projecting']
|dash_joinstyle | ['miter' 'round' 'bevel']
|dashes | sequence of on/off ink in points
|drawstyle | [ ‘default’ ‘steps’ ‘steps-pre’
| | ‘steps-mid’ ‘steps-post’ ]
|linestyle or ls | [ '-' '--' '-.' ':' 'None' ' ' '']
| | and any drawstyle in combination with a
| | linestyle, e.g. 'steps--'.
|linewidth or lw | float value in points
|marker | [ 0 1 2 3 4 5 6 7 'o' 'd' 'D' 'h' 'H'
| | '' 'None' ' ' None '8' 'p' ','
| | '+' 'x' '.' 's' '*' '_' '|'
| | '1' '2' '3' '4' 'v' '<' '>' '^' ]
|markeredgecolor or mec | any matplotlib color
|markeredgewidth or mew | float value in points
|markerfacecolor or mfc | any matplotlib color
|markersize or ms | float
|solid_capstyle | ['butt' 'round' 'projecting']
|solid_joinstyle | ['miter' 'round' 'bevel']
|visible | [True False]
|zorder | any number
Exercise 3.3
Make a plot that has a dotted red line, with large yellow diamond markers that have a green edge
End of explanation
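One way to satisfy Exercise 3.3's requirements (dotted red line, large yellow diamond markers with a green edge); the exact marker size is a choice:
# A possible solution sketch for Exercise 3.3.
t = np.arange(0.0, 5.0, 0.1)
a = np.exp(-t) * np.cos(2 * np.pi * t)
plt.plot(t, a, color='red', linestyle=':', marker='D', markersize=12,
         markerfacecolor='yellow', markeredgecolor='green')
plt.show()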
# %load http://matplotlib.org/mpl_examples/color/colormaps_reference.py # For those with v1.2 or higher
"""
Reference for colormaps included with Matplotlib.
This reference example shows all colormaps included with Matplotlib. Note that
any colormap listed here can be reversed by appending "_r" (e.g., "pink_r").
These colormaps are divided into the following categories:
Sequential:
These colormaps are approximately monochromatic colormaps varying smoothly
between two color tones---usually from low saturation (e.g. white) to high
saturation (e.g. a bright blue). Sequential colormaps are ideal for
representing most scientific data since they show a clear progression from
low-to-high values.
Diverging:
These colormaps have a median value (usually light in color) and vary
smoothly to two different color tones at high and low values. Diverging
colormaps are ideal when your data has a median value that is significant
(e.g. 0, such that positive and negative values are represented by
different colors of the colormap).
Qualitative:
These colormaps vary rapidly in color. Qualitative colormaps are useful for
choosing a set of discrete colors. For example::
color_list = plt.cm.Set3(np.linspace(0, 1, 12))
gives a list of RGB colors that are good for plotting a series of lines on
a dark background.
Miscellaneous:
Colormaps that don't fit into the categories above.
"""
import numpy as np
import matplotlib.pyplot as plt
# Have colormaps separated into categories:
# http://matplotlib.org/examples/color/colormaps_reference.html
cmaps = [('Perceptually Uniform Sequential',
['viridis', 'inferno', 'plasma', 'magma']),
('Sequential', ['Blues', 'BuGn', 'BuPu',
'GnBu', 'Greens', 'Greys', 'Oranges', 'OrRd',
'PuBu', 'PuBuGn', 'PuRd', 'Purples', 'RdPu',
'Reds', 'YlGn', 'YlGnBu', 'YlOrBr', 'YlOrRd']),
('Sequential (2)', ['afmhot', 'autumn', 'bone', 'cool',
'copper', 'gist_heat', 'gray', 'hot',
'pink', 'spring', 'summer', 'winter']),
('Diverging', ['BrBG', 'bwr', 'coolwarm', 'PiYG', 'PRGn', 'PuOr',
'RdBu', 'RdGy', 'RdYlBu', 'RdYlGn', 'Spectral',
'seismic']),
('Qualitative', ['Accent', 'Dark2', 'Paired', 'Pastel1',
'Pastel2', 'Set1', 'Set2', 'Set3']),
('Miscellaneous', ['gist_earth', 'terrain', 'ocean', 'gist_stern',
'brg', 'CMRmap', 'cubehelix',
'gnuplot', 'gnuplot2', 'gist_ncar',
'nipy_spectral', 'jet', 'rainbow',
'gist_rainbow', 'hsv', 'flag', 'prism'])]
nrows = max(len(cmap_list) for cmap_category, cmap_list in cmaps)
gradient = np.linspace(0, 1, 256)
gradient = np.vstack((gradient, gradient))
def plot_color_gradients(cmap_category, cmap_list):
fig, axes = plt.subplots(nrows=nrows)
fig.subplots_adjust(top=0.95, bottom=0.01, left=0.2, right=0.99)
axes[0].set_title(cmap_category + ' colormaps', fontsize=14)
for ax, name in zip(axes, cmap_list):
ax.imshow(gradient, aspect='auto', cmap=plt.get_cmap(name))
pos = list(ax.get_position().bounds)
x_text = pos[0] - 0.01
y_text = pos[1] + pos[3]/2.
fig.text(x_text, y_text, name, va='center', ha='right', fontsize=10)
# Turn off *all* ticks & spines, not just the ones with colormaps.
for ax in axes:
ax.set_axis_off()
for cmap_category, cmap_list in cmaps:
plot_color_gradients(cmap_category, cmap_list)
plt.show()
Explanation: Colormaps
Another very important property of many figures is the colormap. The job of a colormap is to relate a scalar value to a color. In addition to the regular portion of the colormap, an "over", "under" and "bad" color can be optionally defined as well. NaNs will trigger the "bad" part of the colormap.
As we all know, we create figures in order to convey information visually to our readers. Much care and consideration have gone into the design of these colormaps. Your choice of colormap depends on what you are displaying. In mpl, the "jet" colormap has historically been used by default, but it will often not be the colormap you would want to use.
End of explanation
fig, (ax1, ax2) = plt.subplots(1, 2)
z = np.random.random((10, 10))
ax1.imshow(z, interpolation='none', cmap='gray')
ax2.imshow(z, interpolation='none', cmap='coolwarm')
plt.show()
Explanation: When colormaps are created in mpl, they get "registered" with a name. This allows one to specify a colormap to use by name.
End of explanation
fig = plt.figure()
plt.scatter([1, 2, 3, 4], [4, 3, 2, 1])
plt.title(r'$\sigma_{i=15}$', fontsize=20)
plt.show()
Explanation: Mathtext
Oftentimes, you simply need that superscript or some other math text in your labels. Matplotlib provides a very easy way to do this for those familiar with LaTeX. Any text that is surrounded by dollar signs will be treated as "mathtext". Do note that because backslashes are prevalent in LaTeX, it is often a good idea to prepend an r to your string literal so that Python will not treat the backslashes as escape characters.
End of explanation
fig = plt.figure()
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(4, 1.5),
arrowprops=dict(facecolor='black', shrink=0.0))
plt.ylim(-2, 2)
plt.show()
Explanation: Annotations and Arrows
There are two ways one can place arbitrary text anywhere they want on a plot. The first is a simple text(). Then there is the fancier annotate() function that can help "point out" what you want to annotate.
End of explanation
import matplotlib.patches as mpatches
styles = mpatches.ArrowStyle.get_styles()
ncol = 2
nrow = (len(styles)+1) // ncol
figheight = (nrow+0.5)
fig = plt.figure(figsize=(4.0*ncol/0.85, figheight/0.85))
fontsize = 0.4 * 70
ax = fig.add_axes([0, 0, 1, 1])
ax.set_xlim(0, 4*ncol)
ax.set_ylim(0, figheight)
def to_texstring(s):
s = s.replace("<", r"$<$")
s = s.replace(">", r"$>$")
s = s.replace("|", r"$|$")
return s
for i, (stylename, styleclass) in enumerate(sorted(styles.items())):
x = 3.2 + (i//nrow)*4
y = (figheight - 0.7 - i%nrow)
p = mpatches.Circle((x, y), 0.2, fc="w")
ax.add_patch(p)
ax.annotate(to_texstring(stylename), (x, y),
(x-1.2, y),
ha="right", va="center",
size=fontsize,
arrowprops=dict(arrowstyle=stylename,
patchB=p,
shrinkA=50,
shrinkB=5,
fc="w", ec="k",
connectionstyle="arc3,rad=-0.25",
),
bbox=dict(boxstyle="square", fc="w"))
ax.set_axis_off()
plt.show()
Explanation: There are all sorts of boxes for your text, and arrows you can use, and there are many different ways to connect the text to the point that you want to annotate. For a complete tutorial on this topic, go to the Annotation Guide. In the meantime, here is a table of the kinds of arrows that can be drawn
End of explanation
# %load exercises/3.4-arrows.py
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2*np.pi*t)
plt.plot(t, s, lw=2)
plt.annotate('local max', xy=(2, 1), xytext=(3, 1.5),
arrowprops=dict())
plt.ylim(-2, 2)
plt.show()
Explanation: Exercise 3.4
Point out a local minimum with a fancy red arrow.
End of explanation
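One way to do Exercise 3.4 with a "fancy" arrowstyle colored red; the annotated point (2.5, -1) is a local minimum of cos(2*pi*t):
# A possible solution sketch for Exercise 3.4.
t = np.arange(0.0, 5.0, 0.01)
s = np.cos(2 * np.pi * t)
plt.plot(t, s, lw=2)
plt.annotate('local min', xy=(2.5, -1), xytext=(3, -1.5),
             arrowprops=dict(arrowstyle='fancy', fc='red', ec='none'))
plt.ylim(-2, 2)
plt.show()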
fig = plt.figure()
bars = plt.bar([1, 2, 3, 4], [10, 12, 15, 17])
plt.setp(bars[0], hatch='x', facecolor='w')
plt.setp(bars[1], hatch='xx-', facecolor='orange')
plt.setp(bars[2], hatch='+O.', facecolor='c')
plt.setp(bars[3], hatch='*', facecolor='y')
plt.show()
Explanation: Hatches
A Patch object can have a hatching defined for it.
/ - diagonal hatching
\ - back diagonal
| - vertical
- - horizontal
+ - crossed
x - crossed diagonal
o - small circle
O - large circle (upper-case 'o')
. - dots
* - stars
Letters can be combined, in which case all the specified
hatchings are done. If same letter repeats, it increases the
density of hatching of that pattern.
Ugly tie contest!
End of explanation |
6,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with the python DyNet package
The DyNet package is intended for training and using neural networks, and is particularly suited for applications with dynamically changing network structures. It is a python-wrapper for the DyNet C++ package.
In neural network packages there are generally two modes of operation
Step1: The first block creates a parameter collection and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into Expressions.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
Step2: Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
Step3: To use the trainer, we need to
Step4: The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a training set, and iterate over it.
For the xor problem, the training instances are easy to create.
Step5: We now feed each question / answer pair to the network, and try to minimize the loss.
Step6: Our network is now trained. Let's verify that it indeed learned the xor function
Step7: In case we are curious about the parameter values, we can query them
Step8: To summarize
Here is a complete program
Step9: Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the xor example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks. | Python Code:
# we assume that we have the dynet module in your path.
# OUTDATED: we also assume that LD_LIBRARY_PATH includes a pointer to where libcnn_shared.so is.
import dynet as dy
from dynet import *  # also bring the bare names (renew_cg, parameter, vecInput, ...) into scope, since later cells use them unprefixed
# create a parameter collection and add the parameters.
m = dy.ParameterCollection()
pW = m.add_parameters((8,2))
pV = m.add_parameters((1,8))
pb = m.add_parameters((8))
renew_cg() # new computation graph. not strictly needed here, but good practice.
# associate the parameters with cg Expressions
W = parameter(pW)
V = parameter(pV)
b = parameter(pb)
#b[1:-1].value()
b.value()
Explanation: Working with the python DyNet package
The DyNet package is intended for training and using neural networks, and is particularly suited for applications with dynamically changing network structures. It is a python-wrapper for the DyNet C++ package.
In neural network packages there are generally two modes of operation:
Static networks, in which a network is built and then being fed with different inputs/outputs. Most NN packages work this way.
Dynamic networks, in which a new network is built for each training example (sharing parameters with the networks of other training examples). This approach is what makes DyNet unique, and where most of its power comes from.
We will describe both of these modes.
Package Fundamentals
The main piece of DyNet is the ComputationGraph, which is what essentially defines a neural network.
The ComputationGraph is composed of expressions, which relate to the inputs and outputs of the network,
as well as the Parameters of the network. The parameters are the things in the network that are optimized over time, and all of the parameters sit inside a ParameterCollection. There are trainers (for example SimpleSGDTrainer) that are in charge of setting the parameter values.
We will not be using the ComputationGraph directly, but it is there in the background, as a singleton object.
When dynet is imported, a new ComputationGraph is created. We can then reset the computation graph to a new state
by calling renew_cg().
Static Networks
The life-cycle of a DyNet program is:
1. Create a ParameterCollection, and populate it with Parameters.
2. Renew the computation graph, and create Expression representing the network
(the network will include the Expressions for the Parameters defined in the parameter collection).
3. Optimize the model for the objective of the network.
As an example, consider a model for solving the "xor" problem. The network has two inputs, which can be 0 or 1, and a single output which should be the xor of the two inputs.
We will model this as a multi-layer perceptron with a single hidden layer.
Let $x = x_1, x_2$ be our input. We will have a hidden layer of 8 nodes, and an output layer of a single node. The activation on the hidden layer will be a $\tanh$. Our network will then be:
$\sigma(V(\tanh(Wx+b)))$
Where $W$ is a $8 \times 2$ matrix, $V$ is an $8 \times 1$ matrix, and $b$ is an 8-dim vector.
We want the output to be either 0 or 1, so we take the output layer to be the logistic-sigmoid function, $\sigma(x)$, that takes values between $-\infty$ and $+\infty$ and returns numbers in $[0,1]$.
We will begin by defining the model and the computation graph.
End of explanation
x = vecInput(2) # an input vector of size 2. Also an expression.
output = logistic(V*(tanh((W*x)+b)))
# we can now query our network
x.set([0,0])
output.value()
# we want to be able to define a loss, so we need an input expression to work against.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
x.set([1,0])
y.set(0)
print loss.value()
y.set(1)
print loss.value()
Explanation: The first block creates a parameter collection and populates it with parameters.
The second block creates a computation graph and adds the parameters to it, transforming them into Expressions.
The need to distinguish model parameters from "expressions" will become clearer later.
We now make use of the W and V expressions, in order to create the complete expression for the network.
End of explanation
trainer = SimpleSGDTrainer(m)
Explanation: Training
We now want to set the parameter weights such that the loss is minimized.
For this, we will use a trainer object. A trainer is constructed with respect to the parameters of a given model.
End of explanation
x.set([1,0])
y.set(1)
loss_value = loss.value() # this performs a forward through the network.
print "the loss before step is:",loss_value
# now do an optimization step
loss.backward() # compute the gradients
trainer.update()
# see how it affected the loss:
loss_value = loss.value(recalculate=True) # recalculate=True means "don't use precomputed value"
print "the loss after step is:",loss_value
Explanation: To use the trainer, we need to:
* call the forward_scalar method of ComputationGraph. This will run a forward pass through the network, calculating all the intermediate values until the last one (loss, in our case), and then convert the value to a scalar. The final output of our network must be a single scalar value. However, if we do not care about the value, we can just use cg.forward() instead of cg.forward_scalar().
* call the backward method of ComputationGraph. This will run a backward pass from the last node, calculating the gradients with respect to minimizing the last expression (in our case we want to minimize the loss). The gradients are stored in the parameter collection, and we can now let the trainer take care of the optimization step.
* call trainer.update() to optimize the values with respect to the latest gradients.
End of explanation
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
Explanation: The optimization step indeed made the loss decrease. We now need to run this in a loop.
To this end, we will create a training set, and iterate over it.
For the xor problem, the training instances are easy to create.
End of explanation
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: We now feed each question / answer pair to the network, and try to minimize the loss.
End of explanation
x.set([0,1])
print "0,1",output.value()
x.set([1,0])
print "1,0",output.value()
x.set([0,0])
print "0,0",output.value()
x.set([1,1])
print "1,1",output.value()
Explanation: Our network is now trained. Let's verify that it indeed learned the xor function:
End of explanation
W.value()
V.value()
b.value()
Explanation: In case we are curious about the parameter values, we can query them:
End of explanation
# define the parameters
m = ParameterCollection()
pW = m.add_parameters((8,2))
pV = m.add_parameters((1,8))
pb = m.add_parameters((8))
# renew the computation graph
renew_cg()
# add the parameters to the graph
W = parameter(pW)
V = parameter(pV)
b = parameter(pb)
# create the network
x = vecInput(2) # an input vector of size 2.
output = logistic(V*(tanh((W*x)+b)))
# define the loss with respect to an output y.
y = scalarInput(0) # this will hold the correct answer
loss = binary_log_loss(output, y)
# create training instances
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# train the network
trainer = SimpleSGDTrainer(m)
total_loss = 0
seen_instances = 0
for question, answer in zip(questions, answers):
x.set(question)
y.set(answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: To summarize
Here is a complete program:
End of explanation
import dynet as dy
# create training instances, as before
def create_xor_instances(num_rounds=2000):
questions = []
answers = []
for round in xrange(num_rounds):
for x1 in 0,1:
for x2 in 0,1:
answer = 0 if x1==x2 else 1
questions.append((x1,x2))
answers.append(answer)
return questions, answers
questions, answers = create_xor_instances()
# create a network for the xor problem given input and output
def create_xor_network(pW, pV, pb, inputs, expected_answer):
dy.renew_cg() # new computation graph
W = dy.parameter(pW) # add parameters to graph as expressions
V = dy.parameter(pV)
b = dy.parameter(pb)
x = dy.vecInput(len(inputs))
x.set(inputs)
y = dy.scalarInput(expected_answer)
output = dy.logistic(V*(dy.tanh((W*x)+b)))
loss = dy.binary_log_loss(output, y)
return loss
m2 = dy.ParameterCollection()
pW = m2.add_parameters((8,2))
pV = m2.add_parameters((1,8))
pb = m2.add_parameters((8))
trainer = dy.SimpleSGDTrainer(m2)
seen_instances = 0
total_loss = 0
for question, answer in zip(questions, answers):
loss = create_xor_network(pW, pV, pb, question, answer)
seen_instances += 1
total_loss += loss.value()
loss.backward()
trainer.update()
if (seen_instances > 1 and seen_instances % 100 == 0):
print "average loss is:",total_loss / seen_instances
Explanation: Dynamic Networks
Dynamic networks are very similar to static ones, but instead of creating the network once and then calling "set" in each training example to change the inputs, we just create a new network for each training example.
We present an example below. While the value of this may not be clear in the xor example, the dynamic approach
is very convenient for networks for which the structure is not fixed, such as recurrent or recursive networks.
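As a sketch of where this pays off, here is a hypothetical variant of create_xor_network whose graph depends on a per-example depth argument, so each instance gets a differently-shaped network (illustrative only, not part of the original tutorial):
def create_variable_depth_network(pW, pV, pb, inputs, expected_answer, depth):
    dy.renew_cg()                      # a fresh graph for this example
    W = dy.parameter(pW)
    V = dy.parameter(pV)
    b = dy.parameter(pb)
    x = dy.vecInput(len(inputs))
    x.set(inputs)
    h = dy.tanh((W*x)+b)
    for _ in range(depth-1):           # the graph grows with `depth`
        h = dy.tanh(h+b)
    y = dy.scalarInput(expected_answer)
    return dy.binary_log_loss(dy.logistic(V*h), y)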
End of explanation |
6,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment 12
1) Finalize figures(Bijan
Step1: Make a higher definition graph
Step2: It seems clear that the data changes across y. Let's look at the magnitude of these changes.
Step3: Next week I want to see how I can clean this, maybe isolate cortical layer boundaries from it.
Next is generating a .gif of the data points at increasing levels of syn/unmasked | Python Code:
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
%matplotlib inline
import urllib2
import numpy as np
np.set_printoptions(precision=3, suppress=True)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:]
def check_condition(row):
if row[3] == 0:
return False
return True
a = np.apply_along_axis(check_condition, 1, csv)
a = np.where(a == True)[0]
nonZeroMask = csv[a, :]
synDividedMask = np.divide(nonZeroMask[:,4],nonZeroMask[:,3])
synDividedMask = synDividedMask * (64**3)
accurateDataT = np.vstack((nonZeroMask[:,0],nonZeroMask[:,1],nonZeroMask[:,2],synDividedMask))
accurateData = accurateDataT.T
cleaned = accurateData[accurateData[:,0] >= 409]
cleaned = cleaned[cleaned[:,0] <= 3529]
cleaned = cleaned[cleaned[:,1] >= 1564]
cleaned = cleaned[cleaned[:,1] <= 3124]
length, width = cleaned.shape
print length
print width
Explanation: Assignment 12
1) Finalize figures(Bijan
End of explanation
import math
divisionsx = np.unique(cleaned[:,0])
meanx = np.zeros((2,len(divisionsx)))
divisionsy = np.unique(cleaned[:,1])
meany = np.zeros((2,len(divisionsy)))
divisionsz = np.unique(cleaned[:,2])
meanz = np.zeros((2,len(divisionsz)))
maxlen = np.amax([len(divisionsx),len(divisionsy),len(divisionsz)])
xstep = np.divide(maxlen,len(divisionsx))
ystep = 2
zstep = 8
counter = 0
for d in divisionsx:
slicex = cleaned[cleaned[:,0] == d]
meanx[0,counter] = (counter)*xstep
meanx[1,counter] = np.mean(slicex[:,3])
counter += 1
counter = 0
for d in divisionsy:
slicey = cleaned[cleaned[:,1] == d]
meany[0,counter] = (counter)*ystep
meany[1,counter] = np.mean(slicey[:,3])
counter += 1
counter = 0
for d in divisionsz:
slicez = cleaned[cleaned[:,2] == d]
meanz[0,counter] = (counter)*zstep
meanz[1,counter] = np.mean(slicez[:,3])
counter += 1
#plot it
allmean = [301] * maxlen
fig,ax = plt.subplots()
ax.set_ylim([150,375])
ax.set_title('Average Synaptic Density Values along Each Axis')
ax.set_yticks([150,225,300,375])
ax.set_xticks([0,20,40,60,80])
ax.set_xlabel('x,y,z coordinate standardized to 0')
ax.set_ylabel('synaptic density')
ax.plot(allmean,label='whole data set',color='black')
ax.plot(meanx[0,:],meanx[1,:],label='x',color='blue')
ax.plot(meany[0,:],meany[1,:],label='y',color='red')
ax.plot(meanz[0,:],meanz[1,:],label='z',color='magenta')
legend = ax.legend(loc='lower left')
Explanation: Make a higher definition graph
End of explanation
mean = meany[1,:]
mean1 = mean[:-1]
mean2 = mean[1:]
meandiff = mean2 - mean1
print meandiff
fig,ax = plt.subplots()
ax.plot(meandiff)
ax.plot([0] * len(meandiff))
ax.set_xlabel('y slice')
ax.set_ylabel('change in syn/unmasked')
ax.set_xticks(np.arange(0,40,10))
ax.set_yticks(np.arange(-30,15,10))
Explanation: It seems clear that the data changes across y. Let's look at the magnitude of these changes.
End of explanation
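Building on the jump profile above, a rough sketch of how candidate layer boundaries might be flagged (an assumption about one way to start the cleaning, not part of the original assignment): y slices whose change in density is unusually large.
threshold = np.abs(meandiff).mean() + np.abs(meandiff).std()
candidate_boundaries = np.where(np.abs(meandiff) > threshold)[0]
print candidate_boundaries   # indices of y slices with unusually large changes in density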
cleanedSyn = cleaned[cleaned[:,3].argsort()]
print cleanedSyn
uniqueSyn = np.unique(cleaned[:,3])
print len(uniqueSyn)
import Figtodat
from images2gif import writeGif
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
figure = plt.figure()
plot = figure.add_subplot(111, projection='3d')
plot.hold(False)
plot.view_init(elev=50, azim=20)
plot.set_xlabel('x axis')
plot.set_xticks(np.arange(0,4000,1000))
plot.set_ylabel('y axis')
plot.set_yticks(np.arange(1000,3500,1000))
plot.set_zlabel('z axis')
plot.set_zticks(np.arange(0,1500,500))
plot.set_title('Time points = increasing values of syn/unmasked')
images=[]
imageSteps = np.arange(0,49)
stepSize = len(cleaned)/len(imageSteps)
for i in imageSteps[:-1]:
currImage = cleanedSyn[(stepSize*i):(stepSize*(i+1)),:]
plot.scatter(currImage[:,0],currImage[:,1],currImage[:,2])
plot.hold(True)
plot.scatter(4000,np.mean(currImage[:,1]),s = 10,c = 'r')
plot.hold(False)
im = Figtodat.fig2img(figure)
images.append(im)
writeGif("Bijan_gif_dirty.gif",images,duration=0.1,dither=0)
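# images2gif/Figtodat are quite old; a possible alternative using imageio is sketched
# below (an assumption: imageio is installed, and `images` holds the PIL frames built above).
# import imageio
# imageio.mimsave("Bijan_gif_imageio.gif", [np.asarray(im) for im in images], duration=0.1)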
Explanation: Next week I want to see how I can clean this, maybe isolate cortical layer boundaries from it.
Next is generating a .gif of the data points at increasing levels of syn/unmasked
End of explanation |
6,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Movie Review Sentiment Analysis
This notebook creates logistic regression, SVM, and naive Bayes classifiers and trains them on the training data.
The data used here consists of movie reviews together with the sentiment (positive or negative) of each movie.
Load the Python libraries that we use in this analysis
Step1: Load Data Set from review file
Step2: There are basically two columns: the first is the review text and the second is the sentiment (shown here as positive sentiment)
Step3: Here we can see that the accuracy of our model is 80% when using 80% of the data for training; let's try a different split of the data between training and test | Python Code:
import csv
from sklearn.linear_model import LogisticRegression as LR
from sklearn import svm
from sklearn.naive_bayes import BernoulliNB
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn import cross_validation
from sklearn.metrics import classification_report
import numpy as np
from sklearn.metrics import accuracy_score
import pandas as pd
from sklearn.utils import shuffle
from nltk.corpus import stopwords
import nltk
import re
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import SGDClassifier as SGD
Explanation: Movie Review Sentiment Analysis
This notebook creates logistic regression, SVM, and naive Bayes classifiers and trains them on the training data.
The data used here consists of movie reviews together with the sentiment (positive or negative) of each movie.
Load the Python libraries that we use in this analysis
End of explanation
def load_file():
with open('review.csv') as csv_file:
reader = csv.reader(csv_file,delimiter=",",quotechar='"')
reader.next()
data =[]
target = []
for row in reader:
# skip missing data
if row[0] and row[1]:
data.append(row[0])
target.append(row[1])
return data,target
data,target = load_file()
Explanation: Load Data Set from review file
End of explanation
# preprocess creates the term frequency matrix for the review data set
count_vectorizer = CountVectorizer()
data1 = count_vectorizer.fit_transform(data)
tfidf_data = TfidfTransformer(use_idf=False).fit_transform(data1)
# preparing data for split validation. 80% training, 20% test
data_train,data_test,target_train,target_test =cross_validation.train_test_split(tfidf_data,\
target,test_size=0.2,\
random_state=43)
classifier = BernoulliNB().fit(data_train,target_train)
predicted = classifier.predict(data_test)
print classification_report(target_test,predicted)
print "The accuracy score is {:.2%}".format(accuracy_score(target_test,predicted))
Explanation: There are basically two columns: the first is the review text and the second is the sentiment (shown here as positive sentiment)
End of explanation
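A single fixed split can be noisy; k-fold cross-validation gives a more stable estimate. A small sketch using the cross_validation module imported above (assuming cross_val_score is available in this scikit-learn version):
scores = cross_validation.cross_val_score(BernoulliNB(), tfidf_data, target, cv=5)
print "5-fold accuracy: %0.2f (+/- %0.2f)" % (scores.mean(), scores.std())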
# preparing data for split validation. 60% training, 40% test
data_train,data_test,target_train,target_test =cross_validation.train_test_split(tfidf_data,\
target,test_size=0.4,\
random_state=43)
classifier = BernoulliNB().fit(data_train,target_train)
predicted = classifier.predict(data_test)
print classification_report(target_test,predicted)
print "The accuracy score is {:.2%}".format(accuracy_score(target_test,predicted))
Explanation: Here we can see that the accuracy of our model is 80% when using 80% of the data for training; let's try a different split of the data between training and test
End of explanation |
6,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
1  Indexing the raster of an array using strides
1.1  Stride of an array
1.2  Cartesian product of 2 sets - two-dimensional coordinates
1.3  Indexing with vertical mirroring
1.4  Transposed matrix
1.5  Exercises
# Indexing the raster of an array using strides
When the pixels of the image are stored contiguously in one-dimensional raster form, the N-dimensional index $(n_0, n_1, ..., n_{N-1})$ is computed from the offset
Step1: Let's create an array with 4 rows and 6 columns using arange, so that the value of each pixel is the value of its one-dimensional raster offset
Step2: Stride of an array
The stride of a normal raster array is given by $(W,1)$ and $K_{offset} = 0$;
$$ n_{\mathrm{offset}} = 0 + W n_0 + n_1 $$
where $n_0$ is the row index and $n_1$ is the column index.
In the example of the array a with dimensions (4,6), the stride is given by (W,1)
Step3: Thus, indexing [2,3] gives 15
Step4: Cartesian product of 2 sets - two-dimensional coordinates
Step5: Indexing with vertical mirroring
$$ i = (T-W) - W \cdot row + 1 \cdot col $$
Step6: Transposed matrix
$$ i = row + W \cdot col $$ | Python Code:
import numpy as np
Explanation: Table of Contents
1  Indexing the raster of an array using strides
1.1  Stride of an array
1.2  Cartesian product of 2 sets - two-dimensional coordinates
1.3  Indexing with vertical mirroring
1.4  Transposed matrix
1.5  Exercises
# Indexing the raster of an array using strides
When the pixels of the image are stored contiguously in one-dimensional raster form, the N-dimensional index $(n_0, n_1, ..., n_{N-1})$ is computed from the offset:
$$ n_{\mathrm{offset}} = K_{\mathrm{offset}} + \sum_{k=0}^{N-1} s_k n_k $$
where $s_k$ is the stride of dimension $k$.
In this notebook we will check how the stride and $K_{offset}$ are modified to implement operations such as mirroring and transposition.
End of explanation
H,W = (4,6)
a = np.arange(H*W).reshape(H,W).astype(np.uint8)
print(a)
print(a.size)
Explanation: Let's create an array with 4 rows and 6 columns using arange, so that the value of each pixel is the value of its one-dimensional raster offset:
End of explanation
print(a.strides)
Explanation: Stride of an array
The stride of a normal raster array is given by $(W,1)$ and $K_{offset} = 0$;
$$ n_{\mathrm{offset}} = 0 + W n_0 + n_1 $$
where $n_0$ is the row index and $n_1$ is the column index.
In the example of the array a with dimensions (4,6), the stride is given by (W,1):
End of explanation
a[2,3]
(a.strides * np.array([2,3])).sum()
Explanation: Thus, indexing [2,3] gives 15:
End of explanation
Rows = np.arange(H)
Cols = np.arange(W)
index = []
for i in Rows:
for j in Cols:
index.append([i,j])
print(index)
index_all = np.array(index)
print(index_all)
i = (a.strides * index_all).sum(axis=1)
print(i)
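# Cross-check: for a C-contiguous array these stride-based offsets match NumPy's own
# flat-index computation (byte offsets equal element offsets here because the dtype is uint8).
print(np.ravel_multi_index((index_all[:,0], index_all[:,1]), a.shape))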
Explanation: Cartesian product of 2 sets - two-dimensional coordinates
End of explanation
b = a[::-1,:]
print(b.strides)
print(b.size)
print(b.ravel())
b[2,1]
np.sum(b.strides * np.array([2,1])) + b.size- b.shape[1]
(b.strides * index_all).sum(axis=1) + b.size - b.shape[1]
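# Worked check of the mirror formula for a single element: K_offset = T - W = 18 is the
# raster position of b[0,0]; each row step moves -W bytes and each column step +1 byte.
row, col = 2, 1
offset = (b.size - b.shape[1]) - b.shape[1]*row + col   # 18 - 6*2 + 1 = 7
print(offset, a.ravel()[offset], b[row, col])           # all three give 7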
Explanation: Indexing with vertical mirroring
$$ i = (T-W) - W \cdot row + 1 \cdot col $$
End of explanation
c = a.T
print(c)
print(c.strides)
print(c.ravel())
(c.strides * index_all).sum(axis=1)
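# Worked check of the transpose formula for a single element: c.strides == (1, 6), so the
# raster offset is row*1 + col*W.
row, col = 3, 2
offset = row + a.shape[1]*col                    # 3 + 6*2 = 15
print(offset, a.ravel()[offset], c[row, col])    # all three give 15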
Explanation: Transposed matrix
$$ i = row + W \cdot col $$
End of explanation |
6,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The null hypothesis of stationarity of the time series is confidently rejected by the Dickey–Fuller test
Let's try to restore stationarity, naively, using two approaches
Step1: Two ways to remove seasonality | Python Code:
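# The helper test_stationarity() is called throughout this section, but its definition is
# not part of this excerpt. A minimal sketch of what such a helper typically does (an
# assumption, not the author's original code): plot rolling statistics and run the
# augmented Dickey-Fuller test from statsmodels.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller

def test_stationarity(timeseries, window=12):
    rolmean = pd.rolling_mean(timeseries, window)   # old pandas API, matching the code below
    rolstd = pd.rolling_std(timeseries, window)
    plt.plot(timeseries, color='blue', label='Original')
    plt.plot(rolmean, color='red', label='Rolling Mean')
    plt.plot(rolstd, color='black', label='Rolling Std')
    plt.legend(loc='best')
    plt.title('Rolling Mean & Standard Deviation')
    plt.show()
    dftest = adfuller(timeseries.dropna(), autolag='AIC')
    print pd.Series(dftest[0:4], index=['Test Statistic', 'p-value', '#Lags Used', '#Observations Used'])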
# take the log, which "penalizes higher values more than smaller values"
ts_log = np.log(rub["Adj Close"])
test_stationarity(ts_log)
# next, subtract the rolling (moving) average
moving_avg = pd.rolling_mean(ts_log,50)
plt.plot(ts_log)
plt.plot(moving_avg, color='red')
ts_log_moving_avg_diff = ts_log - moving_avg
ts_log_moving_avg_diff.dropna(inplace=True)
test_stationarity(ts_log_moving_avg_diff)
expwighted_avg = pd.ewma(ts_log, halflife=50)
plt.plot(ts_log)
plt.plot(expwighted_avg, color='red')
ts_log_ewma_diff = ts_log - expwighted_avg
test_stationarity(ts_log_ewma_diff)
Explanation: The null hypothesis of stationarity of the time series is confidently rejected by the Dickey–Fuller test
Let's try to restore stationarity, naively, using two approaches: removing the trend and the seasonality
End of explanation
# Differencing - excellent results
ts_log_diff = ts_log - ts_log.shift(periods=1)
ts_log_diff.dropna(inplace=True)
test_stationarity(ts_log_diff)
index_q= ts_log.index
ts_log = pd.DataFrame(data=ts_log.values, index=pd.to_datetime(ts_log.index), columns=['usd/rub'])
ts_log_decompose = residual
ts_log_decompose.dropna(inplace=True)
test_stationarity(ts_log_decompose)
# look at the trend + seasonality components, and at what remains of the series once they are subtracted
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(ts_log, freq = 101)
trend = decomposition.trend
seasonal = decomposition.seasonal
residual = decomposition.resid
plt.subplot(411)
plt.plot(ts_log, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
ts_log_decompose.dropna(inplace=True)
ts_log_decompose.plot()
#ACF and PACF plots:
from statsmodels.tsa.stattools import acf, pacf
lag_acf = acf(ts_log_diff, nlags=20)
lag_pacf = pacf(ts_log_diff, nlags=20, method='ols')
import matplotlib.ticker as ticker
plt.figure(figsize=(15,4))
tick_spacing = 1
fig, ax = plt.subplots(1,1)
#ax.plot(x,y)
ax.xaxis.set_major_locator(ticker.MultipleLocator(tick_spacing))
#Plot ACF:
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.title('Autocorrelation Function')
#Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.title('Partial Autocorrelation Function')
plt.tight_layout()
plt.figure(figsize=(50,4))
plt.plot(lag_acf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(ts_log_diff)),linestyle='--',color='gray')
plt.show()
from statsmodels.tsa.arima_model import ARIMA
model = ARIMA(ts_log, order=(1, 1, 0))
results_AR = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_AR.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_AR.fittedvalues-ts_log_diff)**2))
model = ARIMA(ts_log, order=(0, 1, 1))
results_MA = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_MA.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_MA.fittedvalues-ts_log_diff)**2))
model = ARIMA(ts_log, order=(1, 1, 2))
results_ARIMA = model.fit(disp=-1)
plt.plot(ts_log_diff)
plt.plot(results_ARIMA.fittedvalues, color='red')
plt.title('RSS: %.4f'% sum((results_ARIMA.fittedvalues-ts_log_diff)**2))
predictions_ARIMA_diff = pd.Series(results_ARIMA.fittedvalues, copy=True)
print predictions_ARIMA_diff.head()
predictions_ARIMA_diff_cumsum = predictions_ARIMA_diff.cumsum()
print predictions_ARIMA_diff_cumsum.head()
predictions_ARIMA_log = pd.Series(ts_log.ix[0], index=ts_log.index)
predictions_ARIMA_log = predictions_ARIMA_log.add(predictions_ARIMA_diff_cumsum,fill_value=0)
predictions_ARIMA_log.head()
predictions_ARIMA = np.exp(trend + seasonal)
plt.plot(rub["Adj Close"])
plt.plot(predictions_ARIMA)
#plt.title('RMSE: %.4f'% np.sqrt(sum((predictions_ARIMA-rub["Adj Close"])**2)/len(rub["Adj Close"])))
Explanation: Two ways to remove seasonality:
- Differencing – taking the difference with a particular time lag
- Decomposition – modeling both trend and seasonality and removing them from the model.
End of explanation |