Unnamed: 0 | text_prompt | code_prompt |
---|---|---|
4,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel Density Estimation
by Parijat Mazumdar (GitHub ID
Step1: Now, we will apply KDE to estimate the actual pdf using the samples. Using KDE in Shogun is a 3 stage process
Step2: We have calculated the log of the pdf. Let us see how accurate it is by comparing it with the actual pdf.
Step3: We see that the estimated pdf resembles the actual pdf with reasonable accuracy. This is a small demonstration of the fact that KDE can be used to estimate any arbitrary distribution given a finite number of its samples.
Effect of bandwidth
Kernel bandwidth is a very important controlling parameter of the kernel density estimate. We have already seen that for bandwidth of 0.5, the estimated pdf almost coincides with the actual pdf. Let us see what happens when we decrease or increase the value of the kernel bandwidth keeping number of samples constant at 200.
Step4: From the above plots, it can be inferred that the kernel bandwidth controls the smoothness of the estimated pdf. A low bandwidth value causes under-smoothing (the first 2 plots from the top) and a high value causes over-smoothing (the bottom 2 plots). The ideal kernel bandwidth should be estimated using model-selection techniques, which are presently not supported by Shogun (to be updated soon).
Effect of number of samples
Here, we see the effect of the number of samples on the estimated pdf, fine-tuning bandwidth in each case such that we get the most accurate pdf.
Step5: First, we see that the estimated pdf becomes more accurate as the number of samples increases. By running the above snippet multiple times, we also notice that the variation in the shape of the estimated pdf between 2 different runs of the above code snippet is highest when the number of samples is 20 and lowest when the number of samples is 2000. Therefore, we can say that with an increase in the number of samples, the stability of the estimated pdf increases. Both results can be explained by the intuitive fact that a larger number of samples gives a better picture of the entire distribution. A formal proof of the same has been presented by L. Devroye in his book "Nonparametric Density Estimation
Step6: Next, let us use the samples to estimate the probability density functions of each category of plant.
Step7: The above contour plots depict the pdf of respective categories of iris plant. These probability density functions can be used
as generative models to estimate the likelihood of any test sample belonging to a particular category. We use these likelihoods for classification by forming a simple decision rule | Python Code:
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
import os
import shogun as sg
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# generates samples from the distribution
def generate_samples(n_samples,mu1,sigma1,mu2,sigma2):
samples1 = np.random.normal(mu1,sigma1,(1,int(n_samples/2)))
samples2 = np.random.normal(mu2,sigma2,(1,int(n_samples/2)))
samples = np.concatenate((samples1,samples2),1)
return samples
# parameters of the distribution
mu1=4
sigma1=1
mu2=8
sigma2=2
# number of samples
n_samples = 200
samples=generate_samples(n_samples,mu1,sigma1,mu2,sigma2)
# pdf function for plotting
x = np.linspace(0,15,500)
y = 0.5*(stats.norm(mu1,sigma1).pdf(x)+stats.norm(mu2,sigma2).pdf(x))
# plot samples
plt.plot(samples[0,:],np.zeros(n_samples),'rx',label="Samples")
# plot actual pdf
plt.plot(x,y,'b--',label="Actual pdf")
plt.legend(numpoints=1)
plt.show()
Explanation: Kernel Density Estimation
by Parijat Mazumdar (GitHub ID: <a href='https://github.com/mazumdarparijat'>mazumdarparijat</a>)
This notebook is about using the Shogun Machine Learning Toolbox for kernel density estimation (KDE). We start with a brief overview of KDE. Then we demonstrate the use of Shogun's $KernelDensity$ class on a toy example. Finally, we apply KDE to a real-world example, thus demonstrating its prowess as a non-parametric statistical method.
Brief overview of Kernel Density Estimation
Kernel Density Estimation (KDE) is a non-parametric way of estimating the probability density function (pdf) of ANY distribution given a finite number of its samples. The pdf of a random variable X given finite samples ($x_i$s), as per KDE formula, is given by:
$$pdf(x)=\frac{1}{nh} \sum_{i=1}^n K\left(\frac{||x-x_i||}{h}\right)$$
In the above equation, K() is called the kernel - a symmetric function that integrates to 1. h is called the kernel bandwidth
which controls how smooth (or spread-out) the kernel is. The most commonly used kernel is the normal distribution function.
KDE is a computationally expensive method. Given $N_1$ query points (i.e. the points where we want to compute the pdf) and $N_2$ samples, the computational complexity of KDE is $\mathcal{O}(N_1 \cdot N_2 \cdot D)$, where $D$ is the dimension of the data. This computational load can be reduced by spatially segregating data points using data structures like the KD-Tree and Ball-Tree. In single tree methods, only the sample points are structured in a tree, whereas in dual tree methods both sample points and query points are structured in respective trees. Using these tree structures enables us to compute the density estimate for a bunch of points together at once, thus reducing the number of required computations. This speed-up, however, results in reduced accuracy: the greater the speed-up, the lower the accuracy. Therefore, in practice, the maximum amount of speed-up that can be afforded is usually controlled by error tolerance values.
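To make the summation in the formula above concrete, here is a minimal NumPy sketch of the direct (tree-free, $\mathcal{O}(N_1 N_2)$) computation with a Gaussian kernel. It is purely illustrative and is not how Shogun implements KDE; the function name kde_manual is made up.
import numpy as np
def kde_manual(x_query, xs, h):
    # x_query and xs are 1-D arrays; u holds the scaled distance of every query point to every sample
    u = (x_query[:, None] - xs[None, :]) / h
    # Gaussian kernel K(u): symmetric and integrates to 1
    K = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    # sum the kernel contributions and apply the 1/(n*h) normalisation
    return K.sum(axis=1) / (len(xs) * h)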
KDE on toy data
Let us learn about KDE in Shogun by estimating a mixture of 2 one-dimensional gaussian distributions.
$$pdf(x) = \frac{1}{2} [\mathcal{N}(\mu_1,\sigma_1) + \mathcal{N}(\mu_2,\sigma_2)]$$
We start by plotting the actual distribution and generating the required samples (i.e. $x_i$s).
End of explanation
def get_kde_result(bandwidth,samples):
# set model parameters
kernel_type = sg.K_GAUSSIAN
dist_metric = sg.D_EUCLIDEAN # other choice is D_MANHATTAN
eval_mode = sg.EM_KDTREE_SINGLE # other choices are EM_BALLTREE_SINGLE, EM_KDTREE_DUAL and EM_BALLTREE_DUAL
leaf_size = 1 # min number of samples to be present in leaves of the spatial tree
abs_tol = 0 # absolute tolerance
rel_tol = 0 # relative tolerance i.e. accepted error as fraction of true density
k=sg.KernelDensity(bandwidth,kernel_type,dist_metric,eval_mode,leaf_size,abs_tol,rel_tol)
# form Shogun features and train
train_feats=sg.create_features(samples)
k.train(train_feats)
# get log density
query_points = np.array([np.linspace(0,15,500)])
query_feats = sg.create_features(query_points)
log_pdf = k.get_log_density(query_feats)
return query_points,log_pdf
query_points,log_pdf=get_kde_result(0.5,samples)
Explanation: Now, we will apply KDE to estimate the actual pdf using the samples. Using KDE in Shogun is a 3-stage process: setting the model parameters, supplying sample data points for training, and supplying query points for getting the log of the pdf estimates.
End of explanation
def plot_pdf(samples,query_points,log_pdf,title):
plt.plot(samples,np.zeros((1,samples.size)),'rx')
plt.plot(query_points[0,:],np.exp(log_pdf),'r',label="Estimated pdf")
plt.plot(x,y,'b--',label="Actual pdf")
plt.title(title)
plt.legend()
plt.show()
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
Explanation: We have calculated the log of the pdf. Let us see how accurate it is by comparing it with the actual pdf.
End of explanation
query_points,log_pdf=get_kde_result(0.1,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.1')
query_points,log_pdf=get_kde_result(0.2,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.2')
query_points,log_pdf=get_kde_result(0.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
query_points,log_pdf=get_kde_result(1.1,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=1.1')
query_points,log_pdf=get_kde_result(1.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=1.5')
Explanation: We see that the estimated pdf resembles the actual pdf with reasonable accuracy. This is a small demonstration of the fact that KDE can be used to estimate any arbitrary distribution given a finite number of its samples.
Effect of bandwidth
Kernel bandwidth is a very important controlling parameter of the kernel density estimate. We have already seen that for bandwidth of 0.5, the estimated pdf almost coincides with the actual pdf. Let us see what happens when we decrease or increase the value of the kernel bandwidth keeping number of samples constant at 200.
End of explanation
samples=generate_samples(20,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.7,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=20, bandwidth=0.7')
samples=generate_samples(200,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.5,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=200, bandwidth=0.5')
samples=generate_samples(2000,mu1,sigma1,mu2,sigma2)
query_points,log_pdf=get_kde_result(0.4,samples)
plot_pdf(samples,query_points,log_pdf,'num_samples=2000, bandwidth=0.4')
Explanation: From the above plots, it can be inferred that the kernel bandwidth controls the smoothness of the estimated pdf. A low bandwidth value causes under-smoothing (the first 2 plots from the top) and a high value causes over-smoothing (the bottom 2 plots). The ideal kernel bandwidth should be estimated using model-selection techniques, which are presently not supported by Shogun (to be updated soon).
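In the meantime, a rough sketch of choosing the bandwidth by likelihood cross-validation with scikit-learn is shown below (an assumption: scikit-learn is installed; this is independent of Shogun, and the candidate grid is arbitrary).
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KernelDensity
# scikit-learn expects samples with shape (n_samples, n_features)
grid = GridSearchCV(KernelDensity(kernel='gaussian'),
                    {'bandwidth': np.linspace(0.1, 1.5, 15)}, cv=5)
grid.fit(samples.reshape(-1, 1))
print("cross-validated bandwidth:", grid.best_params_['bandwidth'])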
Effect of number of samples
Here, we see the effect of the number of samples on the estimated pdf, fine-tuning bandwidth in each case such that we get the most accurate pdf.
End of explanation
with open(os.path.join(SHOGUN_DATA_DIR, 'uci/iris/iris.data')) as f:
feats = []
# read data from file
for line in f:
words = line.rstrip().split(',')
feats.append([float(i) for i in words[0:4]])
# create observation matrix
obsmatrix = np.array(feats).T
# Just keep 2 most important features
obsmatrix = obsmatrix[2:4,:]
# plot the data
def plot_samples(marker='o',plot_show=True):
# First 50 data belong to Iris Sentosa, plotted in green
plt.plot(obsmatrix[0,0:50], obsmatrix[1,0:50], marker, color='green', markersize=5,label='Iris Sentosa')
# Next 50 data belong to Iris Versicolour, plotted in red
plt.plot(obsmatrix[0,50:100], obsmatrix[1,50:100], marker, color='red', markersize=5,label='Iris Versicolour')
# Last 50 data belong to Iris Virginica, plotted in blue
plt.plot(obsmatrix[0,100:150], obsmatrix[1,100:150], marker, color='blue', markersize=5,label='Iris Virginica')
if plot_show:
plt.xlim(0,8)
plt.ylim(-1,3)
plt.title('3 varieties of Iris plants')
plt.xlabel('petal length')
plt.ylabel('petal width')
plt.legend(numpoints=1,bbox_to_anchor=(0.97,0.35))
plt.show()
plot_samples()
Explanation: First, we see that the estimated pdf becomes more accurate as the number of samples increases. By running the above snippet multiple times, we also notice that the variation in the shape of the estimated pdf between 2 different runs of the above code snippet is highest when the number of samples is 20 and lowest when the number of samples is 2000. Therefore, we can say that with an increase in the number of samples, the stability of the estimated pdf increases. Both results can be explained by the intuitive fact that a larger number of samples gives a better picture of the entire distribution. A formal proof of the same has been presented by L. Devroye in his book "Nonparametric Density Estimation: The $L_1$ View" [3]. It is theoretically proven that as the number of samples tends to $\infty$, the estimated pdf converges to the real pdf.
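In the $L_1$ sense used by Devroye, the convergence statement reads (writing $pdf_n$ for the estimate built from $n$ samples, and assuming the usual conditions that the bandwidth $h_n \to 0$ while $n h_n \to \infty$):
$$\int \left| pdf_n(x) - pdf(x) \right| dx \longrightarrow 0 \quad \text{as } n \to \infty$$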
Classification using KDE
In this section we see how KDE can be used for classification using a generative approach. Here, we try to classify the different varieties of Iris plant making use of Fisher's Iris dataset borrowed from the <a href='http://archive.ics.uci.edu/ml/datasets/Iris'>UCI Machine Learning Repository</a>. There are 3 varieties of Iris plants:
<ul><li>Iris Sentosa</li><li>Iris Versicolour</li><li>Iris Virginica</li></ul>
<br>
The Iris dataset lists 4 features that can be used to segregate these varieties, but for ease of analysis and visualization, we only use two of the most important features (i.e. features with very high class correlations) [refer to <a href='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names'>summary statistics</a>], namely
<ul><li>petal length</li><li>petal width</li></ul>
<br>
As a first step, we plot the data.
End of explanation
import scipy.interpolate as interpolate
def get_kde(samples):
# set model parameters
bandwidth = 0.4
kernel_type = sg.K_GAUSSIAN
dist_metric = sg.D_EUCLIDEAN
eval_mode = sg.EM_BALLTREE_DUAL
leaf_size = 1
abs_tol = 0
rel_tol = 0
k=sg.KernelDensity(bandwidth,kernel_type,dist_metric,eval_mode,leaf_size,abs_tol,rel_tol)
# form Shogun features and train
train_feats=sg.create_features(samples)
k.train(train_feats)
return k
def density_estimate_grid(kdestimator):
xmin,xmax,ymin,ymax=[0,8,-1,3]
# Set up a regular grid of interpolation points
x, y = np.linspace(xmin, xmax, 100), np.linspace(ymin, ymax, 100)
x, y = np.meshgrid(x, y)
# compute density estimate at each of the grid points
query_feats=sg.create_features(np.array([x[0,:],y[0,:]]))
z=np.array([kdestimator.get_log_density(query_feats)])
z=np.exp(z)
for i in range(1,x.shape[0]):
query_feats=sg.create_features(np.array([x[i,:],y[i,:]]))
zi=np.exp(kdestimator.get_log_density(query_feats))
z=np.vstack((z,zi))
return (x,y,z)
def plot_pdf(kdestimator,title):
# compute interpolation points and corresponding kde values
x,y,z=density_estimate_grid(kdestimator)
# plot pdf
plt.imshow(z, vmin=z.min(), vmax=z.max(), origin='lower',extent=[x.min(), x.max(), y.min(), y.max()])
plt.title(title)
plt.colorbar(shrink=0.5)
plt.xlabel('petal length')
plt.ylabel('petal width')
plt.show()
kde1=get_kde(obsmatrix[:,0:50])
plot_pdf(kde1,'pdf for Iris Sentosa')
kde2=get_kde(obsmatrix[:,50:100])
plot_pdf(kde2,'pdf for Iris Versicolour')
kde3=get_kde(obsmatrix[:,100:150])
plot_pdf(kde3,'pdf for Iris Virginica')
kde=get_kde(obsmatrix[:,0:150])
plot_pdf(kde,'Combined pdf')
Explanation: Next, let us use the samples to estimate the probability density functions of each category of plant.
End of explanation
# get 3 likelihoods for each test point in grid
x,y,z1=density_estimate_grid(kde1)
x,y,z2=density_estimate_grid(kde2)
x,y,z3=density_estimate_grid(kde3)
# classify using our decision rule
z=[]
for i in range(0,x.shape[0]):
zj=[]
for j in range(0,x.shape[1]):
if ((z1[i,j]>z2[i,j]) and (z1[i,j]>z3[i,j])):
zj.append(1)
elif (z2[i,j]>z3[i,j]):
zj.append(2)
else:
zj.append(0)
z.append(zj)
z=np.array(z)
# plot results
plt.imshow(z, vmin=z.min(), vmax=z.max(), origin='lower',extent=[x.min(), x.max(), y.min(), y.max()])
plt.title("Classified regions")
plt.xlabel('petal length')
plt.ylabel('petal width')
plot_samples(marker='x',plot_show=False)
plt.show()
Explanation: The above contour plots depict the pdf of respective categories of iris plant. These probability density functions can be used
as generative models to estimate the likelihood of any test sample belonging to a particular category. We use these likelihoods for classification by forming a simple decision rule: a test sample is assigned the class for which its likelihood is maximum. With this in mind, let us try to segregate the
entire 2-D space into 3 regions:
<ul><li>Iris Sentosa (green)</li><li>Iris Versicolour (red)</li><li>Iris Virginica (blue)</li></ul>
End of explanation |
4,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Rankine Cycle Examples 8.1 and 8.2
Michael J. Moran, Howard N. Shapiro, Daisie D. Boettner, Margaret B. Bailey. Fundamentals of Engineering Thermodynamics(7th Edition). John Wiley & Sons, Inc. 2011
Chapter 8
Step1: 1.2 Analysis of the Cycle
(a) The thermal efficiency
The net power developed by the cycle is
$\dot{W}_{cycle}=\dot{W}_t-\dot{W}_p$
Mass and energy rate balances for control volumes around the turbine and pump give,respectively
$\frac{\dot{W}_t}{\dot{m}}=h_1-h_2$
$\frac{\dot{W}_p}{\dot{m}}=h_4-h_3$
where $\dot{m}$ is the mass flow rate of the steam. The rate of heat transfer to the working fluid as it passes through the boiler is determined using mass and energy rate balances as
$\frac{\dot{Q}_{in}}{\dot{m}}=h_1-h_4$
The thermal efficiency is then
$\eta=\frac{\dot{W}_t-\dot{W}_p}{\dot{Q}_{in}}=\frac{(h_1-h_2)-(h_4-h_3)}{h_1-h_4}$
Step2: (b) The back work ratio is
$bwr=\frac{\dot{W}_p}{\dot{W}_t}=\frac{h_4-h_3}{h_1-h_2}$
(c) The mass flow rate of the steam can be obtained from the expression for the net power given in part (a)
$\dot{m}=\frac{\dot{W}_{cycle}}{(h_1-h_2)-(h_4-h_3)}$
(d) With the expression for $\dot{Q}_{in}$ from part (a) and previously determined specific enthalpy values
$\dot{Q}_{in}=\dot{m}(h_1-h_4)$
(e) Mass and energy rate balances applied to a control volume enclosing the steam side of the condenser give
$\dot{Q}_{out}=\dot{m}(h_2-h_3)$
(f) Taking a control volume around the condenser, the mass and energy rate balances give at steady state
$\require{cancel} 0=\dot{\cancel{Q}}^{0}_{cv}-\dot{\cancel{W}}^{0}_{cv}+\dot{m}_{cw}(h_{cw,in}-h_{cw,out})+\dot{m}(h_2-h_3)$
where $\dot{m}_{cw}$ is the mass flow rate of the cooling water. Solving for $\dot{m}_{cw}$
$\dot{m}_{cw}=\frac{\dot{m}(h_2-h_3)}{h_{cw,out}-h_{cw,in}}$
Step3: 2 Example8.2
Step4: 2.2 Analysis of the Cycle
Step5: 1.2.3 T-S Diagram | Python Code:
from seuif97 import *
# State 1
p1 = 8.0 # in MPa
t1 = px2t(p1, 1)
h1 = px2h(p1, 1) # h1 = 2758.0 From table A-3 kj/kg
s1 = px2s(p1, 1) # s1 = 5.7432 From table A-3 kj/kg.k
# State 2 ,p2=0.008
p2 = 0.008
s2 = s1
t2 = ps2t(p2, s2)
h2 = ps2h(p2, s2)
# State 3 is saturated liquid at 0.008 MPa
p3 = 0.008
t3 = px2t(p3, 0)
h3 = px2h(p3, 0) # kj/kg
s3 = px2s(p3, 0)
# State 4
p4 = p1
s4 = s3
h4 = ps2h(p4, s4)
t4 = ps2t(p4, s4)
Explanation: The Rankine Cycle Examples 8.1 and 8.2
Michael J. Moran, Howard N. Shapiro, Daisie D. Boettner, Margaret B. Bailey. Fundamentals of Engineering Thermodynamics(7th Edition). John Wiley & Sons, Inc. 2011
Chapter 8 : Vapor Power Systems:
1 EXAMPLE 8.1 Analyzing an Ideal Rankine Cycle P438
2 EXAMPLE 8.2 Analyzing a Rankine Cycle with Irreversibilities P444
1 Example 8.1: Analyzing an Ideal Rankine Cycle
Steam is the working fluid in an ideal Rankine cycle.
Saturated vapor enters the turbine at 8.0 MPa and saturated liquid exits the condenser at a pressure of 0.008 MPa.
The net power output of the cycle is 100 MW.
Process 1–2: Isentropic expansion of the working fluid through the turbine from saturated vapor at state 1 to the condenser pressure.
Process 2–3: Heat transfer from the working fluid as it flows at constant pressure
through the condenser with saturated liquid at state 3.
Process 3–4: Isentropic compression in the pump to state 4 in the compressed liquid region.
Process 4–1: Heat transfer to the working fluid as it flows at constant pressure through the boiler to complete the cycle.
Determine for the cycle
(a) the thermal efficiency,
(b) the back work ratio,
(c) the mass flow rate of the steam,in kg/h,
(d) the rate of heat transfer, Qin, into the working fluid as it passes through the boiler, in MW,
(e) the rate of heat transfer, Qout, from the condensing steam as it passes through the condenser, in MW,
(f) the mass flow rate of the condenser cooling water, in kg/h, if cooling water enters the condenser at 15°C and exits at 35°C.
Engineering Model:
1 Each component of the cycle is analyzed as a control volume at steady state. The control volumes are shown on the accompanying sketch by dashed lines.
2 All processes of the working fluid are internally reversible.
3 The turbine and pump operate adiabatically.
4 Kinetic and potential energy effects are negligible.
5 Saturated vapor enters the turbine. Condensate exits the condenser as saturated liquid.
To begin the analysis, we fix each of the principal states(1,2,3,4) located on the accompanying schematic and T–s diagrams.
1.1 States
End of explanation
# Part(a)
# Mass and energy rate balances for control volumes
# around the turbine and pump give, respectively
# turbine
wtdot = h1 - h2
# pump
wpdot = h4-h3
# The rate of heat transfer to the working fluid as it passes
# through the boiler is determined using mass and energy rate balances as
qindot = h1-h4
# thermal efficiency
eta = (wtdot-wpdot)/qindot
# Result for part a
print('(a) The thermal efficiency for the cycle is {:>.2f}%'.format(eta*100))
Explanation: 1.2 Analysis of the Cycle
(a) The thermal efficiency
The net power developed by the cycle is
$\dot{W}_{cycle}=\dot{W}_t-\dot{W}_p$
Mass and energy rate balances for control volumes around the turbine and pump give,respectively
$\frac{\dot{W}_t}{\dot{m}}=h_1-h_2$
$\frac{\dot{W}_p}{\dot{m}}=h_4-h_3$
where $\dot{m}$ is the mass flow rate of the steam. The rate of heat transfer to the working fluid as it passes through the boiler is determined using mass and energy rate balances as
$\frac{\dot{Q}_{in}}{\dot{m}}=h_1-h_4$
The thermal efficiency is then
$\eta=\frac{\dot{W}_t-\dot{W}_p}{\dot{Q}_{in}}=\frac{(h_1-h_2)-(h_4-h_3)}{h_1-h_4}$
End of explanation
# Part(b)
# back work ratio:bwr, defined as the ratio of the pump work input to the work
# developed by the turbine.
bwr = wpdot/wtdot #
# Result
print('(b) The back work ratio is {:>.2f}%'.format(bwr*100))
# Part(c)
Wcycledot = 100.00 # the net power output of the cycle in MW
mdot = (Wcycledot*10**3*3600)/((h1-h2)-(h4-h3)) # mass flow rate in kg/h
# Result
print('(c) The mass flow rate of the steam is {:>.2f}kg/h'.format(mdot))
# Part(d)
Qindot = mdot*qindot/(3600*10**3) # in MW
# Results
print('(d) The rate of heat transfer Qindot into the working fluid as' +
' it passes through the boiler is {:>.2f}MW'.format(Qindot))
# Part(e)
Qoutdot = mdot*(h2-h3)/(3600*10**3) # in MW
# Results
print('(e) The rate of heat transfer Qoutdot from the condensing steam ' +
'as it passes through the condenser is {:>.2f}MW.'.format(Qoutdot))
# Part(f)
# Given:
tcwin = 15
tcwout = 35
hcwout = tx2h(tcwout, 0) # From table A-2,hcwout= 146.68 kj/kg
hcwin = tx2h(tcwin, 0) # hcwin 62.99
mcwdot = (Qoutdot*10**3*3600)/(hcwout-hcwin) # in kg/h
# Results
print('(f) The mass flow rate of the condenser cooling water is {:>.2f}kg/h.'.format(mcwdot))
Explanation: (b) The back work ratio is
$bwr=\frac{\dot{W}_p}{\dot{W}_t}=\frac{h_4-h_3}{h_1-h_2}$
(c) The mass flow rate of the steam can be obtained from the expression for the net power given in part (a)
$\dot{m}=\frac{\dot{W}_{cycle}}{(h_1-h_2)-(h_4-h_3)}$
(d) With the expression for $\dot{Q}_{in}$ from part (a) and previously determined specific enthalpy values
$\dot{Q}_{in}=\dot{m}(h_1-h_4)$
(e) Mass and energy rate balances applied to a control volume enclosing the steam side of the condenser give
$\dot{Q}_{out}=\dot{m}(h_2-h_3)$
(f) Taking a control volume around the condenser, the mass and energy rate balances give at steady state
$\require{cancel} 0=\dot{\cancel{Q}}^{0}_{cv}-\dot{\cancel{W}}^{0}_{cv}+\dot{m}_{cw}(h_{cw,in}-h_{cw,out})+\dot{m}(h_2-h_3)$
where $\dot{m}_{cw}$ is the mass flow rate of the cooling water. Solving for $\dot{m}_{cw}$
$\dot{m}_{cw}=\frac{\dot{m}(h_2-h_3)}{h_{cw,out}-h_{cw,in}}$
End of explanation
from seuif97 import *
# State 1
p1 = 8.0 # in MPa
t1 =px2t(p1,1)
h1=px2h(p1,1) # h1 = 2758.0 From table A-3 kj/kg
s1=px2s(p1,1) # s1 = 5.7432 From table A-3 kj/kg.k
# State 2 ,p2=0.008
p2=0.008
s2s = s1
h2s=ps2h(p2,s2s)
t2s=ps2t(p2,s2s)
etat_t=0.85
h2=h1-etat_t*(h1-h2s)
t2 =ph2t(p2,h2)
s2 =ph2s(p2,h2)
# State 3 is saturated liquid at 0.008 MPa
p3 = 0.008
t3=px2t(p3,0)
h3 =px2h(p3,0) # kj/kg
s3 =px2s(p3,0)
#State 4
p4 = p1
s4s=s3
h4s =ps2h(p4,s4s)
t4s =ps2t(p4,s4s)
etat_p=0.85
h4=h3+(h4s-h3)/etat_p
t4 =ph2t(p4,h4)
s4 =ph2s(p4,h4)
Explanation: 2 Example 8.2: Analyzing a Rankine Cycle with Irreversibilities
Reconsider the vapor power cycle of Example 8.1, but include in the analysis that the turbine and the pump each have an isentropic efficiency of 85%.
Determine for the modified cycle
(a) the thermal efficiency,
(b) the mass flow rate of steam, in kg/h, for a net power output of 100MW,
(c) the rate of heat transfer $\dot{Q}_{in}$ into the working fluid as it passes through the boiler, in MW,
(d) the rate of heat transfer $\dot{Q}_{out}$ from the condensing steam as it passes through the condenser, in MW,
(e) the mass flow rate of the condenser cooling water, in kg/h, if cooling water enters the condenser at 15°C and exits at 35°C.
SOLUTION
Known: A vapor power cycle operates with steam as the working fluid. The turbine and pump both have efficiencies of 85%.
Find: Determine the thermal efficiency, the mass flow rate, in kg/h, the rate of heat transfer to the working fluid as it passes through the boiler, in MW, the heat transfer rate from the condensing steam as it passes through the condenser, in MW, and the mass flow rate of the condenser cooling water, in kg/h.
Engineering Model:
Each component of the cycle is analyzed as a control volume at steady state.
The working fluid passes through the boiler and condenser at constant pressure. Saturated vapor enters the turbine. The condensate is saturated at the condenser exit.
The turbine and pump each operate adiabatically with an efficiency of 85%.
Kinetic and potential energy effects are negligible
Analysis:
Owing to the presence of irreversibilities during the expansion of the steam through the turbine, there is an increase in specific entropy from turbine inlet to exit, as shown on the accompanying T–s diagram. Similarly, there is an increase in specific entropy from pump inlet to exit.
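The isentropic efficiencies relate the actual enthalpy changes to the ideal (isentropic) ones; these standard definitions are what fix states 2 and 4 in the code above, where $h_{2s}$ and $h_{4s}$ correspond to the variables h2s and h4s:
$$\eta_t=\frac{h_1-h_2}{h_1-h_{2s}} \quad\Rightarrow\quad h_2=h_1-\eta_t\,(h_1-h_{2s})$$
$$\eta_p=\frac{h_{4s}-h_3}{h_4-h_3} \quad\Rightarrow\quad h_4=h_3+\frac{h_{4s}-h_3}{\eta_p}$$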
Let us begin the analysis by fixing each of the principal states.
1.2 States
End of explanation
# Part(a)
eta = ((h1-h2)-(h4-h3))/(h1-h4) # thermal efficiency
# Result for part (a)
print('Thermal efficiency is: {:>.2f}%'.format(100*eta))
# Part(b)
Wcycledot = 100 # given,a net power output of 100 MW
# Calculations
mdot = (Wcycledot*(10**3)*3600)/((h1-h2)-(h4-h3))
# Result for part (b)
print('The mass flow rate of steam for a net power output of 100 MW is {:>.2f}kg/h'.format(mdot))
# Part(c)
Qindot = mdot*(h1-h4)/(3600 * 10**3)
# Result
print('The rate of heat transfer Qindot into the working fluid as it passes through the boiler, is {:>.2f}MW.'.format(Qindot))
# Part(d)
Qoutdot = mdot*(h2-h3)/(3600*10**3)
# Result
print('The rate of heat transfer Qoutdot from the condensing steam as it passes through the condenser, is {:>.2f}MW.'.format(Qoutdot))
# Part(e)
tcwin = 15
tcwout = 35
hcwout = tx2h(tcwout, 0) # From table A-2,hcwout= 146.68 kj/kg
hcwin = tx2h(tcwin, 0) # hcwin 62.99
mcwdot = (Qoutdot*10**3*3600)/(hcwout-hcwin)
# Result
print('The mass flow rate of the condenser cooling water, is {:>.2f}kg/h'.format(mcwdot))
Explanation: 2.2 Analysis of the Cycle
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(10.0,5.0))
# saturated vapor and liquid entropy lines
npt = np.linspace(10,647.096-273.15,200) # range of temperatures
svap = [s for s in [tx2s(t, 1) for t in npt]]
sliq = [s for s in [tx2s(t, 0) for t in npt]]
plt.plot(svap, npt, 'r-')
plt.plot(sliq, npt, 'b-')
t=[t1,t2s,t3,t4s+15]
s=[s1,s2s,s3,s4s]
# point 5
t.append(px2t(p1,0))
s.append(px2s(p1,0))
t.append(t1)
s.append(s1)
plt.plot(s, t, 'ko-')
tb=[t1,t2]
sb=[s1,s2]
plt.plot(sb, tb, 'k--')
tist=[t2,t2s]
sist=[s2,s2s]
plt.plot(sist, tist, 'ko-')
sp=[s3,s3+0.3]
tp=[t3,ps2t(p4,s3+0.3)+15]
plt.plot(sp, tp, 'ko--')
tist=[t2,t2s]
sist=[s2,px2s(p2,1)]
plt.plot(sist, tist, 'g-')
plt.xlabel('Entropy (kJ/(kg K)')
plt.ylabel('Temperature (°C)')
plt.grid()
Explanation: 1.2.3 T-S Diagram
End of explanation |
4,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting the outcome of a US presidential election using Bayesian optimal experimental design
In this tutorial, we explore the use of optimal experimental design techniques to create an optimal polling strategy to predict the outcome of a US presidential election. In a previous tutorial, we explored the use of Bayesian optimal experimental design to learn the working memory capacity of a single person. Here, we apply the same concepts to study a whole country.
To begin, we need a Bayesian model of the winner of the election w, as well as the outcome y of any poll we may plan to conduct. The experimental design is the number of people $n_i$ to poll in each state. To set up our exploratory model, we are going to make a number of simplifying assumptions. We will use historical election data 1976-2012 to construct a plausible prior and the 2016 election as our test set
Step1: The winner $w$ of the election is
$$ w = \begin{cases}
\text{Democrats if } \sum_i e_i > \frac{1}{2}\sum_i k_i \\
\text{Republicans otherwise}
\end{cases}
$$
In code, this is expressed as follows
Step2: We are interested in polling strategies that will help us predict $w$, rather than predicting the more complex state-by-state results $\alpha$.
To set up a fully Bayesian model, we need a prior for $\alpha$. We will base the prior on the outcome of some historical presidential elections. Specifically, we'll use the following dataset of state-by-state election results for the presidential elections 1976-2012 inclusive. Note that votes for parties other than Democrats and Republicans have been ignored.
Step3: Based on this data alone, we will base our prior mean for $\alpha$ solely on the 2012 election. Our model will be based on logistic regression, so we will transform the probability of voting Democrat using the logit function. Specifically, we'll choose a prior mean as follows
Step4: Our prior distribution for $\alpha$ will be a multivariate Normal with mean prior_mean. The only thing left to decide upon is the covariance matrix. Since alpha values are logit-transformed, the covariance will be defined in logit space as well.
Aside
Step5: Setting up the model
We are now in a position to define our model. At a high-level the model works as follows
Step6: Understanding the prior
Before we go any further, we're going to study the model to check it matches with our intuition about US presidential elections.
First of all, let's look at an upper and lower confidence limit for the proportion of voters who will vote Democrat in each state.
Step7: The prior on $\alpha$ implicitly defines our prior on w. We can investigate this prior by simulating many times from the prior.
Step8: Since our prior is based on 2012 and the Democrats won in 2012, it makes sense that we would favour a Democrat win in 2016 (this is before we have seen any polling data or incorporated any other information).
We can also investigate which states, a priori, are most marginal.
Step9: This is a sanity check, and seems to accord with our intuitions. Florida is frequently an important swing state and is top of our list of marginal states under the prior. We can also see states such as Pennsylvania and Wisconsin near the top of the list -- we know that these were instrumental in the 2016 election.
Finally, we take a closer look at our prior covariance. Specifically, we examine states that we expect to be more or less correlated. Let's begin by looking at states in New England
Step10: Clearly, these states tend to vote similarly. We can also examine some states of the South which we also expect to be similar.
Step11: These correlation matrices show that, as expected, logical groupings of states tend to have similar voting trends. We now look at the correlations between the groups (e.g. between Maine and Louisiana).
Step12: Now, we see weaker correlation between New England states and Southern states than the correlation within those grouping. Again, this is as expected.
Measuring the expected information gain of a polling strategy
The prior we have set up appears to accord, at least approximately, with intuition. However, we now want to add a second source of information from polling. We aim to use our prior to select a polling strategy that will be most informative about our target $w$. A polling strategy, in this simplified set-up, is the number of people to poll in each state. (We ignore any other covariates such as regional variation inside states, demographics, etc.) We might imagine that polling 1000 people in Florida (the most marginal state), will be much more effective than polling 1000 people in DC (the least marginal state). That's because the outcome in DC is already quite predictable, just based on our prior, whereas the outcome in Florida is really up for grabs.
In fact, the information that our model will gain about $w$ based on conducting a poll with design $d$ and getting outcome $y$ can be described mathematically as follows
Step13: We'll now use this to compute the EIG for several possible polling strategies. First, we need to compute the $H(p(w))$ term in the above formula.
Step14: Let's consider four simple polling strategies.
1. Poll 1000 people in Florida only
2. Poll 1000 people in DC only
3. Poll 1000 people spread evenly over the US
4. Using a polling allocation that focuses on swing states
Step15: We'll now compute the EIG for each option. Since this requires training the network (four times) it may take several minutes.
Step16: Running the experiment
We have now scored our four candidate designs and can choose the best one to use to actually gather new data. In this notebook, we will simulate the new data using the results from the 2016 election. Specifically, we will assume that the outcome of the poll comes from our model, where we condition the value of alpha to correspond to the actual results in 2016.
First, we retrain $q$ with the chosen polling strategy.
Step17: The value of $\alpha$ implied by the 2016 results is computed in the same way we computed the prior.
Step18: Let's view the outcome of our poll.
Step19: Analysing the data
Having collected our data, we can now perform inference under our model to obtain the posterior probability of a Democrat win. There are many ways to perform the inference | Python Code:
# Data path
BASE_URL = "https://d2hg8soec8ck9v.cloudfront.net/datasets/us_elections/"
import pandas as pd
import torch
from urllib.request import urlopen
electoral_college_votes = pd.read_pickle(urlopen(BASE_URL + "electoral_college_votes.pickle"))
print(electoral_college_votes.head())
ec_votes_tensor = torch.tensor(electoral_college_votes.values, dtype=torch.float).squeeze()
Explanation: Predicting the outcome of a US presidential election using Bayesian optimal experimental design
In this tutorial, we explore the use of optimal experimental design techniques to create an optimal polling strategy to predict the outcome of a US presidential election. In a previous tutorial, we explored the use of Bayesian optimal experimental design to learn the working memory capacity of a single person. Here, we apply the same concepts to study a whole country.
To begin, we need a Bayesian model of the winner of the election w, as well as the outcome y of any poll we may plan to conduct. The experimental design is the number of people $n_i$ to poll in each state. To set up our exploratory model, we are going to make a number of simplifying assumptions. We will use historical election data 1976-2012 to construct a plausible prior and the 2016 election as our test set: we imagine that we are conducting polling just before the 2016 election.
Choosing a prior
In our model, we include a 51-dimensional latent variable alpha. For each of the 50 states plus DC we define
$$ \alpha_i = \text{logit }\mathbb{P}(\text{a random voter in state } i \text{ votes Democrat in the 2016 election}) $$
and we assume all other voters vote Republican. Right before the election, the value of $\alpha$ is unknown and we wish to estimate it by conducting a poll with $n_i$ people in state $i$ for $i=1, ..., 51$. The winner $w$ of the election is decided by the Electoral College system. The number of electoral college votes gained by the Democrats in state $i$ is
$$e_i = \begin{cases}
k_i \text{ if } \alpha_i > 0 \\
0 \text{ otherwise}
\end{cases}
$$
(This is a rough approximation of the true system.) All other electoral college votes go to the Republicans. Here $k_i$ is the number of electoral college votes allotted to state $i$, which are listed in the following data frame.
End of explanation
def election_winner(alpha):
dem_win_state = (alpha > 0.).float()
dem_electoral_college_votes = ec_votes_tensor * dem_win_state
w = (dem_electoral_college_votes.sum(-1) / ec_votes_tensor.sum(-1) > .5).float()
return w
Explanation: The winner $w$ of the election is
$$ w = \begin{cases}
\text{Democrats if } \sum_i e_i > \frac{1}{2}\sum_i k_i \\
\text{Republicans otherwise}
\end{cases}
$$
In code, this is expressed as follows
End of explanation
frame = pd.read_pickle(urlopen(BASE_URL + "us_presidential_election_data_historical.pickle"))
print(frame[[1976, 1980, 1984]].head())
Explanation: We are interested in polling strategies that will help us predict $w$, rather than predicting the more complex state-by-state results $\alpha$.
To set up a fully Bayesian model, we need a prior for $\alpha$. We will base the prior on the outcome of some historical presidential elections. Specifically, we'll use the following dataset of state-by-state election results for the presidential elections 1976-2012 inclusive. Note that votes for parties other than Democrats and Republicans have been ignored.
End of explanation
results_2012 = torch.tensor(frame[2012].values, dtype=torch.float)
prior_mean = torch.log(results_2012[..., 0] / results_2012[..., 1])
Explanation: Based on this data alone, we will base our prior mean for $\alpha$ solely on the 2012 election. Our model will be based on logistic regression, so we will transform the probability of voting Democrat using the logit function. Specifically, we'll choose a prior mean as follows:
End of explanation
idx = 2 * torch.arange(10)
as_tensor = torch.tensor(frame.values, dtype=torch.float)
logits = torch.log(as_tensor[..., idx] / as_tensor[..., idx + 1]).transpose(0, 1)
mean = logits.mean(0)
sample_covariance = (1/(logits.shape[0] - 1)) * (
(logits.unsqueeze(-1) - mean) * (logits.unsqueeze(-2) - mean)
).sum(0)
prior_covariance = sample_covariance + 0.01 * torch.eye(sample_covariance.shape[0])
Explanation: Our prior distribution for $\alpha$ will be a multivariate Normal with mean prior_mean. The only thing left to decide upon is the covariance matrix. Since alpha values are logit-transformed, the covariance will be defined in logit space as well.
Aside: The prior covariance is important in a number of ways. If we allow too much variance, the prior will be uncertain about the outcome in every state, and require polling everywhere. If we allow too little variance, we may be caught off-guard by an unexpected electoral outcome. If we assume states are independent, then we will not be able to pool information across states; but assume too much correlation and we could too faithfully base predictions about one state from poll results in another.
We select the prior covariance by taking the empirical covariance from the elections 1976 - 2012 and adding a small value 0.01 to the diagonal.
End of explanation
import pyro
import pyro.distributions as dist
def model(polling_allocation):
# This allows us to run many copies of the model in parallel
with pyro.plate_stack("plate_stack", polling_allocation.shape[:-1]):
# Begin by sampling alpha
alpha = pyro.sample("alpha", dist.MultivariateNormal(
prior_mean, covariance_matrix=prior_covariance))
# Sample y conditional on alpha
poll_results = pyro.sample("y", dist.Binomial(
polling_allocation, logits=alpha).to_event(1))
# Now compute w according to the (approximate) electoral college formula
dem_win = election_winner(alpha)
pyro.sample("w", dist.Delta(dem_win))
return poll_results, dem_win, alpha
Explanation: Setting up the model
We are now in a position to define our model. At a high-level the model works as follows:
$\alpha$ is multivariate Normal
$w$ is a deterministic function of $\alpha$
$y_i$ is Binomial($n_i$, sigmoid($\alpha_i$)) so we are assuming that people respond to the poll in exactly the same way that they will vote on election day
In Pyro, this model looks as follows
End of explanation
std = prior_covariance.diag().sqrt()
ci = pd.DataFrame({"State": frame.index,
"Lower confidence limit": torch.sigmoid(prior_mean - 1.96 * std),
"Upper confidence limit": torch.sigmoid(prior_mean + 1.96 * std)}
).set_index("State")
print(ci.head())
Explanation: Understanding the prior
Before we go any further, we're going to study the model to check it matches with our intuition about US presidential elections.
First of all, let's look at an upper and lower confidence limit for the proportion of voters who will vote Democrat in each state.
End of explanation
_, dem_wins, alpha_samples = model(torch.ones(100000, 51))
prior_w_prob = dem_wins.float().mean()
print("Prior probability of Dem win", prior_w_prob.item())
Explanation: The prior on $\alpha$ implicitly defines our prior on w. We can investigate this prior by simulating many times from the prior.
End of explanation
dem_prob = (alpha_samples > 0.).float().mean(0)
marginal = torch.argsort((dem_prob - .5).abs()).numpy()
prior_prob_dem = pd.DataFrame({"State": frame.index[marginal],
"Democrat win probability": dem_prob.numpy()[marginal]}
).set_index('State')
print(prior_prob_dem.head())
Explanation: Since our prior is based on 2012 and the Democrats won in 2012, it makes sense that we would favour a Democrat win in 2016 (this is before we have seen any polling data or incorporated any other information).
We can also investigate which states, a priori, are most marginal.
End of explanation
import numpy as np
def correlation(cov):
return cov / np.sqrt(np.expand_dims(np.diag(cov.values), 0) * np.expand_dims(np.diag(cov.values), 1))
new_england_states = ['ME', 'VT', 'NH', 'MA', 'RI', 'CT']
cov_as_frame = pd.DataFrame(prior_covariance.numpy(), columns=frame.index).set_index(frame.index)
ne_cov = cov_as_frame.loc[new_england_states, new_england_states]
ne_corr = correlation(ne_cov)
print(ne_corr)
Explanation: This is a sanity check, and seems to accord with our intuitions. Florida is frequently an important swing state and is top of our list of marginal states under the prior. We can also see states such as Pennsylvania and Wisconsin near the top of the list -- we know that these were instrumental in the 2016 election.
Finally, we take a closer look at our prior covariance. Specifically, we examine states that we expect to be more or less correlated. Let's begin by looking at states in New England
End of explanation
southern_states = ['LA', 'MS', 'AL', 'GA', 'SC']
southern_cov = cov_as_frame.loc[southern_states, southern_states]
southern_corr = correlation(southern_cov)
print(southern_corr)
Explanation: Clearly, these states tend to vote similarly. We can also examine some states of the South which we also expect to be similar.
End of explanation
cross_cov = cov_as_frame.loc[new_england_states + southern_states, new_england_states + southern_states]
cross_corr = correlation(cross_cov)
print(cross_corr.loc[new_england_states, southern_states])
Explanation: These correlation matrices show that, as expected, logical groupings of states tend to have similar voting trends. We now look at the correlations between the groups (e.g. between Maine and Louisiana).
End of explanation
from torch import nn
class OutcomePredictor(nn.Module):
def __init__(self):
super().__init__()
self.h1 = nn.Linear(51, 64)
self.h2 = nn.Linear(64, 64)
self.h3 = nn.Linear(64, 1)
def compute_dem_probability(self, y):
z = nn.functional.relu(self.h1(y))
z = nn.functional.relu(self.h2(z))
return self.h3(z)
def forward(self, y_dict, design, observation_labels, target_labels):
pyro.module("posterior_guide", self)
y = y_dict["y"]
dem_prob = self.compute_dem_probability(y).squeeze()
pyro.sample("w", dist.Bernoulli(logits=dem_prob))
Explanation: Now, we see weaker correlation between New England states and Southern states than the correlation within those groupings. Again, this is as expected.
Measuring the expected information gain of a polling strategy
The prior we have set up appears to accord, at least approximately, with intuition. However, we now want to add a second source of information from polling. We aim to use our prior to select a polling strategy that will be most informative about our target $w$. A polling strategy, in this simplified set-up, is the number of people to poll in each state. (We ignore any other covariates such as regional variation inside states, demographics, etc.) We might imagine that polling 1000 people in Florida (the most marginal state), will be much more effective than polling 1000 people in DC (the least marginal state). That's because the outcome in DC is already quite predictable, just based on our prior, whereas the outcome in Florida is really up for grabs.
In fact, the information that our model will gain about $w$ based on conducting a poll with design $d$ and getting outcome $y$ can be described mathematically as follows:
$$\text{IG}(d, y) = KL(p(w|y,d)||p(w)).$$
Since the outcome of the poll is at present unknown, we consider the expected information gain [1]
$$\text{EIG}(d) = \mathbb{E}_{p(y|d)}[KL(p(w|y,d)||p(w))].$$
Variational estimators of EIG
In the working memory tutorial, we used the 'marginal' estimator to find the EIG. This involved estimating the marginal density $p(y|d)$. In this experiment, that would be relatively difficult: $y$ is 51-dimensional with some rather tricky constraints that make modelling its density difficult. Furthermore, the marginal estimator requires us to know $p(y|w)$ analytically, which we do not.
Fortunately, other variational estimators of EIG exist: see [2] for more details. One such variational estimator is the 'posterior' estimator, based on the following representation
$$\text{EIG}(d) = \max_q \mathbb{E}_{p(w, y|d)}\left[\log q(w|y) \right] + H(p(w)).$$
Here, $H(p(w))$ is the prior entropy on $w$ (we can compute this quite easily). The important term involves the variational approximation $q(w|y)$. This $q$ can be used to perform amortized variational inference. Specifically, it takes as input $y$ and outputs a distribution over $w$. The bound is maximised when $q(w|y) = p(w|y)$ [2]. Since $w$ is a binary random variable, we can think of $q$ as a classifier that tries to decide, based on the poll outcome, who the eventual winner of the election will be. In this notebook, $q$ will be a neural classifier. Training a neural classifier is a fair bit easier than learning the marginal density of $y$, so we adopt this method to estimate the EIG in this tutorial.
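Conceptually, the quantity that gets maximised is just the average $\log q(w|y)$ over pairs $(w, y)$ simulated from the model. A naive Monte Carlo sketch of that objective is shown below; it is purely illustrative (the function name is made up and this is not how Pyro's estimator is actually implemented).
def naive_posterior_bound(model, guide, design, num_samples=1000):
    # estimate E_{p(w,y|d)}[log q(w|y)] by simple Monte Carlo
    total = 0.
    for _ in range(num_samples):
        y_sim, w_sim, _ = model(design)  # simulate (y, w) from the prior predictive
        logit = guide.compute_dem_probability(y_sim).squeeze()
        total = total + dist.Bernoulli(logits=logit).log_prob(w_sim)
    # after training q to maximise this, EIG is approximately this value plus H(p(w))
    return total / num_samples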
End of explanation
prior_entropy = dist.Bernoulli(prior_w_prob).entropy()
Explanation: We'll now use this to compute the EIG for several possible polling strategies. First, we need to compute the $H(p(w))$ term in the above formula.
End of explanation
from collections import OrderedDict
poll_in_florida = torch.zeros(51)
poll_in_florida[9] = 1000
poll_in_dc = torch.zeros(51)
poll_in_dc[8] = 1000
uniform_poll = (1000 // 51) * torch.ones(51)
# The swing score measures how close the state is to 50/50
swing_score = 1. / (.5 - torch.tensor(prior_prob_dem.sort_values("State").values).squeeze()).abs()
swing_poll = 1000 * swing_score / swing_score.sum()
swing_poll = swing_poll.round()
poll_strategies = OrderedDict([("Florida", poll_in_florida),
("DC", poll_in_dc),
("Uniform", uniform_poll),
("Swing", swing_poll)])
Explanation: Let's consider four simple polling strategies.
1. Poll 1000 people in Florida only
2. Poll 1000 people in DC only
3. Poll 1000 people spread evenly over the US
4. Using a polling allocation that focuses on swing states
End of explanation
from pyro.contrib.oed.eig import posterior_eig
from pyro.optim import Adam
eigs = {}
best_strategy, best_eig = None, 0
for strategy, allocation in poll_strategies.items():
print(strategy, end=" ")
guide = OutcomePredictor()
pyro.clear_param_store()
# To reduce noise when comparing designs, we will use the precomputed value of H(p(w))
# By passing eig=False, we tell Pyro not to estimate the prior entropy on each run
# The return value of `posterior_eig` is then -E_p(w,y)[log q(w|y)]
ape = posterior_eig(model, allocation, "y", "w", 10, 12500, guide,
Adam({"lr": 0.001}), eig=False, final_num_samples=10000)
eigs[strategy] = prior_entropy - ape
print(eigs[strategy].item())
if eigs[strategy] > best_eig:
best_strategy, best_eig = strategy, eigs[strategy]
Explanation: We'll now compute the EIG for each option. Since this requires training the network (four times) it may take several minutes.
End of explanation
best_allocation = poll_strategies[best_strategy]
pyro.clear_param_store()
guide = OutcomePredictor()
posterior_eig(model, best_allocation, "y", "w", 10, 12500, guide,
Adam({"lr": 0.001}), eig=False)
Explanation: Running the experiment
We have now scored our four candidate designs and can choose the best one to use to actually gather new data. In this notebook, we will simulate the new data using the results from the 2016 election. Specifically, we will assume that the outcome of the poll comes from our model, where we condition the value of alpha to correspond to the actual results in 2016.
First, we retrain $q$ with the chosen polling strategy.
End of explanation
test_data = pd.read_pickle(urlopen(BASE_URL + "us_presidential_election_data_test.pickle"))
results_2016 = torch.tensor(test_data.values, dtype=torch.float)
true_alpha = torch.log(results_2016[..., 0] / results_2016[..., 1])
conditioned_model = pyro.condition(model, data={"alpha": true_alpha})
y, _, _ = conditioned_model(best_allocation)
Explanation: The value of $\alpha$ implied by the 2016 results is computed in the same way we computed the prior.
End of explanation
outcome = pd.DataFrame({"State": frame.index,
"Number of people polled": best_allocation,
"Number who said they would vote Democrat": y}
).set_index("State")
print(outcome.sort_values(["Number of people polled", "State"], ascending=[False, True]).head())
Explanation: Let's view the outcome of our poll.
End of explanation
q_w = torch.sigmoid(guide.compute_dem_probability(y).squeeze().detach())
print("Prior probability of Democrat win", prior_w_prob.item())
print("Posterior probability of Democrat win", q_w.item())
Explanation: Analysing the data
Having collected our data, we can now perform inference under our model to obtain the posterior probability of a Democrat win. There are many ways to perform the inference: for instance we could use variational inference with Pyro's SVI or MCMC such as Pyro's NUTS. Using these methods, we would compute the posterior over alpha, and use this to obtain the posterior over w.
However, a quick way to analyze the data is to use the neural network we have already trained. At convergence, we expect the network to give a good approximation to the true posterior, i.e. $q(w|y) \approx p(w|y)$.
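For reference, a rough, untested sketch of the MCMC route mentioned above might look like the following (the settings are illustrative assumptions and this is not part of the pipeline used in this notebook):
from pyro.infer import MCMC, NUTS
from pyro import poutine
# condition on the observed poll and hide the deterministic "w" site so NUTS only sees alpha
observed_model = poutine.block(pyro.condition(model, data={"y": y}), hide=["w"])
nuts_kernel = NUTS(observed_model)
mcmc = MCMC(nuts_kernel, num_samples=500, warmup_steps=500)
mcmc.run(best_allocation)
alpha_post = mcmc.get_samples()["alpha"]
print("MCMC posterior probability of Democrat win:", election_winner(alpha_post).mean().item())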
End of explanation |
4,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SpaCy
Step1: Let's start out with a short string from our reading and see what happens.
Step2: We've downloaded the English model, and now we just have to load it. This model will do everything for us, but we'll only get a little taste today.
Step3: To parse an entire text we just call the model on a string.
Step4: That was quick! So what happened? We've talked a lot about tokenizing, either in words or sentences.
What about sentences?
Step5: Words?
Step6: What about parts of speech?
Step7: Lemmata?
Step8: What else? Let's just make a function tablefy that will make a table of all this information for us
Step9: Challenge
What's the most common verb? Noun? What if you only include lemmata? What if you remove "stop words"?
How would lemmatizing or removing "stop words" help us better understand a text over regular tokenizing?
Dependency Parsing
Let's look at our text again
Step10: Dependency parsing is one of the most useful and interesting NLP tools. A dependency parser will draw a tree of relationships between words. This is how you can find out specifically what adjectives are attributed to a specific person, what verbs are associated with a specific subject, etc.
spacy provides an online visualizer named "displaCy" to visualize dependencies. Let's look at the first sentence
We can loop through a dependency for a subject by checking the head attribute for the pos tag
Step11: You can imagine that you could look over a large corpus to analyze first person, second person, and third person characterizations. Dependency parsers are also important for understanding and processing natural language, a question answering system for example. These models help the computer understand what the question is that is being asked.
Limitations
How accurate are the models? What happens if we change the style of English we're working with?
Step12: NER and Civil War-Era Novels
Wilkens uses a technique called "NER", or "Named Entity Recognition" to let the computer identify all of the geographic place names. Wilkens writes
Step13: Cool! It's identified a few types of things for us. We can check what these mean here. GPE is country, cities, or states. Seems like that's what Wilkens was using.
Since we don't have his corpus of 1000 novels, let's just take our reading, A Romance of the Republic, as an example. We can use the requests library to get the raw HTML of a web page, and if we take the .text property we can make this a nice string.
Step14: We'll leave the chapter headers for now, it shouldn't affect much. Now we need to parse this with that nlp function
Step15: Challenge
With this larger string, find the most common noun, verb, and adjective. Then explore the other features of spacy and see what you can discover about our reading
Step16: That looks OK, but it's pretty rough! Keep this in mind when using trained models. They aren't 100% accurate. That's why Wilkens went through by hand after to get rid of the garbage.
If you thought NER was cool, wait for this. Now that we have a list of "places", we can send that to an online database to get back latitude and longitude coordinates (much like Wilkens used Google's geocoder), along with the US state. To make sure it's actually a US state, we'll need a list to compare to. So let's load that
Step17: OK, now we're ready. The Nominatim function from the geopy library will return an object that has the properties we want. We'll append a new row to our table for each entry. Importantly, we're using the keys of the places counter because we don't need to ask the database for "New Orleans" 10 times to get the location. So after we get the information we'll just add as many rows as the counter tells us there are.
Step18: Now we can plot a nice choropleth. | Python Code:
from datascience import *
import spacy
Explanation: SpaCy: Industrial-Strength NLP
The traditional NLP library has always been NLTK. While NLTK is still very useful for linguistic analysis and exploration, spacy has become a nice option for easy and fast implementation of the NLP pipeline. What's the NLP pipeline? It's a number of common steps computational linguists perform to help them (and the computer) better understand textual data. Digital Humanists are often fond of the pipeline because it gives us more things to count! Let's see what spacy can give us that we can count.
End of explanation
my_string = '''
"What are you going to do with yourself this evening, Alfred?" said Mr.
Royal to his companion, as they issued from his counting-house in New
Orleans. "Perhaps I ought to apologize for not calling you Mr. King,
considering the shortness of our acquaintance; but your father and I
were like brothers in our youth, and you resemble him so much, I can
hardly realize that you are not he himself, and I still a young man.
It used to be a joke with us that we must be cousins, since he was a
King and I was of the Royal family. So excuse me if I say to you, as
I used to say to him. What are you going to do with yourself, Cousin
Alfred?"
"I thank you for the friendly familiarity," rejoined the young man.
"It is pleasant to know that I remind you so strongly of my good
father. My most earnest wish is to resemble him in character as much
as I am said to resemble him in person. I have formed no plans for the
evening. I was just about to ask you what there was best worth seeing
or hearing in the Crescent City."'''.replace("\n", " ")
Explanation: Let's start out with a short string from our reading and see what happens.
End of explanation
nlp = spacy.load('en')
# nlp = spacy.load('en', parser=False) # run this instead if you don't have > 1GB RAM
Explanation: We've downloaded the English model, and now we just have to load it. This model will do everything for us, but we'll only get a little taste today.
End of explanation
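One practical caveat before going on: the 'en' shortcut above is tied to older spaCy releases. On spaCy 2.x and later the small English model is usually installed as 'en_core_web_sm', so a fallback like the following sketch can save a reinstall (the model name and the OSError behaviour are assumptions about a 2.x-style install):
try:
    nlp = spacy.load('en')
except OSError:
    # newer spaCy releases drop the 'en' shortcut; assumes en_core_web_sm has been downloaded
    nlp = spacy.load('en_core_web_sm')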
parsed_text = nlp(my_string)
parsed_text
Explanation: To parse an entire text we just call the model on a string.
End of explanation
sents_tab = Table()
sents_tab.append_column(label="Sentence", values=[sentence.text for sentence in parsed_text.sents])
sents_tab.show()
Explanation: That was quick! So what happened? We've talked a lot about tokenizing, either in words or sentences.
What about sentences?
End of explanation
toks_tab = Table()
toks_tab.append_column(label="Word", values=[word.text for word in parsed_text])
toks_tab.show()
Explanation: Words?
End of explanation
toks_tab.append_column(label="POS", values=[word.pos_ for word in parsed_text])
toks_tab.show()
Explanation: What about parts of speech?
End of explanation
toks_tab.append_column(label="Lemma", values=[word.lemma_ for word in parsed_text])
toks_tab.show()
Explanation: Lemmata?
End of explanation
def tablefy(parsed_text):
toks_tab = Table()
toks_tab.append_column(label="Word", values=[word.text for word in parsed_text])
toks_tab.append_column(label="POS", values=[word.pos_ for word in parsed_text])
toks_tab.append_column(label="Lemma", values=[word.lemma_ for word in parsed_text])
toks_tab.append_column(label="Stop Word", values=[word.is_stop for word in parsed_text])
toks_tab.append_column(label="Punctuation", values=[word.is_punct for word in parsed_text])
toks_tab.append_column(label="Space", values=[word.is_space for word in parsed_text])
toks_tab.append_column(label="Number", values=[word.like_num for word in parsed_text])
toks_tab.append_column(label="OOV", values=[word.is_oov for word in parsed_text])
toks_tab.append_column(label="Dependency", values=[word.dep_ for word in parsed_text])
return toks_tab
tablefy(parsed_text).show()
Explanation: What else? Let's just make a function tablefy that will make a table of all this information for us:
End of explanation
parsed_text
Explanation: Challenge
What's the most common verb? Noun? What if you only include lemmata? What if you remove "stop words"? (A starting-point sketch for counting nouns follows this cell.)
How would lemmatizing or removing "stop words" help us better understand a text over regular tokenizing?
Dependency Parsing
Let's look at our text again:
End of explanation
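Back to the challenge for a moment before we dig into dependencies. As a starting hint rather than a full solution, one pattern is to pour the POS-filtered tokens into a Counter; swapping 'NOUN' for 'VERB' or 'ADJ', or word.text for word.lemma_, covers the other questions:
from collections import Counter

# most common nouns, skipping stop words and punctuation
noun_counts = Counter(word.text.lower() for word in parsed_text
                      if word.pos_ == 'NOUN' and not word.is_stop and not word.is_punct)
noun_counts.most_common(5)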
from spacy.symbols import nsubj, VERB
SV = []
for possible_subject in parsed_text:
if possible_subject.dep == nsubj and possible_subject.head.pos == VERB:
SV.append((possible_subject.text, possible_subject.head))
sv_tab = Table()
sv_tab.append_column(label="Subject", values=[x[0] for x in SV])
sv_tab.append_column(label="Verb", values=[x[1] for x in SV])
sv_tab.show()
Explanation: Dependency parsing is one of the most useful and interesting NLP tools. A dependency parser will draw a tree of relationships between words. This is how you can find out specifically what adjectives are attributed to a specific person, what verbs are associated with a specific subject, etc.
spacy provides an online visualizer named "displaCy" for drawing dependency trees, and recent releases also bundle it directly in the library (a minimal rendering sketch follows this cell). Let's look at the first sentence.
We can loop through a dependency for a subject by checking the head attribute for the pos tag:
End of explanation
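Here is the rendering sketch promised above. It assumes spaCy 2.0 or later, where displaCy ships inside the library rather than only as an online demo; the first sentence is re-parsed on its own so that render receives a complete Doc:
from spacy import displacy

# re-parse the first sentence as its own Doc and draw its dependency tree inline
first_sentence = list(parsed_text.sents)[0].text
displacy.render(nlp(first_sentence), style='dep', jupyter=True)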
shakespeare = '''
Tush! Never tell me; I take it much unkindly
That thou, Iago, who hast had my purse
As if the strings were thine, shouldst know of this.
'''
shake_parsed = nlp(shakespeare.strip())
tablefy(shake_parsed).show()
huck_finn_jim = '''
“Who dah?” “Say, who is you? Whar is you? Dog my cats ef I didn’ hear sumf’n.
Well, I know what I’s gwyne to do: I’s gwyne to set down here and listen tell I hears it agin.”"
'''
hf_parsed = nlp(huck_finn_jim.strip())
tablefy(hf_parsed).show()
text_speech = '''
LOL where r u rn? omg that's sooo funnnnnny. c u in a sec.
'''
ts_parsed = nlp(text_speech.strip())
tablefy(ts_parsed).show()
old_english = '''
þæt wearð underne eorðbuendum,
þæt meotod hæfde miht and strengðo
ða he gefestnade foldan sceatas.
'''
oe_parsed = nlp(old_english.strip())
tablefy(oe_parsed).show()
Explanation: You can imagine looking over a large corpus to analyze first-person, second-person, and third-person characterizations. Dependency parsers are also important for understanding and processing natural language, for example in a question answering system, where these models help the computer understand what question is being asked.
Limitations
How accurate are the models? What happens if we change the style of English we're working with?
End of explanation
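One rough way to put a number on "how accurate" is to measure how much of each test string falls outside the model's vocabulary; a high out-of-vocabulary share is a warning that the tags above deserve extra scepticism. A quick sketch over the parses we just made (OOV share is only a proxy, not a real accuracy score):
# share of out-of-vocabulary tokens in each test string parsed above
for label, doc in [('Shakespeare', shake_parsed), ('Huck Finn', hf_parsed),
                   ('text speech', ts_parsed), ('Old English', oe_parsed)]:
    oov_share = sum(word.is_oov for word in doc) / len(doc)
    print(label, round(oov_share, 2))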
ner_tab = Table()
ner_tab.append_column(label="NER Label", values=[ent.label_ for ent in parsed_text.ents])
ner_tab.append_column(label="NER Text", values=[ent.text for ent in parsed_text.ents])
ner_tab.show()
Explanation: NER and Civil War-Era Novels
Wilkens uses a technique called "NER", or "Named Entity Recognition" to let the computer identify all of the geographic place names. Wilkens writes:
Text strings representing named locations in the corpus were identified using
the named entity recognizer of the Stanford CoreNLP package with supplied training
data. To reduce errors and to narrow the results for human review, only those
named-location strings that occurred at least five times in the corpus and were used
by at least two different authors were accepted. The remaining unique strings were
reviewed by hand against their context in each source volume. [883]
While we don't have the time for a human review right now, spacy does allow us to annotate place names (among other things!) in the same fashion as Stanford CoreNLP (a native Java library):
End of explanation
import requests
text = requests.get("http://www.gutenberg.org/files/10549/10549.txt").text
text = text[1050:].replace('\r\n', ' ') # fix formatting and skip title header
print(text[:5000])
Explanation: Cool! It's identified a few types of things for us. We can check what these labels mean here. GPE covers countries, cities, and states, so that seems to be what Wilkens was using.
Since we don't have his corpus of 1000 novels, let's just take our reading, A Romance of the Republic, as an example. We can use the requests library to fetch the raw text of the book from Project Gutenberg, and the response's .text property hands it to us as one long string.
End of explanation
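If you don't want to look the label codes up by hand, spaCy can gloss them itself via spacy.explain (available in spaCy 2.0+); a small sketch over the labels the model actually produced for our short passage:
# ask spaCy to describe every entity label it assigned above
for label in sorted(set(ent.label_ for ent in parsed_text.ents)):
    print(label, '=', spacy.explain(label))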
parsed = nlp(text)
Explanation: We'll leave the chapter headers for now, it shouldn't affect much. Now we need to parse this with that nlp function:
End of explanation
from collections import Counter
places = []
for ent in parsed.ents:
if ent.label_ == "GPE":
places.append(ent.text.strip())
places = Counter(places)
places
Explanation: Challenge
With this larger string, find the most common noun, verb, and adjective. Then explore the other features of spacy and see what you can discover about our reading:
Let's continue in the fashion that Wilkens did and extract the named entities, specifically those for "GPE". We can loop through each entity, and if it is labeled as GPE we'll add it to our places list. We'll then make a Counter object out of that to get the frequency of each place name.
End of explanation
with open('data/us_states.txt', 'r') as f:
states = f.read().split('\n')
states = [x.strip() for x in states]
states
Explanation: That looks OK, but it's pretty rough! Keep this in mind when using trained models. They aren't 100% accurate. That's why Wilkens went through the results by hand afterwards to get rid of the garbage.
If you thought NER was cool, wait for this. Now that we have a list of "places", we can send that to an online database to get back latitude and longitude coordinates (much like Wilkens used Google's geocoder), along with the US state. To make sure it's actually a US state, we'll need a list to compare to. So let's load that:
End of explanation
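Before geocoding, we could also imitate Wilkens' frequency cutoff and drop strings the model only tagged once or twice. The threshold of five comes straight from his description; whether it makes sense for a single novel rather than a 1000-novel corpus is an open question, so treat this as a sketch of the idea:
# keep only place strings that occur at least five times, echoing Wilkens' cutoff
frequent_places = Counter({name: count for name, count in places.items() if count >= 5})
frequent_places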
from geopy.geocoders import Nominatim
from datascience import *
import time
geolocator = Nominatim(timeout=10)
geo_tab = Table(["latitude", "longitude", "name", "state"])
for name in places.keys(): # only want to loop through unique place names to call once per place name
print("Getting information for " + name + "...")
# finds the lat and lon of each name in the locations list
location = geolocator.geocode(name)
try:
# index the raw response for lat and lon
lat = float(location.raw["lat"])
lon = float(location.raw["lon"])
# find the state name in the address; reset first so a previous place's state can't carry over
state = None
for p in location.address.split(","):
if p.strip() in states:
state = p.strip()
break
# add one row per occurrence of this place name (the counter tells us how many)
for i in range(places[name]):
geo_tab.append(Table.from_records([{"name": name,
"latitude": lat,
"longitude": lon,
"state": state}]).row(0))
except:
pass
geo_tab.show()
Explanation: OK, now we're ready. The Nominatim function from the geopy library will return an object that has the properties we want. We'll append a new row to our table for each entry. Importantly, we're using the keys of the places counter because we don't need to ask the database for "New Orleans" 10 times to get the location. So after we get the information we'll just add as many rows as the counter tells us there are.
End of explanation
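A practical note if you rerun this today: newer geopy releases require every Nominatim client to identify itself with a user_agent string, so the geolocator line above would need something like the following (the agent name is just a placeholder):
# on geopy 2.x, Nominatim refuses to run without a custom user_agent
geolocator = Nominatim(user_agent="dh-lab-notebook", timeout=10)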
%matplotlib inline
from scripts.choropleth import us_choropleth
us_choropleth(geo_tab)
Explanation: Now we can plot a nice choropleth.
End of explanation |
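If you want to see the numbers behind the shading, the datascience Table can also aggregate the geocoded rows by state directly; a quick sketch (assuming group() produces its usual 'count' column):
# count how many geocoded mentions landed in each state
geo_tab.group('state').sort('count', descending=True).show()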
4,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Fluorinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Fluorinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Representation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-3', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-3
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:25
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
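For illustration only, a filled-in call would look like the sketch below; the name and address are placeholders rather than real document authors.
# hypothetical placeholder values, following the "Set as follows" pattern above
DOC.set_author("Jane Doe", "jane.doe@example.org")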
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
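As a hedged illustration of how an ENUM property is filled in, the sketch below picks one of the valid choices listed in the cell above; the choice of "AGCM" is an assumption for demonstration, not a statement about this model.
# hypothetical: select one of the valid choices listed in the cell above
DOC.set_value("AGCM")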
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
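# Sketch for the INTEGER property 22.5 above: numeric properties take a bare number
# rather than a quoted string. The count below is purely illustrative.
DOC.set_value(16)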
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Fluorinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other fluorinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
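# Sketch for the BOOLEAN property 30.4 above: the answer is a Python True/False literal.
DOC.set_value(True)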
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
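# Sketch for the FLOAT property 43.1 above: the value is a plain number in the stated
# units (Hz). A 94 GHz radar, typical of CloudSat-style simulators, is shown purely as
# an illustration, not as a statement about any particular model.
DOC.set_value(94.0e9)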
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
4,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transect
Groupby and transform allow me to combine rows into a single 'transect' row.
Or, use a multiIndex, a hierarchical index, so I can target specific cells using id and type. The index item for a multiIndex is a TUPLE.
Step1: shift data to correct column
using loc for assignment
Step2: use groupby and transform to fill the row
Step3: shift data to correct row using a multi-Index | Python Code:
%matplotlib inline
import sys
import numpy as np
import pandas as pd
import json
import matplotlib.pyplot as plt
from io import StringIO
print(sys.version)
print("Pandas:", pd.__version__)
df = pd.read_csv('C:/Users/Peter/Documents/atlas/atlasdata/obs_types/transect.csv', parse_dates=['date'])
df = df.astype(dtype='str')# we don't need numbers in this dataset.
df=df.replace('nan','')
#this turns dates into strings with the proper format for JSON:
#df['date'] = df['date'].dt.strftime('%Y-%m-%d')
df.type = df.type.str.replace('\*remonitoring notes','transect')
df.type = df.type.str.replace('\*plot summary','transect')
Explanation: Transect
Groupby and transform allow me to combine rows into a single 'transect' row.
Or, use a multiIndex, a hierarchical index, so I can target specific cells using id and type. The index item for a multiIndex is a TUPLE.
End of explanation
df.loc[df.type =='map',['mapPhoto']]=df['url'] #moving cell values to correct column
df.loc[df.type.str.contains('lineminus'),['miscPhoto']]=df['url']
df.loc[df.type.str.contains('lineplus'),['miscPhoto']]=df['url']
df.loc[df.type.str.contains('misc'),['miscPhoto']]=df['url']
#now to deal with type='photo'
photos = df[df.type=='photo']
nonphotos = df[df.type != 'photo'] #we can concatenate these later
grouped = photos.groupby(['id','date'])
photos.shape
values=grouped.groups.values()
for value in values:
    photos.loc[value[2],['type']] = 'misc'
    #photos.loc[value[1],['type']] = 'linephoto2'
photos.loc[photos.type=='linephoto1']
for name, group in grouped:
    print(name, group)
photos = df[df.type == 'photo']
photos.set_index(['id','date'],inplace=True)
photos.index[1]
photos=df[df.type=='photo']
photos.groupby(['id','date']).count()
photos.loc[photos.index[25],['type','note']]
#combine photo captions
df['caption']=''
df.loc[(df.type.str.contains('lineminus'))|(df.type.str.contains('lineplus')),['caption']]=df['type'] + ' | ' + df['note']
df.loc[df.type.str.contains('lineplus'),['caption']]=df['url']
df.loc[df.type.str.contains('misc'),['caption']]=df['url']
df['mystart'] = 'Baseline summary:'
df.loc[df.type =='transect',['site_description']]= df[['mystart','label1','value1','label2','value2','label3','value3','note']].apply(' | '.join, axis=1)
df.loc[df.type.str.contains('line-'),['linephoto1']]=df['url']
df.loc[df.type.str.contains('line\+'),['linephoto2']]=df['url']#be sure to escape the +
df.loc[df.type.str.contains('linephoto1'),['linephoto1']]=df['url']
df.loc[df.type.str.contains('linephoto2'),['linephoto2']]=df['url']
df.loc[df.type == 'plants',['general_observations']]=df['note']
Explanation: shift data to correct column
using loc for assignment: df.loc[destination condition, column] = df.loc[source]
End of explanation
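# A minimal, self-contained toy version of that loc-assignment pattern (the frame and
# values are made up):
toy = pd.DataFrame({'type': ['map', 'photo'],
                    'url':  ['a.jpg', 'b.jpg']})
toy['mapPhoto'] = ''
# destination condition, column  <-  source column (aligned on the index)
toy.loc[toy.type == 'map', 'mapPhoto'] = toy['url']
print(toy)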
#since we're using string methods, NaNs won't work
mycols =['general_observations','mapPhoto','linephoto1','linephoto2','miscPhoto','site_description']
for item in mycols:
    df[item] = df[item].fillna('')
df.mapPhoto = df.groupby('id')['mapPhoto'].transform(lambda x: "%s" % ''.join(x))
df.linephoto1 = df.groupby(['id','date'])['linephoto1'].transform(lambda x: "%s" % ''.join(x))
df.linephoto2 = df.groupby(['id','date'])['linephoto2'].transform(lambda x: "%s" % ''.join(x))
df.miscPhoto = df.groupby(['id','date'])['miscPhoto'].transform(lambda x: "%s" % ''.join(x))
df['site_description'] = df['site_description'].str.strip()
df.to_csv('test.csv')
#done to here. Next, figure out what to do with linephotos, unclassified photos, and their notes.
#make column for photocaptions. When adding linephoto1, add 'note' and 'type' fields to caption column. E.g. 'linephoto1: 100line- | view east along transect.' Then join the rows in the groupby transform and add to site_description field.
df.shape
df[(df.type.str.contains('line\+'))&(df.linephoto2.str.len()<50)]
maps.str.len().sort_values()
Explanation: use groupby and transform to fill the row
End of explanation
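# The idea in miniature, on a made-up frame: transform returns a result aligned to the
# original rows, so joining the strings within each group writes the combined value
# back onto every row of that group.
toy = pd.DataFrame({'id':    ['a', 'a', 'b'],
                    'photo': ['p1.jpg', 'p2.jpg', 'p3.jpg']})
toy['photo'] = toy.groupby('id')['photo'].transform(lambda x: ' '.join(x))
print(toy)  # both 'a' rows now hold 'p1.jpg p2.jpg'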
ids = list(df['id'])#make a list of ids to iterate over, before the hierarchical index
#df.type = df.type.map({'\*plot summary':'transect','\*remonitoring notes':'transect'})
df.loc[df.type =='map',['mapPhoto']]=df['url'] #moving cell values to correct column
df.set_index(['id','type'],inplace=True) # hierarchical index so we can call locations
#a hierarchical index uses a tuple. You can set values using loc.
#this format: df.loc[destination] = df.loc[source].values[0]
for item in ids:
    df.loc[(item,'*plot summary'),'mapPhoto'] = df.loc[(item,'map'),'mapPhoto'].values[0]
#generates a pink warning about performance, but oh well.
#here we are using an expression in parens to test for a condition
(df['type'].str.contains('\s') & df['note'].notnull()).value_counts()
df.url = df.url.str.replace(' ','_');df.url
df.url.head()
df['newurl'] = df.url.str.replace(' ','_')
df.newurl.head()
#for combining rows try something like this:
print(df.groupby('somecolumn')['temp variable'].apply(' '.join).reset_index())
Explanation: shift data to correct row using a multi-Index
End of explanation |
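# A toy version of the tuple-addressed assignment used above (ids and values invented):
# after set_index(['id','type']) each row is addressed by an (id, type) tuple, so a
# cell can be copied from one row to another with loc.
toy = pd.DataFrame({'id':       ['100', '100'],
                    'type':     ['map', 'summary'],
                    'mapPhoto': ['m.jpg', '']})
toy.set_index(['id', 'type'], inplace=True)
toy.loc[('100', 'summary'), 'mapPhoto'] = toy.loc[('100', 'map'), 'mapPhoto']
print(toy)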
4,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Here I test the idea of binary, or very discrete, coding in an adaptation of Larry Abbott's FORCE learning [0].
I want to make sense of a world where the neural code is binary [6] (or very discrete) but organisms must still make smooth regular movements, and have continuous-feeling thoughts. A coarse code and a smooth world seem, naively, to contradict. I started playing with FORCE really out of curiosity, and because of all the learning rules I know it was the least likely to work! And then it worked really well. ....Science! I'm surprised and pleased that my adaptation, discrete FORCE, or DFORCE, works so well.
While the learning part of FORCE is not biologically plausible (recursive least squares, at least as implemented here), key ideas do have theoretical and empirical support. FORCE uses the idea of high-dimensional random (non-)linear encoding tied to linear decoding, which is found in all the 'echo state' and 'liquid state' learning systems (google for more) and which seems to be supported by data from visual areas [1], and motor cortex [3], among other areas [5]. Separately, Nengo/NEF also used non-linear/linear methods to make SPAUN [4].
Relevant Papers
[0] Original FORCE learning paper
Sussillo D, Abbott LF (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron. 63(4):544-57
Step1: FORCE
FORCE is a variant 'liquid state machine' or 'echo state machine' supervised learning system, that can tame chaotic patterns to match a target time series. Read [0] before you go any farther.
In the model, neurons are replaced by units, or neural mass models, that represent the aggregate firing rates of many neurons. This is a common, and useful, theoretical abstraction, in case you're not familiar with it.
First I'm going to show you classic (continuous) FORCE, then my discrete version.
Neural masses
We simulate the model
Step2: The loss function
You can linearly reweight the chaos into doing very useful computation. We call this learner the readout unit.
Model an output or readout unit for the network as
Step3: FORCE does a pretty nice job learning how to be a sin wave. If you rerun this a few times, you'll see the quality of the fits varies. Such is life when working with randomness and chaos.
Binary FORCE
Implementation
I first define a function that converts unit rates into a binary output. Then I replace the standard learning FORCE math to use the binary decoded response generated by this function.
Step4: Here's what the binary version looks like. I use random (uniformly selected) thresholds to convert from rates to binary codes. I don't have any idea how this really works, so random seems as good a guess as any.
That said a fixed threshold seems just as good.
I'm curious as to whether different thresholds for different training targets might let me multiplex many decodings onto the same pool?
Step5: The binary loss function
Now let's learn, with binary codes. | Python Code:
import pylab as plt
import numpy as np
%matplotlib inline
from __future__ import division
from scipy.integrate import odeint,ode
from numpy import zeros,ones,eye,tanh,dot,outer,sqrt,linspace,cos,pi,hstack,zeros_like,abs,repeat
from numpy.random import uniform,normal,choice
%config InlineBackend.figure_format = 'retina'
Explanation: Introduction
Here I test the idea of binary, or very discrete, coding in an adaptation of Larry Abbott's FORCE learning [0].
I want to make sense of a world where the neural code is binary [6] (or very discrete) but organisms must still make smooth regular movements, and have continuous-feeling thoughts. A coarse code and a smooth world seem, naively, to contradict. I started playing with FORCE really out of curiosity, and because of all the learning rules I know it was the least likely to work! And then it worked really well. ....Science! I'm surprised and pleased that my adaptation, discrete FORCE, or DFORCE, works so well.
While the learning part of FORCE is not biologically plausible (recursive least squares, at least as implemented here), key ideas do have theoretical and empirical support. FORCE uses the idea of high-dimensional random (non-)linear encoding tied to linear decoding, which is found in all the 'echo state' and 'liquid state' learning systems (google for more) and which seems to be supported by data from visual areas [1], and motor cortex [3], among other areas [5]. Separately, Nengo/NEF also used non-linear/linear methods to make SPAUN [4].
Relevant Papers
[0] Original FORCE learning paper
Sussillo D, Abbott LF (2009). Generating coherent patterns of activity from chaotic neural networks. Neuron. 63(4):544-57
[1] Application of recurrent neural network learning to model dynamics in primate visual cortex
Mante V, Sussillo D, Shenoy KV, Newsome WT (2013). Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature. 503(7474):78-84.
[2] How recurrent neural networks respond to sinusoidal input
Rajan K, Abbott LF, Sompolinsky H (2010). Stimulus-Dependent Suppression of Chaos in Recurrent Neural Networks. Phys. Rev. E 82:011903.
[3] Motor movement and dynamics
Shenoy, Sahani, and Churchland (2013) Cortical Control of Arm Movements: A Dynamical Systems Perspective Annual Review of Neuroscience Vol. 36: 337-359
[4] SPAUN, a real big functional 'brain' model
Eliasmith et al (2012) A Large-Scale Model of the Functioning Brain Science 338(6111):1202-1205
[5] Reservoir computing and reinforcement learning
Bernacchia A, Seo H, Lee D, Wang X-J (2011) A reservoir of time constants for memory traces in cortical neurons Nature Neurosci., 14: 366-372
[6] Evidence for discrete decisions
Latimer KL, Yates JL, Meister MLR, Huk AC, & Pillow JW (2015). Single-trial spike trains in parietal cortex reveal discrete steps during decision-making Science 349(6244): 184-187
End of explanation
def f1(x,t0):
    return -x + g*dot(J,tanh(x))
N = 500
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-0.5,0.5,N)
t = linspace(0,50,500)
plt.figure(figsize=(10,5))
for s,g in enumerate(linspace(0.5,2,3)):
    plt.subplot(1,3,s+1)
    x = odeint(f1,x0,t)
    plt.plot(t,x[:,choice(N,10)])
    plt.title('g = '+str(g),fontweight='bold')
plt.show()
Explanation: FORCE
FORCE is a variant 'liquid state machine' or 'echo state machine' supervised learning system, that can tame chaotic patterns to match a target time series. Read [0] before you go any farther.
In the model, neurons are replaced by units, or neural mass models, that represent the aggregate firing rates of many neurons. This is a common, and useful, theoretical abstraction, in case you're not familiar with it.
First I'm going to show you classic (continuous) FORCE, then my discrete version.
Neural masses
We simulate the model:
$$\frac{d\mathbf{x}}{dt} = -\mathbf{x} + g J \tanh{[\mathbf{x}]} $$
with $x \in \mathcal{R}^N$ (vector), $J \in \mathcal{R}^{N \times N}$ (matrix), $g \in \mathcal{R}$ (scalar). Randomly draw each element of $J$ from a Gaussian distribution with zero mean and variance $1/N$. Characterize the output of the system for increasing values of $g$. If $g$ is greater than 1.5 the system will behave chaotically. If you take $g$ = 1 it reduces the system from FORCE to a traditional 'echo state' system.
Here's an example of what the firing of 10 random units looks like:
End of explanation
target = lambda t0: cos(2 * pi * t0 / 50) # target pattern
def f3(t0, x, tanh_x):
    return -x + g * dot(J, tanh_x) + dot(w, tanh_x) * u
dt = 1 # time step
tmax = 1000 # simulation length
tstop = 600
N = 300
J = normal(0, sqrt(1 / N), (N, N))
x0 = uniform(-0.5, 0.5, N)
g = 1.5
u = uniform(-1, 1, N)
w = uniform(-1 / sqrt(N), 1 / sqrt(N), N) # initial weights
P = eye(N) # Running estimate of the inverse correlation matrix
lr = 1 # learning rate
# simulation data: state, output, time, weight updates
x, z, t, wu = [x0], [], [0], [0]
# Set up ode solver
solver = ode(f3)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < tmax:
    tanh_x = tanh(x[-1]) # cache
    z.append(dot(w, tanh_x))
    error = target(t[-1]) - z[-1]
    q = dot(P, tanh_x)
    c = lr / (1 + dot(q, tanh_x))
    P = P - c * outer(q, q)
    w = w + c * error * q
    # Stop learning here
    if t[-1] > tstop:
        lr = 0
    solver.set_f_params(tanh_x)
    wu.append(np.sum(np.abs(c * error * q)))
    solver.integrate(solver.t + dt)
    x.append(solver.y)
    t.append(solver.t)
# last update for readout neuron
z.append(dot(w, tanh_x))
x = np.array(x)
t = np.array(t)
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
plt.plot(t, target(t), '-r', lw=2)
plt.plot(t, z, '-b')
plt.legend(('target', 'output'))
plt.ylim([-1.1, 3])
plt.xticks([])
plt.subplot(2, 1, 2)
plt.plot(t, wu, '-k')
plt.yscale('log')
plt.ylabel('$|\Delta w|$', fontsize=20)
plt.xlabel('time', fontweight='bold', fontsize=16)
plt.show()
J[0,0]
for i in range(20):
    plt.plot(t[:200], x[:,i][:200]);
Explanation: The loss function
You can linearly reweight the chaos into doing very useful computation. We call this learner the readout unit.
Model an output or readout unit for the network as:
$$z = \mathbf{w}^T \tanh[\mathbf{x}]$$
The output $z$ is a scalar formed by the dot product of two N-dimensional vectors ($\mathbf{w}^T$ denotes the transpose of $\mathbf{w}$). We will implement the FORCE learning rule (Sussillo & Abbott, 2009), by adjusting the readout weights, $w_i$, so that $z$ matches a target function:
$$f(t) = \cos\left(\frac{2 \pi t}{50} \right)$$
The rule works by implementing recursive least-squares:
$$\mathbf{w} \rightarrow \mathbf{w} + c(f-z) \mathbf{q}$$
$$\mathbf{q} = P \tanh [\mathbf{x}]$$
$$c = \frac{1}{1+ \mathbf{q}^T \tanh(\mathbf{x})}$$
$$P_{ij} \rightarrow P_{ij} - c q_i q_j$$
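In code, one learning step maps onto these equations almost line for line. Here is a minimal, self-contained numpy sketch of a single RLS update (made-up sizes and a dummy target value, not the full simulation above):
import numpy as np
rng = np.random.default_rng(0)
N = 5
tanh_x = np.tanh(rng.normal(size=N))   # current unit outputs, tanh(x)
w = np.zeros(N)                        # readout weights
P = np.eye(N)                          # running inverse-correlation estimate
f_t = 1.0                              # target value f at this time step
q = P @ tanh_x                         # q = P tanh(x)
c = 1.0 / (1.0 + q @ tanh_x)           # c = 1 / (1 + q^T tanh(x))
z = w @ tanh_x                         # readout z = w^T tanh(x)
w = w + c * (f_t - z) * q              # w <- w + c (f - z) q
P = P - c * np.outer(q, q)             # P_ij <- P_ij - c q_i q_j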
Real FORCE
Let's teach the chaos how to be a sin wave
End of explanation
def decode(x, rho):
xd = zeros_like(x)
xd[x > rho] = 1
xd[x < -rho] = -1
return xd
Explanation: FORCE does a pretty nice job learning how to be a sin wave. If you rerun this a few times, you'll see the quality of the fits varies. Such is life when working with randomness and chaos.
Binary FORCE
Implementation
I first define a function that converts unit rates into a binary output. Then I replace the standard learning FORCE math to use the binary decoded response generated by this function.
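A quick sanity check of the decoder on a toy vector (a sketch, restated with an explicit numpy import; the threshold 0.05 is just an illustrative value):
import numpy as np
def decode_check(x, rho):
    xd = np.zeros_like(x)
    xd[x > rho] = 1       # strongly positive rates -> +1
    xd[x < -rho] = -1     # strongly negative rates -> -1
    return xd             # everything near zero stays 0
print(decode_check(np.array([0.8, 0.02, -0.3, -0.01]), 0.05))  # [ 1.  0. -1.  0.]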
End of explanation
def f1(x,t0):
return -x + g*dot(J,tanh(x))
N = 500
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-0.5,0.5,N)
t = linspace(0,50,500)
rho = uniform(0,0.1,N) # Rand thresholds!
# rho = 0.5 # fixed threshold!
plt.figure(figsize=(10,5))
for s,g in enumerate(linspace(0.5,1.5,3)):
plt.subplot(1,3,s+1)
x = odeint(f1,x0,t)
xd = decode(x, rho)
plt.plot(t,xd[:,choice(N,10)])
plt.title('g = '+str(g),fontweight='bold')
plt.ylim(-2,2)
plt.show()
Explanation: Here's what the binary version looks like. I use uniformly random thresholds to convert from rates to binary codes. I don't have any idea how this really works, so random seems as good a guess as any.
That said, a fixed threshold seems just as good.
I'm curious as to whether different thresholds for different training targets might let me multiplex many decodings onto the same pool?
End of explanation
target = lambda t0: cos(2 * pi * t0 / 50) # target pattern
def f3(t0, x):
return -x + g * dot(J, tanh_x) + dot(w, tanh_x) * u
dt = 1 # time step
tmax = 1000 # simulation length
tstop = 600
N = 300
J = normal(0, sqrt(1 / N), (N, N))
x0 = uniform(-0.5, 0.5, N)
g = 1.0
u = uniform(-1, 1, N)
w = uniform(-1 / sqrt(N), 1 / sqrt(N), N) # initial weights
P = eye(N) # Running estimate of the inverse correlation matrix
lr = .4 # learning rate
rho = repeat(0.05, N)
# simulation data: state,
# output, time, weight updates
x, z, t, wu = [x0], [], [0], [0]
# Set up ode solver
solver = ode(f3)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < tmax:
tanh_x = tanh(x[-1])
tanh_xd = decode(tanh_x, rho) # BINARY CODE INTRODUCED HERE!
z.append(dot(w, tanh_xd))
error = target(t[-1]) - z[-1]
q = dot(P, tanh_xd)
c = lr / (1 + dot(q, tanh_xd))
P = P - c * outer(q, q)
w = w + c * error * q
# Stop training time
if t[-1] > tstop:
lr = 0
wu.append(np.sum(np.abs(c * error * q)))
solver.integrate(solver.t + dt)
x.append(solver.y)
t.append(solver.t)
# last update for readout neuron
z.append(dot(w, tanh_x))
# plot
x = np.array(x)
t = np.array(t)
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
plt.plot(t, target(t), '-r', lw=2)
plt.plot(t, z, '-b')
plt.legend(('target', 'output'))
plt.ylim([-1.1, 3])
plt.xticks([])
plt.subplot(2, 1, 2)
plt.plot(t, wu, '-k')
plt.yscale('log')
plt.ylabel('$|\Delta w|$', fontsize=20)
plt.xlabel('time', fontweight='bold', fontsize=16)
plt.show()
Explanation: The binary loss function
Now let's learn, with binary codes.
End of explanation |
4,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An overview of feature engineering for regression and machine learning algorithms
Step1: A simple example to illustrate the intuition behind dummy variables
Step2: Now we have a matrix of values based on the presence or absence of the attribute value in our dataset
Another example with the flight statistics
Now let's look at another example using our flight data
Step3: We could explore whether the NaNs are actually zero delays, but we'll just filter them out for now, especially since they represent such a small number of instances
Step4: We can discretize the continuous ARR_DELAY value by giving it a value of 1 if it's delayed and 0 if it's not. We record this value into a separate column. (We could also code -1 for early, 0 for ontime, and 1 for late)
Step5: Multicollinearity
Step6: <img src="http | Python Code:
import pandas as pd
%matplotlib inline
Explanation: An overview of feature engineering for regression and machine learning algorithms
End of explanation
df = pd.DataFrame({'key':['b','b','a','c','a','b'],'data1':range(6)})
df
pd.get_dummies(df['key'],prefix='key')
Explanation: A simple example to illustrate the intuition behind dummy variables
End of explanation
df = pd.read_csv('data/ontime_reports_may_2015_ny.csv')
#count number of NaNs in column
df['ARR_DELAY'].isnull().sum()
#calculate the percentage this represents of the total number of instances
df['ARR_DELAY'].isnull().sum()/len(df)
Explanation: Now we have a matrix of values based on the presence or absence of the attribute value in our dataset
Another example with the flight statistics
Now let's look at another example using our flight data
End of explanation
#filter ARR_DELAY NaNs
df = df[pd.notnull(df['ARR_DELAY'])]
Explanation: We could explore whether the NaNs are actually zero delays, but we'll just filter them out for now, especially since they represent such a small number of instances
End of explanation
#code whether delay or not delayed
df['IS_DELAYED'] = df['ARR_DELAY'].apply(lambda x: 1 if x>0 else 0 )
#Let's check that our column was created properly
df[['ARR_DELAY','IS_DELAYED']]
pd.get_dummies(df['ORIGIN'],prefix='origin') #We'd want to drop one of these before we actually used this in our algorithm
Explanation: We can discretize the continuous ARR_DELAY value by giving it a value of 1 if it's delayed and 0 if it's not. We record this value into a separate column. (We could also code -1 for early, 0 for ontime, and 1 for late)
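The three-way coding mentioned in the parenthetical could look like this (a sketch, assuming a negative ARR_DELAY means an early arrival):
df['DELAY_SIGN'] = df['ARR_DELAY'].apply(lambda x: -1 if x < 0 else (1 if x > 0 else 0))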
End of explanation
df = pd.read_csv('data/heights_weights_genders.csv')
pd.get_dummies(df['Gender'],prefix='gender').corr()
Explanation: Multicollinearity
End of explanation
from sklearn import preprocessing
x = df[['Height','Weight']].values #returns a numpy array
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
df_normalized = pd.DataFrame(x_scaled)
df['Height'].describe()
df_normalized
Explanation: <img src="http://i.giphy.com/3ornka9rAaKRA2Rkac.gif"></img>
That's the dummy variable trap
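One common fix is to drop one of the dummy columns so the remaining ones are no longer perfectly collinear (a sketch; recent pandas versions support the drop_first flag):
pd.get_dummies(df['Gender'], prefix='gender', drop_first=True)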
We can also normalize variables across a range
End of explanation |
4,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Physicist's Crash Course on Artificial Neural Network
What is a Neuron
What a neuron does is respond when a stimulation is given. This response could be strong or weak or even null. If I were to draw a figure of this behavior, it looks like this.
<img src="assets/images/neuronResponse.png" width=100%>
Artificial Neural Network
A simple network is a collection of neurons that respond to stimulations, which could be the responses of other neurons.
<img src="assets/images/neuralNetworkSimple.png" width=100%>
A given input signal is spread onto three different neurons. The neurons respond to this signal separately, and their responses are then summed together with different weights. In the language of math, given input $x$, output $y(x)$ is
$$ y(x) = \sum_{k=1}^{3} v_k * \text{activation}( w_k * x + u_k ) $$
where $\text{activation}$ is the activation function, i.e., the response behavior of the neuron. This is a single layer structure.
A lot of different ways could be used to extend this network.
Increase the number of neurons on one layer.
One can extend the number of layers.
<img src="assets/images/multilayer.png" width=100%>
We could also include interactions between the neurons.
Even memory can be simulated.
How it works
Here is an example of how the network works.
Suppose we have only two neurons in the network.
<img src="assets/images/2neuronNet.png" width=100%>
Seen from this example, we can expect neural networks to be good at classification. With one neuron, we can do a classification too. For example, we can choose proper parameters so that we have an input temperature and an output that tells us which temperatures are high and which are low.
Training
We have got a lot of parameters with the set up of the network. The parameters are the degrees of freedom we have. The question is how to get the right parameters.
The Network NEEDS TRAINING. Just like human learning, the neural network has to be trained using prepared data. One example would be
Step1: Balance between 'speed' (Beta-coefficient) and 'momentum' of the learning
Problems
Step2: Minimize An Expression
This is a practice of minimizing an expression using scipy.optimize.minimize()
Step3: Here is a summary
Step4: Caution
Step5: Test cost function
Step6: Next step is to optimize this cost. To do this we need the derivative. But anyway let's try a simple minimization first.
Step7: Test total cost
Step8: Suppose the parameters are five dimensional and we have 10 data points.
Step9: Define a list divider that splits an array into three arrays.
Step10: It shows that the minimization depends greatly on the initial guess. It is not true for a simple scenario with gradient descent however it could be the case if the landscape is too complicated.
Use Jac
I can define a function that deals with this part
Step11: Define the jac of cost function
Step12: Test Results
Plot!
Step13: A Even Simpler Equation
Test a very simple equation
$$\frac{dy}{dx}=4x^3-3x^2+2,$$
with initial condition $$y(0)=0.$$
As in any case,
$$y = \text{Initial} + x_i v_k f(x_iw_k+u_k).$$
$$\frac{dy}{dx} = v_k f(x w_k+u_k) + t v_k f(x w_k+u_k) (1-f(xw_k+u_k))w_k,$$ where the function f is defined as a trigf().
Cost is
$$I = \sum_i \left(\frac{dy}{dx}-(4x^3-3x^2+2) \right)^2$$ | Python Code:
import numpy as np
print np.linspace(0,9,10), np.exp(-np.linspace(0,9,10))
Explanation: A Physicist's Crash Course on Artificial Neural Network
What is a Neuron
What a neuron does is respond when a stimulation is given. This response could be strong or weak or even null. If I were to draw a figure of this behavior, it looks like this.
<img src="assets/images/neuronResponse.png" width=100%>
Artificial Neural Network
A simple network is a collection of neurons that respond to stimulations, which could be the responses of other neurons.
<img src="assets/images/neuralNetworkSimple.png" width=100%>
A given input signal is spread onto three different neurons. The neurons respond to this signal separately, and their responses are then summed together with different weights. In the language of math, given input $x$, output $y(x)$ is
$$ y(x) = \sum_{k=1}^{3} v_k * \text{activation}( w_k * x + u_k ) $$
where $\text{activation}$ is the activation function, i.e., the response behavior of the neuron. This is a single layer structure.
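A direct transcription of this formula (a sketch with made-up parameter values, using the logistic function as the activation):
import numpy as np
def activation(z):
    return 1.0 / (1.0 + np.exp(-z))        # logistic response curve
v = np.array([0.5, -1.0, 2.0])             # output weights (hypothetical)
w = np.array([1.0, 0.3, -0.7])             # input weights (hypothetical)
u = np.array([0.0, 0.1, -0.2])             # biases (hypothetical)
def y(x):
    return np.sum(v * activation(w * x + u))
print(y(1.5))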
A lot of different ways could be used to extend this network.
Increase the number of neurons on one layer.
One can extend the number of layers.
<img src="assets/images/multilayer.png" width=100%>
We could also include interactions between the neurons.
Even memory can be simulated.
How it works
Here is an example of how the network works.
Suppose we have only two neurons in the network.
<img src="assets/images/2neuronNet.png" width=100%>
Seen from this example, we can expect neural networks to be good at classification. With one neuron, we can do a classification too. For example, we can choose proper parameters so that we have an input temperature and an output that tells us which temperatures are high and which are low.
Training
We have got a lot of parameters with the set up of the network. The parameters are the degrees of freedom we have. The question is how to get the right parameters.
The Network NEEDS TRAINING. Just like human learning, the neural network has to be trained using prepared data. One example would be
End of explanation
# This line configures matplotlib to show figures embedded in the notebook,
# instead of opening a new window for each figure. More about that later.
# If you are using an old version of IPython, try using '%pylab inline' instead.
%matplotlib inline
from scipy.optimize import minimize
from scipy.special import expit
import matplotlib.pyplot as plt
import timeit
Explanation: Balance between 'speed' (Beta-coefficient) and 'momentum' of the learning
Problems: over-trained or 'grandmothered' -> respond only to one set of problems
For References
A very basic introduction: http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.html
Code Practice
End of explanation
fun = lambda x: (x[0] - 1)**2 + (x[1] - 2.5)**2
minimize(fun,(2,1),method="Nelder-Mead")
def fun_jacf(x):
np.asarray(x)
return np.array([2*(x[0] - 1),2*(x[1] - 2.5)])
minimize(fun,(2,1),method="BFGS",jac=fun_jacf)
Explanation: Minimize An Expression
This is a practice of minimizing an expression using scipy.optimize.minimize()
End of explanation
def cost(v,w,u,t):
v = np.array(v) # Don't know why but np.asarray(v) doesn't work here.
w = np.array(w)
u = np.array(u)
fvec = np.array(trigf(t*w + u) ) # This is a vector!!!
yt = 1 + np.sum ( t * v * fvec ) # For a given t, this calculates the value of y(t), given the parameters, v, w, u.
return ( np.sum (v*fvec + t * v* fvec * ( 1 - fvec ) * w ) + yt ) ** 2
# return np.sum(np.array( v*np.array( trigf( np.array( t*w ) + u ) ) ) + np.array( t*np.array( v*np.array( trigf(np.array( t*w ) + u)) ) ) * ( 1 - np.array( trigf( np.array( t*w )+u) ) ) * w + ( 1 + np.array( t*np.array( v*np.array( trigf( np.array(t*w)+u ) ) ) ) ) ) # trigf() should return an array with the same length of the input.
Explanation: Here is a summary:
The jac parameter should be an array. Feed an array to it. The array should be the gradient at point [x[0],x[1],...] for each of these variables.
There are other minimizing methods here http://scipy-lectures.github.io/advanced/mathematical_optimization/.
ANN Solving A Simple Problem
The problem to solve is the differential equation $$\frac{d}{dt}y(t)= - y(t).$$ Using the network, this is $$y_i= 1+t_i v_k f(t_i w_k+u_k).$$
The procedures are
Deal with the function first.
The cost is $$I=\sum_i\left( \frac{dy_i}{dt}+y_i \right)^2.$$ Our purpose is to minimize this cost.
To calculate the differential of y, we can write down the explicit expression for it. $$\frac{dy}{dt} = v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k))w_k,$$ where the function f is defined as a trigf().
So the cost becomes $$I = \sum_i \left( v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k)) w_k + y \right)^2.$$
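For reference, the ansatz $y = 1 + t\, v_k f(t w_k+u_k)$ builds in the initial condition $y(0)=1$, so the exact solution of $\frac{dy}{dt}=-y$ is $y(t)=e^{-t}$; this is the curve the notebook later compares against via functionY.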
End of explanation
def trigf(x):
#return 1/(1+np.exp(-x)) #
return expit(x)
Explanation: Caution: a number times a plain Python list is not returned as an array but as a repeated list, and list + list concatenates rather than adding element-wise, so convert the inputs to numpy arrays first!
Define the trigf() next, usually we use $$trigf(x)=\frac{1}{1+\exp(-x)}$$.
End of explanation
test11 = np.ones(30)
cost(np.array([1,1,1]),[1,1,1],[1,1,1],1)
Explanation: Test cost function:
End of explanation
def costTotal(v,w,u,t):
t = np.array(t)
costt = 0
for temp in t:
costt = costt + cost(v,w,u,temp)
return costt
Explanation: Next step is to optimize this cost. To do this we need the derivative. But anyway let's try a simple minimization first.
End of explanation
test11 = np.ones(30)
tlintest = np.linspace(0,1,2)
print costTotal(np.ones(10),np.ones(10),2*np.ones(10),tlintest)
print costTotal(np.ones(10),np.ones(10),np.ones(10),tlintest)
Explanation: Test total cost
End of explanation
tlin = np.linspace(0,5,11)
print tlin
Explanation: Suppose the parameters are five dimensional and we have 10 data points.
End of explanation
## No need to define such a function! Use np.split(x,3) instead.
np.zeros(30)
# This is only an example of 2dimensional neural network.
costTotalF = lambda x: costTotal(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin)
initGuess = np.zeros(30)
# initGuess = np.random.rand(1,30)+2
start1 = timeit.default_timer()
minimize(costTotalF,initGuess,method="Nelder-Mead")
# minimize(costTotalF,initGuess,method="L-BFGS-B")
# minimize(costTotalF,initGuess,method="TNC")
stop1 = timeit.default_timer()
print stop1 - start1
Explanation: Define a list divider that splits an array into three arrays.
End of explanation
def mhelper(v,w,u,t): ## This function should output a result ## t is a number in this function not array!!
v = np.array(v)
w = np.array(w)
u = np.array(u)
return np.sum( v*trigf( t*w + u ) + t* v* trigf(t*w + u) * ( 1 - trigf( t*w +u) ) * w ) + ( 1 + np.sum( t * v * trigf( t*w +u ) ) )
# Checked # Pass
def vhelper(v,w,u,t):
v = np.array(v)
w = np.array(w)
u = np.array(u)
return trigf(t*w+u) + t*trigf(t*w+u)*( 1-trigf(t*w+u) )*w + t*trigf(t*w+u)
def whelper(v,w,u,t):
v = np.array(v)
w = np.array(w)
u = np.array(u)
return v*t*trigf(t*w+u)*( 1- trigf(t*w+u) ) + t*v*( trigf(t*w+u)*(1-trigf(t*w+u))*t* (1-trigf(t*w+u)) )*w - t*v*trigf(t*w+u)*trigf(t*w+u)*(1-trigf(t*w+u))*t*w + t*v*trigf(t*w+u)*(1-trigf(t*w+u)) + t*v*trigf(t*w+u)*(1-trigf(t*w+u))*t
def uhelper(v,w,u,t):
v = np.array(v)
w = np.array(w)
u = np.array(u)
return v*trigf(t*w+u)*( 1 - trigf(t*w+u)) + t* v * trigf(t*w+u) * (1-trigf(t*w+u))*(1-trigf(t*w+u))*w - t*v*trigf(t*w+u)*trigf(t*w+u)*(1-trigf(t*w+u))*w + t*v*trigf(t*w+u)*(1-trigf(t*w+u))
mhelper([1,2],[2,3],[3,4],[1])
vhelper([1,2],[2,3],[3,4],[1,2])
Explanation: It shows that the minimization depends greatly on the initial guess. It is not true for a simple scenario with gradient descent however it could be the case if the landscape is too complicated.
Use Jac
I can define a function that deals with this part:
$$M = v_k f(t w_k+u_k) + t v_k f(tw_k+u_k) (1-f(tw_k+u_k))w_k + y,$$
which is actually an array given an array input.
So the cost is
$$I = M_i M_i,$$ using summation rule.
The derivative is always
$$\partial_X I = 2 M_i \partial_X M_i .$$
So we have $$\partial_{w_{k'}}f(tw_k+u_k) = f(t w_k+u_k) (1 - f(t w_k+u_k) ) t . $$
$$\partial_{u_{k'}}f(t w_k+u_k) = f(t w_k+u_k) (1 - f(t w_k+u_k) ) . $$
One of the useful relation is $$\frac{df(x)}{dx} = f(x)(1-f(x)).$$
Derived by hand, the jac is a list of the following
for $v_\alpha$ (Note that the k in this expression should be $\alpha$ and no summation should be done.) (double checked):
$$2M_i(f(tw_{k'}+u_{k'}) +t f(tw_{k'}+u_{k'})(1-f(tw_{k'}+u_{k'}))w_{k'} + tf(tw_{k'} +u_{k'} )),$$
for $w_\alpha$ (Note that the k in this expression should be $\alpha$ and no summation should be done.) (double checked):
$$2M_i\left( v_{k'}tf(1-f) + t v_{k'}f(1-f)t(1-f) w_{k'} - t v_{k'} f f(1-f) t w_{k'} + tv_{k'} f(1-f) + t v_{k'} f(1-f) t \right),$$
for $u_\alpha$ (Note that the k in this expression should be $\alpha$ and no summation should be done.) (double checked):
$$2M_i\left( v_{k'} f(1-f) + t v_{k'} f(1-f) (1-f)w_{k'} - t v_{k'} f f(1-f) w_{k'} + t v_{k'} f(1-f) \right).$$
where $k'$ is not summed over.
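A derivation like this is easy to get wrong, so it is worth checking numerically. A minimal sketch using scipy's finite-difference comparison (assuming costTotalF and costJacF as defined in this notebook):
import numpy as np
from scipy.optimize import check_grad
x0 = np.random.rand(30)
# norm of (analytic gradient - finite-difference gradient); should be small
print(check_grad(costTotalF, costJacF, x0))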
Define a help function M here:
End of explanation
def mhelperT(v,w,u,t):
t = np.array(t)
mhelperT = 0
for temp in t:
mhelperT = mhelperT + mhelper(v,w,u,temp)
return mhelperT
def vhelperT(v,w,u,t):
t = np.array(t)
vhelperT = 0
for temp in t:
vhelperT = vhelperT + vhelper(v,w,u,temp)
return vhelperT
def whelperT(v,w,u,t):
t = np.array(t)
whelperT = 0
for temp in t:
whelperT = whelperT + whelper(v,w,u,temp)
return whelperT
def uhelperT(v,w,u,t):
t = np.array(t)
uhelperT = 0
for temp in t:
uhelperT = uhelperT + uhelper(v,w,u,temp)
return uhelperT
def costJac(v,w,u,t):
v = np.array(v)
w = np.array(w)
u = np.array(u)
vout = 0
wout = 0
uout = 0
for temp in t:
vout = vout + 2*mhelper(v,w,u,temp)*vhelper(v,w,u,temp)
wout = wout + 2*mhelper(v,w,u,temp)*whelper(v,w,u,temp)
uout = uout + 2*mhelper(v,w,u,temp)*uhelper(v,w,u,temp)
out = np.hstack((vout,wout,uout))
return np.array(out)
print uhelperT([1,2],[2,3],[3,4],[1,2,3]),mhelperT([1,2],[2,3],[3,4],[1]),whelperT([1,2],[2,3],[3,4],[1]),vhelperT([1,2],[2,3],[3,4],[1])
costJac([1,2,3],[2,3,1],[3,4,3],[1,2])
costJacF = lambda x: costJac(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],tlin)
initGuessJ = np.zeros(30)
# initGuessJ = np.random.rand(1,30)+2
minimize(costTotalF,initGuessJ,method="Newton-CG",jac=costJacF)
Explanation: Define the jac of cost function
End of explanation
# funYNN(np.ones(10),np.ones(10),np.ones(10),2)
test13=np.array([-57.2424592 , -57.2424592 , -57.2424592 , -57.2424592 ,
-57.2424592 , -57.2424592 , -57.2424592 , -57.2424592 ,
-57.2424592 , -57.2424592 , -0.28879104, -0.28879104,
-0.28879104, -0.28879104, -0.28879104, -0.28879104,
-0.28879104, -0.28879104, -0.28879104, -0.28879104,
-6.5643978 , -6.5643978 , -6.5643978 , -6.5643978 ,
-6.5643978 , -6.5643978 , -6.5643978 , -6.5643978 ,
-6.5643978 , -6.5643978 ])
for i in np.linspace(0,5,11):
print i,functionYNN(np.split(test13,3)[0],np.split(test13,3)[1],np.split(test13,3)[2],np.array([i]))[0]
temp14 = np.array([])
for i in np.linspace(0,5,11):
temp14 = np.append(temp14,functionYNN(np.split(test13,3)[0],np.split(test13,3)[1],np.split(test13,3)[2],np.array([i]))[0])
testTLin = np.linspace(0,5,11)
plt.figure(figsize=(10,6.18))
plt.plot(testTLin,functionY(testTLin),'bs')
plt.plot(testTLin,temp14,'r-')
plt.show()
temp16 = np.array([1.,0.60129567, 0.36281265 , 0.22220159 , 0.13660321,0.08295538 , 0.04904239 ,0.02817984 , 0.01636932 , 0.01048201, 0.00741816])
temp15 = np.linspace(0,5,11)
print temp15
plt.plot(temp15,temp16)
plt.plot(temp15,functionY(temp15),'bs')
plt.show()
test17 = np.array([])
for temp in np.linspace(0,5,11):
test171 = 1 + expit(10*temp)
test17 = np.append(test17,test171)
print np.array(test17)
1 + expit(10*0)
def functionYNNSt(v,w,u,t): # t is a single scalar value
t = np.array(t)
return 1 + np.sum(t * v * trigf( t*w +u ) )
def functionYNN(v,w,u,t):
t = np.array(t)
func = np.asarray([])
for temp in t:
func = np.append(func, functionYNNSt(v,w,u,temp) )
return np.array(func)
def functionY(t):
return np.exp(-t)
print functionYNN(np.array([1,2]),np.array([1,2]),np.array([1,2]),tlin)
# structArray=np.array([-1.77606225*np.exp(-01), -3.52080053*np.exp(-01), -1.77606225*np.exp(-01),
# -1.77606225*np.exp(-01), -8.65246997*np.exp(-14), 1.00000000,
# -8.65246997*np.exp(-14), -8.65246997*np.exp(-14), -1.13618293*np.exp(-14),
# -7.57778017*np.exp(-16), -1.13618293*np.exp(-14), -1.13618293*np.exp(-14)])
#structArray=np.array([-1.6001368 , -1.6001368 , -2.08065131, -2.06818762, -2.07367757,
# -2.06779168, -2.07260669, -2.08533436, -2.07112826, -2.06893266,
# -0.03859167, -0.03859167, -0.25919807, -0.66904303, -0.41571841,
# -0.76917468, -0.4483773 , -0.17544777, -1.03122022, -0.90581106,
# -3.46409689, -3.46409689, -2.83715218, -2.84817563, -2.8434598 ,
# -2.84773205, -2.84446398, -2.85001617, -2.83613622, -2.84402863])
structArray=np.array([ 0.1330613 , 1.05982273, 0.18777729, -0.60789078, -0.96393469,
-0.65270373, -1.55257864, 0.8002259 , -0.12414033, -0.21230861,
-0.88629202, 0.47527367, 0.21401419, 0.2130512 , -1.5236408 ,
1.35208616, -0.48922234, -0.85850735, 0.72135512, -1.03407686,
2.29041152, 0.91184671, -0.56987761, 0.16597395, -0.43267372,
2.1772668 , -0.1318482 , -0.80817762, 0.44533168, -0.28545885])
structArrayJ = np.array([-11.45706046, -11.45706046, -11.45706046, -11.45706046,
-11.45706046, -11.45706046, -11.45706046, -11.45706046,
-11.45706046, -11.45706046, -0.44524438, -0.44524438,
-0.44524438, -0.44524438, -0.44524438, -0.44524438,
-0.44524438, -0.44524438, -0.44524438, -0.44524438,
-4.7477771 , -4.7477771 , -4.7477771 , -4.7477771 ,
-4.7477771 , -4.7477771 , -4.7477771 , -4.7477771 ,
-4.7477771 , -4.7477771 ])
print("The Structure Array is \n {}".format(structArray))
# print np.split(structArray,3)[0],np.split(structArray,3)[1],np.split(structArray,3)[2]
testTLin = np.linspace(0,5,11)
print "\n \n The plot is"
plt.figure(figsize=(10,6.18))
plt.plot(testTLin,functionY(testTLin),'bs')
plt.plot(testTLin,functionYNN(np.split(structArray,3)[0],np.split(structArray,3)[1],np.split(structArray,3)[2],testTLin),'g^') # split the flat parameter vector into v, w, u
plt.plot(testTLin,functionYNN(np.split(structArrayJ,3)[0],np.split(structArrayJ,3)[1],np.split(structArrayJ,3)[2],testTLin),'r^')
plt.yscale('log')
plt.show()
print functionY(testTLin), functionYNN(structArray[0],structArray[1],structArray[2],testTLin), functionYNN(structArrayJ[0],structArrayJ[1],structArrayJ[2],testTLin)
## Test of Numpy
temp1=np.asarray([1,2,3])
temp2=np.asarray([4,5,6])
temp3=np.asarray([7,8,9])
temp1*temp2
print 3*temp1
temp1+temp2
print temp1*temp2*temp3*temp1
1/(1+np.exp(-temp1))
temp1 + temp2
[1,2] + [2,3]
1 - 3*np.array([1,2])
temp1**2
1+np.asarray([1,2,3])
def testfunction(v,w,u,t):
v = np.array(v)
w = np.array(w)
u = np.array(u)
return t*w + u
#return np.sum(v*trigf( t*w + u ))
testfunction([2,3,4],[3,4,5],[4,5,7],2)
Explanation: Test Results
Plot!
End of explanation
def costS(v,w,u,x):
v = np.array(v) # Don't know why but np.asarray(v) doesn't work here.
w = np.array(w)
u = np.array(u)
fvec = np.array(trigf(x*w + u) ) # This is a vector!!!
yx = np.sum ( x * v * fvec ) # For a given x, this calculates the value of y(t), given the parameters, v, w, u.
dySLASHdt = np.sum (v*fvec + x * v* fvec * ( 1 - fvec ) * w )
return ( dySLASHdt - yx )**2
costS(np.array([2,3,4]),[3,4,5],[4,5,7],4)
def costSTotal(v,w,u,x):
x = np.array(x)
costSt = 0
for temp in x:
costSt = costSt + costS(v,w,u,temp)
return costSt
print costSTotal([1,2,3],[2,3,2],[3,4,1],[1,2,3,4,5,2,6,1])
xlinS = np.linspace(0,1,10)
print xlinS
# This is only an example of 2dimensional neural network.
costSTotalF = lambda x: costSTotal(np.split(x,3)[0],np.split(x,3)[1],np.split(x,3)[2],xlinS)
# initGuessS = np.zeros(30)
initGuessS = np.random.rand(1,30)+2
# minimize(costTotalF,([1,0,3,0,1,1,2,0,1,0,1,0]),method="Nelder-Mead")
minimize(costSTotalF,(initGuessS),method="L-BFGS-B")
# minimize(costTotalF,([1,0,3,0,1,1,2,0,1,0,1,0]),method="TNC")
def functionSYNN(v,w,u,x): # t is a single scalar value
x = np.array(x)
func = np.asarray([])
for temp in x:
tempfunc = np.sum(temp * v * trigf( temp*w +u ) )
func = np.append(func, tempfunc)
return np.array(func)
def functionSY(x):
return x**4 - x**3 + 2*x
# structArray=np.array([-1.77606225*np.exp(-01), -3.52080053*np.exp(-01), -1.77606225*np.exp(-01),
# -1.77606225*np.exp(-01), -8.65246997*np.exp(-14), 1.00000000,
# -8.65246997*np.exp(-14), -8.65246997*np.exp(-14), -1.13618293*np.exp(-14),
# -7.57778017*np.exp(-16), -1.13618293*np.exp(-14), -1.13618293*np.exp(-14)])
#structArray=np.array([-1.6001368 , -1.6001368 , -2.08065131, -2.06818762, -2.07367757,
# -2.06779168, -2.07260669, -2.08533436, -2.07112826, -2.06893266,
# -0.03859167, -0.03859167, -0.25919807, -0.66904303, -0.41571841,
# -0.76917468, -0.4483773 , -0.17544777, -1.03122022, -0.90581106,
# -3.46409689, -3.46409689, -2.83715218, -2.84817563, -2.8434598 ,
# -2.84773205, -2.84446398, -2.85001617, -2.83613622, -2.84402863])
structArrayS=np.array([ 0.01462306, 0.13467016, 0.43137834, 0.32915392, 0.16398891,
-0.36502654, -0.1943661 , 0.16082714, -0.2923346 , -0.38280994,
2.23127245, 1.97866504, 2.95181241, 2.70643394, 2.19371603,
2.63386948, 2.20213407, 2.81089774, 2.43916804, 2.80375489,
2.32389017, 2.16118574, 2.7346048 , 2.18630694, 2.19932286,
2.52525807, 2.22125577, 2.81758156, 2.27231039, 2.6118171 ])
print("The Structure Array is \n {}".format(structArray))
# print np.split(structArray,3)[0],np.split(structArray,3)[1],np.split(structArray,3)[2]
testXLinS = np.linspace(0,1,10)
print "\n \n The plot is"
plt.figure(figsize=(10,6.18))
plt.plot(testXLinS,functionSY(testXLinS),'bs')
plt.plot(testXLinS,functionSYNN(structArrayS[0],structArrayS[1],structArrayS[2],testXLinS),'g^')
## plt.plot(testXLin,functionYNN(structArrayJ[0],structArrayJ[1],structArrayJ[2],testXLin),'r^')
plt.show()
print functionY(testXLinS), functionYNN(structArrayS[0],structArrayS[1],structArrayS[2],testXLinS)
Explanation: A Even Simpler Equation
Test a very simple equation
$$\frac{dy}{dx}=4x^3-3x^2+2,$$
with initial condition $$y(0)=0.$$
As in any case,
$$y = \text{Initial} + x_i v_k f(x_iw_k+u_k).$$
$$\frac{dy}{dx} = v_k f(x w_k+u_k) + t v_k f(x w_k+u_k) (1-f(xw_k+u_k))w_k,$$ where the function f is defined as a trigf().
Cost is
$$I = \sum_i \left(\frac{dy}{dx}-(4x^3-3x^2+2) \right)^2$$
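For reference, with $y(0)=0$ the exact solution is $y(x) = x^4 - x^3 + 2x$, which is exactly what functionSY in this notebook computes for the comparison plot.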
End of explanation |
4,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An interactive introduction to Noodles
Step1: Now we can create a workflow composing several calls to this function.
Step2: That looks easy enough; the funny thing is though, that nothing has been computed yet! Noodles just created the workflow graphs corresponding to the values that still need to be computed. Until such time, we work with the promise of a future value. Using some function in pygraphviz we can look at the call graphs.
Step3: Now, to compute the result we have to tell Noodles to evaluate the program.
Step5: Making loops
That's all swell, but how do we make a parallel loop? Let's look at a map operation; in Python there are several ways to perform a function on all elements in an array. For this example, we will translate some words using the Glosbe service, which has a nice REST interface. We first build some functionality to use this interface.
Step6: We start with a list of strings that desperately need translation.
Step7: Beginning Python programmers like to append things; this is not how you are
supposed to program in Python; if you do, please go and read Jeff Knupp's Writing Idiomatic Python.
Step8: Rather use a comprehension like so
Step9: Or use map
Step11: Noodlify!
If your connection is a bit slow, you may find that the translations take a while to process. Wouldn't it be nice to do it in parallel? How much code would we have to change to get there in Noodles? Let's take the slow part of the program and add a @schedule decorator, and run! Sadly, it is not that simple. We can add @schedule to the word method. This means that it will return a promise.
Rule
Step12: Let's take stock of the mutations to the original. We've added a @schedule decorator to word, and changed a function call in sentence. Also we added the __str__ method; this is only needed to plot the workflow graph. Let's run the new script.
Step13: The last peculiar thing that you may notice, is the gather function. It collects the promises that map generates and creates a single new promise. The definition of gather is very simple
Step14: Dealing with repetition
In the following example we have a line with some repetition. It would be a shame to look up the repeated words twice, wouldn't it? Let's build a little counter routine to check if everything is working.
Step15: To see how this program is being run, we monitor the job submission, retrieval and result storage in a Sqlite3 database.
Step16: Try running the above cells again, and see what happens!
Objects in Noodles
We've already seen that we can @schedule class methods, just as easily as functions. What if a promised object represents an object? Noodles actually catches references and assignments to perceived members of promised objects and translates them into function calls. We will have another example (this time a bit smaller) to show how this works. We will compute the result of Pythagoras' theorem by using setters and getters. Python has a beautiful way of capturing reference and assignment to member variables by means of the @property decorator. This concept allows Noodles to catch these in a most generic way.
Step17: We can now treat this object as normal in the user script, and do the following
Step18: Note that, to make this work in general parallel situations, the _setattr function has to create a deepcopy of the object and then return the modified object; so this style of programming can become quite expensive. A better solution would be to create a layered system, where updates only affect the values that are being updated.
User messages
If jobs take a long time (>1s) to run, it is nice to give the user a message when it starts, when it finishes and if it was a success. Noodles has an adaptor for runners to display messages.
Step19: We imported some predefined functions from noodles.tutorial. A new function that we haven't seen before is @schedule_hint. It does the same as @schedule, but now it also attaches some information to the function. This can be anything. Here we add a display string. This string is formatted using the arguments to the function that is being called. | Python Code:
from noodles import schedule
@schedule
def add(x, y):
return x + y
@schedule
def mul(x,y):
return x * y
Explanation: An interactive introduction to Noodles: translating Poetry
Noodles is there to make your life easier, in parallel! The reason why Noodles can be easy and do parallel Python at the same time is its functional approach. In one part you'll define a set of functions that you'd like to run with Noodles, in another part you'll compose these functions into a workflow graph. To make this approach work a function should not have any side effects. Let's not linger and just start noodling! First we define some functions to use.
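As a minimal illustration of the idea: calling a @schedule-decorated function returns a promise rather than a value, and nothing is computed until the promise is handed to a runner such as run_single (a sketch using only calls that appear in this notebook):
from noodles import schedule, run_single
@schedule
def add(x, y):
    return x + y
promise = add(1, 2)          # builds a workflow node, computes nothing yet
print(run_single(promise))   # evaluates the graph and prints 3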
End of explanation
a = add(1, 1)
b = mul(a, 2)
c = add(a, a)
d = mul(b, c)
Explanation: Now we can create a workflow composing several calls to this function.
End of explanation
from noodles.tutorial import get_workflow_graph
import ipywidgets as widgets
widgets.HBox([
widgets.VBox([
widgets.HTML('<b>{}</b>'.format(k)),
widgets.HTML(value=get_workflow_graph(w).pipe(format='svg').decode())])
for k, w in {'a': a, 'b': b, 'c': c, 'd': d}.items()])
Explanation: That looks easy enough; the funny thing is though, that nothing has been computed yet! Noodles just created the workflow graphs corresponding to the values that still need to be computed. Until such time, we work with the promise of a future value. Using some function in pygraphviz we can look at the call graphs.
End of explanation
from noodles import run_parallel
run_parallel(d, n_threads=2)
Explanation: Now, to compute the result we have to tell Noodles to evaluate the program.
End of explanation
import urllib.request
import json
import re
class Translate:
Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster.
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
def word(self, phrase):
# translation = self.query_phrase(phrase)
translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return space.format(*map(self.word, words))
Explanation: Making loops
That's all swell, but how do we make a parallel loop? Let's look at a map operation; in Python there are several ways to perform a function on all elements in an array. For this example, we will translate some words using the Glosbe service, which has a nice REST interface. We first build some functionality to use this interface.
End of explanation
shakespeare = [
"If music be the food of love, play on,",
"Give me excess of it; that surfeiting,",
"The appetite may sicken, and so die."]
def print_poem(intro, poem):
print(intro)
for line in poem:
print(" ", line)
print()
print_poem("Original:", shakespeare)
Explanation: We start with a list of strings that desperately need translation.
End of explanation
shakespeare_auf_deutsch = []
for line in shakespeare:
shakespeare_auf_deutsch.append(
Translate('en', 'de').sentence(line))
print_poem("Auf Deutsch:", shakespeare_auf_deutsch)
Explanation: Beginning Python programmers like to append things; this is not how you are
supposed to program in Python; if you do, please go and read Jeff Knupp's Writing Idiomatic Python.
End of explanation
shakespeare_ynt_frysk = \
(Translate('en', 'fy').sentence(line) for line in shakespeare)
print_poem("Yn it Frysk:", shakespeare_ynt_frysk)
Explanation: Rather use a comprehension like so:
End of explanation
shakespeare_pa_dansk = \
map(Translate('en', 'da').sentence, shakespeare)
print_poem("På Dansk:", shakespeare_pa_dansk)
Explanation: Or use map:
End of explanation
from noodles import schedule
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
import urllib.request
import json
import re
class Translate:
Translate words and sentences in the worst possible way. The Glosbe dictionary
has a nice REST interface that we query for a phrase. We then take the first result.
To translate a sentence, we cut it in pieces, translate it and paste it back into
a Frankenstein monster.
def __init__(self, src_lang='en', tgt_lang='fy'):
self.src = src_lang
self.tgt = tgt_lang
self.url = 'https://glosbe.com/gapi/translate?' \
'from={src}&dest={tgt}&' \
'phrase={{phrase}}&format=json'.format(
src=src_lang, tgt=tgt_lang)
def query_phrase(self, phrase):
with urllib.request.urlopen(self.url.format(phrase=phrase.lower())) as response:
translation = json.loads(response.read().decode())
return translation
@schedule
def word(self, phrase):
translation = {'tuc': [{'phrase': {'text': phrase.lower()[::-1]}}]}
# translation = self.query_phrase(phrase)
if len(translation['tuc']) > 0 and 'phrase' in translation['tuc'][0]:
result = translation['tuc'][0]['phrase']['text']
if phrase[0].isupper():
return result.title()
else:
return result
else:
return "<" + phrase + ">"
def sentence(self, phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
return format_string(space, *map(self.word, words))
def __str__(self):
return "[{} -> {}]".format(self.src, self.tgt)
def __serialize__(self, pack):
return pack({'src_lang': self.src,
'tgt_lang': self.tgt})
@classmethod
def __construct__(cls, msg):
return cls(**msg)
Explanation: Noodlify!
If your connection is a bit slow, you may find that the translations take a while to process. Wouldn't it be nice to do it in parallel? How much code would we have to change to get there in Noodles? Let's take the slow part of the program and add a @schedule decorator, and run! Sadly, it is not that simple. We can add @schedule to the word method. This means that it will return a promise.
Rule: Functions that take promises need to be scheduled functions, or refer to a scheduled function at some level.
We could write
return schedule(space.format)(*(self.word(w) for w in words))
in the last line of the sentence method, but the string format method doesn't support wrapping. We rely on getting the signature of a function by calling inspect.signature. In some cases of built-in functions this raises an exception. We may find a workaround for these cases in future versions of Noodles. For the moment we'll have to define a little wrapper function.
End of explanation
from noodles import gather, run_parallel
shakespeare_en_esperanto = \
map(Translate('en', 'eo').sentence, shakespeare)
wf = gather(*shakespeare_en_esperanto)
result = run_parallel(wf, n_threads=8)
print_poem("Shakespeare en Esperanto:", result)
Explanation: Let's take stock of the mutations to the original. We've added a @schedule decorator to word, and changed a function call in sentence. Also we added the __str__ method; this is only needed to plot the workflow graph. Let's run the new script.
End of explanation
# if you know a way to shrink this image down, please send me a pull request
widgets.HTML(get_workflow_graph(wf).pipe(format='svg').decode())
Explanation: The last peculiar thing that you may notice, is the gather function. It collects the promises that map generates and creates a single new promise. The definition of gather is very simple:
@schedule
def gather(*lst):
return lst
The workflow graph of the Esperanto translator script looks like this:
End of explanation
from noodles import (schedule, gather_all)
import re
@schedule
def count_word_size(word):
return len(word)
@schedule
def format_string(s, *args, **kwargs):
return s.format(*args, **kwargs)
def word_sizes(phrase):
words = re.sub("[^\w]", " ", phrase).split()
space = re.sub("[\w]+", "{}", phrase)
word_lengths = map(count_word_size, words)
return format_string(space, *word_lengths)
from noodles.run.threading.vanilla import run_parallel
line = "Oote oote oote, Boe"
run_parallel(word_sizes(line), n_threads=4)
Explanation: Dealing with repetition
In the following example we have a line with some repetition. It would be a shame to look up the repeated words twice, wouldn't it? Let's build a little counter routine to check if everything is working.
End of explanation
# import logging
# logging.FileHandler(filename='mylog.log', mode='a')
from noodles.run.threading.sqlite3 import run_parallel
from noodles import serial
wf = Translate('de', 'fr').sentence(line)
run_parallel(wf, n_threads=4, registry=serial.base, db_file='jobs.db')
Explanation: To see how this program is being run, we monitor the job submission, retrieval and result storage in a Sqlite3 database.
End of explanation
from noodles import schedule
@schedule
class A:
def __init__(self, value):
self.value = value
@property
def square(self):
return self.value**2
@square.setter
def square(self, sqr):
self.value = sqr**(1/2)
def __str__(self):
return "[A {}]".format(self.value)
Explanation: Try running the above cells again, and see what happens!
Objects in Noodles
We've already seen that we can @schedule class methods, just as easily as functions. What if a promised object represents an object? Noodles actually catches references and assignments to perceived members of promised objects and translates them into function calls. We will have another example (this time a bit smaller) to show how this works. We will compute the result of Pythagoras' theorem by using setters and getters. Python has a beautiful way of capturing reference and assignment to member variables by means of the @property decorator. This concept allows Noodles to catch these in a most generic way.
End of explanation
from noodles import run_single
from noodles.tutorial import add
u = A(3.0)
v = A(4.0)
u.square = add(u.square, v.square)
w = u.value
run_single(w)
get_workflow_graph(w)
Explanation: We can now treat this object as normal in the user script, and do the following
End of explanation
from noodles import (gather)
from noodles.tutorial import (sub, mul, accumulate)
from noodles.display import (DumbDisplay)
from noodles.run.runners import (run_parallel_with_display)
import time
@schedule(display="| {a} + {b}", confirm=True)
def add(a, b):
time.sleep(0.5)
return a + b
@schedule(display="{msg}")
def message(msg, value=0):
return value()
def test_logging():
A = add(1, 1)
B = sub(3, A)
multiples = [mul(add(i, B), A) for i in range(6)]
C = accumulate(gather(*multiples))
wf = message("\n+---(Running the test)", lambda: C)
with DumbDisplay() as display:
result = run_parallel_with_display(wf, n_threads=4, display=display)
print("\nThe answer is ", result)
Explanation: Note that, to make this work in general parallel situations, the _setattr function has to create a deepcopy of the object and then return the modified object; so this style of programming can become quite expensive. A better solution would be to create a layered system, where updates only affect the values that are being updated.
User messages
If jobs take a long time (>1s) to run, it is nice to give the user a message when it starts, when it finishes and if it was a success. Noodles has an adaptor for runners to display messages.
End of explanation
import threading
threading.Thread(target=test_logging, daemon=True).start()
Explanation: We imported some predefined functions from noodles.tutorial. A new function that we haven't seen before is @schedule_hint. It does the same as @schedule, but now it also attaches some information to the function. This can be anything. Here we add a display string. This string is formatted using the arguments to the function that is being called.
End of explanation |
4,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Compare shapes of molecules
Step1: We'd like to compare the shape of heroin with other molecules.
ODDT supports three methods of molecular shape comparison
Step2: To compute the shape using USR we need the molecule's 3D coordinates.
Step3: Now we can use the usr function.
Step4: USR represents shape with 12 descriptors, which summarize the distribution of atomic distances in the molecule. For more details see Ballester & Richards (2007).<br/>
USR-CAT and Electroshape use more descriptors, 60 and 15 respectively.
Let's see how similar it is to a different molecule.
Step5: The similarity function returns a number in range (0, 1], where a higher number means that the molecules are more similar and 1 means that the molecules have identical shapes.<br/>
All methods (USR, USR-CAT and Electroshape) use the same similarity function.
We will find the molecule most similar to heroin.
Step6: Heroin
Step7: The most similar molecule
Step8: The least similar molecule
Similarity between these molecules | Python Code:
from __future__ import print_function, division, unicode_literals
import oddt
from oddt.shape import usr, usr_similarity
print(oddt.__version__)
Explanation: <h1>Compare shapes of molecules
End of explanation
heroin = oddt.toolkit.readstring('smi',
'CC(=O)Oc1ccc2c3c1O[C@@H]4[C@]35CC[NH+]([C@H](C2)[C@@H]5C=C[C@@H]4OC(=O)C)C')
smiles = ['CC(=O)Oc1ccc2c3c1O[C@@H]4[C@]35CC[NH+]([C@H](C2)[C@@H]5C=C[C@@H]4OC(=O)Cc6cccnc6)C',
'CC(=O)O[C@@H]1C=C[C@@H]2[C@H]3Cc4ccc(c5c4[C@]2([C@H]1O5)CC[NH+]3C)OC',
'C[N+]1(CC[C@@]23c4c5ccc(c4O[C@H]2[C@@H](C=C[C@@H]3[C@@H]1C5)O)OC)C',
'C[NH2+][C@@H]1Cc2ccc(c3c2[C@]4([C@@H]1CC=C([C@H]4O3)OC)C=C)OC',
'CCOC(=O)CNC(=O)O[C@H]1C=C[C@H]2[C@H]3Cc4ccc(c5c4[C@]2([C@H]1O5)CC[NH+]3C)OCOC',
'CC(=O)OC1=CC[C@H]2[C@@H]3Cc4ccc(c5c4[C@@]2([C@@H]1O5)CC[NH+]3C)OC',
'C[NH+]1CC[C@]23c4c5cc(c(c4O[C@H]2[C@H](C=C[C@H]3[C@H]1C5)O)O)c6cc7c8c(c6O)O[C@@H]9[C@]81CC[NH+]([C@H](C7)[C@@H]1C=C[C@@H]9O)C']
molecules = [oddt.toolkit.readstring('smi', smi) for smi in smiles]
Explanation: We'd like to compare the shape of heroin with other molecules.
ODDT supports three methods of molecular shape comparison: USR, USRCAT and Electroshape.<br/>
USR looks only at the shape of molecule.<br/>
USR-CAT considers the shape and type of atoms.<br/>
Electroshape accounts for the shape and charge of atoms.<br/>
All those methods have the same API.<br/>
We will use USR, because it's the simplest and the fastest.
End of explanation
heroin.make3D()
heroin.removeh()
for mol in molecules:
mol.make3D()
mol.removeh()
Explanation: To compute the shape using USR we need the molecule's 3D coordinates.
End of explanation
usr_heroin = usr(heroin)
usr_heroin
Explanation: Now we can use the usr function.
End of explanation
usr_similarity(usr_heroin, usr(molecules[0]))
Explanation: USR represents shape with 12 descriptors, which summarize the distribution of atomic distances in the molecule. For more details see Ballester & Richards (2007).<br/>
USR-CAT and Electroshape use more descriptors, 60 and 15 respectively.
Let's see how similar it is to a different molecule.
End of explanation
similar_mols = []
for i, mol in enumerate(molecules):
sim = usr_similarity(usr_heroin, usr(mol))
similar_mols.append((i, sim))
similar_mols.sort(key=lambda similarity: similarity[1], reverse=True)
similar_mols
heroin
Explanation: The similarity function returns a number in range (0, 1], where a higher number means that the molecules are more similar and 1 means that the molecules have identical shapes.<br/>
All methods (USR, USR-CAT and Electroshape) use the same similarity function.
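For USR this similarity is commonly computed as the inverse of one plus the mean Manhattan distance between the two descriptor vectors, $S = \left(1 + \frac{1}{n}\sum_{i=1}^{n}\left|q_i - q_i'\right|\right)^{-1}$ with $n=12$, which is why identical descriptors give exactly 1.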
We will find the molecule most similar to heroin.
End of explanation
idx_most = similar_mols[0][0]
molecules[idx_most]
Explanation: Heroin
End of explanation
idx_least = similar_mols[-1][0]
molecules[idx_least]
Explanation: The most similar molecule
End of explanation
usr_most = usr(molecules[idx_most])
usr_least = usr(molecules[idx_least])
usr_similarity(usr_most, usr_least)
Explanation: The least similar molecule
Similarity between these molecules:
End of explanation |
4,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
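As a minimal illustration of how a 1.N ENUM cell such as 1.4 might be completed, the sketch below assumes (as the "PROPERTY VALUE(S)" comment suggests) that one DOC.set_value call is made per selected choice. The choices are placeholders rather than documented BNU SANDBOX-3 settings, so they are left commented out:
# Illustrative sketch only - one call per selected choice, values are placeholders
# DOC.set_value("Primitive equations")
# DOC.set_value("Boussinesq")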
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
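For a 1.1 ENUM such as 2.1, a single string matching one entry from the Valid Choices list is passed. The choice below is purely illustrative and is kept commented out rather than asserted as the BNU configuration:
# Illustrative only - pick exactly one string from the Valid Choices above
# DOC.set_value("TEOS 2010")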
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
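FLOAT properties such as 2.6 take a bare number rather than a quoted string. The value below is only indicative of the commonly quoted seawater specific heat near 3990 J/(kg K); the model's actual cpocean should be used instead:
# Illustrative only - replace with the model's actual cpocean in J/(kg K)
# DOC.set_value(3992.0)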
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
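STRING properties such as 6.1 take free text. Reusing one of the example grid names quoted above keeps this sketch grounded; it is not the BNU grid name:
# Illustrative only - substitute the group's own grid label
# DOC.set_value("ORCA025")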
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
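BOOLEAN properties such as 6.6 take a Python True or False rather than a string. Following the stated default, and assuming the grid does not adapt at runtime:
# Illustrative only - the stated default; set True only for an adaptive grid
# DOC.set_value(False)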
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and any possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
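INTEGER properties such as 13.2 take a plain integer, here a time step in seconds. The number below is a placeholder and not the BNU tracer time step:
# Illustrative only - replace with the model's tracer time step in seconds
# DOC.set_value(3600)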
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
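Optional 0.N properties such as 19.5 can be left unset entirely. If they are filled, the sketch below assumes one DOC.set_value call per tracer, mirroring the other multi-valued enums:
# Illustrative only - omit if no passive tracers are advected
# DOC.set_value("CFC 11")
# DOC.set_value("CFC 12")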
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cell mixing in the upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
4,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building your Deep Neural Network
Step2: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
<table style="width | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
    """
    Argument:
    n_x -- size of the input layer
    n_h -- size of the hidden layer
    n_y -- size of the output layer

    Returns:
    parameters -- python dictionary containing your parameters:
                    W1 -- weight matrix of shape (n_h, n_x)
                    b1 -- bias vector of shape (n_h, 1)
                    W2 -- weight matrix of shape (n_y, n_h)
                    b2 -- bias vector of shape (n_y, 1)
    """
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATE function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
    """
    Arguments:
    layer_dims -- python array (list) containing the dimensions of each layer in our network

    Returns:
    parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
                    Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
                    bl -- bias vector of shape (layer_dims[l], 1)
    """
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.00865408 -0.02301539]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\
m & n & o \
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\
d & e & f \
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \
t \
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
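A quick way to see this broadcasting behaviour in action (toy numbers, unrelated to the assignment data):
python
W = np.array([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.]])
X = np.ones((3, 3))
b = np.array([[10.], [20.], [30.]])   # shape (3, 1)
print(np.dot(W, X) + b)               # b is added to every column of the (3, 3) product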
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
    """
    Implement the linear part of a layer's forward propagation.

    Arguments:
    A -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)

    Returns:
    Z -- the input of the activation function, also called pre-activation parameter
    cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
    """
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
    """
    Implement the forward propagation for the LINEAR->ACTIVATION layer

    Arguments:
    A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
    W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
    b -- bias vector, numpy array of shape (size of the current layer, 1)
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    A -- the output of the activation function, also called the post-activation value
    cache -- a python dictionary containing "linear_cache" and "activation_cache";
             stored for computing the backward pass efficiently
    """
Z, linear_cache = linear_forward(A_prev, W, b)
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.1980455 7.85763489]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLU is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed into the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
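In case you are curious what these helpers look like internally, here is a minimal sketch (the actual dnn_utils implementations may differ in details, e.g. in exactly what they store in the cache):
python
def sigmoid(Z):
    A = 1 / (1 + np.exp(-Z))
    return A, Z        # Z is kept as the cache for the backward pass

def relu(Z):
    A = np.maximum(0, Z)
    return A, Z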
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
    """
    Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation

    Arguments:
    X -- data, numpy array of shape (input size, number of examples)
    parameters -- output of initialize_parameters_deep()

    Returns:
    AL -- last post-activation value
    caches -- list of caches containing:
                every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
                the cache of linear_sigmoid_forward() (there is one, indexed L-1)
    """
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], 'relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], 'sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1, X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96076066 0.99961336]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.1980455 7.85763489]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
    """
    Implement the cost function defined by equation (7).

    Arguments:
    AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
    Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)

    Returns:
    cost -- cross-entropy cost
    """
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -np.mean(Y * np.log(AL) + (1 - Y) * np.log(1- AL))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
Explanation: <table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.0844367 0.92356858]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
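For intuition, a tiny numerical check with made-up predictions (values chosen only for illustration):
python
AL = np.array([[0.9, 0.2, 0.8]])
Y = np.array([[1, 0, 1]])
print(-np.mean(Y * np.log(AL) + (1 - Y) * np.log(1 - AL)))   # ≈ 0.1839, small because the predictions match the labels well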
End of explanation
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
    """
    Implement the linear portion of backward propagation for a single layer (layer l)

    Arguments:
    dZ -- Gradient of the cost with respect to the linear output (of current layer l)
    cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = (1. / m) * np.dot(dZ, A_prev.T)
db = (1. / m) * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
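Before coding this up, a quick dimension check with illustrative shapes helps catch transposition mistakes (the arrays below are made up purely for the shape check):
python
m = 4
dZ = np.random.randn(3, m)                   # (n_l, m)
A_prev = np.random.randn(5, m)               # (n_{l-1}, m)
W = np.random.randn(3, 5)                    # (n_l, n_{l-1})
dW = np.dot(dZ, A_prev.T) / m                # (3, 5), same shape as W
db = np.sum(dZ, axis=1, keepdims=True) / m   # (3, 1), same shape as b
dA_prev = np.dot(W.T, dZ)                    # (5, 4), same shape as A_prev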
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
    """
    Implement the backward propagation for the LINEAR->ACTIVATION layer.

    Arguments:
    dA -- post-activation gradient for current layer l
    cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
    activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"

    Returns:
    dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
    dW -- Gradient of the cost with respect to W (current layer l), same shape as W
    db -- Gradient of the cost with respect to b (current layer l), same shape as b
    """
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
### END CODE HERE ###
dA_prev, dW, db = linear_backward(dZ, linear_cache)
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 2.38272385 5.85438014]
[ 6.31969219 15.52755701]
[ -3.97876302 -9.77586689]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[ 2.77870358 -0.05500058 -5.13144969]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 5.527840195 </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
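As a rough mental model, relu_backward presumably implements equation (11) along these lines (the provided implementation may differ in details):
python
def relu_backward(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)   # the gradient passes through unchanged where Z > 0 ...
    dZ[Z <= 0] = 0                 # ... and is zeroed where Z <= 0, since g'(Z) = 0 there
    return dZ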
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
    """
    Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group

    Arguments:
    AL -- probability vector, output of the forward propagation (L_model_forward())
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
    caches -- list of caches containing:
                every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
                the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])

    Returns:
    grads -- A dictionary with the gradients
             grads["dA" + str(l)] = ...
             grads["dW" + str(l)] = ...
             grads["db" + str(l)] = ...
    """
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
cache = caches[-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, cache, activation="sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
cache = caches[l]
grads["dA" + str(l + 1)], grads["dW" + str(l + 1)], grads["db" + str(l + 1)] = linear_activation_backward(grads["dA" + str(l + 2)], cache, activation="relu")
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.08982777 0.00226265]
[ 0.23824996 0.00600122]
[-0.14999783 -0.00377826]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[-0.06001514 -0.09687383 -0.10598695]] </td>
</tr>
<tr>
<td > db </td>
<td > 0.061800984273 </td>
</tr>
</table>
Expected output with relu
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 2.38272385 5.85438014]
[ 6.31969219 15.52755701]
[ -3.97876302 -9.77586689]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 2.77870358 -0.05500058 -5.13144969]] </td>
</tr>
<tr>
<td > db </td>
<td > 5.527840195 </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
    """
    Update parameters using gradient descent

    Arguments:
    parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward

    Returns:
    parameters -- python dictionary containing your updated parameters
                  parameters["W" + str(l)] = ...
                  parameters["b" + str(l)] = ...
    """
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(1, L + 1):
parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate * grads["dW" + str(l)]
parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate * grads["db" + str(l)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = " + str(parameters["W1"]))
print ("b1 = " + str(parameters["b1"]))
print ("W2 = " + str(parameters["W2"]))
print ("b2 = " + str(parameters["b2"]))
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[-0.09686122 -0.04840482 -0.11864308]] </td>
</tr>
<tr>
<td > db1 </td>
<td > -0.262594998379 </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[-0.71011462 -0.22925516]
[-0.17330152 -0.05594909]
[-0.03831107 -0.01236844]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
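A one-parameter illustration of the update rule with made-up numbers:
python
W, dW, learning_rate = 0.5, 0.2, 0.1
W = W - learning_rate * dW   # 0.5 - 0.1 * 0.2 = 0.48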
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation |
4,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step1: Necessary packages and function calls
ridge
Step2: Data loading
Load training, probe and testing datasets and save those datasets as train.csv, probe.csv, test.csv in './repo/data_files/' directory.
If you have your own 'train.csv', 'probe.csv', and 'test.csv' files, you can skip this example dataset construction portion and just put them under './repo/data_files/' directory.
Step3: Data preprocessing
Extract features and labels from train.csv, probe.csv, test.csv in './repo/data_files/' directory.
Normalize the features of training, probe, and testing sets.
Step4: Step 0
Step5: Step 1
Step6: Step 2
Step7: Step 3
Step8: Step 4
Step9: Evaluation
We use two quantitative metrics (overall performance and fidelity) and one qualitative metric (instance-wise explanations) to evaluate the locally interpretable models.
Overall_performance
Step10: 2. Fidelity
We use R2 score and Mean Absolute Error (MAE) as the metrics for the fidelity.
Step11: 3. Instance-wise explanations
We qualitatively demonstrate the local explanations of 5 testing samples using the fitted coefficients of locally interpretable model (Ridge regression).
To run this cell, the interpretable model must have intercept_ and coef_ as the subfunctions. Here, intercept and coef represent the fitted locally interpretable model's intercept and coefficients. | Python Code:
# Installs additional packages
import pip
import IPython
def import_or_install(package):
try:
__import__(package)
except ImportError:
pip.main(['install', package])
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import_or_install('lightgbm')
import os
from git import Repo
# Current working directory
repo_dir = os.getcwd() + '/repo'
if not os.path.exists(repo_dir):
os.makedirs(repo_dir)
# Clones github repository
if not os.listdir(repo_dir):
git_url = "https://github.com/google-research/google-research.git"
Repo.clone_from(git_url, repo_dir)
Explanation: Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Understanding Black-box Model Predictions using RL-LIM
Jinsung Yoon, Sercan O Arik, Tomas Pfister, "RL-LIM: Reinforcement Learning-based Locally Interpretable Modeling", arXiv preprint arXiv:1909.12367 (2019) - https://arxiv.org/abs/1909.12367
This notebook describes how to explain black-box models using "Reinforcement Learning based Locally Interpretable Modeling (RL-LIM)".
RL-LIM is a state-of-the-art locally interpretable modeling method. It is often challenging to develop a globally interpretable model that matches the performance of 'black-box' models. To go beyond this performance limitation, a promising direction is locally interpretable models, which explain a single prediction instead of explaining the entire model. Methodologically, while a globally interpretable model fits a single inherently interpretable model (such as a linear model or a shallow decision tree) to the entire training set, locally interpretable models aim to fit an inherently interpretable model locally, i.e. for each instance individually, by distilling knowledge from a high-performance black-box model.
Such locally interpretable models are very useful for real-world AI deployments to provide succinct and human-like explanations to users. They can be used to identify systematic failure cases (e.g. by seeking common trends in input dependence for failure cases), detect biases (e.g. by quantifying feature importance for a particular variable), and provide actionable feedback to improve a model (e.g. understand failure cases and what training data to collect).
You need:
Training / Probe / Testing sets
* If you don't have a probe set, you can construct it by splitting off a small portion of the training set, while keeping the rest of the training set for training purposes.
* The training / probe / testing datasets you have should be saved under './repo/data_files/' directory, with the names: 'train.csv', 'probe.csv', and 'test.csv'.
* In this notebook, we create 'train.csv', 'probe.csv', and 'test.csv' files from the Facebook Comment Volume dataset (https://archive.ics.uci.edu/ml/datasets/Facebook+Comment+Volume+Dataset) as an example.
Prerequisite
Download lightgbm package.
Clone https://github.com/google-research/google-research.git to the current directory.
End of explanation
import numpy as np
import pandas as pd
from sklearn.linear_model import Ridge
import lightgbm
# Sets current directory
os.chdir(repo_dir)
from rllim.data_loading import load_facebook_data, preprocess_data
from rllim import rllim
from rllim.rllim_metrics import fidelity_metrics, overall_performance_metrics
Explanation: Necessary packages and function calls
ridge: Ridge regression model used as an interpretable model.
lightgbm: lightGBM model used as a black-box model.
load_facebook_data: Data loader for facebook comment volumn dataset.
preprocess_data: Data extraction and normalization.
rllim: RL-LIM class for training instance-wise weight estimator.
rllim_metrics: Evaluation metrics for the locally interpretable models in various metrics (overall performance and fidelity).
End of explanation
# The number of training and probe samples (we use 10% of the training set as the probe set).
# Explicit testing set exists in facebook comment volume dataset
dict_rate = dict()
dict_rate['train'] = 0.9
dict_rate['probe'] = 0.1
# Random seed
seed = 0
# Loads data
load_facebook_data(dict_rate, seed)
print('Finished data loading.')
Explanation: Data loading
Load training, probe and testing datasets and save those datasets as train.csv, probe.csv, test.csv in './repo/data_files/' directory.
If you have your own 'train.csv', 'probe.csv', and 'test.csv' files, you can skip this example dataset construction portion and just put them under './repo/data_files/' directory.
End of explanation
# Normalization methods: either 'minmax' or 'standard'
normalization = 'minmax'
# Extracts features and labels, and then normalize features
x_train, y_train, x_probe, y_probe, x_test, y_test, col_names = \
preprocess_data(normalization, 'train.csv', 'probe.csv', 'test.csv')
print('Finished data preprocess.')
Explanation: Data preprocessing
Extract features and labels from train.csv, probe.csv, test.csv in './repo/data_files/' directory.
Normalize the features of training, probe, and testing sets.
End of explanation
# Problem specification
problem = 'regression' # or 'classification'
# Initializes black-box model
if problem == 'regression':
bb_model = lightgbm.LGBMRegressor()
elif problem == 'classification':
bb_model = lightgbm.LGBMClassifier()
# Trains black-box model
bb_model = bb_model.fit(x_train, y_train)
print('Finished black-box model training.')
Explanation: Step 0: Black-box model training
This stage is the preliminary stage for RL-LIM. We train a black-box model (in this notebook, lightGBM) using the training datasets (x_train, y_train) to make a pre-trained black-box model. If you already have a saved pre-trained black-box model, you can skip this stage and retrieve the pre-trained black-box model into bb_model. You also need to specify whether the problem is regression or classification.
Note that the bb_model must have fit, predict (for regression) or predict_proba (for classification) as the methods.
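For example, if a black-box model was previously saved with joblib, it could be restored instead of retrained (the file path below is hypothetical):
python
import joblib
bb_model = joblib.load('./tmp/pretrained_blackbox.pkl')   # hypothetical path to a saved model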
End of explanation
# Constructs auxiliary datasets
if problem == 'regression':
y_train_hat = bb_model.predict(x_train)
y_probe_hat = bb_model.predict(x_probe)
elif problem == 'classification':
y_train_hat = bb_model.predict_proba(x_train)[:, 1]
y_probe_hat = bb_model.predict_proba(x_probe)[:, 1]
print('Finished auxiliary dataset construction.')
Explanation: Step 1: Auxiliary dataset construction
Using the pre-trained black-box model, we create auxiliary training (x_train, y_train_hat) and probe datasets (x_probe, y_probe_hat). These auxiliary datasets are used for instance weight estimator and locally interpretable model training.
End of explanation
# Define interpretable baseline model
baseline = Ridge(alpha=1)
# Trains interpretable baseline model
baseline.fit(x_train, y_train_hat)
print('Finished interpretable baseline training.')
Explanation: Step 2: Interpretable baseline training
To improve the stability of the instance-wise weight estimator training, a baseline model is observed to be beneficial. We use a globally interpretable model (in this notebook, we use Ridge regression) optimized to replicate the predictions of the black-box model.
Input:
Locally interpretable model: ridge regression (we can switch this to shallow tree). The model must have fit, predict (for regression) and predict_proba (for classification) as the subfunctions.
Output:
Trained interpretable baseline model: function that tries to replicate the predictions of the black-box model using globally interpretable model.
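If a shallow tree is preferred as the baseline instead of Ridge, the swap might look like this (illustrative depth, using scikit-learn):
python
from sklearn.tree import DecisionTreeRegressor
baseline = DecisionTreeRegressor(max_depth=3)   # illustrative depth for a 'shallow' tree
baseline.fit(x_train, y_train_hat)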
End of explanation
# Instance-wise weight estimator network parameters
parameters = dict()
parameters['hidden_dim'] = 100
parameters['iterations'] = 2000
parameters['num_layers'] = 5
parameters['batch_size'] = 5000
parameters['batch_size_inner'] = 10
parameters['lambda'] = 1.0
# Defines locally interpretable model
interp_model = Ridge(alpha = 1)
# Checkpoint file name
checkpoint_file_name = './tmp/model.ckpt'
# Initializes RL-LIM
rllim_class = rllim.Rllim(x_train, y_train_hat, x_probe, y_probe_hat, parameters,
interp_model, baseline, checkpoint_file_name)
# Trains RL-LIM
rllim_class.rllim_train()
print('Finished instance-wise weight estimator training.')
## Output functions
# Instance-wise weight estimation for x_test[0, :]
dve_out = rllim_class.instancewise_weight_estimator(x_train, y_train_hat, x_test[0, :])
# Interpretable predictions (test_y_fit) and instance-wise explanations (test_coef) for x_test[:0, :]
test_y_fit, test_coef = rllim_class.rllim_interpreter(x_train, y_train_hat, x_test[0, :], interp_model)
print('Finished instance-wise weight estimations, instance-wise predictions, and local explanations.')
Explanation: Step 3: Train instance-wise weight estimator
We train an instance-wise weight estimator using the auxiliary training (x_train, y_train_hat) and probe datasets (x_probe, y_probe_hat) using reinforcement learning.
Input:
Network parameters: Set network parameters of instance-wise weight estimator.
Locally interpretable model: Ridge regression (we can switch this to shallow tree). The model must have fit, predict (for regression) or predict_proba (for classification) as the methods.
Output:
Instance-wise weight estimator: a function that takes the auxiliary training set and a testing sample as inputs and estimates a weight for each training sample; these weights are used to construct the locally interpretable model for that testing sample.
End of explanation
# Train locally interpretable models and output instance-wise explanations (test_coef) and
# interpretable predictions (test_y_fit)
test_y_fit, test_coef = rllim_class.rllim_interpreter(x_train, y_train_hat, x_test, interp_model)
print('Finished instance-wise predictions and local explanations.')
Explanation: Step 4: Interpretable inference
Unlike Step 3 (training instance-wise weight estimator), we use a fixed instance-wise weight estimator (without the sampler and interpretable baseline) and merely fit the locally interpretable model at inference. Given the test instance, we obtain the selection probabilities from the instance-wise weight estimator, and using these as the weights, we fit the locally interpretable model via weighted optimization.
Input:
Locally interpretable model: Ridge regression (we can switch this to shallow tree). The model must have fit, predict (for regression) and predict_proba (for classification) as the subfunctions.
Output:
Instance-wise explanations (test_coef): Estimated local dynamics for testing samples using trained locally interpretable model.
Interpretable predictions (test_y_fit): Local predictions for testing samples using trained locally interpretable model.
End of explanation
# Overall performance
mae = overall_performance_metrics (y_test, test_y_fit, metric='mae')
print('Overall performance of RL-LIM in terms of MAE: ' + str(np.round(mae, 4)))
Explanation: Evaluation
We use two quantitative metrics (overall performance and fidelity) and one qualitative metric (instance-wise explanations) to evaluate the locally interpretable models.
Overall_performance: Difference between ground truth labels (y_test) and interpretable predictions (test_y_fit).
Fidelity: Difference between black-box model predictions (y_test_hat) and interpretable predictions (test_y_fit).
Instance-wise explanations: Qualitatively show the examples of instance-wise explanations.
1. Overall performance
We use Mean Absolute Error (MAE) as the metric for the overall performance. However, users can replace MAE with RMSE or other metrics.
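For instance, an RMSE variant can be computed directly with numpy (a sketch; RMSE is not necessarily part of the provided rllim_metrics API):
python
rmse = np.sqrt(np.mean((y_test - test_y_fit) ** 2))
print('Overall performance of RL-LIM in terms of RMSE: ' + str(np.round(rmse, 4)))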
End of explanation
# Black-box model predictions
y_test_hat = bb_model.predict(x_test)
# Fidelity in terms of MAE
mae = fidelity_metrics (y_test_hat, test_y_fit, metric='mae')
print('Fidelity of RL-LIM in terms of MAE: ' + str(np.round(mae, 4)))
# Fidelity in terms of R2 Score
r2 = fidelity_metrics (y_test_hat, test_y_fit, metric='r2')
print('Fidelity of RL-LIM in terms of R2 Score: ' + str(np.round(r2, 4)))
Explanation: 2. Fidelity
We use R2 score and Mean Absolute Error (MAE) as the metrics for the fidelity.
End of explanation
# Local explanations of n samples
n = 5
local_explanations = test_coef[:n, :]
# Make pandas dataframe
final_col_names = np.concatenate((np.asarray(['intercept']), col_names), axis = 0)
pd.DataFrame(data=local_explanations, index=range(n), columns=final_col_names)
Explanation: 3. Instance-wise explanations
We qualitatively demonstrate the local explanations of 5 testing samples using the fitted coefficients of locally interpretable model (Ridge regression).
To run this cell, the interpretable model must have intercept_ and coef_ as the subfunctions. Here, intercept and coef represent the fitted locally interpretable model's intercept and coefficients.
End of explanation |
4,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-2', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-2
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
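For illustration only, a filled-in authors cell might look like the lines below; the name and email are placeholders, not the actual document authors.
# Hypothetical example - replace with the real author(s)
DOC.set_author("Jane Doe", "jane.doe@example.org")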
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
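When the document is complete, the status in the cell above can simply be switched from 0 to 1, following the 0/1 convention stated there; a one-line sketch:
# Illustrative only: publish once all required properties are filled in
DOC.set_publication_status(1)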
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
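As an illustration of how a free-text STRING property is completed, the preceding cell would gain a single DOC.set_value call after its set_id line; the overview text below is a placeholder, not taken from the NIMS-KMA documentation.
# Hypothetical fill-in for a STRING (1.1) property
DOC.set_value("Land surface scheme with prognostic soil moisture, soil temperature, snow and vegetation phenology.")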
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
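For a multi-valued ENUM (cardinality 0.N) such as this one, a plausible completion is to call DOC.set_value once per selected choice, each value copied verbatim from the list in the cell above; this assumes, as the cardinality suggests, that repeated calls accumulate values rather than overwrite them.
# Hypothetical multi-value fill-in
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")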
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
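BOOLEAN properties take a bare Python boolean rather than a string; a hypothetical completion of the cell above:
# Hypothetical fill-in for a BOOLEAN property
DOC.set_value(True)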
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
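INTEGER properties likewise take a bare number; the value below is a placeholder, not the model's actual time step.
# Hypothetical fill-in for an INTEGER property
DOC.set_value(1800)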
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
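A single-valued ENUM (cardinality 1.1) takes exactly one of the strings listed in the cell above, copied verbatim; which scheme applies here is not known, so the choice below is purely illustrative.
# Hypothetical single-choice ENUM fill-in
DOC.set_value("Explicit diffusion")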
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
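Optional (cardinality 0.N) enumerations may be left unset entirely, or filled with any subset of the listed choices; a hypothetical two-value completion:
# Hypothetical optional multi-value fill-in
DOC.set_value("Gravity drainage")
DOC.set_value("Baseflow from groundwater")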
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
4,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Polyglot Unconference
This notebook holds a project conducting data analysis and visualization of the 2017 Polyglot Vancouver Un-Conference.
See the README in this repository for background information.
Session 05 - "Kotlin"
This was an introduction to the Kotlin programming language, that was both tightly controlled by the host and encouraging of questions.
There were several people that had either pointed questions or some experience with Kotlin (and the language(s) that the questioners were experienced with), which led to a fairly egalitarian discussion... at least compared to some of the other sessions.
Python imports
Step1: Reading the Data
Step2: Sanitizing the Data
As we can see, some of our data is stored in a non-numerical format which makes it difficult to perform the maths upon.
Let's clean it up.
Step3: Analysis and Visualization (V1)
Let's do some really basic passes at the data before we run some mathematical computations on it, just to get a better sense of where it stands at the moment.
Step4: Analysis and Visualization (V2)
As per the methodology in the first notebook, for the sake of mapping the actual conversational flow amongst the participants, I am going to run these analyses and visualizations again while removing the hosts...
Step5: this is getting there...
Algebraic Analysis
Now let's step into some deeper (but probably still naive) analysis based off of my rudimentary understanding of Data Science! | Python Code:
# Imports
import sys
import pandas as pd
import csv
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (20.0, 10.0)
# %load util.py
#!/usr/bin/python
# Util file to import in all of the notebooks to allow for easy code re-use
# Calculate Percent of Attendees that did not speak
def percent_silent(df):
total = len(df)
silent = 0
for row in df.iteritems():
if row[1] == 0:
silent = silent + 1
percent = {}
percent['TOTAL'] = total
percent['SILENT'] = silent
percent['VERBOSE'] = total - silent
return percent
# Calculate Percent of Attendees that left
def percent_left(df):
total = len(df)
left = 0
for row in df.iteritems():
if row[1] == 0:
left = left + 1
percent = {}
percent['TOTAL'] = total
percent['LEFT'] = left
percent['STAYED'] = total - left
return percent
# Calculate Percent of Attendees along gender
def percent_gender(df):
total = len(df)
female = 0
for row in df.iteritems():
if row[1] == 1:
female = female + 1
percent = {}
percent['TOTAL'] = total
percent['FEMALE'] = female
percent['MALE'] = total - female
return percent
# Calculate Percent of Talking points by GENDER
def percent_talking_gender(df):
total = 0
male = 0
female = 0
for talks, gender in df.itertuples(index=False):
if talks > 0:
total = total + 1
if gender == 0:
male = male + 1
elif gender == 1:
female = female + 1
percent = {}
percent['TOTAL'] = total
percent['FEMALE'] = female
percent['MALE'] = male
return percent
Explanation: Polyglot Unconference
This notebook holds a project conducting data analysis and visualization of the 2017 Polyglot Vancouver Un-Conference.
See the README in this repository for background information.
Session 05 - "Kotlin"
This was an introduction to the Kotlin programming language, that was both tightly controlled by the host and encouraging of questions.
There were several people that had either pointed questions or some experience with Kotlin (and the language(s) that the questioners were experienced with), which led to a fairly egalitarian discussion... at least compared to some of the other sessions.
Python imports
End of explanation
# Read
data = pd.read_csv('data/5_kotlin.csv')
# Display
data
Explanation: Reading the Data
End of explanation
# Convert GENDER to Binary (sorry, i know...)
data.loc[data["GENDER"] == "M", "GENDER"] = 0
data.loc[data["GENDER"] == "F", "GENDER"] = 1
# Convert STAYED to 1 and Left/Late to 0
data.loc[data["STAYED"] == "Y", "STAYED"] = 1
data.loc[data["STAYED"] == "N", "STAYED"] = 0
data.loc[data["STAYED"] == "L", "STAYED"] = 0
# We should now see the data in numeric values
data
Explanation: Sanitizing the Data
As we can see, some of our data is stored in a non-numerical format which makes it difficult to perform the maths upon.
Let's clean it up.
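For reference, the same recoding can be written more compactly with pandas' map — a sketch of an alternative to the .loc assignments above, not an extra step to run on top of them:
# hypothetical alternative: map the category codes directly
data["GENDER"] = data["GENDER"].map({"M": 0, "F": 1})
data["STAYED"] = data["STAYED"].map({"Y": 1, "N": 0, "L": 0})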
End of explanation
# Run Describe to give us some basic Min/Max/Mean/Std values
data.describe()
# Run Value_Counts in order to see some basic grouping by attribute
vc_talks = data['TALKS'].value_counts()
vc_talks
vc_gender = data['GENDER'].value_counts()
vc_gender
vc_stayed = data['STAYED'].value_counts()
vc_stayed
# Now let's do some basic plotting with MatPlotLib
data.plot()
data.plot(kind='bar')
fig1, ax1 = plt.subplots()
ax1.pie(data['TALKS'], autopct='%1.f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
Explanation: Analysis and Visualization (V1)
Let's do some really basic passes at the data before we run some mathematical computations on it, just to get a better sense of where it stands at the moment.
End of explanation
data_hostless = data.drop(data.index[[0]])
data_hostless.head()
data_hostless.describe()
dh_vc_talks = data_hostless['TALKS'].value_counts()
dh_vc_talks
dh_vc_gender = data_hostless['GENDER'].value_counts()
dh_vc_gender
dh_vc_stayed = data_hostless['STAYED'].value_counts()
dh_vc_stayed
data_hostless.plot()
data_hostless.plot(kind='bar')
fig1, ax1 = plt.subplots()
ax1.pie(data_hostless['TALKS'], autopct='%1.f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
Explanation: Analysis and Visualization (V2)
As per the methodology in the first notebook, for the sake of mapping the actual conversational flow amongst the participants, I am going to run these analyses and visualizations again while removing the hosts...
End of explanation
# Percentage of attendees that were silent during the talk
silent = percent_silent(data['TALKS'])
silent
fig1, ax1 = plt.subplots()
sizes = [silent['SILENT'], silent['VERBOSE']]
labels = 'Silent', 'Talked'
explode = (0.05, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Percentage of attendees that left early during the talk
left = percent_left(data['STAYED'])
left
fig1, ax1 = plt.subplots()
sizes = [left['LEFT'], left['STAYED']]
labels = 'Left', 'Stayed'
explode = (0.1, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Percentage of attendees that were Male vs. Female (see notes above around methodology)
gender = percent_gender(data['GENDER'])
gender
fig1, ax1 = plt.subplots()
sizes = [gender['FEMALE'], gender['MALE']]
labels = 'Female', 'Male'
explode = (0.1, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
# Calculate Percent of Talking points by GENDER
distribution = percent_talking_gender(data[['TALKS','GENDER']])
distribution
fig1, ax1 = plt.subplots()
sizes = [distribution['FEMALE'], distribution['MALE']]
labels = 'Female Speakers', 'Male Speakers'
explode = (0.1, 0)
ax1.pie(sizes, explode=explode, labels=labels, autopct='%1.0f%%', shadow=True, startangle=90)
ax1.axis('equal')
plt.show()
Explanation: this is getting there...
Algebraic Analysis
Now let's step into some deeper (but probably still naive) analysis based off of my rudimentary understanding of Data Science! :D
End of explanation |
4,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Greatest Common Divisor
Now, the greatest common divisor (GCD) is the largest natural number $d$ that divides $a$ and $b$ in a fraction $\frac{a}{b}$ without a remainder.
For example, the GCD of the fraction $\frac{6}{9}$ is 3
Step1: Euclidean Algorithm
The Greek mathematician Euclid described this algorithm approx. 300 BC in his Elements.
It is still being used in the field of number theory and consequently cryptography to reduce fractions to their simplest form. For additional, interesting details and the proof by Gabriel Lamé in 1844, please take a look at the excellent Wikipedia page.
The idea behind the more efficient version, the Euclidean division (in contrast to the subtraction-based) approach is the following
Step2: The Euclidean GCD algorithm will reduce either of the numbers at least by half at each step (see the Wikipedia page for details). Thus, the time complexity of this algorithm is
$O(log_2 b) + O(log_2 a) = O(log_2 n),$
where $n = max(a, b)$. In contrast, our previous, naive implementation has an upper bound of $O(n)$.
Since Python is "notoriously bad" at recursion, let us implement a dynamic version of this algorithm. (One problem of recursion via Python is the limited stack size, and the other one is that tail recursion optimization is not implemented.)
Step3: Given an arbitrary fraction $\frac{a}{b}$, let us use the %timeit module for a quick comparison | Python Code:
def naive_gcd(a, b):
    # brute force: test every candidate divisor d from 1 up to min(a, b)
    gcd = 0
    if a < b:
        n = a
    else:
        n = b
    for d in range(1, n + 1):
        if not a % d and not b % d:
            gcd = d
    return gcd
print('In: 1/1,', 'Out:', naive_gcd(1, 1))
print('In: 1/2,', 'Out:', naive_gcd(1, 2))
print('In: 3/9,', 'Out:', naive_gcd(3, 9))
print('In: 12/24,', 'Out:', naive_gcd(12, 24))
print('In: 12/26,', 'Out:', naive_gcd(12, 26))
print('In: 26/12,', 'Out:', naive_gcd(26, 12))
print('In: 13/17,', 'Out:', naive_gcd(13, 17))
Explanation: Greatest Common Divisor
Now, the greatest common divisor (GCD) is the largest natural number $d$ that divides $a$ and $b$ in a fraction $\frac{a}{b}$ without a remainder.
For example, the GCD of the fraction $\frac{6}{9}$ is 3: $$\frac{6/3}{9/3} = \frac{2}{3}$$
First, let us start with the "intuitive," yet naive, brute-force implementation. Here, we simply iterate through all integers from 1 to min(a, b) to find the largest common divisor of the fraction $\frac{a}{b}$ or equivalently $\frac{b}{a}$.
End of explanation
def eucl_gcd_recurse(a, b):
if not b:
return a
else:
return eucl_gcd_recurse(b, a % b)
print('In: 1/1,', 'Out:', eucl_gcd_recurse(1, 1))
print('In: 1/2,', 'Out:', eucl_gcd_recurse(1, 2))
print('In: 3/9,', 'Out:', eucl_gcd_recurse(3, 9))
print('In: 12/24,', 'Out:', eucl_gcd_recurse(12, 24))
print('In: 12/26,', 'Out:', eucl_gcd_recurse(12, 26))
print('In: 26/12,', 'Out:', eucl_gcd_recurse(26, 12))
print('In: 13/17,', 'Out:', eucl_gcd_recurse(13, 17))
Explanation: Euclidean Algorithm
The Greek mathematician Euclid described this algorithm approx. 300 BC in his Elements.
It is still being used in the field of number theory and consequently cryptography to reduce fractions to their simplest form. For additional, interesting details and the proof by Gabriel Lamé in 1844, please take a look at the excellent Wikipedia page.
The idea behind the more efficient version, the Euclidean division (in contrast to the subtraction-based) approach is the following:
Given that we want to compute gcd(a, b), the greatest common divisor of the fraction $\frac{a}{b}$, we first compute the remainder of the division $\frac{a}{b}$; we call this remainder a'. Then, we compute gcd(b, a'). We repeat this procedure in a recursive manner until b=0.
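A short worked trace makes the recursion concrete (using one of the test fractions in this notebook):
# eucl_gcd_recurse(26, 12) -> eucl_gcd_recurse(12, 26 % 12) = eucl_gcd_recurse(12, 2)
#                          -> eucl_gcd_recurse(2, 12 % 2)   = eucl_gcd_recurse(2, 0)
#                          -> b == 0, so the result is 2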
End of explanation
def eucl_gcd_dynamic(a, b):
while b:
tmp = b
b = a % b
a = tmp
return a
print('In: 1/1,', 'Out:', eucl_gcd_dynamic(1, 1))
print('In: 1/2,', 'Out:', eucl_gcd_dynamic(1, 2))
print('In: 3/9,', 'Out:', eucl_gcd_dynamic(3, 9))
print('In: 12/24,', 'Out:', eucl_gcd_dynamic(12, 24))
print('In: 12/26,', 'Out:', eucl_gcd_dynamic(12, 26))
print('In: 26/12,', 'Out:', eucl_gcd_dynamic(26, 12))
print('In: 13/17,', 'Out:', eucl_gcd_dynamic(13, 17))
Explanation: The Euclidean GCD algorithm will reduce either of the numbers at least by half at each step (see the Wikipedia page for details). Thus, the time complexity of this algorithm is
$O(log_2 b) + O(log_2 a) = O(log_2 n),$
where $n = max(a, b)$. In contrast, our previous, naive implementation has an upper bound of $O(n)$.
Since Python is "notoriously bad" at recursion, let us implement a dynamic version of this algorithm. (One problem of recursion via Python is the limited stack size, and the other one is that tail recursion optimization is not implemented.)
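As a quick sanity check — a small sketch assuming Python 3.5+, where math.gcd is available — all three implementations can be compared against the standard library before timing them:
import math
# cross-check on a few of the fractions used above
for a, b in [(3, 9), (12, 24), (12, 26), (26, 12), (13, 17)]:
    assert naive_gcd(a, b) == eucl_gcd_recurse(a, b) == eucl_gcd_dynamic(a, b) == math.gcd(a, b)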
End of explanation
a = 12313432
b = 34234232342
%timeit -r 3 -n 5 naive_gcd(a, b)
%timeit -r 3 -n 5 eucl_gcd_recurse(a, b)
%timeit -r 3 -n 5 eucl_gcd_dynamic(a, b)
Explanation: Given an arbitrary fraction $\frac{a}{b}$, let us use the %timeit module for a quick comparison:
End of explanation |
4,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Combine an MLP with a GP
Modified from
https
Step1: Data
Step2: Deep kernel
We transform the (1d) input using an MLP and then pass it to a Matern kernel.
Step3: Shallow kernel | Python Code:
%%capture
import os
try:
from tinygp import kernels, transforms, GaussianProcess
except ModuleNotFoundError:
%pip install -qq tinygp
from tinygp import kernels, transforms, GaussianProcess
try:
import flax.linen as nn
except ModuleNotFoundError:
%pip install -qq flax
import flax.linen as nn
from flax.linen.initializers import zeros
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
try:
import probml_utils as pml
except ModuleNotFoundError:
%pip install git+https://github.com/probml/probml-utils.git
import probml_utils as pml
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import jax
import jax.numpy as jnp
from jax.config import config
config.update("jax_enable_x64", True)
Explanation: Combine an MLP with a GP
Modified from
https://tinygp.readthedocs.io/en/latest/tutorials/transforms.html
End of explanation
pml.latexify(width_scale_factor=2)
markersize = 3 if pml.is_latexify_enabled() else 6
random = np.random.default_rng(567)
noise = 0.1
x = np.sort(random.uniform(-1, 1, 100))
def true_fn(x):
return 2 * (x > 0) - 1
y = true_fn(x) + random.normal(0.0, noise, len(x))
t = np.linspace(-1.5, 1.5, 500)
plt.plot(t, true_fn(t), "k", lw=1, label="truth")
plt.plot(x, y, ".k", markersize=markersize, label="data")
plt.xlim(-1.5, 1.5)
plt.ylim(-1.3, 1.3)
plt.xlabel("$x$")
plt.ylabel("$y$")
_ = plt.legend()
sns.despine()
pml.savefig("gp-dkl-data")
Explanation: Data
End of explanation
# Define a small neural network used to non-linearly transform the input data in our model
class Transformer(nn.Module):
@nn.compact
def __call__(self, x):
x = nn.Dense(features=15)(x)
x = nn.relu(x)
x = nn.Dense(features=10)(x)
x = nn.relu(x)
x = nn.Dense(features=1)(x)
return x
class GPdeep(nn.Module):
@nn.compact
def __call__(self, x, y, t):
# Set up a typical Matern-3/2 kernel
log_sigma = self.param("log_sigma", zeros, ())
log_rho = self.param("log_rho", zeros, ())
log_jitter = self.param("log_jitter", zeros, ())
base_kernel = jnp.exp(2 * log_sigma) * kernels.Matern32(jnp.exp(log_rho))
# Define a custom transform to pass the input coordinates through our `Transformer`
# network from above
transform = Transformer()
kernel = transforms.Transform(transform, base_kernel)
# Evaluate and return the GP negative log likelihood as usual
gp = GaussianProcess(kernel, x[:, None], diag=jnp.exp(2 * log_jitter))
pred_gp = gp.condition(y, t[:, None]).gp
return -gp.log_probability(y), (pred_gp.loc, pred_gp.variance)
# Define and train the model
def loss(params):
return model.apply(params, x, y, t)[0]
model = GPdeep()
params = model.init(jax.random.PRNGKey(1234), x, y, t)
tx = optax.sgd(learning_rate=1e-4)
opt_state = tx.init(params)
loss_grad_fn = jax.jit(jax.value_and_grad(loss))
for i in range(1000):
loss_val, grads = loss_grad_fn(params)
updates, opt_state = tx.update(grads, opt_state)
params = optax.apply_updates(params, updates)
# Plot the results and compare to the true model
plt.figure()
mu, var = model.apply(params, x, y, t)[1]
plt.plot(t, true_fn(t), "k", lw=1, label="truth")
plt.plot(x, y, ".k", markersize=markersize, label="data")
plt.plot(t, mu)
plt.fill_between(t, mu + np.sqrt(var), mu - np.sqrt(var), alpha=0.5, label="model")
plt.xlim(-1.5, 1.5)
plt.ylim(-1.3, 1.3)
plt.xlabel("$x$")
plt.ylabel("$y$")
_ = plt.legend()
sns.despine()
pml.savefig("gp-dkl-deep")
Explanation: Deep kernel
We transform the (1d) input using an MLP and then pass it to a Matern kernel.
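One quick way to see what the MLP adds — a sketch that only assumes the params pytree produced by the training loop above — is to print the parameter tree, which now holds the Dense-layer weights alongside the kernel hyperparameters:
# shape of every learned array (kernel hyperparameters plus the MLP weights)
print(jax.tree_util.tree_map(jnp.shape, params))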
End of explanation
class GPshallow(nn.Module):
@nn.compact
def __call__(self, x, y, t):
# Set up a typical Matern-3/2 kernel
log_sigma = self.param("log_sigma", zeros, ())
log_rho = self.param("log_rho", zeros, ())
log_jitter = self.param("log_jitter", zeros, ())
base_kernel = jnp.exp(2 * log_sigma) * kernels.Matern32(jnp.exp(log_rho))
# Evaluate and return the GP negative log likelihood as usual
gp = GaussianProcess(base_kernel, x[:, None], diag=jnp.exp(2 * log_jitter))
pred_gp = gp.condition(y, t[:, None]).gp
return -gp.log_probability(y), (pred_gp.loc, pred_gp.variance)
model = GPshallow()
params = model.init(jax.random.PRNGKey(1234), x, y, t)
tx = optax.sgd(learning_rate=1e-4)
opt_state = tx.init(params)
loss_grad_fn = jax.jit(jax.value_and_grad(loss))
for i in range(1000):
loss_val, grads = loss_grad_fn(params)
updates, opt_state = tx.update(grads, opt_state)
params = optax.apply_updates(params, updates)
# Plot the results and compare to the true model
plt.figure()
mu, var = model.apply(params, x, y, t)[1]
plt.plot(t, true_fn(t), "k", lw=1, label="truth")
plt.plot(x, y, ".k", markersize=markersize, label="data")
plt.plot(t, mu)
plt.fill_between(t, mu + np.sqrt(var), mu - np.sqrt(var), alpha=0.5, label="model")
plt.xlim(-1.5, 1.5)
plt.ylim(-1.3, 1.3)
plt.xlabel("$x$")
plt.ylabel("$y$")
_ = plt.legend()
sns.despine()
pml.savefig("gp-dkl-shallow")
Explanation: Shallow kernel
End of explanation |
4,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Load blast hits
Step1: 2. Process blastp results
2.1 Extract ORF stats from fasta file
Step2: 2.2 Annotate blast hits with orf stats
Step3: 2.3 Extract best hit for each ORF ( q_cov > 0.8 and pct_id > 40% and e-value < 1)
Define these resulting 7 ORFs as the core ORFs for the d9539 assembly.
The homology with the Metahit gene catalogue is very good, and considering the catalogue was curated
on a big set of gut metagenomes, it is reasonable to assume that these putative proteins would come
from our detected circular putative virus/phage genome
Two extra notes
Step4: 2.4 Extract selected orfs for further analysis
Step5: 2.4.2 Extract fasta
Step6: 2.4.3 Write out filtered blast hits | Python Code:
#Load blast hits
blastp_hits = pd.read_csv("2_blastp_hits.csv")
blastp_hits.head()
#Filter out Metahit 2010 hits, keep only Metahit 2014
blastp_hits = blastp_hits[blastp_hits.db != "metahit_pep"]
Explanation: 1. Load blast hits
End of explanation
#Assumes the Fasta file comes with the header format of EMBOSS getorf
fh = open("1_orf/d9539_asm_v1.2_orf.fa")
header_regex = re.compile(r">([^ ]+?) \[([0-9]+) - ([0-9]+)\]")
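# getorf headers look like ">NAME [start - end]"; reverse-strand ORFs additionally end with "(REVERSE SENSE)"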
orf_stats = []
for line in fh:
header_match = header_regex.match(line)
if header_match:
is_reverse = line.rstrip(" \n").endswith("(REVERSE SENSE)")
q_id = header_match.group(1)
#Position in contig
q_cds_start = int(header_match.group(2) if not is_reverse else header_match.group(3))
q_cds_end = int(header_match.group(3) if not is_reverse else header_match.group(2))
#Length of orf in aminoacids
q_len = (q_cds_end - q_cds_start + 1) / 3
orf_stats.append( pd.Series(data=[q_id,q_len,q_cds_start,q_cds_end,("-" if is_reverse else "+")],
index=["q_id","orf_len","q_cds_start","q_cds_end","strand"]))
orf_stats_df = pd.DataFrame(orf_stats)
print(orf_stats_df.shape)
orf_stats_df.head()
#Write orf stats to csv
orf_stats_df.to_csv("1_orf/orf_stats.csv",index=False)
Explanation: 2. Process blastp results
2.1 Extract ORF stats from fasta file
End of explanation
blastp_hits_annot = blastp_hits.merge(orf_stats_df,left_on="query_id",right_on="q_id")
#Add query coverage calculation
blastp_hits_annot["q_cov_calc"] = (blastp_hits_annot["q_end"] - blastp_hits_annot["q_start"] + 1 ) * 100 / blastp_hits_annot["q_len"]
blastp_hits_annot.sort_values(by="bitscore",ascending=False).head()
assert blastp_hits_annot.shape[0] == blastp_hits.shape[0]
Explanation: 2.2 Annotate blast hits with orf stats
End of explanation
! mkdir -p 4_msa_prots
#Get best hit (highest bitscore) for each ORF
gb = blastp_hits_annot[ (blastp_hits_annot.q_cov > 80) & (blastp_hits_annot.pct_id > 40) & (blastp_hits_annot.e_value < 1) ].groupby("query_id")
reliable_orfs = pd.DataFrame( hits.loc[hits.bitscore.idxmax()] for q_id,hits in gb )[["query_id","db","subject_id","pct_id","q_cov","q_len",
"bitscore","e_value","strand","q_cds_start","q_cds_end"]]
reliable_orfs = reliable_orfs.sort_values(by="q_cds_start",ascending=True)
reliable_orfs
Explanation: 2.3 Extract best hit for each ORF ( q_cov > 0.8 and pct_id > 40% and e-value < 1)
Define these resulting 7 ORFs as the core ORFs for the d9539 assembly.
The homology with the Metahit gene catalogue is very good, and considering the catalogue was curated
on a big set of gut metagenomes, it is reasonable to assume that these putative proteins would come
from our detected circular putative virus/phage genome.
Two extra notes:
* Additionally, considering only these 7 ORFs, almost the entire genomic region is covered, with very few non-coding regions, still consistent with the hypothesis of a small viral genome which should be mainly coding.
* Also, even though the naive ORF finder detected putative ORFs in both positive and negative strands, the supported ORFs only occur in the positive strand. This could be an indication of a ssDNA or ssRNA virus.
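A minimal sanity check of those two claims — a sketch added for illustration — can be run directly on the table above:
# expect the 7 core ORFs, all on the positive strand
assert reliable_orfs.shape[0] == 7
assert (reliable_orfs.strand == "+").all()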
End of explanation
reliable_orfs["orf_id"] = ["orf{}".format(x) for x in range(1,reliable_orfs.shape[0]+1) ]
reliable_orfs["cds_len"] = reliable_orfs["q_cds_end"] - reliable_orfs["q_cds_start"] +1
reliable_orfs.sort_values(by="q_cds_start",ascending=True).to_csv("3_filtered_orfs/filt_orf_stats.csv",index=False,header=True)
reliable_orfs.sort_values(by="q_cds_start",ascending=True).to_csv("3_filtered_orfs/filt_orf_list.txt",index=False,header=False,columns=["query_id"])
Explanation: 2.4 Extract selected orfs for further analysis
End of explanation
! ~/utils/bin/seqtk subseq 1_orf/d9539_asm_v1.2_orf.fa 3_filtered_orfs/filt_orf_list.txt > 3_filtered_orfs/d9539_asm_v1.2_orf_filt.fa
Explanation: 2.4.2 Extract fasta
End of explanation
filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())]
filt_blastp_hits.to_csv("3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.csv")
filt_blastp_hits.head()
Explanation: 2.4.3 Write out filtered blast hits
End of explanation |
4,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Partial Differential Equations of Groundwater Flow
How to interpret the equations
If you are mathematically minded, then the groundwater flow equation by itself gives you a really good feel for how the fluid works. You recognise a diffusion equation and understand that it naturally comes about from a problem where the flux in some quantity is proportional to the gradient of this quantity. (In this case, mass or volume flux is proportional to the pressure gradient. In the case of thermal diffusion, the heat flux is proportional to the temperature gradient). You also understand how the problem becomes more complicated when the material properties vary from place to place.
For the less mathematically minded, remember that the Laplace equation above is the result of writing the rate of change of the pressure as a gradient of the flux.
$$ S \frac{\partial h}{\partial t} + H = -\frac{\partial }{\partial x}
\underbrace{\left( - K \frac{\partial h}{\partial x} \right)}_\text{flux} $$
Here is a simple picture to help you visualise why the gradient of the flux controls the build up of pressure (or temperature)
Step1: Let's set up a bunch of points for $P_i$ from the diagram we sketched earlier. We start making this $\sin(x)$ since we know that the derivative should be $\cos(x)$ and so we can easily check how well it works.
<img src="images/BlockFlux.png" height=160px>
Step2: How about the second derivative, since we are dealing with a diffusion equation ?
Step3: So, it seems we can calculate reasonably accurate gradients with our little-boxes approximation. We can now try to solve the time-evolution of the pressure with the same function and see what happens.
Note
Step4: We can start to obtain results which are clearly "right" in the sense that they evolve quantitatively the way the solutions are supposed to. They act like diffusion solutions – smoothing rapidly fluctuating (short wavelength) regions quickly.
Harder problems
What about if we let \(K\) vary in space ? We can guess how this will influence the solution, but we can also try it out.
Note in the solution below, we can also see how sensitive the answer can be to getting the numerical representation correct. Although this is a specialised topic, I would like you to see that these results always need to be tested against any analytic solutions you can find as it is very easy to calculate a lot of nonsense !
Try changing \(\Delta t\) and then \(\Delta x\) – is that what you expected ? | Python Code:
%matplotlib inline
# imports used by this cell and the cells below
import math
import numpy
from numpy import zeros, linspace
from matplotlib.pyplot import plot
import matplotlib.pyplot as plt
'''
This is a function to calculate the gradient at a point in a long line
of points. You feed it coordinates (X), values (H) and - optional -
boundary conditions. It returns the gradient.
'''
def gradx(X, H, leftbc=None, rightbc=None):
size = len(H)
gradP = zeros(size)
gradP[0] = (H[1] - H[0]) / ( X[1] - X[0] )
gradP[-1] = (H[-1] - H[-2]) / (X[-1] - X[-2])
if leftbc != None:
gradP[0] = leftbc
if rightbc != None:
gradP[-1] = rightbc
for i in range (1, size-1):
gradP[i] = (H[i+1] - H[i-1]) / ( X[i+1] - X[i-1] )
return gradP
Explanation: Partial Differential Equations of Groundwater Flow
How to interpret the equations
If you are mathematically minded, then the groundwater flow equation by itself gives you a really good feel for how the fluid works. You recognise a diffusion equation and understand that it naturally comes about from a problem where the flux in some quantity is proportional to the gradient of this quantity. (In this case, mass or volume flux is proportional to the pressure gradient. In the case of thermal diffusion, the heat flux is proportional to the temperature gradient). You also understand how the problem becomes more complicated when the material properties vary from place to place.
For the less mathematically minded, remember that the Laplace equation above is the result of writing the rate of change of the pressure as a gradient of the flux.
$$ S \frac{\partial h}{\partial t} + H = -\frac{\partial }{\partial x}
\underbrace{\left( - K \frac{\partial h}{\partial x} \right)}_\text{flux} $$
Here is a simple picture to help you visualise why the gradient of the flux controls the build up of pressure (or temperature): the level of fluid builds up if the rate at which fluid enters the volume (the mass flux), \(q_i\), is larger than the rate at which fluid leaves the volume, \(q_o\). The values of the fluxes top / bottom in this case are only different if there is a gradient in the flux (a difference between the values at two points !). The greater the difference, the faster the pressure in the volume will change.
<img src="images/BucketFlux.png" alt="Bucket" width="250">
If the fluxes don't change, then change of the level, \(\Delta h\) in this bucket over some interval, \(\Delta t\), is just proportional to the difference between the flux out and the flux in.
$$ \;\;\; \frac{\Delta h}{\Delta t} \propto q_o - q_i $$
Extension to 2D and 3D
The flux is a vector quantity (the flow has a specific direction), so in 2D and 3D problems, we have a vector balance of the flux in / out of the local volume in each direction.
$$ S \frac{\partial h}{\partial t} + H = -\frac{\partial }{\partial x}
\underbrace{\left( - K \frac{\partial h}{\partial x} \right)}_\textrm{x flux}
-\frac{\partial }{\partial y}
\left( - K \frac{\partial h}{\partial y} \right)
-\frac{\partial }{\partial z}
\left( - K \frac{\partial h}{\partial z} \right)
$$
Which simplifies to
$$ S\frac{\partial h}{\partial t} + H =
K \left( \frac{\partial^2 h}{\partial x^2} + \frac{\partial^2 h}{\partial y^2} + \frac{\partial^2 h}{\partial z^2} \right) \textit{ $\longrightarrow$ (3D, isotropic groundwater flow equation)}$$
\( K \) may not be isotropic — it can vary with direction. In fact this is often the case because the crust is formed under the influence of gravity which introduces layers and preferential directions for faults / joints. Under these circumstances, we have a tensor valued coefficient. Let's worry about that another time.
Finite Differences
Still working backwards !!
In deriving the equations above, we always think about a small volume of material, and we imagine it shrinking to an infinitessimal size. This gives us the differential equations which relate changes in some quantity to its gradients in the spatial directions and in time.
What if we don't shrink the little boxes to an infinitessimal size ?
In that case, we get an approximate version of what would happen in the real world because we are only looking at the gradients in a few places and averaging their effects over the whole of each box. If we choose just a few boxes, then we get a very rough approximation !
Look at this picture in which we pick a small number of discrete points in a line and draw boxes around each one (boxes that touch each other).
<img src="images/BlockFlux.png" height=160px>
$$ S \frac{\partial h}{\partial t} + H = -\frac{\partial }{\partial x}
\underbrace{\left( - K \frac{\partial h}{\partial x} \right)}_{q}
$$
$$ S \frac{\Delta h}{\Delta t} = - K \frac{\Delta q}{\Delta x}
\textit{$\longrightarrow$ ( approximate rate of change )}
$$
$$ q = \frac{\Delta h}{\Delta x} \textit{$\longrightarrow$ ( approximate flux )} $$
We use \(\Delta \) here to indicate that we are not going to shrink the lengths to find the infinitessimal limit, but are going to leave the boxes at a finite size. If we have some initial values of \(h\) at the start of the calculation, we can calculate approximate fluxes on each of the vertical planes from the expression above. That means we can compute the approximate change in \(h\) with time at each of the points in the diagram.
We can only calculate the changes at this finite set of points, but, if we write this into a simple computer program, we can calculate this at an awful lot of points. Let's see if it works !
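Written out for a single interior point \(i\) with uniform spacing, each update is just the line below — a sketch; the loops later in this notebook do the equivalent thing by applying gradx() twice:
# h[i] <- h[i] + deltat * K * (h[i+1] - 2*h[i] + h[i-1]) / deltax**2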
End of explanation
import numpy
xP = linspace(0,2*math.pi,100)
hP = numpy.sin(xP)
gradx_hP = gradx(xP,hP)
pass
plot(xP / math.pi, hP)
plot(xP / math.pi, numpy.cos(xP), marker='+')
plot(xP / math.pi, gradx_hP, linewidth=2)
hP2 = numpy.sin(xP) + 0.1 * numpy.sin(xP*3.0)
gradx_hP2 = gradx(xP,hP2)
plot(xP / math.pi, hP2)
plot(xP / math.pi, numpy.cos(xP) + 0.3 * numpy.cos(xP*3.0), marker='+')
plot(xP / math.pi, gradx_hP2, linewidth=2)
Explanation: Let's set up a bunch of points for $P_i$ from the diagram we sketched earlier. We start making this $\sin(x)$ since we know that the derivative should be $\cos(x)$ and so we can easily check how well it works.
<img src="images/BlockFlux.png" height=160px>
End of explanation
# The rate of change with time is proportional to the gradient of the gradient !
grad2x_hP = gradx(xP,gradx_hP)
grad2x_hP2 = gradx(xP,gradx_hP2)
f, (ax1, ax2, ax3) = plt.subplots(1, 3, sharex=True)
f.set_size_inches(15,5)
ax1.plot(xP / math.pi, hP)
ax1.plot(xP / math.pi, -numpy.sin(xP), marker='+')
ax1.plot(xP / math.pi, grad2x_hP, linewidth=2)
ax1.set_title( "Value & 2nd derivative $\sin(x)$")
ax2.plot(xP / math.pi, hP2)
ax2.plot(xP / math.pi, -numpy.sin(xP) - 0.9 * numpy.sin(xP*3.0), marker='+')
ax2.plot(xP / math.pi, grad2x_hP2, linewidth=2)
ax2.set_title( "Value & 2nd derivative $\sin(x) + 0.1 \sin(3x)$")
ax3.plot(xP / math.pi, -numpy.sin(xP) - grad2x_hP, marker=".")
ax3.plot(xP / math.pi, -numpy.sin(xP) - 0.9 * numpy.sin(xP*3.0) - grad2x_hP2, marker="x")
ax3.set_ylim(-0.01, 0.1)
ax3.set_title( "Error")
Explanation: How about the second derivative, since we are dealing with a diffusion equation ?
End of explanation
xP = linspace(0,math.pi,100)
hP = numpy.sin(xP)
hP2 = zeros(len(xP))
# random initial wavelengths
for k in range(1,25):
hP2 = hP2 + numpy.random.rand() * numpy.sin(xP * float(k))
hP2 = hP2 / max(hP2)
gradx_hP = gradx(xP,hP)
gradx_hP2 = gradx(xP,hP2)
grad2x_hP = gradx(xP,gradx_hP)
grad2x_hP2 = gradx(xP,gradx_hP2)
deltat = 0.002
f, (ax1, ax2) = plt.subplots(1, 2, sharex=True)
f.set_size_inches(16,6)
ax1.set_title( "Smoothly varying")
ax2.set_title( "Wildly varying")
## Timestepping loop
for time in range(0,500):
if(time%50==0):
ax1.plot(xP / math.pi,hP)
ax2.plot(xP / math.pi,hP2)
hP = hP + deltat * grad2x_hP
hP2 = hP2 + deltat * grad2x_hP2
# Boundary conditions
hP[0] = hP[-1] = 0.0
hP2[0] = hP2[-1] = 0.0
# compute new gradients
gradx_hP = gradx(xP,hP)
gradx_hP2 = gradx(xP,hP2)
grad2x_hP = gradx(xP,gradx_hP)
grad2x_hP2 = gradx(xP,gradx_hP2)
Explanation: So, it seems we can calculate reasonably accurate gradients with our little-boxes approximation. We can now try to solve the time-evolution of the pressure with the same function and see what happens.
Note: In the previous plots, errors are less than 1% in the second derivative away from the boundary but nearer 5% there. On the other hand, we haven't even tried to do a clever job of building the interpolation / derivatives.
End of explanation
# delta x = 1/100 ... also try 1/250
xP = linspace(0,math.pi,100)
# Check out what happens if you change this value from 0.0020 to 0.0025
deltat = 0.0025
hP = numpy.sin(xP)
K = zeros(len(xP))
# random initial wavelengths
for k in range(1,10,1):
K = K + numpy.random.rand() * numpy.sin(xP * float(k)) / k
K = K - min(K)
K = 1.0 - 0.9 * K / max(K)
gradx_hP = gradx(xP,hP)
grad2x_hP = gradx(xP,gradx_hP*K)
f, (ax1, ax2) = plt.subplots(1, 2, sharex=True)
f.set_size_inches(16,6)
ax1.plot(xP / math.pi, K, marker=".")
ax1.set_title( "Coefficient $K$ varies with position")
## Timestepping loop
for time in range(0,500):
hP = hP + deltat * grad2x_hP
hP2 = hP2 + deltat * grad2x_hP2
# Boundary conditions
hP[0] = hP[-1] = 0.0
# compute new gradients
gradx_hP = gradx(xP,hP)
grad2x_hP = gradx(xP,gradx_hP*K)
if(time%100==0):
ax2.plot(xP / math.pi,hP)
Explanation: We can start to obtain results which are clearly "right" in the sense that they evolve quantitatively the way the solutions are supposed to. They act like diffusion solutions – smoothing rapidly fluctuating (short wavelength) regions quickly.
Harder problems
What about if we let \(K\) vary in space ? We can guess how this will influence the solution, but we can also try it out.
Note in the solution below, we can also see how sensitive the answer can be to getting the numerical representation correct. Although this is a specialised topic, I would like you to see that these results always need to be tested against any analytic solutions you can find as it is very easy to calculate a lot of nonsense !
Try changing \(\Delta t\) and then \(\Delta x\) – is that what you expected ?
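A rough stability estimate — a back-of-the-envelope sketch, not part of the original analysis — explains why 0.0025 is the breaking point: applying gradx() twice builds the second derivative on a stencil of width \(2\Delta x\), so the usual explicit-diffusion limit \(\Delta t \lesssim (2\Delta x)^2 / 2K\) applies.
# with dx = pi/99 and K up to 1.0 this gives roughly 0.002, so deltat = 0.0025 misbehaves
dx = math.pi / 99
print("approximate stability limit:", (2 * dx) ** 2 / 2.0)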
End of explanation |
4,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Visualization Landscape
Step1: Note
Using cleaned data from Data Cleaning Notebook. See Notebook for details.
Step2: BQPlot
Examples here are shamelessly stolen from the amazing
Step3: Quantile cuts | Python Code:
from IPython.lib.display import YouTubeVideo
YouTubeVideo("FytuB8nFHPQ", width=400, height=300)
from __future__ import absolute_import, division, print_function
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
sns.set_style('whitegrid')
# sns.set_style('darkgrid')
plt.rcParams['figure.figsize'] = 12, 8 # plotsize
import numpy as np
import pandas as pd
from pandas.tools.plotting import scatter_matrix
from sklearn.datasets import load_boston
import warnings
warnings.filterwarnings('ignore')
Explanation: Python Visualization Landscape
End of explanation
df = pd.read_csv("../data/coal_prod_cleaned.csv")
df.head()
plt.scatter(df['Average_Employees'],
df.Labor_Hours)
plt.xlabel("Number of Employees")
plt.ylabel("Total Hours Worked");
Explanation: Note
Using cleaned data from Data Cleaning Notebook. See Notebook for details.
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo("uHPcshgTotE", width=560, height=315)
import bqplot as bq
sample_df = df.sample(100)
x_sc = bq.LinearScale()
y_sc = bq.LinearScale()
ax_x = bq.Axis(label='Number of Employees', scale=x_sc, grid_lines='solid')
ax_y = bq.Axis(label='Total Hours Worked', scale=y_sc, orientation='vertical', grid_lines='solid')
line = bq.Scatter(x=sample_df.Average_Employees,
y=sample_df.Labor_Hours,
scales={'x': x_sc, 'y': y_sc},
interactions={'click': 'select'},
selected_style={'opacity': 1.0, 'fill': 'DarkOrange', 'stroke': 'Red'},
unselected_style={'opacity': 0.5})
fig = bq.Figure(axes=[ax_x, ax_y], marks=[line], title='BQPlot Example')
fig
line.selected
line.selected = [23, 3]
import bqplot.pyplot as plt
import numpy as np
x = np.linspace(0, 2, 50)
y = x**2
fig = plt.figure()
scatter = plt.scatter(x, y)
plt.show()
fig.animation_duration = 5000
scatter.y = x**.5
scatter.selected_style = {'stroke':'red', 'fill': 'orange'}
plt.brush_selector();
scatter.selected
scatter.selected = [1,2,10,40]
import ipyvolume as ipv
import numpy as np
ipv.example_ylm()
N = 1000
x, y, z = np.random.random((3, N))
fig = ipv.figure()
scatter = ipv.scatter(x, y, z, marker='box')
ipv.show()
scatter.x = scatter.x + 0.1
scatter.color = "green"
scatter.size = 5
scatter.color = np.random.random((N,3))
scatter.size = 2
ex = ipv.datasets.animated_stream.fetch().data
ex.shape
ex[:, ::, ::4].shape
ipv.figure()
ipv.style.use('dark')
quiver = ipv.quiver(*ipv.datasets.animated_stream.fetch().data[:,::,::4], size=5)
ipv.animation_control(quiver, interval=200)
ipv.show()
ipv.style.use('light')
ipv.style.use('light')
quiver.geo = "cat"
N = 1000*1000
x, y, z = np.random.random((3, N)).astype('f4')
ipv.figure()
s = ipv.scatter(x, y, z, size=0.2)
ipv.show()
ipv.save("bqplot.html", )
!open bqplot.html
colors = sns.color_palette(n_colors=df.Year.nunique())
color_dict = {key: value
for key, value in zip(sorted(df.Year.unique()), colors)}
color_dict
for year in sorted(df.Year.unique()[[0, 2, -1]]):
plt.scatter(df[df.Year == year].Labor_Hours,
df[df.Year == year].Production_short_tons,
c=color_dict[year],
s=50,
label=year,
)
plt.xlabel("Total Hours Worked")
plt.ylabel("Total Amount Produced")
plt.legend()
plt.savefig("ex1.png")
import matplotlib as mpl
plt.style.available
mpl.style.use('seaborn-colorblind')
for year in sorted(df.Year.unique()[[0, 2, -1]]):
plt.scatter(df[df.Year == year].Labor_Hours,
df[df.Year == year].Production_short_tons,
# c=color_dict[year],
s=50,
label=year,
)
plt.xlabel("Total Hours Worked")
plt.ylabel("Total Amount Produced")
plt.legend();
# plt.savefig("ex1.png")
df_dict = load_boston()
features = pd.DataFrame(data=df_dict.data, columns = df_dict.feature_names)
target = pd.DataFrame(data=df_dict.target, columns = ['MEDV'])
df = pd.concat([features, target], axis=1)
df.head()
# Target variable
fig, ax = plt.subplots(figsize=(6, 4))
sns.distplot(df.MEDV, ax=ax, rug=True, hist=False)
fig, ax = plt.subplots(figsize=(10,7))
sns.kdeplot(df.LSTAT,
df.MEDV,
ax=ax)
fig, ax = plt.subplots(figsize=(10, 10))
scatter_matrix(df[['MEDV', 'LSTAT', 'CRIM', 'RM', 'NOX', 'DIS']], alpha=0.2, diagonal='hist', ax=ax);
sns.pairplot(data=df,
vars=['MEDV', 'LSTAT', 'CRIM', 'RM', 'NOX', 'DIS'],
plot_kws={'s':20, 'alpha':0.5}
);
Explanation: BQPlot
Examples here are shamelessly stolen from the amazing: https://github.com/maartenbreddels/jupytercon-2017/blob/master/jupytercon2017-widgets.ipynb
End of explanation
players = pd.read_csv("../data/raw_players.csv.gz", compression='gzip')
players.head()
weight_categories = ["vlow_weight",
"low_weight",
"mid_weight",
"high_weight",
"vhigh_weight",
]
players['weightclass'] = pd.qcut(players['weight'],
len(weight_categories),
weight_categories)
players.head()
Explanation: Quantile cuts
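qcut builds equal-frequency bins rather than equal-width ones — a tiny illustration on made-up numbers:
# each bin receives the same number of observations
demo = pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
print(pd.qcut(demo, 2, labels=["light", "heavy"]).value_counts())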
End of explanation |
4,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Logistic Regression Using TF 2.0
Learning Objectives
Build a model,
Train this model on example data, and
Use the model to make predictions about unknown data.
Introduction
In this lab, you use machine learning to categorize Iris flowers by species. It uses TensorFlow to
Step1: The Iris classification problem
Imagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their sepals and petals.
The Iris genus entails about 300 species, but our program will only classify the following three
Step2: Inspect the data
This dataset, iris_training.csv, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the head -n5 command to take a peek at the first five entries
Step3: From this view of the dataset, notice the following
Step4: Each label is associated with string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as
Step5: Create a tf.data.Dataset
TensorFlow's Dataset API handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training.
Since the dataset is a CSV-formatted text file, use the tf.data.experimental.make_csv_dataset function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (shuffle=True, shuffle_buffer_size=10000), and repeat the dataset forever (num_epochs=None). We also set the batch_size parameter
Step6: The make_csv_dataset function returns a tf.data.Dataset of (features, label) pairs, where features is a dictionary
Step7: Notice that like-features are grouped together, or batched. Each example row's fields are appended to the corresponding feature array. Change the batch_size to set the number of examples stored in these feature arrays.
You can start to see some clusters by plotting a few features from the batch
Step9: To simplify the model building step, create a function to repackage the features dictionary into a single array with shape
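A sketch of such a packing function, following the shape described above (the function name is illustrative):
# def pack_features_vector(features, labels):
#     features = tf.stack(list(features.values()), axis=1)
#     return features, labels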
Step10: Then use the tf.data.Dataset#map method to pack the features of each (features,label) pair into the training dataset
Step11: The features element of the Dataset are now arrays with shape (batch_size, num_features). Let's look at the first few examples
Step12: Select the type of model
Why model?
A model is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.
Could you determine the relationship between the four features and the Iris species without using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach determines the model for you. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.
Select the model
We need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. Neural networks can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more hidden layers. Each hidden layer consists of one or more neurons. There are several categories of neural networks and this program uses a dense, or fully-connected neural network
Step13: The activation function determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many tf.keras.activations, but ReLU is common for hidden layers.
The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.
Using the model
Let's have a quick look at what this model does to a batch of features
Step14: Here, each example returns a logit for each class.
To convert these logits to a probability for each class, use the softmax function
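In code that step is typically just the following (a sketch; the variable names are illustrative and not defined at this point):
# probabilities = tf.nn.softmax(predictions)    # each row now sums to 1
# class_ids = tf.argmax(probabilities, axis=1)  # index of the most likely species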
Step15: Taking the tf.argmax across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions
Step16: Train the model
Training is the stage of machine learning when the model is gradually optimized, or the model learns the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn too much about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called overfitting—it's like memorizing the answers instead of understanding how to solve a problem.
The Iris classification problem is an example of supervised machine learning
Step17: Use the tf.GradientTape context to calculate the gradients used to optimize your model
Step18: Create an optimizer
An optimizer applies the computed gradients to the model's variables to minimize the loss function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.
<table>
<tr><td>
<img src="https
Step19: We'll use this to calculate a single optimization step
Step20: Training loop
With all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps
Step21: Visualize the loss function over time
While it's helpful to print out the model's training progress, it's often more helpful to see this progress. TensorBoard is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the matplotlib module.
Interpreting these charts takes some experience, but you really want to see the loss go down and the accuracy go up
Step22: Evaluate the model's effectiveness
Now that the model is trained, we can get some statistics on its performance.
Evaluating means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's predictions against the actual label. For example, a model that picked the correct species on half the input examples has an accuracy of 0.5. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy
Step23: Evaluate the model on the test dataset
Unlike the training stage, the model only evaluates a single epoch of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set
Step24: We can see on the last batch, for example, the model is usually correct
Step25: Use the trained model to make predictions
We've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on unlabeled examples; that is, on examples that contain features but not a label.
In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as | Python Code:
import os
import matplotlib.pyplot as plt
import tensorflow as tf
print(f"TensorFlow version: {tf.__version__}")
print(f"Eager execution: {tf.executing_eagerly()}")
Explanation: Introduction to Logistic Regression Using TF 2.0
Learning Objectives
Build a model,
Train this model on example data, and
Use the model to make predictions about unknown data.
Introduction
In this lab, you use machine learning to categorize Iris flowers by species. It uses TensorFlow to:
Use TensorFlow's default eager execution development environment,
Import data with the Datasets API,
Build models and layers with TensorFlow's Keras API.
This tutorial is structured like many TensorFlow programs:
Import and parse the dataset.
Select the type of model.
Train the model.
Evaluate the model's effectiveness.
Use the trained model to make predictions.
Setup program
Configure imports
Import TensorFlow and the other required Python modules. By default, TensorFlow uses eager execution to evaluate operations immediately, returning concrete values instead of creating a computational graph that is executed later. If you are used to a REPL or the python interactive console, this feels familiar.
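For instance, a tiny sketch of eager behaviour (values invented for illustration):
m = tf.constant([[1.0, 2.0]])
print(tf.matmul(m, tf.transpose(m)))  # tf.Tensor([[5.]], shape=(1, 1), dtype=float32)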
End of explanation
train_dataset_url = "https://storage.googleapis.com/download.tensorflow.org/data/iris_training.csv"
train_dataset_fp = tf.keras.utils.get_file(
fname=os.path.basename(train_dataset_url), origin=train_dataset_url
)
print(f"Local copy of the dataset file: {train_dataset_fp}")
Explanation: The Iris classification problem
Imagine you are a botanist seeking an automated way to categorize each Iris flower you find. Machine learning provides many algorithms to classify flowers statistically. For instance, a sophisticated machine learning program could classify flowers based on photographs. Our ambitions are more modest—we're going to classify Iris flowers based on the length and width measurements of their sepals and petals.
The Iris genus entails about 300 species, but our program will only classify the following three:
Iris setosa
Iris virginica
Iris versicolor
<table>
<tr><td>
<img src="https://www.tensorflow.org/images/iris_three_species.jpg"
alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor">
</td></tr>
<tr><td align="center">
<b>Figure 1.</b> <a href="https://commons.wikimedia.org/w/index.php?curid=170298">Iris setosa</a> (by <a href="https://commons.wikimedia.org/wiki/User:Radomil">Radomil</a>, CC BY-SA 3.0), <a href="https://commons.wikimedia.org/w/index.php?curid=248095">Iris versicolor</a>, (by <a href="https://commons.wikimedia.org/wiki/User:Dlanglois">Dlanglois</a>, CC BY-SA 3.0), and <a href="https://www.flickr.com/photos/33397993@N05/3352169862">Iris virginica</a> (by <a href="https://www.flickr.com/photos/33397993@N05">Frank Mayfield</a>, CC BY-SA 2.0).<br/>
</td></tr>
</table>
Fortunately, someone has already created a dataset of 120 Iris flowers with the sepal and petal measurements. This is a classic dataset that is popular for beginner machine learning classification problems.
Import and parse the training dataset
Download the dataset file and convert it into a structure that can be used by this Python program.
Download the dataset
Download the training dataset file using the tf.keras.utils.get_file function. This returns the file path of the downloaded file:
End of explanation
!head -n5 {train_dataset_fp}
Explanation: Inspect the data
This dataset, iris_training.csv, is a plain text file that stores tabular data formatted as comma-separated values (CSV). Use the head -n5 command to take a peek at the first five entries:
End of explanation
# column order in CSV file
column_names = [
"sepal_length",
"sepal_width",
"petal_length",
"petal_width",
"species",
]
feature_names = column_names[:-1]
label_name = column_names[-1]
print(f"Features: {feature_names}")
print(f"Label: {label_name}")
Explanation: From this view of the dataset, notice the following:
The first line is a header containing information about the dataset:
There are 120 total examples. Each example has four features and one of three possible label names.
Subsequent rows are data records, one example per line, where:
The first four fields are features: these are the characteristics of an example. Here, the fields hold float numbers representing flower measurements.
The last column is the label: this is the value we want to predict. For this dataset, it's an integer value of 0, 1, or 2 that corresponds to a flower name.
Let's write that out in code:
End of explanation
class_names = ["Iris setosa", "Iris versicolor", "Iris virginica"]
Explanation: Each label is associated with a string name (for example, "setosa"), but machine learning typically relies on numeric values. The label numbers are mapped to a named representation, such as:
0: Iris setosa
1: Iris versicolor
2: Iris virginica
For more information about features and labels, see the ML Terminology section of the Machine Learning Crash Course.
End of explanation
batch_size = 32
train_dataset = tf.data.experimental.make_csv_dataset(
train_dataset_fp,
batch_size,
column_names=column_names,
label_name=label_name,
num_epochs=1,
)
Explanation: Create a tf.data.Dataset
TensorFlow's Dataset API handles many common cases for loading data into a model. This is a high-level API for reading data and transforming it into a form used for training.
Since the dataset is a CSV-formatted text file, use the tf.data.experimental.make_csv_dataset function to parse the data into a suitable format. Since this function generates data for training models, the default behavior is to shuffle the data (shuffle=True, shuffle_buffer_size=10000), and repeat the dataset forever (num_epochs=None). We also set the batch_size parameter:
End of explanation
features, labels = next(iter(train_dataset))
print(features)
Explanation: The make_csv_dataset function returns a tf.data.Dataset of (features, label) pairs, where features is a dictionary: {'feature_name': value}
These Dataset objects are iterable. Let's look at a batch of features:
End of explanation
plt.scatter(
features["petal_length"],
features["sepal_length"],
c=labels,
cmap="viridis",
)
plt.xlabel("Petal length")
plt.ylabel("Sepal length")
plt.show()
Explanation: Notice that like-features are grouped together, or batched. Each example row's fields are appended to the corresponding feature array. Change the batch_size to set the number of examples stored in these feature arrays.
You can start to see some clusters by plotting a few features from the batch:
End of explanation
def pack_features_vector(features, labels):
    """Pack the features into a single array."""
features = tf.stack(list(features.values()), axis=1)
return features, labels
Explanation: To simplify the model building step, create a function to repackage the features dictionary into a single array with shape: (batch_size, num_features).
This function uses the tf.stack method which takes values from a list of tensors and creates a combined tensor at the specified dimension:
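For instance, a standalone sketch with invented values:
a = tf.constant([1.0, 2.0])
b = tf.constant([3.0, 4.0])
print(tf.stack([a, b], axis=1).numpy())  # [[1. 3.] [2. 4.]]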
End of explanation
train_dataset = train_dataset.map(pack_features_vector)
Explanation: Then use the tf.data.Dataset#map method to pack the features of each (features,label) pair into the training dataset:
End of explanation
features, labels = next(iter(train_dataset))
print(features[:5])
Explanation: The features element of the Dataset are now arrays with shape (batch_size, num_features). Let's look at the first few examples:
End of explanation
model = tf.keras.Sequential(
[
tf.keras.layers.Dense(
10, activation=tf.nn.relu, input_shape=(4,)
), # input shape required
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3),
]
)
Explanation: Select the type of model
Why model?
A model is a relationship between features and the label. For the Iris classification problem, the model defines the relationship between the sepal and petal measurements and the predicted Iris species. Some simple models can be described with a few lines of algebra, but complex machine learning models have a large number of parameters that are difficult to summarize.
Could you determine the relationship between the four features and the Iris species without using machine learning? That is, could you use traditional programming techniques (for example, a lot of conditional statements) to create a model? Perhaps—if you analyzed the dataset long enough to determine the relationships between petal and sepal measurements to a particular species. And this becomes difficult—maybe impossible—on more complicated datasets. A good machine learning approach determines the model for you. If you feed enough representative examples into the right machine learning model type, the program will figure out the relationships for you.
Select the model
We need to select the kind of model to train. There are many types of models and picking a good one takes experience. This tutorial uses a neural network to solve the Iris classification problem. Neural networks can find complex relationships between features and the label. It is a highly-structured graph, organized into one or more hidden layers. Each hidden layer consists of one or more neurons. There are several categories of neural networks and this program uses a dense, or fully-connected neural network: the neurons in one layer receive input connections from every neuron in the previous layer. For example, Figure 2 illustrates a dense neural network consisting of an input layer, two hidden layers, and an output layer:
<table>
<tr><td>
<img src="https://www.tensorflow.org/images/custom_estimators/full_network.png"
alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs">
</td></tr>
<tr><td align="center">
<b>Figure 2.</b> A neural network with features, hidden layers, and predictions.<br/>
</td></tr>
</table>
When the model from Figure 2 is trained and fed an unlabeled example, it yields three predictions: the likelihood that this flower is the given Iris species. This prediction is called inference. For this example, the sum of the output predictions is 1.0. In Figure 2, this prediction breaks down as: 0.02 for Iris setosa, 0.95 for Iris versicolor, and 0.03 for Iris virginica. This means that the model predicts—with 95% probability—that an unlabeled example flower is an Iris versicolor.
Create a model using Keras
The TensorFlow tf.keras API is the preferred way to create models and layers. This makes it easy to build models and experiment while Keras handles the complexity of connecting everything together.
The tf.keras.Sequential model is a linear stack of layers. Its constructor takes a list of layer instances, in this case, two tf.keras.layers.Dense layers with 10 nodes each, and an output layer with 3 nodes representing our label predictions. The first layer's input_shape parameter corresponds to the number of features from the dataset, and is required:
End of explanation
predictions = model(features)
predictions[:5]
Explanation: The activation function determines the output shape of each node in the layer. These non-linearities are important—without them the model would be equivalent to a single layer. There are many tf.keras.activations, but ReLU is common for hidden layers.
The ideal number of hidden layers and neurons depends on the problem and the dataset. Like many aspects of machine learning, picking the best shape of the neural network requires a mixture of knowledge and experimentation. As a rule of thumb, increasing the number of hidden layers and neurons typically creates a more powerful model, which requires more data to train effectively.
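For instance, a minimal illustration of the ReLU non-linearity (toy values, not model output):
print(tf.nn.relu([-2.0, 0.0, 3.0]).numpy())  # [0. 0. 3.]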
Using the model
Let's have a quick look at what this model does to a batch of features:
End of explanation
tf.nn.softmax(predictions[:5])
Explanation: Here, each example returns a logit for each class.
To convert these logits to a probability for each class, use the softmax function:
End of explanation
print(f"Prediction: {tf.argmax(predictions, axis=1)}")
print(f" Labels: {labels}")
Explanation: Taking the tf.argmax across classes gives us the predicted class index. But, the model hasn't been trained yet, so these aren't good predictions:
End of explanation
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y, training):
# training=training is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
y_ = model(x, training=training)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels, training=False)
print(f"Loss test: {l}")
Explanation: Train the model
Training is the stage of machine learning when the model is gradually optimized, or the model learns the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn too much about the training dataset, then the predictions only work for the data it has seen and will not be generalizable. This problem is called overfitting—it's like memorizing the answers instead of understanding how to solve a problem.
The Iris classification problem is an example of supervised machine learning: the model is trained from examples that contain labels. In unsupervised machine learning, the examples don't contain labels. Instead, the model typically finds patterns among the features.
Define the loss and gradient function
Both training and evaluation stages need to calculate the model's loss. This measures how off a model's predictions are from the desired label, in other words, how bad the model is performing. We want to minimize, or optimize, this value.
Our model will calculate its loss using the tf.keras.losses.SparseCategoricalCrossentropy function which takes the model's class probability predictions and the desired label, and returns the average loss across the examples.
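For instance, on a single made-up example (illustrative values only):
example_loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
print(example_loss(y_true=[1], y_pred=[[0.5, 2.0, 0.3]]).numpy())  # ~0.34, small because class 1 has the largest logit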
End of explanation
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, training=True)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
Explanation: Use the tf.GradientTape context to calculate the gradients used to optimize your model:
End of explanation
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
Explanation: Create an optimizer
An optimizer applies the computed gradients to the model's variables to minimize the loss function. You can think of the loss function as a curved surface (see Figure 3) and we want to find its lowest point by walking around. The gradients point in the direction of steepest ascent—so we'll travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, we'll adjust the model during training. Gradually, the model will find the best combination of weights and bias to minimize loss. And the lower the loss, the better the model's predictions.
<table>
<tr><td>
<img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%"
alt="Optimization algorithms visualized over time in 3D space.">
</td></tr>
<tr><td align="center">
<b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br/>(Source: <a href="http://cs231n.github.io/neural-networks-3/">Stanford class CS231n</a>, MIT License, Image credit: <a href="https://twitter.com/alecrad">Alec Radford</a>)
</td></tr>
</table>
TensorFlow has many optimization algorithms available for training. This model uses the tf.keras.optimizers.SGD that implements the stochastic gradient descent (SGD) algorithm. The learning_rate sets the step size to take for each iteration down the hill. This is a hyperparameter that you'll commonly adjust to achieve better results.
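As a minimal sketch of what a single step does (toy variable, not the Iris model):
w = tf.Variable(2.0)
toy_opt = tf.keras.optimizers.SGD(learning_rate=0.1)
with tf.GradientTape() as tape:
    toy_loss = w ** 2
toy_grads = tape.gradient(toy_loss, [w])
toy_opt.apply_gradients(zip(toy_grads, [w]))
print(w.numpy())  # 2.0 - 0.1 * 4.0 = 1.6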
Let's setup the optimizer:
End of explanation
loss_value, grads = grad(model, features, labels)
print(
"Step: {}, Initial Loss: {}".format(
optimizer.iterations.numpy(), loss_value.numpy()
)
)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print(
"Step: {}, Loss: {}".format(
optimizer.iterations.numpy(),
loss(model, features, labels, training=True).numpy(),
)
)
Explanation: We'll use this to calculate a single optimization step:
End of explanation
## Note: Rerunning this cell uses the same model variables
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
# Compare predicted label to actual label
# training=True is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print(
"Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(
epoch, epoch_loss_avg.result(), epoch_accuracy.result()
)
)
Explanation: Training loop
With all the pieces in place, the model is ready for training! A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:
Iterate each epoch. An epoch is one pass through the dataset.
Within an epoch, iterate over each example in the training Dataset grabbing its features (x) and label (y).
Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.
Use an optimizer to update the model's variables.
Keep track of some stats for visualization.
Repeat for each epoch.
The num_epochs variable is the number of times to loop over the dataset collection. Counter-intuitively, training a model longer does not guarantee a better model. num_epochs is a hyperparameter that you can tune. Choosing the right number usually requires both experience and experimentation:
End of explanation
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle("Training Metrics")
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
Explanation: Visualize the loss function over time
While it's helpful to print out the model's training progress, it's often more helpful to see this progress. TensorBoard is a nice visualization tool that is packaged with TensorFlow, but we can create basic charts using the matplotlib module.
Interpreting these charts takes some experience, but you really want to see the loss go down and the accuracy go up:
End of explanation
test_url = (
"https://storage.googleapis.com/download.tensorflow.org/data/iris_test.csv"
)
test_fp = tf.keras.utils.get_file(
fname=os.path.basename(test_url), origin=test_url
)
test_dataset = tf.data.experimental.make_csv_dataset(
test_fp,
batch_size,
column_names=column_names,
label_name="species",
num_epochs=1,
shuffle=False,
)
test_dataset = test_dataset.map(pack_features_vector)
Explanation: Evaluate the model's effectiveness
Now that the model is trained, we can get some statistics on its performance.
Evaluating means determining how effectively the model makes predictions. To determine the model's effectiveness at Iris classification, pass some sepal and petal measurements to the model and ask the model to predict what Iris species they represent. Then compare the model's predictions against the actual label. For example, a model that picked the correct species on half the input examples has an accuracy of 0.5. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct at 80% accuracy:
<table cellpadding="8" border="0">
<colgroup>
<col span="4" >
<col span="1" bgcolor="lightblue">
<col span="1" bgcolor="lightgreen">
</colgroup>
<tr bgcolor="lightgray">
<th colspan="4">Example features</th>
<th colspan="1">Label</th>
<th colspan="1" >Model prediction</th>
</tr>
<tr>
<td>5.9</td><td>3.0</td><td>4.3</td><td>1.5</td><td align="center">1</td><td align="center">1</td>
</tr>
<tr>
<td>6.9</td><td>3.1</td><td>5.4</td><td>2.1</td><td align="center">2</td><td align="center">2</td>
</tr>
<tr>
<td>5.1</td><td>3.3</td><td>1.7</td><td>0.5</td><td align="center">0</td><td align="center">0</td>
</tr>
<tr>
<td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align="center">1</td><td align="center" bgcolor="red">2</td>
</tr>
<tr>
<td>5.5</td><td>2.5</td><td>4.0</td><td>1.3</td><td align="center">1</td><td align="center">1</td>
</tr>
<tr><td align="center" colspan="6">
<b>Figure 4.</b> An Iris classifier that is 80% accurate.<br/>
</td></tr>
</table>
Setup the test dataset
Evaluating the model is similar to training the model. The biggest difference is the examples come from a separate test set rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate a model must be different from the examples used to train the model.
The setup for the test Dataset is similar to the setup for the training Dataset. Download the CSV text file and parse the values (the test set is left unshuffled here so the evaluation order is deterministic):
End of explanation
test_accuracy = tf.keras.metrics.Accuracy()
for (x, y) in test_dataset:
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int32)
test_accuracy(prediction, y)
print(f"Test set accuracy: {test_accuracy.result():.3%}")
Explanation: Evaluate the model on the test dataset
Unlike the training stage, the model only evaluates a single epoch of the test data. In the following code cell, we iterate over each example in the test set and compare the model's prediction against the actual label. This is used to measure the model's accuracy across the entire test set:
End of explanation
tf.stack([y, prediction], axis=1)
Explanation: We can see on the last batch, for example, the model is usually correct:
End of explanation
predict_dataset = tf.convert_to_tensor(
[
[5.1, 3.3, 1.7, 0.5],
[5.9, 3.0, 4.2, 1.5],
[6.9, 3.1, 5.4, 2.1],
]
)
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(predict_dataset, training=False)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print(f"Example {i} prediction: {name} ({100 * p:4.1f}%)")
Explanation: Use the trained model to make predictions
We've trained a model and "proven" that it's good—but not perfect—at classifying Iris species. Now let's use the trained model to make some predictions on unlabeled examples; that is, on examples that contain features but not a label.
In real-life, the unlabeled examples could come from lots of different sources including apps, CSV files, and data feeds. For now, we're going to manually provide three unlabeled examples to predict their labels. Recall, the label numbers are mapped to a named representation as:
0: Iris setosa
1: Iris versicolor
2: Iris virginica
End of explanation |
4,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aggregation (via pymongo)
Step1: Do An Aggregation
Basic process to convert pipelines from a JavaScript array to a Python list
Convert all comments (from "//" to "#")
Title-case all true/false to True/False
Quote all operators and fields ($match --> '$match')
Important
Step2: Aggregation
Step3: Final Pipeline | Python Code:
# import pymongo
from pymongo import MongoClient
from pprint import pprint
# Create client
client = MongoClient('mongodb://localhost:32768')
# Connect to database
db = client['fifa']
# Get collection
my_collection = db['player']
Explanation: Aggregation (via pymongo)
End of explanation
def print_docs(pipeline, limit=5):
pipeline.append({'$limit':limit})
# Run Aggregation
docs = my_collection.aggregate(pipeline)
# Print Results
for idx, doc in enumerate(docs):
# print(type(doc))
pprint(doc)
# print(f"#{idx + 1}: {doc}\n\n")
Explanation: Do An Aggregation
Basic process to convert pipelines from a JavaScript array to a Python list
Convert all comments (from "//" to "#")
Title-case all true/false to True/False
Quote all operators and fields ($match --> '$match')
Important: When using the $sort operator in Python 2, wrap the key list with the SON() method (from bson import SON); see the sketch after this list
Tips to avoid above process
Use 1/0 for True/False
Quote things in JavaScript ahead of time
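For reference, a minimal sketch of the $sort note above (assumes pymongo's bundled bson package; SON simply preserves key order):
from bson import SON
sort_stage = {'$sort': SON([('rating_attribute_difference', -1)])}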
Helper Functions
End of explanation
# $match - Filter out Goalkeepers
match_a = {
'$match': {
'positionFull': {'$ne': 'Goalkeeper'}
}
}
# Create Pipeline
pipeline = [
match_a,
]
# Fetch and Print the Results
print_docs(pipeline, limit=1)
# $project - Keep only the fields we're interested in
project_a = {
'$project': {
'_id': True, # Note: not required, _id is included by default
'name': {'$concat': ['$firstName', ' ', '$lastName']},
'pos': '$positionFull', # Note: renaming
'rating': True,
'attributes': True
}
}
# Create Pipeline
pipeline = [
match_a,
project_a,
]
# Fetch and Print the Results
print_docs(pipeline, limit=5)
# $unwind - Convert N documents to 6*N documents (so we can do math on attributes)
unwind_a = {
'$unwind': '$attributes'
}
# Create Pipeline
pipeline = [
match_a,
project_a,
unwind_a,
]
# Fetch and Print the Results
print_docs(pipeline, limit=5)
# $group - $sum the value of the attributes (and pass the rest of the fields through the _id)
group_a = {
'$group': {
'_id': {
'id': '$_id',
'rating': '$rating',
'name': '$name',
'pos': '$pos'
},
"sum_attributes": {
'$sum': "$attributes.value"
}
}
}
# Create Pipeline
pipeline = [
match_a,
project_a,
unwind_a,
group_a,
]
# Fetch and Print the Results
print_docs(pipeline, limit=5)
# $project - Keep only the fields we're interested in
# Note: this is our second $project operator !!!
project_b = {
'$project': {
'_id': False, # turn off _id
'id': '$_id.id',
'name': '$_id.name',
'pos': '$_id.pos',
'rating': '$_id.rating',
'avg_attributes': {"$divide": ['$sum_attributes', 6]},
'rating_attribute_difference': {"$subtract": [{"$divide": ['$sum_attributes', 6]}, '$_id.rating']}
}
}
# Create Pipeline
pipeline = [
match_a,
project_a,
unwind_a,
group_a,
project_b,
]
# Fetch and Print the Results
print_docs(pipeline, limit=5)
# $match - Find anybody rated LESS than 75 that has a higher than 75 avg_attributes
# Note: this is our second $match operator !!!
match_b = {
'$match': {
'rating': {'$lt': 75},
'avg_attributes': {'$gte': 75}
}
}
# Create Pipeline
pipeline = [
match_a,
project_a,
unwind_a,
group_a,
project_b,
match_b,
]
# Fetch and Print the Results
print_docs(pipeline, limit=5)
# $sort - Based on the amount of injustice
# Note: This step could be placed above previous "$match" step, but placing it here is more efficient with less
# data to sort
sort_a = {
'$sort': {
'rating_attribute_difference': -1
}
}
# Create Pipeline
pipeline = [
match_a,
project_a,
unwind_a,
group_a,
project_b,
match_b,
sort_a,
]
# Fetch and Print the Results
print_docs(pipeline, limit=5)
Explanation: Aggregation
End of explanation
# Create Pipeline
pipeline = [match_a, project_a, unwind_a, group_a, project_b, match_b, sort_a]
# Run Aggregation
docs = my_collection.aggregate(pipeline)
# Print Results
for idx, doc in enumerate(docs):
print(f"#{idx + 1}: {doc['name']}, a {doc['pos']}, is rated {doc['rating']} instead of {doc['avg_attributes']:.0f}")
Explanation: Final Pipeline
End of explanation |
4,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: In this notebook, we will use LSTM layers to develop time series forecasting models.
The dataset used for the examples of this notebook is on air pollution measured by concentration of
particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables
such as air pressure, air temperature, dewpoint and so on.
Two time series models are developed - one on air pressure and the other on pm2.5.
The dataset has been downloaded from UCI Machine Learning Repository.
https
Step2: To make sure that the rows are in the right order of date and time of observations,
a new column datetime is created from the date and time related columns of the DataFrame.
The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order
over this column.
Step3: Gradient descent algorithms perform better (for example converge faster) if the variables are wihtin range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The pm2.5 variable is mixmax scaled to bound the tranformed variable within [0,1].
Step6: Before training the model, the dataset is split in two parts - train set and validation set.
The neural network is trained on the train set. This means computation of the loss function, back propagation
and weights updated by a gradient descent algorithm is done on the train set. The validation set is
used to evaluate the model and to determine the number of epochs in model training. Increasing the number of
epochs will further decrease the loss function on the train set but might not necessarily have the same effect
for the validation set due to overfitting on the train set. Hence, the number of epochs is controlled by keeping
a tab on the loss function computed for the validation set. We use Keras with the Tensorflow backend to define and train
the model. All the steps involved in model training and validation are done by calling appropriate functions
of the Keras API.
Step8: Now we need to generate regressors (X) and target variable (y) for train and validation. 2-D array of regressor and 1-D array of target is created from the original 1-D array of columm log_PRES in the DataFrames. For the time series forecasting model, Past seven days of observations are used to predict for the next day. This is equivalent to a AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y.
Step9: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
Step10: The LSTM layers are defined for seven timesteps. In this example, two LSTM layers are stacked. The first LSTM returns the output from each all seven timesteps. This output is a sequence and is fed to the second LSTM which returns output only from the last step. The first LSTM has sixty four hidden neurons in each timestep. Hence the sequence returned by the first LSTM has sixty four features.
Step11: The input, dense and output layers will now be packed inside a Model, which is wrapper class for training and making
predictions. The box plot of pm2.5 shows the presence of outliers. Hence, mean absolute error (MAE) is used as absolute deviations suffer less fluctuations compared to squared deviations.
The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation
and has been a popular choice for training deep neural networks. Unlike stochastic gradient descent, Adam uses
different learning rates for each weight and separately updates the same as the training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients.
Step12: The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training
is done for a predefined number of epochs. Additionally, batch_size defines the number of samples of train set to be
used for an instance of back propagation. The validation dataset is also passed to evaluate the model after every epoch
completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch,
at which the loss function has been minimum.
Step13: Prediction are made for the pm2.5 from the best saved model. The model's predictions, which are on the standardized pm2.5, are inverse transformed to get predictions of original pm2.5. | Python Code:
from __future__ import print_function
import os
import sys
import pandas as pd
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import datetime
#set current working directory
os.chdir('D:/Practical Time Series')
#Read the dataset into a pandas.DataFrame
df = pd.read_csv('datasets/PRSA_data_2010.1.1-2014.12.31.csv')
print('Shape of the dataframe:', df.shape)
#Let's see the first five rows of the DataFrame
df.head()
Rows having NaN values in column pm2.5 are dropped.
df.dropna(subset=['pm2.5'], axis=0, inplace=True)
df.reset_index(drop=True, inplace=True)
Explanation: In this notebook, we will use LSTM layers to develop time series forecasting models.
The dataset used for the examples of this notebook is on air pollution measured by concentration of
particulate matter (PM) of diameter less than or equal to 2.5 micrometers. There are other variables
such as air pressure, air temperature, dewpoint and so on.
Two time series models are developed - one on air pressure and the other on pm2.5.
The dataset has been downloaded from UCI Machine Learning Repository.
https://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
End of explanation
df['datetime'] = df[['year', 'month', 'day', 'hour']].apply(lambda row: datetime.datetime(year=row['year'], month=row['month'], day=row['day'],
hour=row['hour']), axis=1)
df.sort_values('datetime', ascending=True, inplace=True)
#Let us draw a box plot to visualize the central tendency and dispersion of PRES
plt.figure(figsize=(5.5, 5.5))
g = sns.boxplot(df['pm2.5'])
g.set_title('Box plot of pm2.5')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['pm2.5'])
g.set_title('Time series of pm2.5')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
#Let's plot the series for six months to check if any pattern apparently exists.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=6,day=30)], color='g')
g.set_title('pm2.5 during 2010')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
#Let's zoom in on one month.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df['pm2.5'].loc[df['datetime']<=datetime.datetime(year=2010,month=1,day=31)], color='g')
g.set_title('pm2.5 during Jan 2010')
g.set_xlabel('Index')
g.set_ylabel('pm2.5 readings')
Explanation: To make sure that the rows are in the right order of date and time of observations,
a new column datetime is created from the date and time related columns of the DataFrame.
The new column consists of Python's datetime.datetime objects. The DataFrame is sorted in ascending order
over this column.
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
df['scaled_pm2.5'] = scaler.fit_transform(np.array(df['pm2.5']).reshape(-1, 1))
Explanation: Gradient descent algorithms perform better (for example, converge faster) if the variables are within the range [-1, 1]. Many sources relax the boundary to even [-3, 3]. The pm2.5 variable is min-max scaled to bound the transformed variable within [0, 1].
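As a quick illustration on toy numbers (not the pollution data):
toy = np.array([[10.], [20.], [40.]])
toy_scaler = MinMaxScaler(feature_range=(0, 1))
scaled = toy_scaler.fit_transform(toy)
print(scaled.ravel())  # [0.         0.33333333 1.        ]
print(toy_scaler.inverse_transform(scaled).ravel())  # [10. 20. 40.]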
End of explanation
Let's start by splitting the dataset into train and validation. The dataset's time period if from
Jan 1st, 2010 to Dec 31st, 2014. The first fours years - 2010 to 2013 is used as train and
2014 is kept for validation.
split_date = datetime.datetime(year=2014, month=1, day=1, hour=0)
df_train = df.loc[df['datetime']<split_date]
df_val = df.loc[df['datetime']>=split_date]
print('Shape of train:', df_train.shape)
print('Shape of test:', df_val.shape)
#First five rows of train
df_train.head()
#First five rows of validation
df_val.head()
#Reset the indices of the validation set
df_val.reset_index(drop=True, inplace=True)
The train and validation time series of scaled_pm2.5 is also plotted.
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_train['scaled_pm2.5'], color='b')
g.set_title('Time series of scaled pm2.5 in train set')
g.set_xlabel('Index')
g.set_ylabel('Scaled pm2.5 readings')
plt.figure(figsize=(5.5, 5.5))
g = sns.tsplot(df_val['scaled_pm2.5'], color='r')
g.set_title('Time series of scaled pm2.5 in validation set')
g.set_xlabel('Index')
g.set_ylabel('Scaled pm2.5 readings')
Explanation: Before training the model, the dataset is split in two parts - train set and validation set.
The neural network is trained on the train set. This means computation of the loss function, back propagation
and weights updated by a gradient descent algorithm is done on the train set. The validation set is
used to evaluate the model and to determine the number of epochs in model training. Increasing the number of
epochs will further decrease the loss function on the train set but might not necessarily have the same effect
for the validation set due to overfitting on the train set. Hence, the number of epochs is controlled by keeping
a tab on the loss function computed for the validation set. We use Keras with the Tensorflow backend to define and train
the model. All the steps involved in model training and validation are done by calling appropriate functions
of the Keras API.
End of explanation
def makeXy(ts, nb_timesteps):
    """
    Input:
           ts: original time series
           nb_timesteps: number of time steps in the regressors
    Output:
           X: 2-D array of regressors
           y: 1-D array of target
    """
X = []
y = []
for i in range(nb_timesteps, ts.shape[0]):
X.append(list(ts.loc[i-nb_timesteps:i-1]))
y.append(ts.loc[i])
X, y = np.array(X), np.array(y)
return X, y
X_train, y_train = makeXy(df_train['scaled_pm2.5'], 7)
print('Shape of train arrays:', X_train.shape, y_train.shape)
X_val, y_val = makeXy(df_val['scaled_pm2.5'], 7)
print('Shape of validation arrays:', X_val.shape, y_val.shape)
#X_train and X_val are reshaped to 3D arrays
X_train, X_val = X_train.reshape((X_train.shape[0], X_train.shape[1], 1)), X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
print('Shape of arrays after reshaping:', X_train.shape, X_val.shape)
Explanation: Now we need to generate regressors (X) and target variable (y) for train and validation. 2-D array of regressor and 1-D array of target is created from the original 1-D array of columm log_PRES in the DataFrames. For the time series forecasting model, Past seven days of observations are used to predict for the next day. This is equivalent to a AR(7) model. We define a function which takes the original time series and the number of timesteps in regressors as input to generate the arrays of X and y.
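A quick sanity check of makeXy on a toy series (illustrative only):
toy_series = pd.Series([1, 2, 3, 4, 5, 6])
X_toy, y_toy = makeXy(toy_series, 3)
print(X_toy)  # rows: [1 2 3], [2 3 4], [3 4 5]
print(y_toy)  # [4 5 6]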
End of explanation
from keras.layers import Dense, Input, Dropout
from keras.layers.recurrent import LSTM
from keras.optimizers import SGD
from keras.models import Model
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
#Define input layer which has shape (None, 7) and of type float32. None indicates the number of instances
input_layer = Input(shape=(7,1), dtype='float32')
Explanation: Now we define the MLP using the Keras Functional API. In this approach a layer can be declared as the input of the following layer at the time of defining the next layer.
End of explanation
lstm_layer1 = LSTM(64, input_shape=(7,1), return_sequences=True)(input_layer)
lstm_layer2 = LSTM(32, input_shape=(7,64), return_sequences=False)(lstm_layer1)
dropout_layer = Dropout(0.2)(lstm_layer2)
#Finally the output layer gives prediction for the next day's air pressure.
output_layer = Dense(1, activation='linear')(dropout_layer)
Explanation: The LSTM layers are defined for seven timesteps. In this example, two LSTM layers are stacked. The first LSTM returns the output from all seven timesteps. This output is a sequence and is fed to the second LSTM, which returns output only from the last step. The first LSTM has sixty-four hidden neurons in each timestep. Hence the sequence returned by the first LSTM has sixty-four features.
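A quick shape check (illustrative, using the Keras backend helper):
from keras import backend as K
print(K.int_shape(lstm_layer1))  # (None, 7, 64)
print(K.int_shape(lstm_layer2))  # (None, 32)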
End of explanation
ts_model = Model(inputs=input_layer, outputs=output_layer)
ts_model.compile(loss='mean_absolute_error', optimizer='adam')#SGD(lr=0.001, decay=1e-5))
ts_model.summary()
Explanation: The input, dense and output layers will now be packed inside a Model, which is a wrapper class for training and making
predictions. The box plot of pm2.5 shows the presence of outliers. Hence, mean absolute error (MAE) is used as absolute deviations suffer less fluctuations compared to squared deviations.
The network's weights are optimized by the Adam algorithm. Adam stands for adaptive moment estimation
and has been a popular choice for training deep neural networks. Unlike stochastic gradient descent, Adam uses
different learning rates for each weight and separately updates the same as the training progresses. The learning rate of a weight is updated based on exponentially weighted moving averages of the weight's gradients and the squared gradients.
End of explanation
save_weights_at = os.path.join('keras_models', 'PRSA_data_PM2.5_LSTM_weights.{epoch:02d}-{val_loss:.4f}.hdf5')
save_best = ModelCheckpoint(save_weights_at, monitor='val_loss', verbose=0,
save_best_only=True, save_weights_only=False, mode='min',
period=1)
ts_model.fit(x=X_train, y=y_train, batch_size=16, epochs=20,
verbose=1, callbacks=[save_best], validation_data=(X_val, y_val),
shuffle=True)
Explanation: The model is trained by calling the fit function on the model object and passing the X_train and y_train. The training
is done for a predefined number of epochs. Additionally, batch_size defines the number of samples of train set to be
used for an instance of back propagation. The validation dataset is also passed to evaluate the model after every epoch
completes. A ModelCheckpoint object tracks the loss function on the validation set and saves the model for the epoch,
at which the loss function has been minimum.
End of explanation
best_model = load_model(os.path.join('keras_models', 'PRSA_data_PM2.5_LSTM_weights.09-0.0117.hdf5'))
preds = best_model.predict(X_val)
pred_pm25 = scaler.inverse_transform(preds)
pred_pm25 = np.squeeze(pred_pm25)
from sklearn.metrics import mean_absolute_error
mae = mean_absolute_error(df_val['pm2.5'].loc[7:], pred_pm25)
print('MAE for the validation set:', round(mae, 4))
#Let's plot the first 50 actual and predicted values of pm2.5.
plt.figure(figsize=(5.5, 5.5))
plt.plot(range(50), df_val['pm2.5'].loc[7:56], linestyle='-', marker='*', color='r')
plt.plot(range(50), pred_pm25[:50], linestyle='-', marker='.', color='b')
plt.legend(['Actual','Predicted'], loc=2)
plt.title('Actual vs Predicted pm2.5')
plt.ylabel('pm2.5')
plt.xlabel('Index')
Explanation: Predictions are made for pm2.5 from the best saved model. The model's predictions, which are on the standardized pm2.5, are inverse transformed to get predictions of the original pm2.5.
End of explanation |
4,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For the testing we will use a standard DSI acquisition scheme with 514 gradient directions and 1 S0. There's also the alternative of having a scheme with 4195 directions. In the case of the simulated multi-tensor signal it doesn't seem to make a big difference.
Step1: Let’s create a multi tensor with 2 fiber directions at 60 degrees.
Step2: Perform the reconstructions with standard DSI. To get sharp results, having a large r_end seems to have the biggest impact but increasing the qgrid_size also has some effect. Also, you can effectively disable the Hanning filter by setting the filter_width to infinity.
Step3: Predict the test data values using GP. To save time the hyperparameters are set to the values that the training would otherwise yield.
Step4: Visualize the ground truth ODF together with the DSI. | Python Code:
btable = np.loadtxt(get_data('dsi515btable'))
#btable = np.loadtxt(get_data('dsi4169btable'))
gtab['test'] = gradient_table(btable[:, 0], btable[:, 1:],
big_delta=gtab['train'].big_delta, small_delta=gtab['train'].small_delta)
gtab['test'].info
Explanation: For the testing we will use a standard DSI acquisition scheme with 514 gradient directions and 1 S0. There's also the alternative of having a scheme with 4195 directions. In the case of the simulated multi-tensor signal it doesn't seem to make a big difference.
End of explanation
evals = np.array([[0.0015, 0.0003, 0.0003],
[0.0015, 0.0003, 0.0003]])
directions = [(-30, 0), (30, 0)]
fractions = [50, 50]
for key, _gtab in gtab.items():
data[key] = multi_tensor(_gtab, evals, 100, angles=directions,
fractions=fractions, snr=None)[0][None, None, None, :]
sphere = get_sphere('symmetric724').subdivide(1)
odf['gt'] = multi_tensor_odf(sphere.vertices, evals, angles=directions,
fractions=fractions)[None, None, None, :]
Explanation: Let’s create a multi tensor with 2 fiber directions at 60 degrees.
End of explanation
dsi_model = DiffusionSpectrumModel(gtab['test'], qgrid_size=25, r_end=50, filter_width=np.inf)
dsi_fit = dsi_model.fit(data['test'])
odf['dsi'] = dsi_fit.odf(sphere)
#dsi_model.filter
#plt.plot(gtab['train'].qvals)
plt.plot(data['test'].flatten())
#plt.plot(data['pred'].flatten())
plt.plot(dsi_fit.data.flatten() * dsi_model.filter)
dsi_fit.rtop_pdf(normalized=False)/dsi_fit.rtop_signal(filtering=True)
#/dsi_model.fit(data['pred']).rtop_signal(filtering=True)
Explanation: Perform the reconstructions with standard DSI. To get sharp results, having a large r_end seems to have the biggest impact but increasing the qgrid_size also has some effect. Also, you can effectively disable the Hanning filter by setting the filter_width to infinity.
End of explanation
if dataset == 'HCP':
kernel = get_default_independent_kernel(3, n_max=6, q_lengthscale=1.16, coefficients=(1778, 31.4, 3.63, 0.56))
gp_model = GaussianProcessModel(gtab['train'], kernel=kernel)
gp_fit = gp_model.fit(data['train'], retrain=False)
elif dataset == 'SPARC':
kernel = get_default_independent_kernel(3, n_max=6, q_lengthscale=0.57, coefficients=(2578, 52, 5.0, 0.94))
gp_model = GaussianProcessModel(gtab['train'], kernel=kernel, verbose=False)
gp_fit = gp_model.fit(data['train'], retrain=False)
odf['gp'] = gp_fit.odf(sphere)
Explanation: Predict the test data values using GP. To save time the hyperparameters are set to the values that the training would otherwise yield.
End of explanation
ren = fvtk.ren()
odfs = np.vstack((odf['gt'], odf['dsi'], odf['gp']))
odf_actor = fvtk.sphere_funcs(odfs, sphere)
odf_actor.RotateX(90)
fvtk.add(ren, odf_actor)
fvtk.record(ren, out_path='dsi_sim.png', size=(300, 300))
Explanation: Visualize the ground truth ODF together with the DSI.
End of explanation |
4,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Duffing Oscillator Solution Using Frequency Domain Residuals
This notebook uses the newer solver, hb_freq, which minimizes frequency-domain error: the error between the Fourier series of the state derivative calculated from the state equations and the Fourier series obtained by taking the derivative of the input state. hb_freq can also ignore the constant term ($\omega = 0$) in the solution process. Any variety of time points may be used to ensure substantial averaging over a single cycle.
Step1: Sometimes we can improve just by restarting from the prior end point. Sometimes, we just think it's improved.
Step2: Using lambda functions
As an aside, we can use a lambda function to solve a simple equation without much hassle. For example, $\ddot{x} + 0.1\dot{x}+ x + 0.1 x^3 = \sin(0.7t)$ | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
import mousai as ms
from scipy import pi, sin
# Test that all is working.
# f_tol adjusts accuracy. This is smaller than reasonable, but illustrative of usage.
t, x, e, amps, phases = ms.hb_freq(ms.duff_osc, np.array([[0,1,-1]]), omega = .7, f_tol = 1e-8)
print('Equation errors (should be zero): ', e)
print('Constant term of FFT of signal should be zero: ', ms.fftp.fft(x)[0,0])
# Using more harmonics.
t, x, e, amps, phases = ms.hb_freq(ms.duff_osc, x0 = np.array([[0,1,-1]]), omega = .1, num_harmonics= 1)
print('Equation errors (should be zero): ', e)
print('Constant term of FFT of signal should be zero: ', ms.fftp.fft(x)[0,0])
np.abs(e)
Explanation: Duffing Oscillator Solution Using Frequency Domain Residuals
This notebook uses the newer solver, hb_freq, which minimizes frequency-domain error: the error between the Fourier series of the state derivative calculated from the state equations and the Fourier series obtained by taking the derivative of the input state. hb_freq can also ignore the constant term ($\omega = 0$) in the solution process. Any variety of time points may be used to ensure substantial averaging over a single cycle.
End of explanation
t, x, e, amps, phases = ms.hb_freq(ms.duff_osc, x0 = x, omega = 0.1, num_harmonics= 7)
print('Errors: ', e)
print('Constant term of FFT of signal should be zero: ', ms.fftp.fft(x)[0,0])
# Let's get a smoother response
time, xc = ms.time_history(t,x)
plt.plot(time,xc.T,t,x.T,'*')
plt.grid(True)
print('The average for this problem is known to be zero, we got', sp.average(x))
def duff_osc2(x, v, params):
omega = params['omega']
t = params['cur_time']
return np.array([[-x-.01*x**3-.01*v+1*sin(omega*t)]])
t, x, e, amps, phases = ms.hb_freq(duff_osc2, np.array([[0,1,-1]]), omega = 0.8, num_harmonics=7)
print(amps, x, e)
print('Constant term of FFT of signal should be zero: ', ms.fftp.fft(x)[0,0])
time, xc = ms.time_history(t,x)
plt.plot(time, xc.T, t, x.T, '*')
plt.grid(True)
omega = np.linspace(0.1,3,200)+1/200
amp = np.zeros_like(omega)
x = np.array([[0,-1,1]])
for i, freq in enumerate(omega):
#print(i,freq,x)
try:
t, x, e, amps, phases = ms.hb_freq(duff_osc2, x, omega = freq, num_harmonics = 1)# , callback = resid)
#print(freq, amps, e)
amp[i]=amps[0]
except:
amp[i] = sp.nan
print(np.hstack((omega.reshape(-1,1), amp.reshape(-1,1))))
plt.plot(omega, amp)
t, x, e, amps, phases = ms.hb_freq(duff_osc2, np.array([[0,1,-1]]), omega = 1.1, num_harmonics=1)
print(' amps = {}\n x = {}\n e = {}\n phases = {}'.format(amps, x, e, phases))
print('Constant term of FFT of signal should be zero: ', ms.fftp.fft(x)[0,0])
time, xc = ms.time_history(t,x)
plt.plot(time, xc.T, t, x.T, '*')
plt.grid(True)
phases
omega = sp.linspace(0.1,3,90)+1/200
amp = sp.zeros_like(omega)
x = np.array([[0,-1,1]])
for i, freq in enumerate(omega):
#print(i,freq,x)
#print(sp.average(x))
x = x-sp.average(x)
try:
t, x, e, amps, phases = ms.hb_freq(duff_osc2, x, freq, num_harmonics=1)#, callback = resid)
amp[i]=amps[0]
except:
amp[i] = sp.nan
plt.plot(omega, amp)
omegal = sp.arange(3,.03,-1/200)+1/200
ampl = sp.zeros_like(omegal)
x = np.array([[0,-1,1]])
for i, freq in enumerate(omegal):
# Here we try to obtain solutions, but if they don't work,
# we ignore them by inserting `np.nan` values.
x = x-sp.average(x)
try:
t, x, e, amps, phases = ms.hb_freq(duff_osc2, x, freq, num_harmonics=1, f_tol = 1e-6)#, callback = resid)
ampl[i]=amps[0]
except:
ampl[i] = sp.nan
plt.plot(omegal, ampl)
plt.plot(omegal,ampl)
plt.plot(omega,amp)
#plt.axis([0,3, 0, 10.5])
from scipy.optimize import newton_krylov
def duff_amp_resid(a):
return (mu**2+(sigma-3/8*alpha/omega_0*a**2)**2)*a**2-(k**2)/4/omega_0**2
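# For reference (added note): the residual above is the standard multiple-scales
# frequency-response relation for the Duffing oscillator,
#     [mu^2 + (sigma - 3*alpha*a^2/(8*omega_0))^2] * a^2 = k^2 / (4*omega_0^2),
# written as LHS - RHS so that its roots give the steady-state amplitude a.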
mu = 0.05 # damping
k = 1 # excitation amplitude
sigma = -0.9 #detuning
omega_0 = 1 # natural (linear) frequency
alpha = 0.1 # cubic coefficient
newton_krylov(duff_amp_resid,-.1)
sigmas = sp.linspace(-1,3,200)
amplitudes = sp.zeros_like(sigmas)
x = newton_krylov(duff_amp_resid,1)
for i, sigma in enumerate(sigmas):
try:
amplitudes[i] = newton_krylov(duff_amp_resid,x)
x = amplitudes[i]
except:
amplitudes[i] = newton_krylov(duff_amp_resid,0)
x = amplitudes[i]
plt.plot(sigmas,amplitudes)
sigmas = sp.linspace(-1,3,200)
sigmasr = sigmas[::-1]
amplitudesr = sp.zeros_like(sigmas)
x = newton_krylov(duff_amp_resid,3)
for i, sigma in enumerate(sigmasr):
try:
amplitudesr[i] = newton_krylov(duff_amp_resid,x)
x = amplitudesr[i]
except:
amplitudesr[i] = sp.nan#newton_krylov(duff_amp_resid,0)
x = amplitudesr[i]
plt.plot(sigmasr,amplitudesr)
plt.plot(sigmasr,amplitudesr)
plt.plot(sigmas,amplitudes)
Explanation: Sometimes we can improve just by restarting from the prior end point. Sometimes, we just think it's improved.
End of explanation
def duff_osc2(x, v, params):
omega = params['omega']
t = params['cur_time']
return np.array([[-x-.1*x**3-.1*v+1*sin(omega*t)]])
_,_,_,a,_ = ms.hb_freq(duff_osc2, np.array([[0,1,-1]]), 0.7, num_harmonics=1)
print(a)
_,_,_,a,_ = ms.hb_freq(lambda x,v, params:np.array([[-x-.1*x**3-.1*v+1*sin(0.7*params['cur_time'])]]), np.array([[0,1,-1]]), .7, num_harmonics=1)
a
Explanation: Using lambda functions
As an aside, we can use a lambda function to solve a simple equation without much hassle. For example, $\ddot{x} + 0.1\dot{x}+ x + 0.1 x^3 = \sin(0.7t)$
End of explanation |
4,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Looking at KIC 8462852 (Boyajian's star) with gPhoton
Using the time-tagged photon data from GALEX, available with gPhoton, let's make some light curves of "Tabby's Star"
Step1: Searching for GALEX visits
Looks like there are 4 visits available in the database
Step2: ... and they seem to be spaced over about 2 months time.
Alas, not the multi-year coverage I'd hoped for to compare with the results from Montet & Simon (2016)
Step3: Making light curves
Following examples in the repo...
Step4: Huh... that 3rd panel looks like a nice long visit. Let's take a slightly closer look!
Step5: Any short timescale variability of note? Let's use a Lomb-Scargle to make a periodogram!
(limited to the 10sec windowing I imposed... NOTE
Step6: How about the long-term evolution?
Answer
Step7: Conclusion...?
Based on data from only 4 GALEX visits, spaced over ~70 days, we can't say much about possible evolution of this star with GALEX.
Step8: The visits are centered in mid 2011 (Quarter 9 and 10, I believe)
Note
Step9: For time comparison, here is an example MJD from scan 15 of the GKM data.
(note
Step10: Thinking about Dust - v1
Use basic values for the relative extinction in each band at a "standard" R_v = 3.1
Imagine if the long-term Kepler fading was due to dust. What is the extinction we'd expect in the NUV? (A
Step11: Same plot as above, but with WISE W1 band, and considering a different time window unfortunately
Step12: Combining the fading and dust model for both the NUV and W1 data.
In the IR we can't say much... so maybe toss it out since it doesn't constrain dust model one way or another
Step13: Dust 2.0
Step14: CCM89 gives us R_V = 5.02097489191 +/- 0.938304455977, which satisfies both the Kepler and NUV fading we see.
Such a high value of R_V~5 is not unheard of, particularly in protostars; however, Boyajian's Star does not show any other indications of being such a source.
NOTE
Step15: Another simple model | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
from gPhoton import gFind
from gPhoton import gAperture
from gPhoton import gMap
from gPhoton.gphoton_utils import read_lc
import datetime
from astropy.time import Time
from astropy import units as u
# from astropy.analytic_functions import blackbody_lambda #OLD!
from astropy.modeling.blackbody import blackbody_lambda
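# (note: newer astropy versions replace this helper with astropy.modeling.models.BlackBody)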
from gatspy.periodic import LombScargleFast
import extinction
matplotlib.rcParams.update({'font.size':18})
matplotlib.rcParams.update({'font.family':'serif'})
ra = 301.5644
dec = 44.45684
Explanation: Looking at KIC 8462852 (Boyajian's star) with gPhoton
Using the time-tagged photon data from GALEX, available with gPhoton, let's make some light curves of "Tabby's Star"
End of explanation
exp_data = gFind(band='NUV', skypos=[ra, dec], exponly=True)
exp_data
Explanation: Searching for GALEX visits
Looks like there are 4 visits available in the database
End of explanation
(exp_data['NUV']['t0'] - exp_data['NUV']['t0'][0]) / (60. * 60. * 24. * 365.)
Explanation: ... and they seem to be spaced over about 2 months time.
Alas, not the multi-year coverage I'd hoped for to compare with the results from Montet & Simon (2016)
End of explanation
# step_size = 20. # the time resolution in seconds
target = 'KIC8462852'
# phot_rad = 0.0045 # in deg
# ap_in = 0.0050 # in deg
# ap_out = 0.0060 # in deg
# print(datetime.datetime.now())
# for k in range(len(exp_data['NUV']['t0'])):
# photon_events = gAperture(band='NUV', skypos=[ra, dec], stepsz=step_size, radius=phot_rad,
# annulus=[ap_in, ap_out], verbose=3, csvfile=target+ '_' +str(k)+"_lc.csv",
# trange=[int(exp_data['NUV']['t0'][k]), int(exp_data['NUV']['t1'][k])+1],
# overwrite=True)
# print(datetime.datetime.now(), k)
med_flux = np.array(np.zeros(4), dtype='float')
med_flux_err = np.array(np.zeros(4), dtype='float')
time_big = np.array([], dtype='float')
mag_big = np.array([], dtype='float')
flux_big = np.array([], dtype='float')
for k in range(4):
data = read_lc(target+ '_' +str(k)+"_lc.csv")
med_flux[k] = np.nanmedian(data['flux_bgsub'])
med_flux_err[k] = np.std(data['flux_bgsub'])
time_big = np.append(time_big, data['t_mean'])
flux_big = np.append(flux_big, data['flux_bgsub'])
mag_big = np.append(mag_big, data['mag'])
# t0k = Time(int(data['t_mean'][0]) + 315964800, format='unix').mjd
flg0 = np.where((data['flags'] == 0))[0]
# for Referee: convert GALEX time -> MJD
t_unix = Time(data['t_mean'] + 315964800, format='unix')
mjd_time = t_unix.mjd
t0k = (mjd_time[0])
plt.figure()
plt.errorbar((mjd_time - t0k)*24.*60.*60., data['flux_bgsub']/(1e-15), yerr=data['flux_bgsub_err']/(1e-15),
marker='.', linestyle='none', c='k', alpha=0.75, lw=0.5, markersize=2)
plt.errorbar((mjd_time[flg0] - t0k)*24.*60.*60., data['flux_bgsub'][flg0]/(1e-15),
yerr=data['flux_bgsub_err'][flg0]/(1e-15),
marker='.', linestyle='none')
# plt.xlabel('GALEX time (sec - '+str(t0k)+')')
plt.xlabel('MJD - '+ format(t0k, '9.3f') +' (seconds)')
plt.ylabel('NUV Flux \n'
r'(x10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ ${\rm\AA}^{-1}$)')
plt.savefig(target+ '_' +str(k)+"_lc.pdf", dpi=150, bbox_inches='tight', pad_inches=0.25)
flagcol = np.zeros_like(mjd_time)
flagcol[flg0] = 1
dfout = pd.DataFrame(data={'MJD':mjd_time,
'flux':data['flux_bgsub']/(1e-15),
'fluxerr':data['flux_bgsub_err']/(1e-15),
'flag':flagcol})
dfout.to_csv(target+ '_' +str(k)+'data.csv', index=False, columns=('MJD', 'flux','fluxerr', 'flag'))
Explanation: Making light curves
Following examples in the repo...
End of explanation
# k=2
# data = read_lc(target+ '_' +str(k)+"_lc.csv")
# t0k = int(data['t_mean'][0])
# plt.figure(figsize=(14,5))
# plt.errorbar(data['t_mean'] - t0k, data['flux_bgsub'], yerr=data['flux_bgsub_err'], marker='.', linestyle='none')
# plt.xlabel('GALEX time (sec - '+str(t0k)+')')
# plt.ylabel('NUV Flux')
Explanation: Huh... that 3rd panel looks like a nice long visit. Let's take a slightly closer look!
End of explanation
# try cutting on flags=0
flg0 = np.where((data['flags'] == 0))[0]
plt.figure(figsize=(14,5))
plt.errorbar(data['t_mean'][flg0] - t0k, data['flux_bgsub'][flg0]/(1e-15), yerr=data['flux_bgsub_err'][flg0]/(1e-15),
marker='.', linestyle='none')
plt.xlabel('GALEX time (sec - '+str(t0k)+')')
# plt.ylabel('NUV Flux')
plt.ylabel('NUV Flux \n'
r'(x10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ ${\rm\AA}^{-1}$)')
plt.title('Flags = 0')
minper = 10 # my windowing
maxper = 200000
nper = 1000
pgram = LombScargleFast(fit_offset=False)
pgram.optimizer.set(period_range=(minper,maxper))
pgram = pgram.fit(time_big - min(time_big), flux_big - np.nanmedian(flux_big))
df = (1./minper - 1./maxper) / nper
f0 = 1./maxper
pwr = pgram.score_frequency_grid(f0, df, nper)
freq = f0 + df * np.arange(nper)
per = 1./freq
##
plt.figure()
plt.plot(per, pwr, lw=0.75)
plt.xlabel('Period (seconds)')
plt.ylabel('L-S Power')
plt.xscale('log')
plt.xlim(10,500)
plt.savefig('periodogram.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
Explanation: Any short timescale variability of note? Let's use a Lomb-Scargle to make a periodogram!
(limited to the 10sec windowing I imposed... NOTE: gPhoton could easily go shorter, but S/N looks dicey)
Answer: Some interesting structure around 70-80 sec, but nothing super strong
Update: David Wilson says that although there are significant pointing motions (which Scott Flemming says do occur), they don't align with the ~80sec signal here. Short timescale may be interesting! However, Keaton Bell says he saw no short timescale variations in optical last week...
Update 2: This ~80 sec structure seems to be present in the gPhoton data at all three of (9,10,11) second sampling, suggesting it is real.
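As a sketch of how that sampling test could be repeated, the light curve can simply be re-extracted with a different stepsz and the periodogram rebuilt exactly as above; the 9 s bin size and the choice of the third (long) visit are illustrative assumptions here, and the aperture parameters are the same ones used earlier.
lc_9s = gAperture(band='NUV', skypos=[ra, dec], stepsz=9., radius=0.0045,
                  annulus=[0.0050, 0.0060],
                  trange=[int(exp_data['NUV']['t0'][2]), int(exp_data['NUV']['t1'][2]) + 1])
# ...then rebuild time_big/flux_big from lc_9s and rerun the LombScargleFast block above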
End of explanation
t_unix = Time(exp_data['NUV']['t0'] + 315964800, format='unix')
mjd_time_med = t_unix.mjd
t0k = (mjd_time[0])
plt.figure(figsize=(9,5))
plt.errorbar(mjd_time_med - mjd_time_med[0], med_flux/1e-15, yerr=med_flux_err/1e-15,
linestyle='none', marker='o')
plt.xlabel('MJD - '+format(mjd_time[0], '9.3f')+' (days)')
# plt.ylabel('NUV Flux')
plt.ylabel('NUV Flux \n'
r'(x10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ ${\rm\AA}^{-1}$)')
# plt.title(target)
plt.savefig(target+'.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
Explanation: How about the long-term evolution?
Answer: looks flat!
End of explanation
# average time of the gPhoton data
print(np.mean(exp_data['NUV']['t0']))
t_unix = Time(np.mean(exp_data['NUV']['t0']) + 315964800, format='unix')
t_date = t_unix.yday
print(t_date)
mjd_date = t_unix.mjd
print(mjd_date)
Explanation: Conclusion...?
Based on data from only 4 GALEX visits, spaced over ~70 days, we can't say much about the possible evolution of this star with GALEX.
End of explanation
plt.errorbar([10, 14], [16.46, 16.499], yerr=[0.01, 0.006], linestyle='none', marker='o')
plt.xlabel('Quarter (approx)')
plt.ylabel(r'$m_{NUV}$ (mag)')
plt.ylim(16.52,16.44)
Explanation: The visits are centered in mid 2011 (Quarter 9 and 10, I believe)
Note: there was a special GALEX pointing at the Kepler field that overlapped with Quarter 14 - approximately 1 year later. This data is not available via gPhoton, but it may be able to be used! The gPhoton data shown here occurs right before the "knee" in Figure 3 of Montet & Simon (2016), and Quarter 14 is well after. Therefore a ~3% dip in the flux should be observed between this data and the Q14 visit
However: the per-visit errors shown here (std dev) are around 6-10% for this target. If we co-add it all, we may get enough precision. The Q14 data apparently has 15 total scans... so the measurement may be borderline possible!
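A rough back-of-the-envelope version of that feasibility argument, assuming ~8% scatter per visit and the quoted 15 scans:
# co-added uncertainty scales roughly as 1/sqrt(N) for independent scans
print(0.08 / np.sqrt(15))   # ~2%, to be compared with the expected ~3% dip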
Long timescale variability!
I followed up on both the GALEX archival flux measurement, and the published scan-mode flux.
The GALEX source database from MAST (from which I believe gPhoton is derived) says m_NUV = 16.46 +/- 0.01
The "Deep GALEX NUV survey of the Kepler field" catalog by Olmedo (2015), aka GALEX CAUSE Kepler, says m_NUV = 16.499 +/- 0.006
Converting these <a href="https://en.wikipedia.org/wiki/Magnitude_(astronomy)"> magnitudes </a> to a change in flux:
10^((16.46 - 16.499) / (-2.5)) = 1.03657
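A quick numerical check of that ratio and its uncertainty, using only the two catalog magnitudes quoted above and standard error propagation:
m1, e1 = 16.46, 0.01
m2, e2 = 16.499, 0.006
ratio = 10 ** ((m1 - m2) / -2.5)
err = ratio * np.log(10) / 2.5 * np.sqrt(e1**2 + e2**2)
print(ratio, err)   # roughly 1.037 +/- 0.011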
And if you trust all those catalog values as stated, here is a highly suggestive plot:
End of explanation
gck_time = Time(1029843320.995 + 315964800, format='unix')
gck_time.mjd
# and to push the comparison to absurd places...
# http://astro.uchicago.edu/~bmontet/kic8462852/reduced_lc.txt
df = pd.read_table('reduced_lc.txt', delim_whitespace=True, skiprows=1,
names=('time','raw_flux', 'norm_flux', 'model_flux'))
# time = BJD-2454833
# *MJD = JD - 2400000.5
plt.figure()
plt.plot(df['time'] + 2454833 - 2400000.5, df['model_flux'], c='grey', lw=0.2)
gtime = [mjd_date, gck_time.mjd]
gmag = np.array([16.46, 16.499])
gflux = np.array([1, 10**((gmag[1] - gmag[0]) / (-2.5))])
gerr = np.abs(np.array([0.01, 0.006]) * np.log(10) / (-2.5) * gflux)
plt.errorbar(gtime, gflux, yerr=gerr,
linestyle='none', marker='o')
plt.ylim(0.956,1.012)
plt.xlabel('MJD (days)')
plt.ylabel('Relative Flux')
# plt.savefig(target+'_compare.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
####################
# add in WISE
plt.figure()
plt.plot(df['time'] + 2454833 - 2400000.5, df['model_flux'], c='grey', lw=0.2)
plt.errorbar(gtime, gflux, yerr=gerr,
linestyle='none', marker='o')
# the WISE W1-band results from another notebook
wise_time = np.array([55330.86838, 55509.906929000004])
wise_flux = np.array([ 1.,0.98627949])
wise_err = np.array([ 0.02011393, 0.02000256])
plt.errorbar(wise_time, wise_flux, yerr=wise_err,
linestyle='none', marker='o')
plt.ylim(0.956,1.025)
plt.xlabel('MJD (days)')
plt.ylabel('Relative Flux')
# plt.savefig(target+'_compare2.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
ffi_file = '8462852.txt'
ffi = pd.read_table(ffi_file, delim_whitespace=True, names=('mjd', 'flux', 'err'))
plt.figure()
# plt.plot(df['time'] + 2454833 - 2400000.5, df['model_flux'], c='grey', lw=0.2)
plt.errorbar(ffi['mjd'], ffi['flux'], yerr=ffi['err'], linestyle='none', marker='s', c='gray',
zorder=0, alpha=0.7)
gtime = [mjd_date, gck_time.mjd]
gmag = np.array([16.46, 16.499])
gflux = np.array([1, 10**((gmag[1] - gmag[0]) / (-2.5))])
gerr = np.abs(np.array([0.01, 0.006]) * np.log(10) / (-2.5) * gflux)
plt.errorbar(gtime, gflux, yerr=gerr,
linestyle='none', marker='o', zorder=1, markersize=10)
plt.xlabel('MJD (days)')
plt.ylabel('Relative Flux')
# plt.errorbar(mjd_time_med, med_flux/np.mean(med_flux), yerr=med_flux_err/np.mean(med_flux),
# linestyle='none', marker='o', markerfacecolor='none', linewidth=0.5)
# print(np.sqrt(np.sum((med_flux_err / np.mean(med_flux))**2) / len(med_flux)))
plt.ylim(0.956,1.012)
# plt.ylim(0.9,1.1)
plt.savefig(target+'_compare.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
print('gflux: ', gflux, gerr)
Explanation: For time comparison, here is an example MJD from scan 15 of the GKM data.
(note: I grabbed a random time-like number from here. YMMV, but it's probably OK for comparing to the Kepler FFI results)
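For reference, the time conversion used throughout this notebook can be wrapped in a one-line helper; the only ingredient is the fixed 315964800 s offset between GALEX mission time and the unix epoch, as used in the cells above.
def galex_to_mjd(t_galex):
    return Time(t_galex + 315964800, format='unix').mjd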
End of explanation
# considering extinction...
# w/ thanks to the Padova Isochrone page for easy shortcut to getting these extinction values:
# http://stev.oapd.inaf.it/cgi-bin/cmd
A_NUV = 2.27499 # actually A_NUV / A_V, in magnitudes, for R_V = 3.1
A_Kep = 0.85946 # actually A_Kep / A_V, in magnitudes, for R_V = 3.1
A_W1 = 0.07134 # actually A_W1 / A_V, in magnitudes, for R_V = 3.1
wave_NUV = 2556.69 # A
wave_Kep = 6389.68 # A
wave_W1 = 33159.26 # A
print('nuv')
## use the Long Cadence data.
frac_kep = (np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -gtime[0])) < 25)[0]]) -
np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -gtime[1])) < 25)[0]]))
## could use the FFI data, but it slightly changes the extinction coefficients and they a pain in the butt
## to adjust manually because I was an idiot how i wrote this
# frac_kep = (np.median(ffi['flux'][np.where((np.abs(ffi['mjd'] -gtime[0])) < 75)[0]]) -
# np.median(ffi['flux'][np.where((np.abs(ffi['mjd'] -gtime[1])) < 75)[0]]))
print(frac_kep)
mag_kep = -2.5 * np.log10(1.-frac_kep)
print(mag_kep)
mag_nuv = mag_kep / A_Kep * A_NUV
print(mag_nuv)
frac_nuv = 10**(mag_nuv / (-2.5))
print(1-frac_nuv)
frac_kep_w = (np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -wise_time[0])) < 25)[0]]) -
np.median(df['model_flux'][np.where((np.abs(df['time']+ 2454833 - 2400000.5 -wise_time[1])) < 25)[0]]))
print('w1')
print(frac_kep_w)
mag_kep_w = -2.5 * np.log10(1.-frac_kep_w)
print(mag_kep_w)
mag_w1 = mag_kep_w / A_Kep * A_W1
print(mag_w1)
frac_w1 = 10**(mag_w1 / (-2.5))
print(1-frac_w1)
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], yerr=[0, np.sqrt(np.sum(gerr**2))],
label='Observed', marker='o')
plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--o', label=r'$R_V$=3.1 Model')
plt.legend(fontsize=10, loc='lower right')
plt.xlabel(r'Wavelength ($\rm\AA$)')
plt.ylabel('Relative Flux Decrease')
plt.ylim(0.93,1)
# plt.savefig(target+'_extinction_model_1.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
Explanation: Thinking about Dust - v1
Use basic values for the relative extinction in each band at a "standard" R_v = 3.1
Imagine if the long-term Kepler fading was due to dust. What is the extinction we'd expect in the NUV? (A: Much greater)
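The scaling applied in the cell above can be condensed into a small helper for an arbitrary optical dip; the 3% input below is purely illustrative, and the band coefficients are the R_V = 3.1 values quoted above.
def nuv_dip_from_kepler_dip(frac_kep, A_NUV_over_AV=2.27499, A_Kep_over_AV=0.85946):
    mag_kep = -2.5 * np.log10(1. - frac_kep)           # optical dimming in magnitudes
    mag_nuv = mag_kep / A_Kep_over_AV * A_NUV_over_AV  # scale to the NUV at R_V = 3.1
    return 1. - 10 ** (mag_nuv / -2.5)
print(nuv_dip_from_kepler_dip(0.03))   # a ~3% Kepler dip implies a ~8% NUV dip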
End of explanation
plt.errorbar([wave_Kep, wave_W1], [1-frac_kep_w, wise_flux[1]], yerr=[0, np.sqrt(np.sum(wise_err**2))],
label='Observed', marker='o', c='purple')
plt.plot([wave_Kep, wave_W1], [1-frac_kep_w, frac_w1], '--o', label='Extinction Model', c='green')
plt.legend(fontsize=10, loc='lower right')
plt.xlabel(r'Wavelength ($\rm\AA$)')
plt.ylabel('Relative Flux')
plt.ylim(0.93,1.03)
# plt.savefig(target+'_extinction_model_2.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
Explanation: Same plot as above, but with WISE W1 band, and considering a different time window unfortunately
End of explanation
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], yerr=[0, np.sqrt(np.sum(gerr**2))],
label='Observed1', marker='o')
plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--o', label='Extinction Model1')
plt.errorbar([wave_Kep, wave_W1], [1-frac_kep_w, wise_flux[1]], yerr=[0, np.sqrt(np.sum(wise_err**2))],
label='Observed2', marker='o', c='purple')
plt.plot([wave_Kep, wave_W1], [1-frac_kep_w, frac_w1], '--o', label='Extinction Model2')
plt.legend(fontsize=10, loc='upper left')
plt.xlabel(r'Wavelength ($\rm\AA$)')
plt.ylabel('Relative Flux')
plt.ylim(0.93,1.03)
plt.xscale('log')
# plt.savefig(target+'_extinction_model_2.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
Explanation: Combining the fading and dust model for both the NUV and W1 data.
In the IR we can't say much... so maybe toss it out since it doesn't constrain dust model one way or another
End of explanation
# the "STANDARD MODEL" for extinction
A_V = 0.0265407
R_V = 3.1
ext_out = extinction.ccm89(np.array([wave_Kep, wave_NUV]), A_V, R_V)
# (ext_out[1] - ext_out[0]) / ext_out[1]
print(10**(ext_out[0]/(-2.5)), (1-frac_kep)) # these need to match (within < 1%)
print(10**(ext_out[1]/(-2.5)), gflux[1]) # and then these won't, as per our previous plot
# print(10**((ext_out[1] - ext_out[0])/(-2.5)) / 10**(ext_out[0]/(-2.5)))
# now find an R_V (and A_V) that gives matching extinctions in both bands
# start by doing a grid over plasible A_V values at each R_V I care about... we doing this brute force!
ni=50
nj=50
di = 0.2
dj = 0.0003
ext_out_grid = np.zeros((2,ni,nj))
for i in range(ni):
R_V = 1.1 + i*di
for j in range(nj):
A_V = 0.02 + j*dj
ext_out_ij = extinction.ccm89(np.array([wave_Kep, wave_NUV]), A_V, R_V)
ext_out_grid[:,i,j] = 10**(ext_out_ij/(-2.5))
R_V_grid = 1.1 + np.arange(ni)*di
A_V_grid = 0.02 + np.arange(nj)*dj
# now plot where the Kepler extinction (A_Kep) matches the measured value, for each R_V
plt.figure()
plt.contourf( A_V_grid, R_V_grid, ext_out_grid[0,:,:], origin='lower' )
cb = plt.colorbar()
cb.set_label('A_Kep (flux)')
A_V_match = np.zeros(ni)
ext_NUV = np.zeros(ni)
for i in range(ni):
xx = np.interp(1-frac_kep, ext_out_grid[0,i,:][::-1], A_V_grid[::-1])
plt.scatter(xx, R_V_grid[i], c='r', s=10)
A_V_match[i] = xx
ext_NUV[i] = 10**(extinction.ccm89(np.array([wave_NUV]),xx, R_V_grid[i]) / (-2.5))
plt.ylabel('R_V')
plt.xlabel('A_V (mag)')
plt.show()
# Finally: at what R_V do we both match A_Kep (as above), and *now* A_NUV?
RV_final = np.interp(gflux[1], ext_NUV, R_V_grid)
print(RV_final)
# this is the hacky way to sorta do an error propogation....
RV_err = np.mean(np.interp([gflux[1] + np.sqrt(np.sum(gerr**2)),
gflux[1] - np.sqrt(np.sum(gerr**2))],
ext_NUV, R_V_grid)) - RV_final
print(RV_err)
AV_final = np.interp(gflux[1], ext_NUV, A_V_grid)
print(AV_final)
plt.plot(R_V_grid, ext_NUV)
plt.errorbar(RV_final, gflux[1], yerr=np.sqrt(np.sum(gerr**2)), xerr=RV_err, marker='o')
plt.xlabel('R_V')
plt.ylabel('A_NUV (flux)')
Explanation: Dust 2.0: Let's fit some models to the Optical-NUV data!
Start w/ simple dust model, try to fit for R_V
The procedure will be: for each R_V I want to test,
- figure out what A_Kep / A_V is for this R_V
- solve for A_NUV(R_V) given measured A_Kep
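Since extinction.ccm89 is linear in A_V, the same matching can also be done without the brute-force grid below; here is a sketch using a simple root finder (the 2-10 bracketing interval for R_V is an assumption, and the variable names reuse quantities defined earlier in the notebook).
from scipy.optimize import brentq
A_Kep_target = -2.5 * np.log10(1. - frac_kep)   # observed Kepler dimming, in magnitudes
def predicted_nuv_flux(R_V):
    # pick the A_V that reproduces the Kepler dimming exactly for this R_V
    A_V = A_Kep_target / extinction.ccm89(np.array([wave_Kep]), 1.0, R_V)[0]
    A_NUV = extinction.ccm89(np.array([wave_NUV]), A_V, R_V)[0]
    return 10 ** (A_NUV / -2.5)
R_V_fit = brentq(lambda R_V: predicted_nuv_flux(R_V) - gflux[1], 2.0, 10.0)
print(R_V_fit)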
End of explanation
plt.errorbar([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], yerr=[0, np.sqrt(np.sum(gerr**2))],
label='Observed', marker='o', linestyle='none', zorder=0, markersize=10)
plt.plot([wave_Kep, wave_NUV], [1-frac_kep, gflux[1]], label=r'$R_V$=5.0 Model', c='r', lw=3, alpha=0.7,zorder=1)
plt.plot([wave_Kep, wave_NUV], [1-frac_kep, frac_nuv], '--', label=r'$R_V$=3.1 Model',zorder=2)
plt.legend(fontsize=10, loc='lower right')
plt.xlabel(r'Wavelength ($\rm\AA$)')
plt.ylabel('Relative Flux Decrease')
plt.ylim(0.93,1)
plt.savefig(target+'_extinction_model_2.pdf', dpi=150, bbox_inches='tight', pad_inches=0.25)
# For referee: compute how many sigma away the Rv=3.1 model is from the Rv=5
print( (gflux[1] - frac_nuv) / np.sqrt(np.sum(gerr**2)), np.sqrt(np.sum(gerr**2)) )
print( (gflux[1] - frac_nuv) / 3., 3. * np.sqrt(np.sum(gerr**2)) )
# how much Hydrogen would you need to cause this fading?
# http://www.astronomy.ohio-state.edu/~pogge/Ast871/Notes/Dust.pdf
# based on data from Rachford et al. (2002) http://adsabs.harvard.edu/abs/2002ApJ...577..221R
A_Ic = extinction.ccm89(np.array([8000.]), AV_final, RV_final)
N_H = A_Ic / ((2.96 - 3.55 * ((3.1 / RV_final)-1)) * 1e-22)
print(N_H[0] , 'cm^-2')
# see also http://adsabs.harvard.edu/abs/2009MNRAS.400.2050G for R_V=3.1 only
print(2.21e21 * AV_final, 'cm^-2')
1-gflux[1]
Explanation: CCM89 gives us R_V = 5.02097489191 +/- 0.938304455977, which satisfies both the Kepler and NUV fading we see.
Such a high value of R_V~5 is not unheard of, particularly in protostars; however, Boyajian's Star does not show any other indications of being such a source.
NOTE:
If we re-run using extinction.fitzpatrick99 instead of extinction.ccm89
we get R_v = 5.80047674637 +/- 1.57810616272
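A sketch of that swap: the only change inside the grid above is the reddening-law call, since extinction.fitzpatrick99 accepts the same (wavelength, A_V, R_V) arguments.
ext_out_ij = extinction.fitzpatrick99(np.array([wave_Kep, wave_NUV]), A_V, R_V)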
End of explanation
# do simple thing first: a grid of temperatures starting at T_eff of the star (SpT = F3, T_eff = 6750)
temp0 = 6750 * u.K
wavelengths = [wave_Kep, wave_NUV] * u.AA
wavegrid = np.arange(wave_NUV, wave_Kep) * u.AA
flux_lam0 = blackbody_lambda(wavelengths, temp0)
flux_lamgrid = blackbody_lambda(wavegrid, temp0)
plt.plot(wavegrid, flux_lamgrid/1e6)
plt.scatter(wavelengths, flux_lam0/1e6)
Ntemps = 50
dT = 5 * u.K
flux_lam_out = np.zeros((2,Ntemps))
for k in range(Ntemps):
flux_new = blackbody_lambda(wavelengths, (temp0 - dT*k) )
flux_lam_out[:,k] = flux_new
# [1-frac_kep, gflux[1]]
yy = flux_lam_out[0,:] / flux_lam_out[0,0]
xx = temp0 - np.arange(Ntemps)*dT
temp_new = np.interp(1-frac_kep, yy[::-1], xx[::-1] )
# this is the hacky way to sorta do an error propogation....
err_kep = np.mean(ffi['err'][np.where((np.abs(ffi['mjd'] -gtime[0])) < 50)[0]])
temp_err = (np.interp([1-frac_kep - err_kep,
1-frac_kep + err_kep],
yy[::-1], xx[::-1]))
temp_err = (temp_err[1] - temp_err[0])/2.
print(temp_new, temp_err)
yy2 = flux_lam_out[1,:] / flux_lam_out[1,0]
NUV_new = np.interp(temp_new, xx[::-1], yy2[::-1])
print(NUV_new)
print(gflux[1], np.sqrt(np.sum(gerr**2)))
plt.plot(temp0 - np.arange(Ntemps)*dT, flux_lam_out[0,:]/flux_lam_out[0,0], label='Blackbody model (Kep)')
plt.plot(temp0 - np.arange(Ntemps)*dT, flux_lam_out[1,:]/flux_lam_out[1,0],ls='--', label='Blackbody model (NUV)')
plt.errorbar(temp_new, gflux[1], yerr=np.sqrt(np.sum(gerr**2)), marker='o', label='Observed NUV' )
plt.scatter([temp_new], [1-frac_kep], s=60, marker='s')
plt.scatter([temp_new], [NUV_new], s=60, marker='s')
plt.legend(fontsize=10, loc='upper left')
plt.xlim(6650,6750)
plt.ylim(.9,1)
plt.ylabel('Fractional flux')
plt.xlabel('Temperature')
# plt.title('Tuned to Kepler Dimming')
plt.savefig(target+'_blackbody.png', dpi=150, bbox_inches='tight', pad_inches=0.25)
Explanation: Another simple model: changes in Blackbody temperature
We have 2 bands, that's enough to constrain how the temperature should have changed with a blackbody...
So the procedure is:
- assume the stellar radius doesn't change, only temp (good approximation)
- given the drop in optical luminosity, what change in temp is needed?
- given that change in temp, what is predicted drop in NUV? does it match data?
NOTE: a better model would be changing T_eff of Phoenix model grid to match, since star isn't a blackbody through the NUV-Optical (b/c opacities!)
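For what it's worth, the grid-plus-interpolation above can also be replaced by a direct root find on the blackbody ratio; a minimal sketch, assuming the quantities defined in the previous cell (temp0, wavelengths, frac_kep, gflux) and a 6000-6750 K bracket chosen by eye:
from scipy.optimize import brentq
def kepler_ratio(T):
    # blackbody surface-brightness ratio in the Kepler band, relative to temp0
    return float(blackbody_lambda(wavelengths[0], T * u.K) / blackbody_lambda(wavelengths[0], temp0))
T_new = brentq(lambda T: kepler_ratio(T) - (1. - frac_kep), 6000., 6750.)
nuv_ratio = float(blackbody_lambda(wavelengths[1], T_new * u.K) / blackbody_lambda(wavelengths[1], temp0))
print(T_new, nuv_ratio, gflux[1])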
End of explanation |
4,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project
Step1: Step 1
Step2: Total number of columns is 14 + 1 target
Step3: Step 2
Step4: Step 3
Step5: Logistic Regression
Step6: KNN
Step7: Random Forest
Step8: Naive Bayes
Step9: Neural Network
Step10: Step 4
Step11: Aggregate manager_id to get features representing manager performance
Note
Step12: Step 5 | Python Code:
# imports
import pandas as pd
import dateutil.parser
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_recall_fscore_support
from sklearn.metrics import log_loss
import matplotlib.pyplot as plt
import seaborn as sns
from wordcloud import WordCloud
%matplotlib inline
Explanation: Project: McNulty
Date: 02/22/2017
Name: Prashant Tatineni
Project Overview
In this project, I attempt to predict the popularity (target variable: interest_level) of apartment rental listings based on listing characteristics. The data comes from a Kaggle Competition.
AWS and SQL were not used for joining to the dataset, as it was provided as a single file train.json (49,352 rows).
An additional file, test.json (74,659 rows) contains the same columns as train.json, except that the target variable, interest_level, is missing. Predictions of the target variable are to be made on the test.json file and submitted to Kaggle.
Summary of Solution Steps
1. Load data from JSON
2. Build initial predictor variables, with interest_level as the target.
3. Initial run of classification models.
4. Add category indicators and aggregated features based on manager_id.
5. Run new Random Forest model.
6. Predict interest_level for the available test dataset.
End of explanation
# Load the training dataset from Kaggle.
df = pd.read_json('data/raw/train.json')
print(df.shape)
df.head(2)
Explanation: Step 1: Load Data
End of explanation
# Distribution of target, interest_level
s = df.groupby('interest_level')['listing_id'].count()
s.plot.bar();
df_high = df.loc[df['interest_level'] == 'high']
df_medium = df.loc[df['interest_level'] == 'medium']
df_low = df.loc[df['interest_level'] == 'low']
plt.figure(figsize=(6,10))
plt.scatter(df_low.longitude, df_low.latitude, color='yellow', alpha=0.2, marker='.', label='Low')
plt.scatter(df_medium.longitude, df_medium.latitude, color='green', alpha=0.2, marker='.', label='Medium')
plt.scatter(df_high.longitude, df_high.latitude, color='purple', alpha=0.2, marker='.', label='High')
plt.xlim(-74.04,-73.80)
plt.ylim(40.6,40.9)
plt.legend(loc=2);
Explanation: Total number of columns is 14 + 1 target:
- 1 target variable (interest_level), with classes low, medium, high
- 1 photo link
- lat/long, street address, display address
- listing_id, building_id, manager_id
- numerical (price, bathrooms, bedrooms)
- created date
- text (description, features)
Features for modeling:
- bathrooms
- bedrooms
- created date (calculate age of posting in days)
- description (number of words in description)
- features (number of features)
- photos (number of photos)
- price
- features (split into category indicators)
- manager_id (with manager skill level)
Further opportunities for modeling:
- description (need to text parse)
- building_id (possibly with a building popularity level)
- photos (quality)
End of explanation
(pd.to_datetime(df['created'])).sort_values(ascending=False).head()
# The most recent records are 6/29/2016. Computing days old from 6/30/2016.
df['days_old'] = (dateutil.parser.parse('2016-06-30') - pd.to_datetime(df['created'])).apply(lambda x: x.days)
# Add other "count" features
df['num_words'] = df['description'].apply(lambda x: len(x.split()))
df['num_features'] = df['features'].apply(len)
df['num_photos'] = df['photos'].apply(len)
Explanation: Step 2: Initial Features
End of explanation
X = df[['bathrooms','bedrooms','price','latitude','longitude','days_old','num_words','num_features','num_photos']]
y = df['interest_level']
# Scaling is necessary for Logistic Regression and KNN
X_scaled = pd.DataFrame(preprocessing.scale(X))
X_scaled.columns = X.columns
X_train, X_test, y_train, y_test = train_test_split(X_scaled, y, random_state=42)
Explanation: Step 3: Modeling, First Pass
End of explanation
lr = LogisticRegression()
lr.fit(X_train, y_train)
y_test_predicted_proba = lr.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
lr = LogisticRegression(solver='newton-cg', multi_class='multinomial')
lr.fit(X_train, y_train)
y_test_predicted_proba = lr.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
Explanation: Logistic Regression
End of explanation
for i in [95,100,105]:
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, y_train)
y_test_predicted_proba = knn.predict_proba(X_test)
print log_loss(y_test, y_test_predicted_proba)
Explanation: KNN
End of explanation
rf = RandomForestClassifier(n_estimators=1000, n_jobs=-1)
rf.fit(X_train, y_train)
y_test_predicted_proba = rf.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
Explanation: Random Forest
End of explanation
bnb = BernoulliNB()
bnb.fit(X_train, y_train)
y_test_predicted_proba = bnb.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
gnb = GaussianNB()
gnb.fit(X_train, y_train)
y_test_predicted_proba = gnb.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
Explanation: Naive Bayes
End of explanation
clf = MLPClassifier(hidden_layer_sizes=(100,50,10))
clf.fit(X_train, y_train)
y_test_predicted_proba = clf.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
Explanation: Neural Network
End of explanation
# Reduce 1556 unique category text values into 34 main categories
def reduce_categories(full_list):
reduced_list = []
for i in full_list:
item = i.lower()
if 'cats allowed' in item:
reduced_list.append('cats')
if 'dogs allowed' in item:
reduced_list.append('dogs')
if 'elevator' in item:
reduced_list.append('elevator')
if 'hardwood' in item:
reduced_list.append('hardwood')
if 'doorman' in item or 'concierge' in item:
reduced_list.append('doorman')
if 'dishwasher' in item:
reduced_list.append('dishwasher')
if 'laundry' in item or 'dryer' in item:
if 'unit' in item:
reduced_list.append('laundry_in_unit')
else:
reduced_list.append('laundry')
if 'no fee' in item:
reduced_list.append('no_fee')
if 'reduced fee' in item:
reduced_list.append('reduced_fee')
if 'fitness' in item or 'gym' in item:
reduced_list.append('gym')
if 'prewar' in item or 'pre-war' in item:
reduced_list.append('prewar')
if 'dining room' in item:
reduced_list.append('dining')
if 'pool' in item:
reduced_list.append('pool')
if 'internet' in item:
reduced_list.append('internet')
if 'new construction' in item:
reduced_list.append('new_construction')
if 'wheelchair' in item:
reduced_list.append('wheelchair')
if 'exclusive' in item:
reduced_list.append('exclusive')
if 'loft' in item:
reduced_list.append('loft')
if 'simplex' in item:
reduced_list.append('simplex')
if 'fire' in item:
reduced_list.append('fireplace')
if 'lowrise' in item or 'low-rise' in item:
reduced_list.append('lowrise')
if 'midrise' in item or 'mid-rise' in item:
reduced_list.append('midrise')
if 'highrise' in item or 'high-rise' in item:
reduced_list.append('highrise')
if 'pool' in item:
reduced_list.append('pool')
if 'ceiling' in item:
reduced_list.append('high_ceiling')
if 'garage' in item or 'parking' in item:
reduced_list.append('parking')
if 'furnished' in item:
reduced_list.append('furnished')
if 'multi-level' in item:
reduced_list.append('multilevel')
if 'renovated' in item:
reduced_list.append('renovated')
if 'super' in item:
reduced_list.append('live_in_super')
if 'green building' in item:
reduced_list.append('green_building')
if 'appliances' in item:
reduced_list.append('new_appliances')
if 'luxury' in item:
reduced_list.append('luxury')
if 'penthouse' in item:
reduced_list.append('penthouse')
if 'deck' in item or 'terrace' in item or 'balcony' in item or 'outdoor' in item or 'roof' in item or 'garden' in item or 'patio' in item:
reduced_list.append('outdoor_space')
return list(set(reduced_list))
df['categories'] = df['features'].apply(reduce_categories)
text = ''
for index, row in df.iterrows():
for i in row.categories:
text = text + i + ' '
plt.figure(figsize=(12,6))
wc = WordCloud(background_color='white', width=1200, height=600).generate(text)
plt.title('Reduced Categories', fontsize=30)
plt.axis("off")
wc.recolor(random_state=0)
plt.imshow(wc);
# Create indicators
X_dummies = pd.get_dummies(df['categories'].apply(pd.Series).stack()).sum(level=0)
Explanation: Step 4: More Features
Splitting out categories into 0/1 dummy variables
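An equivalent way to build the same indicator matrix, sketched with scikit-learn (the 'categories' column is the list-valued column created in the cell above):
from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
cat_dummies = pd.DataFrame(mlb.fit_transform(df['categories']), index=df.index, columns=mlb.classes_)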
End of explanation
# Choose features for modeling (and sorting)
df = df.sort_values('listing_id')
X = df[['bathrooms','bedrooms','price','latitude','longitude','days_old','num_words','num_features','num_photos','listing_id','manager_id']]
y = df['interest_level']
# Merge indicators to X dataframe and sort again to match sorting of y
X = X.merge(X_dummies, how='outer', left_index=True, right_index=True).fillna(0)
X = X.sort_values('listing_id')
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# compute ratios and count for each manager
mgr_perf = pd.concat([X_train.manager_id,pd.get_dummies(y_train)], axis=1).groupby('manager_id').mean()
mgr_perf.head(2)
mgr_perf['manager_count'] = X_train.groupby('manager_id').count().iloc[:,1]
mgr_perf['manager_skill'] = mgr_perf['high']*1 + mgr_perf['medium']*0 + mgr_perf['low']*-1
# for training set
X_train = X_train.merge(mgr_perf.reset_index(), how='left', left_on='manager_id', right_on='manager_id')
# for test set
X_test = X_test.merge(mgr_perf.reset_index(), how='left', left_on='manager_id', right_on='manager_id')
# Fill na's with mean skill and median count
X_test['manager_skill'] = X_test.manager_skill.fillna(X_test.manager_skill.mean())
X_test['manager_count'] = X_test.manager_count.fillna(X_test.manager_count.median())
# Delete unnecessary columns before modeling
del X_train['listing_id']
del X_train['manager_id']
del X_test['listing_id']
del X_test['manager_id']
del X_train['high']
del X_train['medium']
del X_train['low']
del X_test['high']
del X_test['medium']
del X_test['low']
Explanation: Aggregate manager_id to get features representing manager performance
Note: Need to aggregate manager performance ONLY over a training subset in order to validate against test subset. So the train-test split is being performed in this step before creating the columns for manager performance.
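One optional refinement, not done here: the manager statistics could also be computed out-of-fold within the training set, so that no row is encoded using its own label. A rough sketch, assuming X_train/y_train as they are right after the split above (i.e. while manager_id is still a column); the column name manager_skill_oof is hypothetical.
from sklearn.model_selection import KFold
X_oof = X_train.copy()
X_oof['manager_skill_oof'] = np.nan
for fit_idx, enc_idx in KFold(n_splits=5, shuffle=True, random_state=42).split(X_oof):
    stats = pd.concat([X_oof.iloc[fit_idx]['manager_id'],
                       pd.get_dummies(y_train.iloc[fit_idx])], axis=1).groupby('manager_id').mean()
    skill = stats['high'] * 1 + stats['low'] * -1        # same weighting as manager_skill above
    col = X_oof.columns.get_loc('manager_skill_oof')
    X_oof.iloc[enc_idx, col] = X_oof.iloc[enc_idx]['manager_id'].map(skill).values
X_oof['manager_skill_oof'] = X_oof['manager_skill_oof'].fillna(X_oof['manager_skill_oof'].mean())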
End of explanation
rf = RandomForestClassifier(n_estimators=1000, n_jobs=-1)
rf.fit(X_train, y_train)
y_test_predicted_proba = rf.predict_proba(X_test)
log_loss(y_test, y_test_predicted_proba)
y_test_predicted = rf.predict(X_test)
accuracy_score(y_test, y_test_predicted)
precision_recall_fscore_support(y_test, y_test_predicted)
rf.classes_
plt.figure(figsize=(15,5))
pd.Series(index = X_train.columns, data = rf.feature_importances_).sort_values().plot(kind = 'bar');
Explanation: Step 5: Modeling, second pass with Random Forest
End of explanation |
4,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate Coupling Coefficients
Two split rings are placed in a broadside coupled configuration. The scalar model of each resonator is augmented by additional coefficients representing the coupling between them.
In D. A. Powell et al, Phys. Rev. B 82, 155128 (2010) similar coefficients were calculated under the quasi-static approximation for a single mode only. Here the effects of retardation and radiation losses are included, as are the contributions of multiple modes of each ring.
Step1: Creating geometry
As in previous examples, we load a pair of SRRs, place them in the simulation and visualise the result.
Step2: Solving modes of individual rings
We find the singularities for the two identical rings.
Step3: Constructing the models
We now use these singularities to construct a model for each of the rings, where $n$ represents the ring number and $m$ represents the mode number. $s=j\omega$ is the complex frequency, $s_{n,m}$ is the complex resonant frequency of each mode and $\mathbf{V}_{n, m}$ is the corresponding current of the mode. The current on each ring is represented as a sum of modes
$$\mathbf{J}_{n} = \sum_m a_{n,m}\mathbf{V}_{n, m}$$
This results in the following coupled equation system for the modal coefficients on each ring
$$\frac{s_{n,m}}{s}\left(s-s_{n,m}\right)a_{n,m} + \sum_{n'\neq n}\sum_{m'}\left(sL_{m,n,m',n'} + \frac{1}{s}S_{m,n,m',n'}\right)a_{n',m'} = \mathbf{V}_{n, m}\cdot\mathbf{E}_{inc}$$
The first term just says that the self-impedance of each mode is calculated directly from the pole expansion.
The second term is the mutual inductance $L$ and capacitance $C = S^{-1}$ between the modes of different rings. These coefficients are obtained by weighting the relevant parts of the impedance matrix, e.g.
Step4: Solving scattering based on models
Now we iterate through all frequencies, and calculate the model parameters. Their accuracy is demonstrated by using them to calculate the extinction cross-section. For reference purposes, this will be compared with the direct calculation.
Step5: Accuracy of the models
The extinction cross section is now plotted for the pair of rings, using both the simpler model and the direct calculation. Additionally, the cross section of a single ring is shown. It can be seen that this fundamental mode of a single ring is split into two coupled modes. Due to the coupling impedance being complex, the hybridised modes have different widths.
Step6: In the above figure, it can be seen that the simple model of interaction between the rings gives quite good agreement. This can be improved by using the model which considers the two modes of each ring. While the modes on the same ring are independent of each other, between meta-atoms all modes couple to each other, thus there are 3 distinct coupling impedances, and two distinct self impedance terms.
The results of this model are plotted below, showing the improved accuracy by accounting for the higher-order mode.
Step7: Coupling coefficients within the model
Now we can study the coupling coefficients between the dominant modes of the two rings. It can be seen that both $L_{mut}$ and $S_{mut}$ show quite smooth behaviour over this wide frequency range. This justifies the use of a low-order polynomial model for the interaction terms. This frequency variation is due to retardation, which is not very strong for such close separation.
However, the retardation is still strong enough to make the imaginary parts of these coupling terms non-negligible. These parts correspond to the real part of the mutual impedance, and mean that the coupling affects not only the stored energy, but also the rate of energy loss due to radiation by the modes. | Python Code:
# setup 2D and 3D plotting
%matplotlib inline
from openmodes.ipython import matplotlib_defaults
matplotlib_defaults()
import matplotlib.pyplot as plt
import numpy as np
import os.path as osp
import openmodes
from openmodes.constants import c, eta_0
from openmodes.model import EfieModelMutualWeight
from openmodes.sources import PlaneWaveSource
Explanation: Calculate Coupling Coefficients
Two split rings are placed in a broadside coupled configuration. The scalar model of each resonator is augmented by additional coefficients representing the coupling between them.
In D. A. Powell et al, Phys. Rev. B 82, 155128 (2010) similar coefficients were calculated under the quasi-static approximation for a single mode only. Here the effects of retardation and radiation losses are included, as are the contributions of multiple modes of each ring.
End of explanation
parameters={'inner_radius': 2.5e-3, 'outer_radius': 4e-3}
sim = openmodes.Simulation(notebook=True)
srr = sim.load_mesh(osp.join(openmodes.geometry_dir, "SRR.geo"),
parameters=parameters,
mesh_tol=0.7e-3)
srr1 = sim.place_part(srr)
srr2 = sim.place_part(srr, location=[0e-3, 0e-3, 2e-3])
srr2.rotate([0, 0, 1], 180)
sim.plot_3d()
Explanation: Creating geometry
As in previous examples, we load a pair of SRRs, place them in the simulation and visualise the result.
End of explanation
start_s = 2j*np.pi*1e9
num_modes = 3
estimate = sim.estimate_poles(start_s, modes=num_modes, parts=[srr1, srr2], cauchy_integral=False)
refined = sim.refine_poles(estimate)
Explanation: Solving modes of individual rings
We find the singularities for the two identical rings.
End of explanation
dominant_modes = refined.select([0]).add_conjugates()
simple_model = EfieModelMutualWeight(dominant_modes)
full_modes = refined.select([0, 2]).add_conjugates()
full_model = EfieModelMutualWeight(full_modes)
Explanation: Constructing the models
We now use these singularities to construct a model for each of the rings, where $n$ represents the ring number and $m$ represents the mode number. $s=j\omega$ is the complex frequency, $s_{n,m}$ is the complex resonant frequency of each mode and $\mathbf{V}_{n, m}$ is the corresponding current of the mode. The current on each ring is represented as a sum of modes
$$\mathbf{J}_{n} = \sum_m a_{n,m}\mathbf{V}_{n, m}$$
This results in the following coupled equation system for the modal coefficients on each ring
$$\frac{s_{n,m}}{s}\left(s-s_{n,m}\right)a_{n,m} + \sum_{n'\neq n}\sum_{m'}\left(sL_{m,n,m',n'} + \frac{1}{s}S_{m,n,m',n'}\right)a_{n',m'} = \mathbf{V}_{n, m}\cdot\mathbf{E}_{inc}$$
The first term just says that the self-impedance of each mode is calculated directly from the pole expansion.
The second term is the mutual inductance $L$ and capacitance $C = S^{-1}$ between the modes of different rings. These coefficients are obtained by weighting the relevant parts of the impedance matrix, e.g.:
$$L_{m,n,m',n'} = \mathbf{V}_{n, m} L_{n,n'}\mathbf{V}_{n', m'}$$
The right hand side is just the projection of the incident field onto each mode
Here we construct two different models, one considering only the fundamental mode of each ring, and another considering the first and third modes. Due to symmetry, the second mode of each ring does not play a part in the hybridised modes which will be considered here.
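For clarity, the coupled system for the single-mode-per-ring case can be written out schematically with plain numpy; the inputs below (pole frequencies, mutual terms and excitations) are placeholders standing in for the quantities that the model objects compute internally.
def solve_two_ring_model(s, s1, s2, L12, S12, V1, V2):
    Z11 = s1 / s * (s - s1)      # self impedance of ring 1, from its pole expansion
    Z22 = s2 / s * (s - s2)      # self impedance of ring 2
    Z12 = s * L12 + S12 / s      # mutual impedance between the two dominant modes
    Z = np.array([[Z11, Z12], [Z12, Z22]])
    return np.linalg.solve(Z, np.array([V1, V2]))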
End of explanation
num_freqs = 200
freqs = np.linspace(5e9, 10e9, num_freqs)
plane_wave = PlaneWaveSource([0, 1, 0], [1, 0, 0], p_inc=1.0)
extinction_tot = np.empty(num_freqs, np.complex128)
extinction_single = np.empty(num_freqs, np.complex128)
extinction_full_model = np.empty((num_freqs, len(full_modes)), np.complex128)
extinction_simple_model = np.empty((num_freqs, len(dominant_modes)), np.complex128)
# store the mutual coupling coefficients for plotting purposes
mutual_L = np.empty(num_freqs, np.complex128)
mutual_S = np.empty(num_freqs, np.complex128)
simple_vr = dominant_modes.vr
simple_vl = dominant_modes.vl
full_vr = full_modes.vr
full_vl = full_modes.vl
for freq_count, s in sim.iter_freqs(freqs):
impedance = sim.impedance(s)
V = sim.source_vector(plane_wave, s)
# For reference directly calculate extinction for the complete system, and for a single ring
extinction_tot[freq_count] = np.vdot(V, impedance.solve(V))
extinction_single[freq_count] = np.vdot(V["E", srr1], impedance[srr1, srr1].solve(V["E", srr1]))
# calculate based on the simple model
Z_model = simple_model.impedance(s)
V_model = simple_vl.dot(V)
I_model = Z_model.solve(V_model)
extinction_simple_model[freq_count] = V.conj().dot(simple_vr*(I_model))
# calculate based on the full model
Z_model = full_model.impedance(s)
V_model = full_vl.dot(V)
I_model = Z_model.solve(V_model)
extinction_full_model[freq_count] = V.conj().dot(full_vr*(I_model))
mutual_L[freq_count] = Z_model.matrices['L'][srr1, srr2][0, 0]
mutual_S[freq_count] = Z_model.matrices['S'][srr1, srr2][0, 0]
Explanation: Solving scattering based on models
Now we iterate through all frequencies, and calculate the model parameters. Their accuracy is demonstrated by using them to calculate the extinction cross-section. For reference purposes, this will be compared with the direct calculation.
End of explanation
# normalise the extinction to the cross-sectional area of each ring
area = np.pi*(parameters['outer_radius'])**2
Q_single = extinction_single/area
Q_pair = extinction_tot/area
Q_full_model = extinction_full_model/area
Q_simple_model = extinction_simple_model/area
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.plot(freqs*1e-9, Q_pair.real, label='pair')
plt.plot(freqs*1e-9, Q_single.real, label='single')
plt.plot(freqs*1e-9, np.sum(Q_simple_model.real, axis=1), label='model')
plt.xlim(freqs[0]*1e-9, freqs[-1]*1e-9)
plt.xlabel('f (GHz)')
plt.legend(loc='upper right')
plt.ylabel('Extinction efficiency')
plt.subplot(122)
plt.plot(freqs*1e-9, Q_pair.imag)
plt.plot(freqs*1e-9, Q_single.imag)
plt.plot(freqs*1e-9, np.sum(Q_simple_model.imag, axis=1))
plt.xlim(freqs[0]*1e-9, freqs[-1]*1e-9)
plt.ylabel('Normalised reactance')
plt.xlabel('f (GHz)')
plt.show()
Explanation: Accuracy of the models
The extinction cross section is now plotted for the pair of rings, using both the simpler model and the direct calculation. Additionally, the cross section of a single ring is shown. It can be seen that this fundamental mode of a single ring is split into two coupled modes. Due to the coupling impedance being complex, the hybridised modes have different widths.
End of explanation
plt.figure(figsize=(12,4))
plt.subplot(121)
plt.plot(freqs*1e-9, Q_pair.real, label='exact')
plt.plot(freqs*1e-9, np.sum(Q_simple_model.real, axis=1), label='single mode')
plt.plot(freqs*1e-9, np.sum(Q_full_model.real, axis=1), label='two modes')
plt.legend(loc="upper right")
plt.xlim(5.0, 7)
plt.xlabel('f (GHz)')
plt.subplot(122)
plt.plot(freqs*1e-9, Q_pair.imag)
plt.plot(freqs*1e-9, np.sum(Q_simple_model.imag, axis=1))
plt.plot(freqs*1e-9, np.sum(Q_full_model.imag, axis=1))
plt.xlim(5.3, 6.5)
plt.xlabel('f (GHz)')
plt.show()
Explanation: In the above figure, it can be seen that the simple model of interaction between the rings gives quite good agreement. This can be improved by using the model which considers the two modes of each ring. While the modes on the same ring are independent of each other, between meta-atoms all modes couple to each other, thus there are 3 distinct coupling impedances, and two distinct self impedance terms.
The results of this model are plotted below, showing the improved accuracy by accounting for the higher-order mode.
End of explanation
plt.figure()
plt.plot(freqs*1e-9, mutual_S.real, label='real')
plt.plot(freqs*1e-9, mutual_S.imag, label='imag')
plt.legend(loc="center right")
plt.ylabel('$S_{mut}$ (F$^{-1}$)')
plt.xlabel('f (GHz)')
plt.show()
plt.figure()
plt.plot(freqs*1e-9, mutual_L.real, label='real')
plt.plot(freqs*1e-9, mutual_L.imag, label='imag')
plt.legend(loc="center right")
plt.ylabel('$L_{mut}$ (H)')
plt.xlabel('f (GHz)')
plt.show()
Explanation: Coupling coefficients within the model
Now we can study the coupling coefficients between the dominant modes of the two rings. It can be seen that both $L_{mut}$ and $S_{mut}$ show quite smooth behaviour over this wide frequency range. This justifies the use of a low-order polynomial model for the interaction terms. This frequency variation is due to retardation, which is not very strong for such close separation.
However, the retardation is still strong enough to make the imaginary parts of these coupling terms non-negligible. These parts correspond to the real part of the mutual impedance, and mean that the coupling affects not only the stored energy, but also the rate of energy loss due to radiation by the modes.
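To make the low-order polynomial remark concrete, here is a quick check one could run on the arrays computed above (the quadratic order is an arbitrary choice):
coeffs_re = np.polyfit(freqs, mutual_L.real, 2)
coeffs_im = np.polyfit(freqs, mutual_L.imag, 2)
L_fit = np.polyval(coeffs_re, freqs) + 1j * np.polyval(coeffs_im, freqs)
print(np.max(np.abs(L_fit - mutual_L)) / np.max(np.abs(mutual_L)))   # relative misfit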
End of explanation |
4,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2.4. Determining the viscosity of air from the flow rate through thin tubes
Objective
Step1: The first 7 points lie on a straight line; the remaining ones do not. So turbulence sets in at roughly $Q = 10^{-4}~m^3/s$.
Step2: $$\eta = \frac{k \pi R^4}{8L}$$
Step3: Hence the viscosity of air is $\eta = (1.65 ± 0.44)\cdot10^{-5}~Pa~s$.
$$Re= v \frac{R\rho}{\eta} = \frac{\Delta l}{\Delta t}\frac{R\rho}{\eta}=\frac{\Delta V R \rho}{S\Delta t } = \frac{Q\rho}{\pi R \eta}$$
Step4: At $Re = 1000$, laminar flow becomes established over a distance $a = 0.2~R_2~Re = 0.1~d_2~Re = 0.585~m > 0.5~m = L$.
Step5: The first 7 points lie on a straight line; the remaining ones do not. So turbulence sets in at roughly $Q = 1.2\cdot10^{-4}~m^3/s$.
Step6: Hence the viscosity of air is $\eta = (1.80 ± 0.55)\cdot10^{-5}~Pa~s$. | Python Code:
import pandas
PQn = pandas.read_excel('lab-2-3.xlsx', 't-1')
PQn.head(len(PQn))
x = PQn.values[:, 6] / 3600
y = PQn.values[:, 8]
dx = PQn.values[:, 7] / 3600
dy = PQn.values[:, 9]
xl = x[:7]
yl = y[:7]
import numpy
k, b = numpy.polyfit(xl, yl, deg=1)
grid = numpy.linspace(0.04 / 3600, 0.0001)
import matplotlib.pyplot
matplotlib.pyplot.figure(figsize=(12, 8))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('$\Delta P = f(Q)$', fontweight='bold')
matplotlib.pyplot.xlabel('$Q$, м³/c')
matplotlib.pyplot.ylabel('$\Delta P$, Па')
matplotlib.pyplot.scatter(x, y)
matplotlib.pyplot.plot(grid, k * grid + b)
matplotlib.pyplot.xlim((0.04 / 3600, 0.5 / 3600))
matplotlib.pyplot.ylim((20, 370))
matplotlib.pyplot.errorbar(x, y, xerr=dx / 3600, yerr=dy, fmt='o')
matplotlib.pyplot.show()
Explanation: Lab 2.4. Determining the viscosity of air from the flow rate through thin tubes
Objective: to identify the laminar and turbulent flow regions experimentally; to determine the Reynolds number; to determine the viscosity of air; to determine experimentally how the air flow rate in the tubes depends on their radius.
Equipment: metal tubes mounted on a horizontal stand; a gas meter; an MMN-type micromanometer; a glass U-shaped tube; a stopwatch.
Setup parameters
Param. | Value | Abs. err. | Description
-------|-------------|---------|---------------------
$d_1$ | 3.85 mm | 0.05 mm | Diameter of the narrow tube
$d_2$ | 5.85 mm | 0.05 mm | Diameter of the wide tube
$\rho$ | 809.5 kg/m³ | | Density of the alcohol
$L$ | 50 cm | | Tube length (from the fourth to the fifth valve)
$l_2$ | 11.4 cm | | Distance from the first to the second valve
$l_3$ | 41.4 cm | | Distance from the first to the third valve
$l_4$ | 81.4 cm | | Distance from the first to the fourth valve
$l_5$ | 131.4 cm | | Distance from the first to the fifth valve
$Q_m$ | 0.02 m³/h | | Minimum working flow rate
$Q_M$ | 0.6 m³/h | | Maximum working flow rate
$i$ | 1.96 Pa | | One division of the manometer scale
$\rho$ | 1.2 kg/m³ | | Density of air
Procedure
At $Re = 1000$, laminar flow becomes established over a distance $a = 0.2~R_1~Re = 0.1~d_1~Re = 0.385~m < 0.5~m = L$.
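A quick numerical check of that entrance-length estimate $a = 0.2\,R\,Re = 0.1\,d\,Re$ for both tubes, with $d_1$, $d_2$ from the table above and $Re = 1000$:
d_1, d_2, Re_lam = 3.85e-3, 5.85e-3, 1000
print(0.1 * d_1 * Re_lam, 0.1 * d_2 * Re_lam)   # 0.385 m and 0.585 m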
End of explanation
k = round(k)
dk = numpy.round((((yl ** 2).mean() / (xl ** 2).mean() - (k ** 2)) / 7) ** 0.5)
print('{} ± {} Pa s / m³'.format(k, dk))
Explanation: The first 7 points lie on a straight line; the remaining ones do not. So turbulence sets in at roughly $Q = 10^{-4}~m^3/s$.
End of explanation
R_1 = 0.00385 / 2
L = 0.5
eta = numpy.round(k * numpy.pi * (R_1 ** 4) / (8 * L), 7)
print(eta)
dR_1 = 0.0005
deta = numpy.round(eta * ((dk / k) ** 2 + (dR_1 / R_1) ** 2 ) ** 0.5 , 7)
print(deta)
Explanation: $$\eta = \frac{k \pi R^4}{8L}$$
End of explanation
ro = 1.2
Q = 0.0001
Re = Q * ro / (numpy.pi * R_1 * eta)
print(Re)
Explanation: Hence the viscosity of air is $\eta = (1.65 ± 0.44)\cdot10^{-5}~Pa~s$.
$$Re= v \frac{R\rho}{\eta} = \frac{\Delta l}{\Delta t}\frac{R\rho}{\eta}=\frac{\Delta V R \rho}{S\Delta t } = \frac{Q\rho}{\pi R \eta}$$
End of explanation
PQw = pandas.read_excel('lab-2-3.xlsx', 't-2')
PQw.head(len(PQn))
x = PQw.values[:, 6] / 3600
y = PQw.values[:, 8]
dx = PQw.values[:, 7] / 3600
dy = PQw.values[:, 9]
xl = x[:10]
yl = y[:10]
k, b = numpy.polyfit(xl, yl, deg=1)
grid = numpy.linspace(0.05 / 3600, 0.00012)
import matplotlib.pyplot
matplotlib.pyplot.figure(figsize=(12, 8))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('$\Delta P = f(Q)$', fontweight='bold')
matplotlib.pyplot.xlabel('$Q$, м³/c')
matplotlib.pyplot.ylabel('$\Delta P$, Па')
matplotlib.pyplot.scatter(x, y)
matplotlib.pyplot.plot(grid, k * grid + b)
matplotlib.pyplot.xlim((0.05 / 3600, 0.6 / 3600))
matplotlib.pyplot.ylim((13, 71))
matplotlib.pyplot.errorbar(x, y, xerr=dx / 3600, yerr=dy, fmt='o')
matplotlib.pyplot.show()
Explanation: At $Re = 1000$, laminar flow becomes established over a distance $a = 0.2~R_2~Re = 0.1~d_2~Re = 0.585~m > 0.5~m = L$.
End of explanation
k = round(k)
dk = numpy.round((((yl ** 2).mean() / (xl ** 2).mean() - (k ** 2)) / 7) ** 0.5)
print('{} ± {} Pa s / m³'.format(k, dk))
R_2 = 0.00585 / 2
L = 0.5
eta = numpy.round(k * numpy.pi * (R_2 ** 4) / (8 * L), 7)
print(eta)
dR_2 = 0.0005
deta = numpy.round(eta * ((dk / k) ** 2 + (dR_2 / R_2) ** 2 ) ** 0.5 , 7)
print(deta)
Explanation: The first 7 points lie on a straight line; the remaining ones do not. So turbulence sets in at roughly $Q = 1.2\cdot10^{-4}~m^3/s$.
End of explanation
ro = 1.2
Q = 0.00012
Re = Q * ro / (numpy.pi * R_2 * eta)
print(Re)
PLn = pandas.read_excel('lab-2-3.xlsx', 't-3')
PLn.head(len(PLn))
x = PLn.values[:, 3] / 100
y = PLn.values[:, 4]
dy = PLn.values[:, 5]
import matplotlib.pyplot
matplotlib.pyplot.figure(figsize=(12, 8))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('$\Delta P = f(l)$', fontweight='bold')
matplotlib.pyplot.xlabel('$L$, м')
matplotlib.pyplot.ylabel('$\Delta P$, Па')
matplotlib.pyplot.plot(x, y)
matplotlib.pyplot.xlim((0.1, 1.35))
matplotlib.pyplot.ylim((110, 425))
matplotlib.pyplot.errorbar(x, y, yerr=dy, fmt='o')
matplotlib.pyplot.show()
PLw = pandas.read_excel('lab-2-3.xlsx', 't-4')
PLw.head(len(PLn))
x = PLw.values[:, 3] / 100
y = PLw.values[:, 4]
dy = PLw.values[:, 5]
import matplotlib.pyplot
matplotlib.pyplot.figure(figsize=(12, 8))
matplotlib.pyplot.grid(linestyle='--')
matplotlib.pyplot.title('$\Delta P = f(l)$', fontweight='bold')
matplotlib.pyplot.xlabel('$L$, м')
matplotlib.pyplot.ylabel('$\Delta P$, Па')
matplotlib.pyplot.plot(x, y)
matplotlib.pyplot.xlim((0.1, 1.35))
matplotlib.pyplot.ylim((15, 60))
matplotlib.pyplot.errorbar(x, y, yerr=dy, fmt='o')
matplotlib.pyplot.show()
Explanation: Hence the viscosity of air is $\eta = (1.80 ± 0.55)\cdot10^{-5}~Pa~s$.
End of explanation |
4,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2
Step1: Visualize data
Step2: Interactive pandas Dataframe
Using qgrid it is possible to modify the tables in place as follows
Step3: Grid and potential field
We can see the potential field generated out of the data above
Step4: From potential field to block
The potential field describes the depositional shape and direction of a basin. However, in most scenarios the real goal of structural modeling is the segmentation into layers of areas with a significant change of properties (e.g. shales and carbonates). Since we need to provide at least one point per interface, we can easily compute the value of the potential field at the intersections between two layers. Therefore, by a simple comparison between a concrete value of the potential field and the values of the interfaces, it is possible to segment the domain into layers (Fig. X).
Step5: Combining potential fields
Step6: This potential field gives the following block
Step7: Combining both potential fields, where the first potential field is younger than the second, we can obtain the following structure.
Step8: Side note | Python Code:
# Importing
import theano.tensor as T
import sys, os
sys.path.append("../GeMpy")
# Importing GeMpy modules
import GeMpy
# Reloading (only for development purposes)
import importlib
importlib.reload(GeMpy)
# Usuful packages
import numpy as np
import pandas as pn
import matplotlib.pyplot as plt
# This was to choose the gpu
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# Default options of printin
np.set_printoptions(precision = 6, linewidth= 130, suppress = True)
%matplotlib inline
#%matplotlib notebook
Explanation: Example 2: Simple model
This notebook is a series of independent cells showing how to create a simple model from the beginning to the end using GeMpy
Importing dependencies
End of explanation
geo_data = GeMpy.import_data([0,10,0,10,0,10], [50,50,50])
# =========================
# DATA GENERATION IN PYTHON
# =========================
# Layers coordinates
layer_1 = np.array([[0.5,4,7], [2,4,6.5], [4,4,7], [5,4,6]])#-np.array([5,5,4]))/8+0.5
layer_2 = np.array([[3,4,5], [6,4,4],[8,4,4], [7,4,3], [1,4,6]])
layers = np.asarray([layer_1,layer_2])
# Foliations coordinates
dip_pos_1 = np.array([7,4,7])#- np.array([5,5,4]))/8+0.5
dip_pos_2 = np.array([2.,4,4])
# Dips
dip_angle_1 = float(15)
dip_angle_2 = float(340)
dips_angles = np.asarray([dip_angle_1, dip_angle_2], dtype="float64")
# Azimuths
azimuths = np.asarray([90,90], dtype="float64")
# Polarity
polarity = np.asarray([1,1], dtype="float64")
# Setting foliations and interfaces values
GeMpy.set_interfaces(geo_data, pn.DataFrame(
data = {"X" :np.append(layer_1[:, 0],layer_2[:,0]),
"Y" :np.append(layer_1[:, 1],layer_2[:,1]),
"Z" :np.append(layer_1[:, 2],layer_2[:,2]),
"formation" : np.append(
np.tile("Layer 1", len(layer_1)),
np.tile("Layer 2", len(layer_2))),
"labels" : [r'${\bf{x}}_{\alpha \, 0}^1$',
r'${\bf{x}}_{\alpha \, 1}^1$',
r'${\bf{x}}_{\alpha \, 2}^1$',
r'${\bf{x}}_{\alpha \, 3}^1$',
r'${\bf{x}}_{\alpha \, 0}^2$',
r'${\bf{x}}_{\alpha \, 1}^2$',
r'${\bf{x}}_{\alpha \, 2}^2$',
r'${\bf{x}}_{\alpha \, 3}^2$',
r'${\bf{x}}_{\alpha \, 4}^2$'] }))
GeMpy.set_foliations(geo_data, pn.DataFrame(
data = {"X" :np.append(dip_pos_1[0],dip_pos_2[0]),
"Y" :np.append(dip_pos_1[ 1],dip_pos_2[1]),
"Z" :np.append(dip_pos_1[ 2],dip_pos_2[2]),
"azimuth" : azimuths,
"dip" : dips_angles,
"polarity" : polarity,
"formation" : ["Layer 1", "Layer 2"],
"labels" : [r'${\bf{x}}_{\beta \,{0}}$',
r'${\bf{x}}_{\beta \,{1}}$'] }))
GeMpy.get_raw_data(geo_data)
# Plotting data
GeMpy.plot_data(geo_data)
GeMpy.PlotData.annotate_plot(GeMpy.get_raw_data(geo_data),
'labels','X', 'Z', size = 'x-large')
Explanation: Visualize data
End of explanation
GeMpy.i_set_data(geo_data)
Explanation: Interactive pandas Dataframe
Using qgrid it is possible to modify the tables in place as follows:
End of explanation
from ipywidgets import widgets
from ipywidgets import interact
def cov_cubic_f(r,a = 6, c_o = 1):
if r <= a:
return c_o*(1-7*(r/a)**2+35/4*(r/a)**3-7/2*(r/a)**5+3/4*(r/a)**7)
else:
return 0
def cov_cubic_d1_f(r,a = 6., c_o = 1):
SED_dips_dips = r
f = c_o
return (f * ((-14 /a ** 2) + 105 / 4 * SED_dips_dips / a ** 3 -
35 / 2 * SED_dips_dips ** 3 / a ** 5 + 21 / 4 * SED_dips_dips ** 5 / a ** 7))
def cov_cubic_d2_f(r, a = 6, c_o = 1):
SED_dips_dips = r
f = c_o
return 7*f*(9*r**5-20*a**2*r**3+15*a**4*r-4*a**5)/(2*a**7)
def plot_potential_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0,12,50)
y = [cov_cubic_f(i, a = a, c_o = c_o) for i in x]
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.hlines(0,0,12, linestyles = "--")
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+c_o)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_Z(r)$', y = 1.08, fontsize=15, fontweight='bold')
def plot_potential_direction_var( a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0,12,50)
y = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
#ax2.scatter(0,c_o)
plt.title("Cross-Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C\'_Z / r$', y = 1.08, fontsize=15, fontweight='bold')
def plot_directionU_directionU_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0.01,12,50)
d1 = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
d2 = np.asarray([cov_cubic_d2_f(i, a = a, c_o = c_o) for i in x])
y = -(d2) # (0.5*x**2)/(x**2)*
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+y[0], s = 20)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_{\partial {Z}/ \partial x, \, \partial {Z}/ \partial x}(h_x)$'
, y = 1.08, fontsize=15)
def plot_directionU_directionV_var(a = 10, c_o = 1, nugget_effect = 0):
x = np.linspace(0.01,12,50)
d1 = np.asarray([cov_cubic_d1_f(i, a = a, c_o = c_o) for i in x])
d2 = np.asarray([cov_cubic_d2_f(i, a = a, c_o = c_o) for i in x])
y = -(d2-d1) # (0.5*x**2)/(x**2)*
fig = plt.figure()
ax1 = fig.add_subplot(121)
ax1.plot(x,c_o-np.asarray(y)+nugget_effect)
plt.title("Variogram")
plt.margins(0,0.1)
ax2 = fig.add_subplot(122)
ax2.plot(x,np.asarray(y))
ax2.scatter(0,nugget_effect+y[0], s = 20)
plt.title("Covariance Function")
plt.tight_layout()
plt.margins(0,0.1)
plt.suptitle('$C_{\partial {Z}/ \partial x, \, \partial {Z}/ \partial y}(h_x,h_y)$'
, y = 1.08, fontsize=15)
def plot_all(a = 10, c_o = 1, nugget_effect = 0):
plot_potential_direction_var(a, c_o, nugget_effect)
plot_directionU_directionU_var(a, c_o, nugget_effect)
plot_directionU_directionV_var(a, c_o, nugget_effect)
Explanation: Grid and potential field
We can see the potential field generated from the data above.
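For instance, the first potential field can be displayed with the same plotting helper that is used later in this notebook (a sketch; the n_pf index of the first series and the section argument 4 are assumptions based on that later call):
```python
GeMpy.plot_potential_field(geo_data, 4, n_pf=0, direction='y')
```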
End of explanation
GeMpy.compute_block_model(geo_data)
GeMpy.plot_section(geo_data, 13)
Explanation: From potential field to block
The potential field describes the depositional form and direction of a basin. However, in most scenarios the real goal of structural modelling is the segmentation of the domain into layers with significantly different properties (e.g. shales and carbonates). Since we need to provide at least one point per interface, we can easily compute the value of the potential field at the intersections between two layers. Therefore, by simply comparing a concrete value of the potential field with the values at the interfaces, it is possible to segment the domain into layers (Fig. X).
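The comparison can be sketched with plain NumPy (hypothetical values; GeMpy performs this internally over the whole grid):
```python
import numpy as np

# potential-field values at the interface points of each formation (hypothetical, sorted descending)
interface_values = np.array([0.8, 0.3])
# potential field evaluated at four grid cells (hypothetical)
potential_field = np.array([0.95, 0.75, 0.41, 0.12])
# each cell gets the id of the potential-field interval it falls into
formation_id = np.searchsorted(-interface_values, -potential_field)   # -> array([0, 1, 1, 2])
```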
End of explanation
layer_3 = np.array([[2,4,3], [8,4,2], [9,4,3]])
dip_pos_3 = np.array([1,4,1])
dip_angle_3 = float(80)
azimuth_3 = 90
polarity_3 = 1
GeMpy.set_interfaces(geo_data, pn.DataFrame(
data = {"X" :layer_3[:, 0],
"Y" :layer_3[:, 1],
"Z" :layer_3[:, 2],
"formation" : np.tile("Layer 3", len(layer_3)),
"labels" : [ r'${\bf{x}}_{\alpha \, 0}^3$',
r'${\bf{x}}_{\alpha \, 1}^3$',
r'${\bf{x}}_{\alpha \, 2}^3$'] }), append = True)
GeMpy.get_raw_data(geo_data,"interfaces")
GeMpy.set_foliations(geo_data, pn.DataFrame(data = {
"X" : dip_pos_3[0],
"Y" : dip_pos_3[1],
"Z" : dip_pos_3[2],
"azimuth" : azimuth_3,
"dip" : dip_angle_3,
"polarity" : polarity_3,
"formation" : [ 'Layer 3'],
"labels" : r'${\bf{x}}_{\beta \,{2}}$'}), append = True)
GeMpy.get_raw_data(geo_data, 'foliations')
GeMpy.set_data_series(geo_data, {'younger': ('Layer 1', 'Layer 2'),
'older': 'Layer 3'}, order_series = ['younger', 'older'])
GeMpy.plot_data(geo_data)
Explanation: Combining potential fields: Depositional series
In reality, most geological settings are formed by a concatenation of depositional phases separated clearly by unconformity boundaries. Each of these phases can be modelled by a potential field. In order to capture this behaviour, we can classify the formations that belong to an individual depositional phase into categories or series. The potential field computed for each of these series can be seen as a sort of evolution of the basin as if the unconformity had not occurred. Finally, sorting the temporal relation between series allows us to superpose the corresponding potential field at a specific location.
In the next example, we add a new series consisting of a single layer, 'Layer 3' (Fig. X), which generates the potential field of Fig. X and subsequently the block of Fig. X.
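The superposition itself can be sketched as a masked overwrite (hypothetical arrays; GeMpy does this per grid cell using the series order defined above):
```python
import numpy as np

younger_block = np.array([1, 1, 2, 2, 2])     # formation ids from the younger series (hypothetical)
older_block   = np.array([3, 3, 3, 3, 3])     # formation ids from the older series (hypothetical)
younger_active = np.array([True, True, True, False, False])   # where the younger series is actually present
combined_block = np.where(younger_active, younger_block, older_block)   # -> array([1, 1, 2, 3, 3])
```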
End of explanation
GeMpy.plot_potential_field(geo_data,4, n_pf=1, direction='y',
colorbar = True, cmap = 'magma' )
GeMpy.get_raw_data(geo_data)
Explanation: This potential field gives the following block
End of explanation
GeMpy.compute_block_model(geo_data, series_number= 'all', verbose = 0)
GeMpy.plot_section(geo_data, 13)
Explanation: Combining both potential fields, where the first potential field is younger than the second, we obtain the following structure.
End of explanation
plot_potential_var(10,10**2 / 14 / 3 , 0.01)
plot_all(10,10**2 / 14 / 3 , 0.01) # 0**2 /14/3
Explanation: Side note: Example of covariances involved in the cokriging system
End of explanation |
4,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Baseline is static, a straight line for each input - Test
Step1: Baseline is static, a straight line for each input - Train (small)
Step2: Baseline is static, a straight line for each input - Train Full | Python Code:
bltest = MyBaseline(npz_path=npz_test)
bltest.getMSE()
bltest.renderMSEs()
plt.show()
bltest.getHuberLoss()
bltest.renderHuberLosses()
plt.show()
%%time
bltest.get_dtw()
bltest.renderRandomTargetVsPrediction()
plt.show()
Explanation: Baseline is static, a straight line for each input - Test
End of explanation
bltrainsmall = MyBaseline(npz_path=npz_train_reduced)
bltrainsmall.getMSE()
bltrainsmall.renderMSEs()
plt.show()
bltrainsmall.getHuberLoss()
bltrainsmall.renderHuberLosses()
plt.show()
# myran = lambda: np.random.randn() * 1e-3
# dtw_scores = [fastdtw(bltrainsmall.targets[ind], bltrainsmall.targets[ind] + myran())[0]
# for ind in range(len(bltrainsmall.targets))]
# np.mean(dtw_scores)
%%time
bltrainsmall.get_dtw()
bltrainsmall.renderRandomTargetVsPrediction()
plt.show()
Explanation: Baseline is static, a straight line for each input - Train (small)
End of explanation
bltrain = MyBaseline(npz_path=npz_train)
bltrain.getMSE()
bltrain.renderMSEs()
plt.show()
bltrain.getHuberLoss()
bltrain.renderHuberLosses()
plt.show()
%%time
bltrain.get_dtw()
bltrain.renderRandomTargetVsPrediction()
plt.show()
Explanation: Baseline is static, a straight line for each input - Train Full
End of explanation |
4,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Challenge of processing large amounts of data
<ul>
<li>How do we process it quickly?
<li>How do we go about making the problem map so that the computation can be distributed?
<li> Distributed/Parallel programming is hard
</ul>
MapReduce addresses all of these challenges
<ul>
<li>Google's computational/data manipulation model
<li> Elegant way to work with big data
</ul>
Map Reduce and the New Software Stack
<ul>
<li> Covered in Chapter 2 in the Ullman book
<li> Python support packages https
Step1: If the function takes more than one parameter, then map can be given one sequence per parameter, and the sequences are consumed together one element at a time
Step2: Python Reduce Function
For an array passed in, compute a single result.
So a reduce function always has two parameters, to carry the result forward with the next element in the sequence.
The operation starts by using the first two values, then passes the result along with the next element to the function...
Step3: If there is only one value in a sequence then that element is returned; if the sequence is empty, an exception is raised.
Word Count using python in a map reduce manner
Using the Canterbury Corpus Test file from http
Step4: MapReduce using MRJOB
Find documentation for MRJOB at https | Python Code:
def cube(x): return x*x*x
map(cube,range(1,11))
Explanation: Challenge of processing large amounts of data
<ul>
<li>How do we process it quickly?
<li>How do we go about making the problem map so that the computation can be distributed?
<li> Distributed/Parallel programming is hard
</ul>
MapReduce addresses all of these challenges
<ul>
<li>Google's computational/data manipulation model
<li> Elegant way to work with big data
</ul>
Map Reduce and the New Software Stack
<ul>
<li> Covered in Chapter 2 in the Ullman book
<li> Python support packages https://pypi.python.org/pypi/mrjob
<li> map() and reduce() are basic built in functions in python https://docs.python.org/2/library/functions.html
</ul>
Built-in Python Functional Programming Tools
https://docs.python.org/2/tutorial/datastructures.html
Python Map Function
For each value in a sequence, process each one and output a new result for each element.
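In modern Python the same operation is often written as a list comprehension, which makes the element-wise behaviour explicit:
```python
[cube(x) for x in range(1, 11)]   # equivalent to map(cube, range(1, 11))
```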
End of explanation
seq = range(8)
def add(x,y): return x+y
map(add, seq,seq)
Explanation: If the function takes more than one parameter, then map can be given one sequence per parameter, and the sequences are consumed together one element at a time
End of explanation
result = map(add, seq,seq)
reduce(add, result) # adding each element of the result together
Explanation: Python Reduce Function
For an array passed in, compute a single result.
So a reduce function always has two parameters, to carry the result forward with the next element in the sequence.
The operation starts by using the first two values, then passes the result along with the next element to the function...
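For example, reducing four values with add is equivalent to the nested calls shown in the comments, and an optional third argument supplies an initial value:
```python
reduce(add, [1, 2, 3, 4])        # add(add(add(1, 2), 3), 4) -> 10
reduce(add, [1, 2, 3, 4], 100)   # starts from the initial value 100 -> 110
```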
End of explanation
import re
import pandas as pd
import numpy as np
aliceFile = open('data/canterbury/alice29.txt','r')
map1=[]
WORD_RE = re.compile(r"[\w']+")
# Create the map of words with preliminary counts
for line in aliceFile:
for w in WORD_RE.findall(line):
map1.append([w,1])
#sort the map
map2 = sorted(map1)
#Separate the map into groups by the key values
df = pd.DataFrame(map2)
uniquewords = df[0].unique()
DataFrameDict = {elem : pd.DataFrame for elem in uniquewords}
for key in DataFrameDict.keys():
DataFrameDict[key] = df[:][df[0] == key]
def wordcount(x,y):
x[1] = x[1] + y[1]
return x
#Add up the counts using reduce
for uw in uniquewords:
uarray = np.array(DataFrameDict[uw])
print reduce(wordcount,uarray)
Explanation: If there is only one value in a sequence then that element is returned; if the sequence is empty, an exception is raised.
Word Count using python in a map reduce manner
Using the Canterbury Corpus Test file from http://compression.ca/act/files/canterbury.zip,
We will attempt to count each unique word.
End of explanation
# %load code/MRWordFrequencyCount.py
from mrjob.job import MRJob
class MRWordFrequencyCount(MRJob):
def mapper(self, _ , line):
yield "chars", len(line)
yield "words", len(line.split())
yield "lines", 1
def reducer(self, key, values):
yield key, sum(values)
if __name__ == '__main__':
MRWordFrequencyCount.run()
%run code/MRWordFrequencyCount.py ./.mrjob.conf data/canterbury/alice29.txt
# MRJOB Word count function
# %load code\MRWordFreqCount.py
from mrjob.job import MRJob
import re
WORD_RE = re.compile(r"[\w']+")
class MRWordFreqCount(MRJob):
def mapper(self, _, line):
for word in WORD_RE.findall(line):
yield word.lower(), 1
def combiner(self, word, counts):
yield word, sum(counts)
def reducer(self, word, counts):
yield word, sum(counts)
if __name__ == '__main__':
MRWordFreqCount.run()
%run code/MRWordFreqCount.py data/canterbury/alice29.txt
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from __future__ import print_function
import sys
from operator import add
from pyspark import SparkContext
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: wordcount <file>", file=sys.stderr)
exit(-1)
sc = SparkContext(appName="PythonWordCount")
lines = sc.textFile(sys.argv[1], 1)
counts = lines.flatMap(lambda x: x.split(' ')) \
.map(lambda x: (x, 1)) \
.reduceByKey(add)
output = counts.collect()
for (word, count) in output:
print("%s: %i" % (word, count))
sc.stop()
Explanation: MapReduce using MRJOB
Find documentation for MRJOB at https://pythonhosted.org/mrjob/
A framework that allows you to run MapReduce jobs without Hadoop, but which will run the same jobs in a Hadoop environment.
( DUMBO and Pydoop give you lower level access to HADOOP)
Before using it for the first time install this package using : <code> pip install mrjob </code>
If that does not work, use the alternatives provided at https://pythonhosted.org/mrjob/guides/quickstart.html#installation
%%writefile myfile.py
write/save cell contents into myfile.py (use -a to append). Another alias: %%file myfile.py
%run myfile.py
run myfile.py and output results in the current cell
%load myfile.py
load "import" myfile.py into the current cell
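Outside the notebook, an MRJob script is normally launched from the shell, and the -r flag selects where the same job runs (a sketch, reusing the script and data file from this notebook):
```python
# python code/MRWordFreqCount.py data/canterbury/alice29.txt            # run locally in-process
# python code/MRWordFreqCount.py -r hadoop data/canterbury/alice29.txt  # run the same job on a Hadoop cluster
```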
End of explanation |
4,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KeplerLightCurveCelerite.ipynb
‹ KeplerLightCurve.ipynb › Copyright (C) ‹ 2017 › ‹ Anna Scaife - [email protected] ›
This program is free software
Step1: Import some libraries
Step2: Import the celerite Gaussian Process Modelling library and its covariance kernel terms
Step3: Specify the datafile containing Kepler data for the object KIC 1430163
Step4: Read the Kepler data from the file
Step5: The paper says
Step6: And the time has been made relative to the first measurement
Step7: Make a plot like the one in Figure 7
Step8: In the paper there are two suggested kernels for modelling the covariance of the Kepler data (Eqs. 55 & 56). The authors fit Eq. 56, and that is the kernel we are going to fit here.
$$
k(\tau) = \frac{B}{2+C}\,\exp\left(-\frac{\tau}{L}\right) \left[ \cos{\left( \frac{2\pi\tau}{P} \right)} + (1+C) \right]
$$
This is the same as the CustomTerm described in the celerite documentation here
Step9: We need to pick some first guess parameters. Because we're lazy we'll just start by setting them all to 1
Step10: The paper says
Step11: I'm going to set bounds on the available parameters space, i.e. our prior volume, using the ranges taken from Table 4 of https
Step12: The key parameter here is the period, which is the fourth number along. We expect this to be about 3.9 and... we're getting 4.24, so not a million miles off.
From the paper
Step13:
Step14: First we need to define a log(likelihood). We'll use the log(likelihood) implemented in the celerite library, which implements
Step15: We also need to specify our parameter priors. Here we'll just use uniform logarithmic priors. The ranges are the same as specified in Table 3 of https
Step16: We then need to combine our log likelihood and our log prior into an (unnormalised) log posterior
Step17: ok, now we have our probability stuff set up we can run the MCMC. We'll start by explicitly specifying our Kepler data as our training data
Step18: The paper then says
Step19: We can then use these inputs to initiate our sampler
Step20: The paper says
Step21: Now let's run the production MCMC | Python Code:
%matplotlib inline
Explanation: KeplerLightCurveCelerite.ipynb
‹ KeplerLightCurve.ipynb › Copyright (C) ‹ 2017 › ‹ Anna Scaife - [email protected] ›
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.
[AMS - 170829] Notebook created for TIARA Astrostatistics Summer School, Taipei, September 2017.
This notebook runs through the Gaussian Process Modelling described in Example 3 of https://arxiv.org/pdf/1703.09710.pdf and builds on the methodology presented in the accompanying lecture: "Can You Predict the Future..?"
It uses a number of Python libraries, which are all installable using pip.
This example uses the celerite GPM library (http://celerite.readthedocs.io) and the emcee package (http://dan.iel.fm/emcee/).
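A minimal install line for the packages imported below (the exact package list is an assumption based on those imports) would be:
```python
# pip install numpy scipy matplotlib autograd celerite emcee corner acor
```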
End of explanation
import numpy as np
import pylab as pl
Explanation: Import some libraries:
End of explanation
import celerite
from celerite import terms
Explanation: Import the celerite Gaussian Process Modelling library and its covariance kernel terms:
End of explanation
filename="KIC1430163.tbl"
datafile = open(filename,'r')
Explanation: Specify the datafile containing Kepler data for the object KIC 1430163:
End of explanation
time=[];value=[]
while True:
line = datafile.readline()
if not line: break
items=line.split()
if (items[0][0]!='|'):
time.append(float(items[1]))
value.append(float(items[2]))
time=np.array(time)
value=np.array(value)
print "There are ",len(time)," data points"
Explanation: Read the Kepler data from the file:
End of explanation
mean = np.mean(value)
value-=mean
norm = np.max(value)
value/=norm
Explanation: The paper says:
We set the mean function to zero
and we can see from Fig 7 that the data have also been normalised to have a maximum value of one.
So, let's also do that:
End of explanation
day1 = time[0]
time-=day1
Explanation: And the time has been made relative to the first measurement:
End of explanation
pl.subplot(111)
pl.scatter(time,value,s=0.2)
pl.axis([0.,60.,-1.,1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
Explanation: Make a plot like the one in Figure 7:
End of explanation
import autograd.numpy as np
class CustomTerm(terms.Term):
parameter_names = ("log_a", "log_b", "log_c", "log_P")
def get_real_coefficients(self, params):
log_a, log_b, log_c, log_P = params
b = np.exp(log_b)
return (
np.exp(log_a) * (1.0 + b) / (2.0 + b), np.exp(log_c),
)
def get_complex_coefficients(self, params):
log_a, log_b, log_c, log_P = params
b = np.exp(log_b)
return (
np.exp(log_a) / (2.0 + b), 0.0,
np.exp(log_c), 2*np.pi*np.exp(-log_P),
)
Explanation: In the paper there are two suggested kernels for modelling the covariance of the Kepler data (Eqs. 55 & 56). The authors fit Eq. 56, and that is the kernel we are going to fit here.
$$
k(\tau) = \frac{B}{2+C}\,\exp\left(-\frac{\tau}{L}\right) \left[ \cos{\left( \frac{2\pi\tau}{P} \right)} + (1+C) \right]
$$
This is the same as the CustomTerm described in the celerite documentation here: http://celerite.readthedocs.io/en/stable/python/kernel/
There is one small difference though - the exponent is expressed differently. This doesn't mean we need to change anything... except for our prior bounds because we're going to apply those as logarithmic bounds so we will need to put a minus sign in front of them since $\log(1/x) = -\log(x)$.
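As a quick consistency check (a sketch, assuming celerite's Term.get_value method is available), the CustomTerm defined above can be compared against a direct evaluation of the formula, with the exponent written as exp(-c*tau) where c = 1/L:
```python
tau = np.linspace(0.0, 10.0, 100)
B, C, L, P = 1.0, 0.5, 10.0, 3.9    # arbitrary trial values
k_direct = (B / (2.0 + C)) * np.exp(-tau / L) * (np.cos(2.0 * np.pi * tau / P) + (1.0 + C))
k_term = CustomTerm(np.log(B), np.log(C), np.log(1.0 / L), np.log(P)).get_value(tau)
print "max |difference| = ", np.max(np.abs(k_direct - k_term))
```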
End of explanation
log_a = 0.0;log_b = 0.0; log_c = 0.0; log_P = 0.0
kernel = CustomTerm(log_a, log_b, log_c, log_P)
gp = celerite.GP(kernel, mean=0.0)
yerr = 0.000001*np.ones(time.shape)
gp.compute(time,yerr)
print("Initial log-likelihood: {0}".format(gp.log_likelihood(value)))
t = np.arange(np.min(time),np.max(time),0.1)
# calculate expectation and variance at each point:
mu, cov = gp.predict(value, t)
std = np.sqrt(np.diag(cov))
ax = pl.subplot(111)
pl.plot(t,mu)
ax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True)
pl.scatter(time,value,s=2)
pl.axis([0.,60.,-1.,1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
Explanation: We need to pick some first guess parameters. Because we're lazy we'll just start by setting them all to 1:
End of explanation
def nll(p, y, gp):
# Update the kernel parameters:
gp.set_parameter_vector(p)
# Compute the loglikelihood:
ll = gp.log_likelihood(y)
# The scipy optimizer doesn’t play well with infinities:
return -ll if np.isfinite(ll) else 1e25
def grad_nll(p, y, gp):
# Update the kernel parameters:
gp.set_parameter_vector(p)
# Compute the gradient of the loglikelihood:
gll = gp.grad_log_likelihood(y)[1]
return -gll
Explanation: The paper says:
As with the earlier examples, we start by estimating the MAP parameters using L-BFGS-B
So let's do that. We'll use the scipy optimiser, which requires us to define a log(likelihood) function and a function for the gradient of the log(likelihood):
End of explanation
import scipy.optimize as op
# extract our initial guess at parameters
# from the celerite kernel and put it in a
# vector:
p0 = gp.get_parameter_vector()
# set prior ranges
# Note that these are in *logarithmic* space
bnds = ((-10.,0.),(-5.,5.),(-5.,-1.5),(-3.,5.))
# run optimization:
results = op.minimize(nll, p0, method='L-BFGS-B', jac=grad_nll, bounds=bnds, args=(value, gp))
# print the value of the optimised parameters:
print np.exp(results.x)
print("Final log-likelihood: {0}".format(-results.fun))
Explanation: I'm going to set bounds on the available parameter space, i.e. our prior volume, using the ranges taken from Table 4 of https://arxiv.org/pdf/1706.05459.pdf
End of explanation
# pass the parameters to the celerite kernel:
gp.set_parameter_vector(results.x)
t = np.arange(np.min(time),np.max(time),0.1)
# calculate expectation and variance at each point:
mu, cov = gp.predict(value, t)
std = np.sqrt(np.diag(cov))
ax = pl.subplot(111)
pl.plot(t,mu)
ax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True)
pl.scatter(time,value,s=2)
pl.axis([0.,60.,-1.,1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
Explanation: The key parameter here is the period, which is the fourth number along. We expect this to be about 3.9 and... we're getting 4.24, so not a million miles off.
From the paper:
This star has a published rotation period of 3.88 ± 0.58 days, measured using traditional periodogram and autocorrelation function approaches applied to Kepler data from Quarters 0–16 (Mathur et al. 2014), covering about four years.
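As an independent sanity check on that value (not part of the GP fit; this sketch assumes scipy is available), a quick Lomb-Scargle periodogram can be run on the same data:
```python
from scipy.signal import lombscargle

trial_periods = np.linspace(1.0, 10.0, 2000)          # trial periods in days
ang_freqs = 2.0 * np.pi / trial_periods               # angular frequencies expected by lombscargle
power = lombscargle(time, value - np.mean(value), ang_freqs)
print "periodogram peak at {:.2f} days".format(trial_periods[np.argmax(power)])
```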
Let's now pass these optimised parameters back to the celerite GP object and recompute our prediction:
End of explanation
import emcee
# we need to define three functions:
# a log likelihood, a log prior & a log posterior.
Explanation:
End of explanation
# set the loglikelihood:
def lnlike(p, x, y):
lnB = np.log(p[0])
lnC = p[1]
lnL = np.log(p[2])
lnP = np.log(p[3])
p0 = np.array([lnB,lnC,lnL,lnP])
# update kernel parameters:
gp.set_parameter_vector(p0)
# calculate the likelihood:
ll = gp.log_likelihood(y)
# return
return ll if np.isfinite(ll) else -np.inf  # emcee maximises this, so non-finite likelihoods must be rejected, not rewarded
Explanation: First we need to define a log(likelihood). We'll use the log(likelihood) implemented in the celerite library, which implements:
$$
\ln L = -\frac{1}{2}(y - \mu)^{\rm T} C^{-1}(y - \mu) - \frac{1}{2}\ln |C\,| - \frac{N}{2}\ln 2\pi
$$
(see Eq. 5 in https://arxiv.org/pdf/1706.05459.pdf).
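Before sampling it is worth checking that this log-likelihood is finite at the MAP point (a minimal sketch, assuming the L-BFGS-B results object from above is still in scope):
```python
rx = results.x                                                     # [log_a, log_b, log_c, log_P]
map_point = [np.exp(rx[0]), rx[1], np.exp(rx[2]), np.exp(rx[3])]   # lnlike expects [B, lnC, L, P]
print "lnlike at the MAP point: ", lnlike(map_point, time, value)
```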
End of explanation
# set the logprior
def lnprior(p):
# These ranges are taken from Table 4
# of https://arxiv.org/pdf/1703.09710.pdf
lnB = np.log(p[0])
lnC = p[1]
lnL = np.log(p[2])
lnP = np.log(p[3])
# really crappy prior:
if (-10<lnB<0.) and (-5.<lnC<5.) and (-5.<lnL<1.5) and (-3.<lnP<5.):
return 0.0
return -np.inf
#return gp.log_prior()
Explanation: We also need to specify our parameter priors. Here we'll just use uniform logarithmic priors. The ranges are the same as specified in Table 3 of https://arxiv.org/pdf/1703.09710.pdf.
<img src="table3.png">
End of explanation
# set the logposterior:
def lnprob(p, x, y):
lp = lnprior(p)
return lp + lnlike(p, x, y) if np.isfinite(lp) else -np.inf
Explanation: We then need to combine our log likelihood and our log prior into an (unnormalised) log posterior:
End of explanation
x_train = time
y_train = value
Explanation: ok, now we have our probability stuff set up we can run the MCMC. We'll start by explicitly specifying our Kepler data as our training data:
End of explanation
# put all the data into a single array:
data = (x_train,y_train)
# set your initial guess parameters
# as the output from the scipy optimiser
# remember celerite keeps these in ln() form!
# C looks like it's going to be a very small
# value - so we will sample from ln(C):
# A, lnC, L, P
p = gp.get_parameter_vector()
initial = np.array([np.exp(p[0]),p[1],np.exp(p[2]),np.exp(p[3])])
print "Initial guesses: ",initial
# set the dimension of the prior volume
# (i.e. how many parameters do you have?)
ndim = len(initial)
print "Number of parameters: ",ndim
# The number of walkers needs to be more than twice
# the dimension of your parameter space.
nwalkers = 32
# perturb your inital guess parameters very slightly (10^-5)
# to get your starting values:
p0 = [np.array(initial) + 1e-5 * np.random.randn(ndim)
for i in xrange(nwalkers)]
Explanation: The paper then says:
initialize 32 walkers by sampling from an isotropic Gaussian with a standard deviation of $10^{−5}$ centered on the MAP parameters.
So, let's do that:
End of explanation
# initalise the sampler:
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob, args=data)
Explanation: We can then use these inputs to initiate our sampler:
End of explanation
# run a few samples as a burn-in:
print("Running burn-in")
p0, lnp, _ = sampler.run_mcmc(p0, 500)
sampler.reset()
Explanation: The paper says:
We run 500 steps of burn-in, followed by 5000 steps of MCMC using emcee.
First let's run the burn-in:
End of explanation
# take the highest likelihood point from the burn-in as a
# starting point and now begin your production run:
print("Running production")
p = p0[np.argmax(lnp)]
p0 = [p + 1e-5 * np.random.randn(ndim) for i in xrange(nwalkers)]
p0, _, _ = sampler.run_mcmc(p0, 5000)
print "Finished"
import acor
# calculate the convergence time of our
# MCMC chains:
samples = sampler.flatchain
s2 = np.ndarray.transpose(samples)
tau, mean, sigma = acor.acor(s2)
print "Convergence time from acor: ", tau
print "Number of independent samples:", 5000.-(20.*tau)
# get rid of the samples that were taken
# before convergence:
delta = int(20*tau)
samples = sampler.flatchain[delta:,:]
samples[:, 2] = np.exp(samples[:, 2])
b_mcmc, c_mcmc, l_mcmc, p_mcmc = map(lambda v: (v[1], v[2]-v[1], v[1]-v[0]),
zip(*np.percentile(samples, [16, 50, 84],
axis=0)))
# specify prediction points:
t = np.arange(np.min(time),np.max(time),0.1)
# update the kernel hyper-parameters:
hp = np.array([b_mcmc[0], c_mcmc[0], l_mcmc[0], p_mcmc[0]])
# use the posterior medians computed above (hp) rather than the leftover walker position p
lnB = np.log(hp[0])
lnC = hp[1]
lnL = np.log(hp[2])
lnP = np.log(hp[3])
p0 = np.array([lnB,lnC,lnL,lnP])
gp.set_parameter_vector(p0)
print hp
# calculate expectation and variance at each point:
mu, cov = gp.predict(value, t)
std = np.sqrt(np.diag(cov))
ax = pl.subplot(111)
pl.plot(t,mu)
ax.fill_between(t,mu-std,mu+std,facecolor='lightblue', lw=0, interpolate=True)
pl.scatter(time,value,s=2)
pl.axis([0.,60.,-1.,1.])
pl.ylabel("Relative flux [ppt]")
pl.xlabel("Time [days]")
pl.show()
import corner
# Plot it.
figure = corner.corner(samples, labels=[r"$B$", r"$lnC$", r"$L$", r"$P$"],
quantiles=[0.16,0.5,0.84],
#levels=[0.39,0.86,0.99],
levels=[0.68,0.95,0.99],
title="KIC 1430163",
show_titles=True, title_args={"fontsize": 12})
Explanation: Now let's run the production MCMC:
End of explanation |
4,834 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am having a problem with minimization procedure. Actually, I could not create a correct objective function for my problem. | Problem:
import scipy.optimize
import numpy as np
np.random.seed(42)
a = np.random.rand(3,5)
x_true = np.array([10, 13, 5, 8, 40])
y = a.dot(x_true ** 2)
x0 = np.array([2, 3, 1, 4, 20])
def residual_ans(x, a, y):
s = ((y - a.dot(x**2))**2).sum()
return s
out = scipy.optimize.minimize(residual_ans, x0=x0, args=(a, y), method= 'L-BFGS-B').x |
4,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inference
Step1: The uniform prior, $U\sim(a, b)$, here with $a=-10$ and $b=15$. When this function is called, its log density is returned.
Step2: To plot the density, we take the exponential of its log density.
Step3: To specify a multidimensional uniform prior, use the same function. Here we specify, $\theta_1\sim U(2, 4)$ and $\theta_2\sim U(-7,-5)$.
Step4: Plot $p(\theta_2|\theta_1 = 3)$.
Step5: If you have a prior constrained to lie $\in[0,1]$, you can use a beta prior.
Step6: Specifying a value outside the support of the distribution returns $-\infty$ for the log density.
Step7: Each prior has a mean function that allows you to quickly check what parameterisation is being used.
Step8: Alternatively, if you need a prior constrained to lie $\in[a,b]$, but for which a Gaussian distribution might otherwise be appropriate, you can use the truncated Gaussian prior (also known as a truncated normal).
Step9: Each prior also has a sample function which allows generation of independent samples from each distribution. Using this we can sample from a Student-t density, with input dimensions (location, degrees of freedom, scale).
Step10: For models with multiple parameters, we can specify different distributions for each dimension using ComposedLogPrior.
Step11: Functions like sample and mean also work for ComposedLogPrior objects.
Step12: We also have multivariate priors in PINTS. For example, the multivariate Gaussian.
Step13: Converting prior samples to be uniform within unit cube
Some inference methods only work when samples are uniformly distributed in unit cube. PINTS contains methods to convert prior samples to those from the unit cube (often but not only using the cumulative density function (CDF)).
Here we show how this function works for the multivariate Gaussian (a case of when a different transformation to the CDF is applied).
First, we show samples from the prior.
Step14: Next, we show those samples after they have been converted to be uniform on the unit cube.
Step15: And we can convert them back again. | Python Code:
import pints
import numpy as np
import matplotlib.pyplot as plt
Explanation: Inference: Log priors
This example notebook illustrates some of the functionality that is available for LogPrior objects that are currently available within PINTS.
End of explanation
uniform_log_prior = pints.UniformLogPrior(-10, 15)
print('U(0|a=-10, b=15) = ' + str(uniform_log_prior([0])))
Explanation: The uniform prior, $U\sim(a, b)$, here with $a=-10$ and $b=15$. When this function is called, its log density is returned.
End of explanation
values = np.linspace(-20, 20, 1000)
log_prob = [uniform_log_prior([x]) for x in values]
prob = np.exp(log_prob)
plt.figure(figsize=(10,4))
plt.xlabel('theta')
plt.ylabel('Density')
plt.plot(values, prob)
plt.show()
Explanation: To plot the density, we take the exponential of its log density.
End of explanation
uniform_log_prior = pints.UniformLogPrior([2, -7], [4, -5])
Explanation: To specify a multidimensional uniform prior, use the same function. Here we specify, $\theta_1\sim U(2, 4)$ and $\theta_2\sim U(-7,-5)$.
End of explanation
values = np.linspace(-10, -4, 1000)
log_prob = [uniform_log_prior([3, x]) for x in values]
prob = np.exp(log_prob)
plt.figure(figsize=(10,4))
plt.xlabel('theta[2]')
plt.ylabel('Density')
plt.plot(values, prob)
plt.show()
Explanation: Plot $p(\theta_2|\theta_1 = 3)$.
End of explanation
beta_log_prior1 = pints.BetaLogPrior(1, 1)
beta_log_prior2 = pints.BetaLogPrior(5, 3)
beta_log_prior3 = pints.BetaLogPrior(3, 5)
beta_log_prior4 = pints.BetaLogPrior(10, 10)
values = np.linspace(0, 1, 1000)
prob1 = np.exp([beta_log_prior1([x]) for x in values])
prob2 = np.exp([beta_log_prior2([x]) for x in values])
prob3 = np.exp([beta_log_prior3([x]) for x in values])
prob4 = np.exp([beta_log_prior4([x]) for x in values])
plt.figure(figsize=(10,4))
plt.xlabel('theta')
plt.ylabel('Density')
plt.plot(values, prob1)
plt.plot(values, prob2)
plt.plot(values, prob3)
plt.plot(values, prob4)
plt.legend(['beta(1, 1)', 'beta(5, 3)', 'beta(3, 5)', 'beta(10, 10)'])
plt.show()
Explanation: If you have a prior constrained to lie $\in[0,1]$, you can use a beta prior.
End of explanation
print('beta(-0.5|a=1, b=1) = ' + str(beta_log_prior1([-0.5])))
Explanation: Specifying a value outside the support of the distribution returns $-\infty$ for the log density.
End of explanation
print('mean = ' + str(beta_log_prior3.mean()))
Explanation: Each prior has a mean function that allows you to quickly check what parameterisation is being used.
End of explanation
truncnorm_log_prior = pints.TruncatedGaussianLogPrior(2.0, 1.0, 0.0, 4.25)
values = np.linspace(-1, 6, 1000)
prob = np.exp([truncnorm_log_prior([x]) for x in values])
plt.figure(figsize=(10,4))
plt.xlabel('theta')
plt.ylabel('Density')
plt.plot(values, prob)
plt.show()
Explanation: Alternatively, if you need a prior constrained to lie $\in[a,b]$, but for which a Gaussian distribution might otherwise be appropriate, you can use the truncated Gaussian prior (also known as a truncated normal).
End of explanation
n = 10000
student_t_log_prior = pints.StudentTLogPrior(10, 8, 5)
samples = student_t_log_prior.sample(n)
plt.hist(samples, 20)
plt.xlabel('theta')
plt.ylabel('Frequency')
plt.show()
Explanation: Each prior also has a sample function which allows generation of independent samples from each distribution. Using this we can sample from a Student-t density, with input dimensions (location, degrees of freedom, scale).
End of explanation
log_prior1 = pints.GaussianLogPrior(6, 3)
log_prior2 = pints.InverseGammaLogPrior(5, 5)
log_prior3 = pints.LogNormalLogPrior(-1, 1)
composed_log_prior = pints.ComposedLogPrior(log_prior1, log_prior2, log_prior3)
# calling
composed_log_prior([-3, 1, 6])
Explanation: For models with multiple parameters, we can specify different distributions for each dimension using ComposedLogPrior.
End of explanation
print('mean = ' + str(composed_log_prior.mean()))
n = 10
samples = composed_log_prior.sample(1000)
plt.hist(samples[:, 0], alpha=0.5)
plt.hist(samples[:, 1], alpha=0.5)
plt.hist(samples[:, 2], alpha=0.5)
plt.legend(['Gaussian(6, 3)', 'InverseGamma(5, 5)', 'LogNormal(-1, 1)'])
plt.xlabel('theta')
plt.ylabel('Frequency')
plt.show()
Explanation: Functions like sample and mean also work for ComposedLogPrior objects.
End of explanation
two_d_gaussian_log_prior = pints.MultivariateGaussianLogPrior([0, 10], [[1, 0.5], [0.5, 3]])
# Contour plot of pdf
x = np.linspace(-3, 3, 100)
y = np.linspace(4, 15, 100)
X, Y = np.meshgrid(x, y)
Z = np.exp([[two_d_gaussian_log_prior([i, j]) for i in x] for j in y])
plt.contour(X, Y, Z)
plt.xlabel('theta[2]')
plt.ylabel('theta[1]')
plt.show()
Explanation: We also have multivariate priors in PINTS. For example, the multivariate Gaussian.
End of explanation
mean = [-5.5, 6.7, 3.2]
covariance = [[3.4, -0.5, -0.7], [-0.5, 2.7, 1.4], [-0.7, 1.4, 5]]
log_prior = pints.MultivariateGaussianLogPrior(mean, covariance)
n = 1000
samples = log_prior.sample(n)
plt.scatter(samples[:, 1], samples[:, 2])
plt.show()
Explanation: Converting prior samples to be uniform within unit cube
Some inference methods only work when samples are uniformly distributed in unit cube. PINTS contains methods to convert prior samples to those from the unit cube (often but not only using the cumulative density function (CDF)).
Here we show how this function works for the multivariate Gaussian (a case of when a different transformation to the CDF is applied).
First, we show samples from the prior.
End of explanation
u = []
for i in range(n):
u.append(log_prior.convert_to_unit_cube(samples[i]))
u = np.vstack(u)
plt.scatter(u[:, 1], u[:, 2])
plt.show()
Explanation: Next, we show those samples after they have been converted to be uniform on the unit cube.
End of explanation
theta = []
for i in range(n):
theta.append(log_prior.convert_from_unit_cube(u[i]))
theta = np.vstack(theta)
plt.scatter(theta[:, 1], theta[:, 2])
plt.show()
Explanation: And we can convert them back again.
End of explanation |
4,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python data types
Python can be a little strange in providing lots of data types, dynamic type allocation, and some interconversion.
Numbers
Integers, Floating point numbers, and complex numbers are available automatically.
Step1: The math module needs to be imported before you can use it.
Step2: numbers as objects
Virtually everything in python is an object. This means that it is a thing that can have multiple copies made (all of which behave independently) and which knows how to do certain operations on itself.
For example, a floating point number knows certain things that it can do as well as simply "being" a number
Step3: Strings
Step4: But one of the problems with strings as data structures is that they are immutable. To change anything, we need to make copies of the data
Step5: tuples and lists and sets
Tuples are bundles of data in a structured form but they are not vectors ... and they are immutable
Step6: Lists are more flexible than tuples, they can be assigned to, have items removed etc
Step7: Sets are special list-like collections of unique items. NOTE that the elements are not ordered (no such thing as s[1]
Step8: Dictionaries
These are very useful data collections where the information can be looked up by name instead of a numerical index. This will come in handy as a lightweight database and is commonly something we need to use when using modules to read in data.
Step9: More useful is the fact that the dictionary can have as a key, anything that can be converted using the hash function into a unique number. Strings, obviously, work well but anything immutable can be hashed
Step10: Exercise | Python Code:
f = 1.0
i = 1
print f, i
print
print "Value of f is {}, value of i is {}".format(f,i)
print
print "Value of f is {:f}, value of i is {:f}".format(f,i)
## BUT !!
print "Value of f is {:d}, value of i is {:f}".format(f,i)
c = 0.0 + 1.0j
print c
print "Value of c is {:f}".format(c)
print "Value of c**2 is {:f}".format(c**2)
Explanation: Python data types
Python can be a little strange in providing lots of data types, dynamic type allocation, and some interconversion.
Numbers
Integers, Floating point numbers, and complex numbers are available automatically.
End of explanation
import math
math.sqrt(f)
math.sqrt(c)
math.sqrt(-1)
import cmath
print cmath.sqrt(f)
print cmath.sqrt(c)
print cmath.sqrt(-1)
Explanation: The math module needs to be imported before you can use it.
End of explanation
help(f)
print f.is_integer() # Strange eh ?
print f.conjugate()
print c.conjugate()
print f.__div__(2.0) # This looks odd, but it is the way that f / 2.0 is implemented underneath
Explanation: numbers as objects
Virtually everything in python is an object. This means that it is a thing that can have multiple copies made (all of which behave independently) and which knows how to do certain operations on itself.
For example, a floating point number knows certain things that it can do as well as simply "being" a number:
End of explanation
s = 'hello'
print s[1]
print s[-1]
print len(s)
print s + ' world'
ss = "\t\t hello \n \t\t world\n \t\t !!!\n\n "
print ss
print ss.partition(' ')
print s[-1]," ", s[0:-1]
Explanation: Strings
End of explanation
s[1] = 'a'
Explanation: But one of the problems with strings as data structures is that they are immutable. To change anything, we need to make copies of the data
End of explanation
a = (1.0, 2.0, 0.0)
b = (3.0, 2.0, 4.0)
print a[1]
print a + b
print a-b
print a*b
print 2*a
a[1] = 2
e = ('a', 'b', 1.0)
2 * e
Explanation: tuples and lists and sets
Tuples are bundles of data in a structured form but they are not vectors ... and they are immutable
End of explanation
l = [1.0, 2.0, 3.0]
ll = ['a', 'b', 'c']
lll = [1.0, 'a', (1,2,3), ['f','g', 'h']]
print l
print ll
print l[2], ll[2]
print 2*l
print l+l
print lll
print lll[3], " -- sub item 3 --> ", lll[3][1]
print 2.0*l
l[2] = 2.99
print l
l.append(3.0)
print l
ll += 'b'
print ll
ll.remove('b') # removes the first one !
print ll
l += [5.0]
print "1 - ", l
l.remove(5.0)
print "2 - ", l
l.remove(3.0)
print "3 - ", l
l.remove(4.0)
print "4 - ", l
Explanation: Lists are more flexible than tuples, they can be assigned to, have items removed etc
End of explanation
s = set([6,5,4,3,2,1,1,1,1])
print s
s.add(7)
print s
s.add(1)
s2 = set([5,6,7,8,9,10,11])
s.intersection(s2)
s.union(s2)
Explanation: Sets are special list-like collections of unique items. NOTE that the elements are not ordered (no such thing as s[1])
End of explanation
d = { "item1": ['a','b','c'], "item2": ['c','d','e']}
print d["item1"]
print d["item1"][1]
d1 = {"Small Number":1.0, "Tiny Number":0.00000001, "Big Number": 100000000.0}
print d1["Small Number"] + d1["Tiny Number"]
print d1["Small Number"] + d1["Big Number"]
print d1.keys()
for k in d1.keys():
print "{:>15s}".format(k)," --> ", d1[k]
Explanation: Dictionaries
These are very useful data collections where the information can be looked up by name instead of a numerical index. This will come in handy as a lightweight database and is commonly something we need to use when using modules to read in data.
End of explanation
def hashfn(item):
try:
h = hash(item)
print "{:>25}".format(item), " --> ", h
except:
print "{:>25}".format(item), " --> unhashable type {}".format(type(item))
return
hashfn("abc")
hashfn("abd")
hashfn("alphabeta")
hashfn("abcdefghi")
hashfn(1.0)
hashfn(1.00000000000001)
hashfn(2.1)
hashfn(('a','b'))
hashfn((1.0,2.0))
hashfn([1,2,3])
import math
hashfn(math.sin) # weird ones !!
Explanation: More useful is the fact that the dictionary can have as a key, anything that can be converted using the hash function into a unique number. Strings, obviously, work well but anything immutable can be hashed:
End of explanation
# Name: ( area code, number )
phone_book = { "Achibald": ("04", "1234 4321"),
"Barrington": ("08", "1111 4444"),
"Chaotica" : ("07", "5555 1234") }
reverse_phone_book = {}
for key in phone_book.keys():
reverse_phone_book[phone_book[key]] = key
print reverse_phone_book[('07','5555 1234')]
Explanation: Exercise: Build a reverse lookup table
Suppose you have this dictionary of phone numbers:
```python
phone_book = { "Achibald": ("04", "1234 4321"),
"Barrington": ("08", "1111 4444"),
"Chaotica" : ("07", "5555 1234") }
```
Can you construct a reverse phone book to look up who is calling from their phone number ?
Solution: Here is a possible solution for the simple version of the problem but this could still use some error checking (if you type in a wrong number)
End of explanation |
4,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Save this file as studentid1_studentid2_lab#.ipynb
(Your student-id is the number shown on your student card.)
E.g. if you work with 3 people, the notebook should be named
Step1: Lab 1
Step2: $\newcommand{\bPhi}{\mathbf{\Phi}}$
$\newcommand{\bx}{\mathbf{x}}$
$\newcommand{\bw}{\mathbf{w}}$
$\newcommand{\bt}{\mathbf{t}}$
$\newcommand{\by}{\mathbf{y}}$
$\newcommand{\bm}{\mathbf{m}}$
$\newcommand{\bS}{\mathbf{S}}$
$\newcommand{\bI}{\mathbf{I}}$
Part 1
Step3: 1.2 Polynomial regression (10 points)
Write a method fit_polynomial(x, t, M) that finds the maximum-likelihood solution of an unregularized $M$-th order polynomial for some dataset x. The error function to minimize w.r.t. $\bw$ is
Step4: 1.3 Plot (5 points)
Sample a dataset with $N=10$, and fit four polynomials with $M \in (0, 2, 4, 8)$.
For each value of $M$, plot the prediction function, along with the data and the original cosine function. The resulting figure should look similar to fig 1.4 of the Bishop's book. Note that you can use matplotlib's plt.pyplot(.) functionality for creating grids of figures.
Step5: 1.4 Regularized linear regression (10 points)
Write a method fit_polynomial_reg(x, t, M, lamb) that fits a regularized $M$-th order polynomial to the periodic data, as discussed in the lectures, where lamb is the regularization term lambda. (Note that 'lambda' cannot be used as a variable name in Python since it has a special meaning). The error function to minimize w.r.t. $\bw$
Step6: 1.5 Model selection by cross-validation (15 points)
Use cross-validation to find a good choice of $M$ and $\lambda$, given a dataset of $N=10$ datapoints generated with gen_cosine(20). You should write a function that tries (loops over) a reasonable range of choices of $M$ and $\lambda$, and returns the choice with the best cross-validation error. In this case you use $K=5$ folds.
You can let $M \in (0, 1, ..., 10)$, and let $\lambda \in (e^{-10}, e^{-9}, ..., e^{0})$.
a) (5 points) First of all, write a method pred_error(x_train, x_valid, t_train, t_valid, M, lamb) that compares the prediction of your method fit_polynomial_reg for a given set of parameters $M$ and $\lambda$ to t_valid. It should return the prediction error for a single fold.
Step7: b) (10 points) Now write a method find_best_m_and_lamb(x, t) that finds the best values for $M$ and $\lambda$. The method should return the best $M$ and $\lambda$. To get you started, here is a method you can use to generate indices of cross-validation folds.
Step8: 1.7 Plot best cross-validated fit (5 points)
For some dataset with $N = 10$, plot the model with the optimal $M$ and $\lambda$ according to the cross-validation error, using the method you just wrote. In addition, the plot should show the dataset itself and the function that we try to approximate. Let the plot make clear which $M$ and $\lambda$ were found.
Step9: Part 2
Step10: 2.2 Compute Posterior (15 points)
You're going to implement a Bayesian linear regression model, and fit it to the periodic data. Your regression model has a zero-mean isotropic Gaussian prior over the parameters, governed by a single (scalar) precision parameter $\alpha$, i.e.
Step11: 2.3 Prediction (10 points)
The predictive distribution of Bayesian linear regression is
Step12: 2.4 Plot predictive distribution (10 points)
a) (5 points) Generate 10 datapoints with gen_cosine2(10). Compute the posterior mean and covariance for a Bayesian polynomial regression model with $M=4$, $\alpha=\frac{1}{2}$ and $\beta=\frac{1}{0.2^2}$.
Plot the Bayesian predictive distribution, where you plot (for $x$ between 0 and $2 \pi$) $t$'s predictive mean and a 1-sigma predictive variance using plt.fill_between(..., alpha=0.1) (the alpha argument induces transparency).
Include the datapoints in your plot.
Step13: b) (5 points) For a second plot, draw 100 samples from the parameters' posterior distribution. Each of these samples is a certain choice of parameters for 4-th order polynomial regression.
Display each of these 100 polynomials. | Python Code:
NAME = "Laura Ruis"
NAME2 = "Fredie Haver"
NAME3 = "Lukás Jelínek"
EMAIL = "[email protected]"
EMAIL2 = "[email protected]"
EMAIL3 = "[email protected]"
Explanation: Save this file as studentid1_studentid2_lab#.ipynb
(Your student-id is the number shown on your student card.)
E.g. if you work with 3 people, the notebook should be named:
12301230_3434343_1238938934_lab1.ipynb.
This will be parsed by a regexp, so please double check your filename.
Before you turn this problem in, please make sure everything runs correctly. First, restart the kernel (in the menubar, select Kernel$\rightarrow$Restart) and then run all cells (in the menubar, select Cell$\rightarrow$Run All).
Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE", as well as your names and email adresses below.
End of explanation
%pylab inline
plt.rcParams["figure.figsize"] = [20,10]
Explanation: Lab 1: Linear Regression and Overfitting
Machine Learning 1, September 2017
Notes on implementation:
You should write your code and answers in this IPython Notebook: http://ipython.org/notebook.html. If you have problems, please contact your teaching assistant.
Please write your answers right below the questions.
Among the first lines of your notebook should be "%pylab inline". This imports all required modules, and your plots will appear inline.
Refer to last week's lab notes, i.e. http://docs.scipy.org/doc/, if you are unsure about what function to use. There are different correct ways to implement each problem!
For this lab, your regression solutions should be in closed form, i.e., should not perform iterative gradient-based optimization but find the exact optimum directly.
use the provided test boxes to check if your answers are correct
End of explanation
def gen_cosine(n):
x = np.array(np.linspace(0, 2 * np.pi, n))
t = np.random.normal(np.cos(x), 0.2)
return x, t
def gen_real_cosine(n):
real_x = np.array(np.linspace(0, 2 * np.pi, n))
real_y = np.cos(real_x)
return real_x,real_y
### Test your function
np.random.seed(5)
N = 10
x, t = gen_cosine(N)
assert x.shape == (N,), "the shape of x is incorrect"
assert t.shape == (N,), "the shape of t is incorrect"
Explanation: $\newcommand{\bPhi}{\mathbf{\Phi}}$
$\newcommand{\bx}{\mathbf{x}}$
$\newcommand{\bw}{\mathbf{w}}$
$\newcommand{\bt}{\mathbf{t}}$
$\newcommand{\by}{\mathbf{y}}$
$\newcommand{\bm}{\mathbf{m}}$
$\newcommand{\bS}{\mathbf{S}}$
$\newcommand{\bI}{\mathbf{I}}$
Part 1: Polynomial Regression
1.1. Generate periodic data (5 points)
Write a method gen_cosine(N) that generates toy data like in fig 1.2 of Bishop's book. The method should have a parameter $N$, and should return $N$-dimensional vectors $\bx$ and $\bt$, where $\bx$ contains evenly spaced values from 0 to (including) 2$\pi$, and the elements $t_i$ of $\bt$ are distributed according to:
$$t_i \sim \mathcal{N}(\mu_i, \sigma^2)$$
where $x_i$ is the $i$-th elements of $\bf{x}$, the mean $\mu_i = \cos(x_i)$ and the standard deviation $\sigma = 0.2$.
End of explanation
def designmatrix(x, M): # it is highly recommended to write a helper function that computes Phi
design_matrix = []
for i in range(M+1):
design_matrix.append([data ** i for data in x])
design_matrix = np.matrix(design_matrix).transpose()
return design_matrix
def LSE(phi, t):
phi_squared_inv = np.linalg.inv(np.matmul(phi.transpose(), phi))
mp_pseudo_inv = np.matmul(phi_squared_inv, phi.transpose())
return np.reshape(np.array(np.matmul(mp_pseudo_inv, t)), phi.shape[1])
def fit_polynomial(x, t, M):
Phi = designmatrix(x, M)
w_ml = LSE(Phi, t)
return w_ml, Phi
### Test your function
N = 10
x = np.square((np.linspace(-1, 1, N)))
t = 0.5*x + 1.5
m = 2
w, Phi = fit_polynomial(x,t,m)
assert w.shape == (m+1,), "The shape of w is incorrect"
assert Phi.shape == (N, m+1), "The shape of Phi is incorrect"
Explanation: 1.2 Polynomial regression (10 points)
Write a method fit_polynomial(x, t, M) that finds the maximum-likelihood solution of an unregularized $M$-th order polynomial for some dataset x. The error function to minimize w.r.t. $\bw$ is:
$E(\bw) = \frac{1}{2} (\bPhi\bw - \bt)^T(\bPhi\bw - \bt)$
where $\bPhi$ is the feature matrix (or design matrix) as explained in Bishop's book at section 3.1.1, $\bt$ is the vector of target values. Your method should return a vector $\bw$ with the maximum-likelihood parameter estimates, as well as the feature matrix $\bPhi$.
End of explanation
N = 10
x,t = gen_cosine(N)
real_x , real_y = gen_real_cosine(100)
w_ml = []
Phi = []
M = [0, 2, 4, 8]
for m in M:
w,phi = fit_polynomial(x,t,m)
w_ml.append(w)
Phi.append(phi)
predictions = []
for i in range(len(M)):
predictions.append(np.squeeze(np.asarray(np.matmul(Phi[i],w_ml[i].transpose()))))
f, ((ax1, ax2), (ax4, ax8)) = plt.subplots(2, 2, sharex='col', sharey='row')
ax1.scatter(x,t)
ax1.plot(real_x,real_y)
ax1.plot(x,predictions[0])
ax1.set_title("M = "+ str(M[0]))
ax2.scatter(x,t)
ax2.plot(real_x,real_y)
ax2.plot(x,predictions[1])
ax2.set_title("M = "+ str(M[1]))
ax4.scatter(x,t)
ax4.plot(real_x,real_y)
ax4.plot(x,predictions[2])
ax4.set_title("M = "+ str(M[2]))
ax8.scatter(x,t)
ax8.plot(real_x,real_y)
ax8.plot(x,predictions[3])
ax8.set_title("M = "+ str(M[3]))
Explanation: 1.3 Plot (5 points)
Sample a dataset with $N=10$, and fit four polynomials with $M \in (0, 2, 4, 8)$.
For each value of $M$, plot the prediction function, along with the data and the original cosine function. The resulting figure should look similar to fig 1.4 of the Bishop's book. Note that you can use matplotlib's plt.pyplot(.) functionality for creating grids of figures.
End of explanation
def LSEReg(phi,t,lamb):
lambid = lamb*np.identity(phi.shape[1])
phi_squared = np.matmul(phi.transpose(), phi)
lamphi_inv = np.linalg.inv(numpy.add(lambid,phi_squared))
mp_pseudo_inv = np.matmul(lamphi_inv, phi.transpose())
return np.reshape(np.array(np.matmul(mp_pseudo_inv, t)), phi.shape[1])
def fit_polynomial_reg(x, t, m, lamb):
Phi = designmatrix(x, m)
w = LSEReg(Phi, t, lamb)
return w, Phi
### Test your function
N = 10
x = np.square((np.linspace(-1, 1, N)))
t = 0.5*x + 1.5
m = 2
lamb = 0.1
w, Phi = fit_polynomial_reg(x,t,m, lamb)
assert w.shape == (m+1,), "The shape of w is incorrect"
assert Phi.shape == (N, m+1), "The shape of Phi is incorrect"
Explanation: 1.4 Regularized linear regression (10 points)
Write a method fit_polynomial_reg(x, t, M, lamb) that fits a regularized $M$-th order polynomial to the periodic data, as discussed in the lectures, where lamb is the regularization term lambda. (Note that 'lambda' cannot be used as a variable name in Python since it has a special meaning). The error function to minimize w.r.t. $\bw$:
$E(\bw) = \frac{1}{2} (\bPhi\bw - \bt)^T(\bPhi\bw - \bt) + \frac{\lambda}{2} \mathbf{w}^T \mathbf{w}$
For background, see section 3.1.4 of Bishop's book.
The function should return $\bw$ and $\bPhi$.
End of explanation
def pred_error(x_train, x_valid, t_train, t_valid, M, reg):
w, Phi = fit_polynomial_reg(x_train,t_train,M,reg)
# w, Phi = fit_polynomial(x_train,t_train,M)
valid_Phi = designmatrix(x_valid, M)
t_trained = np.matmul(valid_Phi, w.transpose())
err =0.5*np.matmul((t_valid - t_trained),(t_valid - t_trained).transpose())
return err
### Test your function
N = 10
x = np.linspace(-1, 1, N)
t = 0.5*np.square(x) + 1.5
M = 2
reg = 0.1
pred_err = pred_error(x[:-2], x[-2:], t[:-2], t[-2:], M, reg)
assert pred_err < 0.01, "pred_err is too big"
Explanation: 1.5 Model selection by cross-validation (15 points)
Use cross-validation to find a good choice of $M$ and $\lambda$, given a dataset of $N=10$ datapoints generated with gen_cosine(20). You should write a function that tries (loops over) a reasonable range of choices of $M$ and $\lambda$, and returns the choice with the best cross-validation error. In this case you use $K=5$ folds.
You can let $M \in (0, 1, ..., 10)$, and let $\lambda \in (e^{-10}, e^{-9}, ..., e^{0})$.
a) (5 points) First of all, write a method pred_error(x_train, x_valid, t_train, t_valid, M, lamb) that compares the prediction of your method fit_polynomial_reg for a given set of parameters $M$ and $\lambda$ to t_valid. It should return the prediction error for a single fold.
End of explanation
def kfold_indices(N, k):
all_indices = np.arange(N,dtype=int)
np.random.shuffle(all_indices)
idx = [int(i) for i in np.floor(np.linspace(0,N,k+1))]
train_folds = []
valid_folds = []
for fold in range(k):
valid_indices = all_indices[idx[fold]:idx[fold+1]]
valid_folds.append(valid_indices)
train_folds.append(np.setdiff1d(all_indices, valid_indices))
return train_folds, valid_folds
def find_best_m_and_lamb(x, t):
folds = 5
train_idx, valid_idx = kfold_indices(len(x),folds)
M_best = 2
lamb_best = 1
err_best = 1000000000
for m in range(11):
for exponent in range(0,11):
total_err = 0
for i in range(folds):
x_train = np.take(x, train_idx[i])
t_train = np.take(t, train_idx[i])
x_valid = np.take(x, valid_idx[i])
t_valid = np.take(t, valid_idx[i])
total_err =total_err + pred_error(x_train, x_valid, t_train, t_valid, m, math.exp(-exponent))
# print(m,exponent,err)
if total_err < err_best:
err_best = total_err
M_best = m
lamb_best = math.exp(-exponent)
return M_best, lamb_best
### If you want you can write your own test here
np.random.seed(9)
N=10
x, t = gen_cosine(N)
print(find_best_m_and_lamb(x,t))
Explanation: b) (10 points) Now write a method find_best_m_and_lamb(x, t) that finds the best values for $M$ and $\lambda$. The method should return the best $M$ and $\lambda$. To get you started, here is a method you can use to generate indices of cross-validation folds.
End of explanation
N=20
x, t = gen_cosine(N)
real_x,real_y = gen_real_cosine(100)
M,lamb = find_best_m_and_lamb(x,t)
w, Phi = fit_polynomial(x,t,M)
res = np.matmul(Phi,w.transpose())
res = np.asarray(res).reshape(-1)
plt.plot(real_x,real_y)
plt.scatter(x,t)
plt.plot(x,res)
plt.figtext(0.6, 0.8, "M = "+ str(M), size='xx-large')
plt.figtext(0.6, 0.77, "Lambda = exp("+ str(math.log(lamb))+" )", size='xx-large')
plt.show()
Explanation: 1.7 Plot best cross-validated fit (5 points)
For some dataset with $N = 10$, plot the model with the optimal $M$ and $\lambda$ according to the cross-validation error, using the method you just wrote. In addition, the plot should show the dataset itself and the function that we try to approximate. Let the plot make clear which $M$ and $\lambda$ were found.
End of explanation
def gen_cosine2(n):
x = np.array(np.random.uniform(0, 2 * np.pi, n))
t = np.random.normal(np.cos(x), 0.2)
return x, t
### Test your function
np.random.seed(5)
N = 10
x, t = gen_cosine2(N)
assert x.shape == (N,), "the shape of x is incorrect"
assert t.shape == (N,), "the shape of t is incorrect"
Explanation: Part 2: Bayesian Linear (Polynomial) Regression
2.1 Cosine 2 (5 points)
Write a function gen_cosine2(N) that behaves identically to gen_cosine(N) except that the generated values $x_i$ are not linearly spaced, but drawn from a uniform distribution between $0$ and $2 \pi$.
End of explanation
def bayesian_LR(phi, t, alpha, beta, M):
phi_squared = np.matmul(phi.transpose(), phi)
Alpha = np.multiply(alpha, np.identity(M + 1))
cov = np.linalg.inv(np.add(Alpha, np.multiply(beta, phi_squared)))
phi_t = np.matmul(phi.transpose(), t).transpose()
mean = np.multiply(beta, np.matmul(cov, phi_t))
mean = np.reshape(np.array(mean), M+1)
return cov, mean
def fit_polynomial_bayes(x, t, M, alpha, beta):
Phi = designmatrix(x, M)
S, m = bayesian_LR(Phi, t, alpha, beta, M)
return m, S, Phi
### Test your function
N = 10
x = np.linspace(-1, 1, N)
t = 0.5*np.square(x) + 1.5
M = 2
alpha = 0.5
beta = 25
m, S, Phi = fit_polynomial_bayes(x, t, M, alpha, beta)
assert m.shape == (M+1,), "the shape of m is incorrect"
assert S.shape == (M+1, M+1), "the shape of S is incorrect"
assert Phi.shape == (N, M+1), "the shape of Phi is incorrect"
Explanation: 2.2 Compute Posterior (15 points)
You're going to implement a Bayesian linear regression model, and fit it to the periodic data. Your regression model has a zero-mean isotropic Gaussian prior over the parameters, governed by a single (scalar) precision parameter $\alpha$, i.e.:
$$p(\bw \;|\; \alpha) = \mathcal{N}(\bw \;|\; 0, \alpha^{-1} \bI)$$
The covariance and mean of the posterior are given by:
$$\bS_N= \left( \alpha \bI + \beta \bPhi^T \bPhi \right)^{-1} $$
$$\bm_N = \beta\; \bS_N \bPhi^T \bt$$
where $\alpha$ is the precision of the predictive distribution, and $\beta$ is the noise precision.
See MLPR chapter 3.3 for background.
Write a method fit_polynomial_bayes(x, t, M, alpha, beta) that returns the mean $\bm_N$ and covariance $\bS_N$ of the posterior for a $M$-th order polynomial. In addition it should return the design matrix $\bPhi$. The arguments x, t and M have the same meaning as in question 1.2.
End of explanation
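# Hedged sanity check (my addition, not part of the assignment): with this prior the
# posterior mean equals the regularized least-squares solution from 1.4 with
# lambda = alpha / beta, since m_N = ((alpha/beta) I + Phi^T Phi)^{-1} Phi^T t.
# Expect True, provided fit_polynomial_reg minimizes the E(w) given in 1.4.
w_ridge, _ = fit_polynomial_reg(x, t, M, alpha / beta)
print(np.allclose(np.asarray(m).ravel(), np.asarray(w_ridge).ravel()))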
def predict_polynomial_bayes(x, m, S, beta):
M = len(m)
sigma = []
mean = []
for i in x:
phi = designmatrix([i], M - 1)
var = 1/beta + np.matmul(np.reshape(phi.transpose(),(1,M)), np.reshape(np.matmul(S, phi.transpose()),(M,1)))
mean.append(np.matmul(np.reshape(m, (1,M)), phi.transpose()))
sigma.append(var)
    mean = np.reshape(np.array(mean), (len(x),))
    sigma = np.reshape(np.array(sigma), (len(x),))
Phi = designmatrix(x, M - 1)
return mean, sigma, Phi
### Test your function
np.random.seed(5)
N = 10
x = np.linspace(-1, 1, N)
m = np.empty(3)
S = np.empty((3, 3))
beta = 25
mean, sigma, Phi = predict_polynomial_bayes(x, m, S, beta)
assert mean.shape == (N,), "the shape of mean is incorrect"
assert sigma.shape == (N,), "the shape of sigma is incorrect"
assert Phi.shape == (N, m.shape[0]), "the shape of Phi is incorrect"
Explanation: 2.3 Prediction (10 points)
The predictive distribution of Bayesian linear regression is:
$$ p(t \;|\; \bx, \bt, \alpha, \beta) = \mathcal{N}(t \;|\; \bm_N^T \phi(\bx), \sigma_N^2(\bx))$$
$$ \sigma_N^2 = \frac{1}{\beta} + \phi(\bx)^T \bS_N \phi(\bx) $$
where $\phi(\bx)$ are the computed features for a new datapoint $\bx$, and $t$ is the predicted variable for datapoint $\bx$.
Write a function that predict_polynomial_bayes(x, m, S, beta) that returns the predictive mean, variance and design matrix $\bPhi$ given a new datapoint x, posterior mean m, posterior variance S and a choice of model variance beta.
End of explanation
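# A hedged, vectorized restatement of the predictive equations above; it computes the
# same quantities as the per-point loop in predict_polynomial_bayes (and, like it,
# the returned "sigma" is the predictive variance sigma_N^2, not the std deviation).
def predict_polynomial_bayes_vec(x, m, S, beta):
    Phi = np.asarray(designmatrix(x, len(m) - 1))
    mean = Phi.dot(np.asarray(m).ravel())
    sigma = 1.0 / beta + np.einsum('ij,jk,ik->i', Phi, np.asarray(S), Phi)
    return mean, sigma, Phi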
np.random.seed(101)
real_x, real_y = gen_real_cosine(100)
N = 10
M = 4
alpha = 0.5
x, t = gen_cosine2(N)
perm = x.argsort()
beta = 25
m, S, Phi = fit_polynomial_bayes(x, t, M, alpha, beta)
plt.plot(real_x, real_y)
plt.scatter(x[perm], t[perm])
mean_pt, var, Phi = predict_polynomial_bayes(x, m, S, beta)
plt.plot(x[perm], mean_pt[perm])
plt.fill_between(x[perm], mean_pt[perm] - np.sqrt(var[perm]), mean_pt[perm] + np.sqrt(var[perm]), alpha=0.1)
plt.show()
Explanation: 2.4 Plot predictive distribution (10 points)
a) (5 points) Generate 10 datapoints with gen_cosine2(10). Compute the posterior mean and covariance for a Bayesian polynomial regression model with $M=4$, $\alpha=\frac{1}{2}$ and $\beta=\frac{1}{0.2^2}$.
Plot the Bayesian predictive distribution, where you plot (for $x$ between 0 and $2 \pi$) $t$'s predictive mean and a 1-sigma predictive variance using plt.fill_between(..., alpha=0.1) (the alpha argument induces transparency).
Include the datapoints in your plot.
End of explanation
M = 4
alpha = 0.5
beta = 25
real_x, real_y = gen_real_cosine(100)
perm = x.argsort()
plt.plot(real_x, real_y)
m, S, Phi = fit_polynomial_bayes(x, t, M, alpha, beta)
# draw 100 parameter vectors from the posterior N(m, S) and plot the polynomial each one defines
for w_sample in np.random.multivariate_normal(m, np.asarray(S), 100):
    plt.plot(x[perm], np.asarray(Phi).dot(w_sample)[perm], alpha=0.2)
plt.show()
Explanation: b) (5 points) For a second plot, draw 100 samples from the parameters' posterior distribution. Each of these samples is a certain choice of parameters for 4-th order polynomial regression.
Display each of these 100 polynomials.
End of explanation |
4,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of accessing lepton information
Import what we need from Matplotlib and ROOT
Step1: Create a "chain" of files (but just one file for now)
Step2: This is how we plotted the pT of the leading lepton (the first, highest pT one) in each event before. We are looking at 1000 events, and exactly 1 lepton from each event, so our histogram has 1000 entries.
Step3: Plotting the pT of all leptons is a bit more complicated, and we can see that there are more entries in the resulting histogram because some events have more than one lepton. | Python Code:
import pylab
import matplotlib.pyplot as plt
%matplotlib inline
pylab.rcParams['figure.figsize'] = 12,8
from ROOT import TChain
Explanation: Demonstration of accessing lepton information
Import what we need from Matplotlib and ROOT:
End of explanation
data = TChain("mini"); # "mini" is the name of the TTree stored in the data files
data.Add("/home/waugh/atlas-data/DataMuons.root")
Explanation: Create a "chain" of files (but just one file for now):
End of explanation
pt = []
for event_num in xrange(1000):
data.GetEntry(event_num)
pt.append(data.lep_pt[0]) # We are assuming there is at least one lepton in each event
n, bins, patches = plt.hist(pt)
plt.xlabel('Leading lepton pT [MeV]')
plt.ylabel('Events per bin')
n_entries = int(sum(n))
print("Number of entries = {}".format(n_entries))
Explanation: This is how we plotted the pT of the leading lepton (the first, highest pT one) in each event before. We are looking at 1000 events, and exactly 1 lepton from each event, so our histogram has 1000 entries.
End of explanation
pt_leptons = []
for event_num in xrange(1000): # loop over the events
data.GetEntry(event_num) # read the next event into memory
num_leptons = data.lep_n # number of leptons in the event
for lepton_num in xrange(num_leptons): # loop over the leptons within this event
pt_lepton = data.lep_pt[lepton_num] # get the pT of the next lepton...
pt_leptons.append(pt_lepton) # ... and add it to the list
n, bins, patches = plt.hist(pt_leptons)
plt.xlabel('Lepton pT [MeV]')
plt.ylabel('Events per bin')
n_entries = int(sum(n))
print("Number of entries = {}".format(n_entries))
Explanation: Plotting the pT of all leptons is a bit more complicated, and we can see that there are more entries in the resulting histogram because some events have more than one lepton.
End of explanation |
4,839 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am trying to find rows in a pandas DataFrame that are duplicates with respect to the columns whose names contain 'col'.
import pandas as pd
df=pd.DataFrame(data=[[1,1,2,5],[1,3,4,1],[4,1,2,5],[5,1,4,9],[1,1,2,5]],columns=['val', 'col1','col2','3col'])
def g(df):
cols = list(df.filter(like='col'))
df['index_original'] = df.groupby(cols)[cols[0]].transform('idxmin')
return df[df.duplicated(subset=cols, keep='first')]
result = g(df.copy()) |
4,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization
Three exercises on data handling, text manipulation, and Vélib data.
Step1: Data
The data can be downloaded from this address | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.eco - Putting sessions 1 and 2 into practice - Using pandas and visualization
Three exercises on data handling, text manipulation, and Vélib data.
End of explanation
from pyensae.datasource import download_data
files = download_data("td2a_eco_exercices_de_manipulation_de_donnees.zip",
url="https://github.com/sdpython/ensae_teaching_cs/raw/master/_doc/notebooks/td2a_eco/data/")
files
Explanation: Data
The data can be downloaded from this address: td2a_eco_exercices_de_manipulation_de_donnees.zip. The following code downloads it automatically.
End of explanation |
4,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preprocessing
Step2: Training LeNet
First, we will train a simple CNN with a single hidden fully connected layer as a classifier.
Step3: Training Random Forests
Preprocessing to a fixed size training set since sklearn doesn't support streaming training sets?
Step4: So training on raw pixel values might not be a good idea. Let's build a feature extractor based on the trained LeNet (or any other pretrained image classifier) | Python Code:
import os
# added: modules used later in this notebook but missing from the original imports
import itertools as it
import numpy as np
import matplotlib.pylab as pl
import keras
from skimage import io
from skimage.color import rgb2gray
from skimage import transform
from math import ceil
IMGSIZE = (100, 100)
def load_images(folder, scalefactor=(2, 2), labeldict=None):
images = []
labels = []
files = os.listdir(folder)
for file in (fname for fname in files if fname.endswith('.png')):
img = io.imread(folder + file).astype(float)
img = rgb2gray(img)
# Crop since some of the real world pictures are other shape
img = img[:IMGSIZE[0], :IMGSIZE[1]]
# Possibly downscale to speed up processing
img = transform.downscale_local_mean(img, scalefactor)
# normalize image range
img -= np.min(img)
img /= np.max(img)
images.append(img)
if labeldict is not None:
# lookup label for real world data in dict generated from labels.txt
key, _ = os.path.splitext(file)
labels.append(labeldict[key])
else:
            # infer label from filename
if file.find("einstein") > -1 or file.find("curie") > -1:
labels.append(1)
else:
labels.append(0)
return np.asarray(images)[:, None], np.asarray(labels)
x_train, y_train = load_images('data/aps/train/')
# Artificially pad Einstein's and Curie's to have a balanced training set
# ok, since we use data augmentation later anyway
sel = y_train == 1
repeats = len(sel) // sum(sel) - 1
x_train = np.concatenate((x_train[~sel], np.repeat(x_train[sel], repeats, axis=0)),
axis=0)
y_train = np.concatenate((y_train[~sel], np.repeat(y_train[sel], repeats, axis=0)),
axis=0)
x_test, y_test = load_images('data/aps/test/')
rw_labels = {str(key): 0 if label == 0 else 1
for key, label in np.loadtxt('data/aps/real_world/labels.txt', dtype=int)}
x_rw, y_rw = load_images('data/aps/real_world/', labeldict=rw_labels)
from mpl_toolkits.axes_grid import ImageGrid
from math import ceil
def imsshow(images, grid=(5, -1)):
assert any(g > 0 for g in grid)
grid_x = grid[0] if grid[0] > 0 else ceil(len(images) / grid[1])
grid_y = grid[1] if grid[1] > 0 else ceil(len(images) / grid[0])
axes = ImageGrid(pl.gcf(), "111", (grid_y, grid_x), share_all=True)
for ax, img in zip(axes, images):
ax.get_xaxis().set_ticks([])
ax.get_yaxis().set_ticks([])
ax.imshow(img[0], cmap='gray')
pl.figure(0, figsize=(16, 10))
imsshow(x_train, grid=(5, 1))
pl.show()
pl.figure(0, figsize=(16, 10))
imsshow(x_train[::-4], grid=(5, 1))
pl.show()
from keras.preprocessing.image import ImageDataGenerator
imggen = ImageDataGenerator(rotation_range=20,
width_shift_range=0.15,
height_shift_range=0.15,
shear_range=0.4,
fill_mode='constant',
cval=1.,
zoom_range=0.3,
channel_shift_range=0.1)
imggen.fit(x_train)
for batch in it.islice(imggen.flow(x_train, batch_size=5), 2):
pl.figure(0, figsize=(16, 5))
imsshow(batch, grid=(5, 1))
pl.show()
Explanation: Preprocessing
End of explanation
from keras.layers import Conv2D, Dense, Flatten, MaxPooling2D
from keras.models import Sequential
from keras.backend import image_data_format
def generate(figsize, nr_classes, cunits=[20, 50], fcunits=[500]):
model = Sequential()
cunits = list(cunits)
    input_shape = figsize + (1,) if image_data_format() == 'channels_last' \
else (1,) + figsize
model.add(Conv2D(cunits[0], (5, 5), padding='same',
activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Convolutional layers
for nr_units in cunits[1:]:
model.add(Conv2D(nr_units, (5, 5), padding='same',
activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# Fully connected layers
model.add(Flatten())
for nr_units in fcunits:
model.add(Dense(nr_units, activation='relu'))
# Output layer
activation = 'softmax' if nr_classes > 1 else 'sigmoid'
model.add(Dense(nr_classes, activation=activation))
return model
from keras.optimizers import Adam
from keras.models import load_model
try:
model = load_model('aps_lenet.h5')
print("Model succesfully loaded...")
except OSError:
print("Saved model not found, traing...")
model = generate(figsize=x_train.shape[-2:], nr_classes=1,
cunits=[24, 48], fcunits=[100])
optimizer = Adam()
model.compile(loss='binary_crossentropy', optimizer=optimizer,
metrics=['accuracy'])
model.fit_generator(imggen.flow(x_train, y_train, batch_size=len(x_train)),
validation_data=imggen.flow(x_test, y_test),
steps_per_epoch=100, epochs=5,
verbose=1, validation_steps=256)
model.save('aps_lenet.h5')
from sklearn.metrics import confusion_matrix
def plot_cm(cm, classes, normalize=False,
title='Confusion matrix', cmap=pl.cm.viridis):
    """This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
pl.imshow(cm, interpolation='nearest', cmap=cmap)
pl.title(title)
pl.colorbar()
tick_marks = np.arange(len(classes))
pl.xticks(tick_marks, classes, rotation=45)
pl.yticks(tick_marks, classes)
thresh = cm.max() / 2.
for i, j in it.product(range(cm.shape[0]), range(cm.shape[1])):
pl.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
pl.tight_layout()
pl.ylabel('True label')
pl.xlabel('Predicted label')
y_pred_rw = model.predict_classes(x_rw, verbose=0).ravel()
plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,
classes=["Not Einstein", "Einstein"])
Explanation: Training LeNet
First, we will train a simple CNN with a single hidden fully connected layer as a classifier.
End of explanation
# Same size training set as LeNet
TRAININGSET_SIZE = len(x_train) * 5 * 100
batch_size = len(x_train)
nr_batches = TRAININGSET_SIZE // batch_size + 1
imgit = imggen.flow(x_train, y=y_train, batch_size=batch_size)
x_train_sampled = np.empty((TRAININGSET_SIZE, 1,) + x_train.shape[-2:])
y_train_sampled = np.empty(TRAININGSET_SIZE)
for batch, (x_batch, y_batch) in enumerate(it.islice(imgit, nr_batches)):
buflen = len(x_train_sampled[batch * batch_size:(batch + 1) * batch_size])
x_train_sampled[batch * batch_size:(batch + 1) * batch_size] = x_batch[:buflen]
y_train_sampled[batch * batch_size:(batch + 1) * batch_size] = y_batch[:buflen]
from sklearn.ensemble import RandomForestClassifier
rfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1,
verbose=True)
rfe = rfe.fit(x_train_sampled.reshape((TRAININGSET_SIZE, -1)), y_train_sampled)
y_pred_rw = rfe.predict(x_rw.reshape((len(x_rw), -1)))
plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,
classes=["Not Einstein", "Einstein"])
pl.show()
print("Rightly classified Einsteins:")
imsshow(x_rw[((y_rw - y_pred_rw) == 0) * (y_rw == 1)])
pl.show()
print("Wrongly classified images:")
imsshow(x_rw[(y_rw - y_pred_rw) != 0])
pl.show()
Explanation: Training Random Forests
Preprocessing to a fixed size training set since sklearn doesn't support streaming training sets?
End of explanation
model = load_model('aps_lenet.h5')
enc_layers = it.takewhile(lambda l: not isinstance(l, keras.layers.Flatten),
model.layers)
encoder_model = keras.models.Sequential(enc_layers)
encoder_model.add(keras.layers.Flatten())
x_train_sampled_enc = encoder_model.predict(x_train_sampled, verbose=True)
rfe = RandomForestClassifier(n_estimators=64, criterion='entropy', n_jobs=-1,
verbose=True)
rfe = rfe.fit(x_train_sampled_enc, y_train_sampled)
y_pred_rw = rfe.predict(encoder_model.predict(x_rw, verbose=False))
plot_cm(confusion_matrix(y_rw, y_pred_rw), normalize=True,
classes=["Not Einstein", "Einstein"])
pl.show()
Explanation: So training on raw pixel values might not be a good idea. Let's build a feature extractor based on the trained LeNet (or any other pretrained image classifier)
End of explanation |
4,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Look at the hourly data
Now on to the real challenge, the hourly data. So, as usual, make some plots, do a bit of analysis and try to get a feel for the data.
Remaining TO DO
Step1: It's quite clear that registered and casual users are very different
I already knew this from analyzing the daily data, but this shows that registered riders tend to commute, while casual riders joy ride on the weekends. And, if you look closely, you can see that registered users tend to head out of work a bit earlier on fridays. To get the best model, we'll need to include weekday (which we may want to put through one-hot encoder first).
Get ready for ML
I want to use the training/test cut that Kaggle does, so need to put the day of the month into the column, and then drop the unnecessary columns
Step2: RandomForestRegressor does a much better job than the other estimators. Not seeing much difference when I change n_estimators, and the mean score is much lower than I'd expect (ie, it doesn't match the cv scores above). I wonder if I did this right ....
Step3: This looks reasonable. Obviously not perfect, and the difference plot has some structure. The quotient is reasonably flat, though. This is also for the entire dataset - there may be more pronounced differences under certain conditions (ie, weather, temp, season, etc).
take a look at the same plots when the weather is good and the temp is high
Step5: way overpredicting saturday rides in bad weather.
Now plot the learning curve
this method is lifted from the scikit-learn docs | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
hourly = pd.read_csv('Bike-Sharing-Dataset/hour.csv',header = 0)
hourly.head(20)
type(hourly)
for row_index, row in hourly.iterrows():
print row_index , row['registered']
if (row_index > 5):
break
weekly = np.zeros((24,7))
for row_index, row in hourly.iterrows():
weekly[row['hr'], row['weekday']] += row['registered']
print np.max(weekly)
plt.imshow(np.sqrt(weekly/np.max(weekly)), interpolation='nearest', aspect=0.5)
weekly = np.zeros((24,7))
for row_index, row in hourly.iterrows():
weekly[row['hr'], row['weekday']] += row['casual']
plt.imshow(np.sqrt(weekly/np.max(weekly)), interpolation='nearest', aspect=0.5)
weekly = np.zeros((24,7))
for row_index, row in hourly.iterrows():
weekly[row['hr'], row['weekday']] += row['cnt']
plt.imshow(np.sqrt(weekly/np.max(weekly)), interpolation='nearest', aspect=0.5)
Explanation: Look at the hourly data
Now on to the real challenge, the hourly data. So, as usual, make some plots, do a bit of analysis and try to get a feel for the data.
Remaining TO DO:
Try preprocessing - standardizing the features, which I really should do....
Try one hot encoder with days of the week (each day should be treated as a separate feature instead of a continuous one)
When doing this, can get rid of 'working day' feature
Fit independently to the 'casual' and 'registered' values as targets, and then when we do predict, sum the two predictions to compare to the test values - this is probably only useful/effective for certain estimators
Evaluate the estimators
Clean up the plots and make them look nicer
End of explanation
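# Hedged sketch of the first two TO DO items above (the exact feature choices are my
# assumption, not the author's): scale the continuous columns and one-hot encode
# 'weekday' instead of treating it as a continuous value. Builds a separate frame so
# the rest of the notebook is unaffected.
from sklearn.preprocessing import StandardScaler
num_cols = ['temp', 'atemp', 'hum', 'windspeed']
scaled_part = pd.DataFrame(StandardScaler().fit_transform(hourly[num_cols]),
                           columns=num_cols, index=hourly.index)
weekday_part = pd.get_dummies(hourly['weekday'], prefix='wd')
hourly_alt = pd.concat([scaled_part, weekday_part,
                        hourly[['hr', 'season', 'weathersit', 'cnt']]], axis=1)
hourly_alt.head()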
hourly['day'] = pd.DatetimeIndex(hourly.dteday).day
hourly = hourly.drop(['instant','dteday','casual','registered'], axis = 1)
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
dumb = LabelEncoder()
dumb1 = dumb.fit_transform(hourly['weekday'])
hourly[['weekday','day']]
#try one hot encoding
#>>> from sklearn.preprocessing import OneHotEncoder
# >>> enc = OneHotEncoder()
# >>> enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], \
#[1, 0, 2]]) # doctest: +ELLIPSIS
# OneHotEncoder(categorical_features='all', dtype=<... 'float'>,
# handle_unknown='error', n_values='auto', sparse=True)
# >>> enc.n_values_
# array([2, 3, 4])
# >>> enc.feature_indices_
# array([0, 2, 5, 9])
# >>> enc.transform([[0, 1, 1]]).toarray()
# array([[ 1., 0., 0., 1., 0., 0., 1., 0., 0.]])
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import LabelEncoder
from sklearn import preprocessing
dumb = preprocessing.LabelEncoder()
dumb1 = dumb.fit_transform(hourly['weekday'])
#to convert back
#train.Sex = le_sex.inverse_transform(train.Sex)
enc = OneHotEncoder()
#enc.categorical_features = [['weekday','day']]
#enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1],[1, 0, 2]])
#print enc.transform([[0,1,2]]).toarray()
#print enc.transform([[0,1,3]]).toarray()
enc.fit(hourly[['weekday']])
dumb = enc.transform(hourly[['weekday']])
print dumb[0:100].toarray()
Xtrain = hourly[hourly.day < 19].drop('cnt',axis=1).values #the data for the training set
ytrain = (hourly[hourly.day < 19])['cnt'].values #the target of the training set
Xtest = hourly[hourly.day >= 19].drop('cnt',axis=1).values #the data for the test set
ytest = (hourly[hourly.day >= 19])['cnt'].values #the target of the test set
print ytrain.shape
print Xtrain.shape
print ytest.shape
print Xtest.shape
print Xtest[0]
from sklearn import cross_validation
from sklearn.ensemble import RandomForestRegressor
from sklearn.grid_search import GridSearchCV
cv = cross_validation.ShuffleSplit(len(Xtrain), n_iter=3, test_size=0.2,
random_state=0)
for train, test in cv:
reg = RandomForestRegressor(n_estimators = 500).fit(Xtrain[train], ytrain[train])
print reg.score(Xtrain[train], ytrain[train]), reg.score(Xtrain[test], ytrain[test])
print reg.score(Xtest,ytest)
estimators = [10,100,500]
grid = GridSearchCV(estimator=reg, param_grid=dict(n_estimators=estimators), n_jobs=-1)
grid.fit(Xtrain,ytrain)
print grid.best_score_
print grid.best_estimator_.n_estimators
print grid.grid_scores_
Explanation: It's quite clear that registered and casual users are very different
I already knew this from analyzing the daily data, but this shows that registered riders tend to commute, while casual riders joy ride on the weekends. And, if you look closely, you can see that registered users tend to head out of work a bit earlier on fridays. To get the best model, we'll need to include weekday (which we may want to put through one-hot encoder first).
Get ready for ML
I want to use the training/test cut that Kaggle does, so need to put the day of the month into the column, and then drop the unnecessary columns
End of explanation
pred = reg.predict(Xtest) #put the predicted values into an array
hrInd=3 #the column number of the hr column
weekdayInd = 5
weeklyPredict = np.zeros((24,7))
weeklyActual = np.zeros((24,7))
for i in range(0,len(ytest)):
weeklyPredict[Xtest[i,hrInd], Xtest[i,weekdayInd]] += pred[i]
weeklyActual[Xtest[i,hrInd], Xtest[i,weekdayInd]] += ytest[i]
def makeDifferencePlot(weeklyPredict, weeklyActual):
plt.figure(1, figsize=(12,6))
plt.subplot(141)
plt.imshow(np.sqrt(weeklyPredict), interpolation='nearest', aspect=0.5)
plt.subplot(142)
plt.imshow(np.sqrt(weeklyActual), interpolation='nearest', aspect=0.5)
plt.subplot(143)
plt.imshow((weeklyPredict-weeklyActual), interpolation='nearest', aspect=0.5)
plt.subplot(144)
plt.imshow((weeklyPredict/weeklyActual), interpolation='nearest', aspect=0.5)
plt.show()
makeDifferencePlot(weeklyPredict, weeklyActual)
Explanation: RandomForestRegressor does a much better job than the other estimators. Not seeing much difference when I change n_estimators, and the mean score is much lower than I'd expect (ie, it doesn't match the cv scores above). I wonder if I did this right ....
End of explanation
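# Hedged guess (my addition) at why grid.best_score_ looks low: the old GridSearchCV
# defaults to unshuffled 3-fold CV, while the scores above came from ShuffleSplit on
# time-ordered data. Passing the same shuffled CV object makes the numbers comparable.
grid = GridSearchCV(estimator=reg, param_grid=dict(n_estimators=estimators),
                    cv=cross_validation.ShuffleSplit(len(Xtrain), n_iter=3,
                                                     test_size=0.2, random_state=0),
                    n_jobs=-1)
grid.fit(Xtrain, ytrain)
print grid.best_score_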
weeklyPredict = np.zeros((24,7))
weeklyActual = np.zeros((24,7))
weatherInd=7
atempInd = 9
for i in range(0,len(ytest)):
if (Xtest[i, weatherInd] < 3 and Xtest[i,atempInd] > .6):
weeklyPredict[Xtest[i,hrInd], Xtest[i,weekdayInd]] += pred[i]
weeklyActual[Xtest[i,hrInd], Xtest[i,weekdayInd]] += ytest[i]
makeDifferencePlot(weeklyPredict, weeklyActual)
weeklyPredict = np.zeros((24,7))
weeklyActual = np.zeros((24,7))
weatherInd=7
atempInd = 9
for i in range(0,len(ytest)):
if (Xtest[i, weatherInd] > 2 and Xtest[i,atempInd] < .6):
weeklyPredict[Xtest[i,hrInd], Xtest[i,weekdayInd]] += pred[i]
weeklyActual[Xtest[i,hrInd], Xtest[i,weekdayInd]] += ytest[i]
makeDifferencePlot(weeklyPredict, weeklyActual)
Explanation: This looks reasonable. Obviously not perfect, and the difference plot has some structure. The quotient is reasonably flat, though. This is also for the entire dataset - there may be more pronounced differences under certain conditions (ie, weather, temp, season, etc).
take a look at the same plots when the weather is good and the temp is high
End of explanation
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    """Generate a simple plot of the test and training learning curve.
    Parameters
    ----------
    estimator : object type that implements the "fit" and "predict" methods
        An object of that type which is cloned for each validation.
    title : string
        Title for the chart.
    X : array-like, shape (n_samples, n_features)
        Training vector, where n_samples is the number of samples and
        n_features is the number of features.
    y : array-like, shape (n_samples) or (n_samples, n_features), optional
        Target relative to X for classification or regression;
        None for unsupervised learning.
    ylim : tuple, shape (ymin, ymax), optional
        Defines minimum and maximum yvalues plotted.
    cv : integer, cross-validation generator, optional
        If an integer is passed, it is the number of folds (defaults to 3).
        Specific cross-validation objects can be passed, see
        sklearn.cross_validation module for the list of possible objects
    n_jobs : integer, optional
        Number of jobs to run in parallel (default 1).
    """
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
from sklearn.learning_curve import learning_curve
title = "Learning Curves (Random forest)"
# Cross validation with 100 iterations to get smoother mean test and train
# score curves, each time with 20% data randomly selected as a validation set.
cv = cross_validation.ShuffleSplit(len(Xtrain), n_iter=30,
test_size=0.3, random_state=0)
reg = RandomForestRegressor(n_estimators = 100)
plot_learning_curve(reg, title, Xtrain, ytrain, ylim=(0.6, 1.01), cv=cv, n_jobs=1)
plt.show()
Explanation: way overpredicting saturday rides in bad weather.
Now plot the learning curve
this method is lifted from the scikit-learn docs
End of explanation |
4,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Components Analysis on the UCI Image Segmentation Data Set.
Kevin Maher
<span style="color
Step1: Read in the data. Extra header rows in the UCI data file were manually deleted using OpenOffice.
Step2: Prepare the data for machine learning by dividing between training and test sets.
Step3: Scale the data for principal components analysis.
Step4: Check machine learning accuracy with a Random Forest model.
Step5: Plot the amount of explained variance for each component of the principal components analysis model.
Step6: Looking at the graph above, it looks like we could cut the number of features to 13 without losing much in the way of model accuracy. Lets try numbers of features from 10 to 19 to check this hypothesis. | Python Code:
%matplotlib inline
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn import decomposition
from sklearn import metrics
Explanation: Principal Components Analysis on the UCI Image Segmentation Data Set.
Kevin Maher
<span style="color:blue">[email protected]</span>
This is a multi-class classification problem. The objective is to predict whether a picture is one of grass, path, window, etc. There are 19 features and 2310 different instances in the model data from UCI. My objective here is to determine if the number of features needed for the model might be reduced by using principal components analysis.
Imports needed for the script. Uses Python 2.7.13, numpy 1.11.3, pandas 0.19.2, sklearn 0.18.1, matplotlib 2.0.0.
End of explanation
df = pd.read_csv('segmentation.csv')
print df.head()
Explanation: Read in the data. Extra header rows in the UCI data file were manually deleted using OpenOffice.
End of explanation
y = df['CLASS']
X = df.drop('CLASS', axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=245)
Explanation: Prepare the data for machine learning by dividing between training and test sets.
End of explanation
sc = StandardScaler()
sc.fit(X_train)
X_train = sc.transform(X_train)
X_test = sc.transform(X_test)
Explanation: Scale the data for principal components analysis.
End of explanation
n_est = 100
clf = RandomForestClassifier(n_estimators=n_est, random_state=357)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print 'RF Model: %.2f%% accurate' % (metrics.accuracy_score(y_test, pred) * 100.0)
Explanation: Check machine learning accuracy with a Random Forest model.
End of explanation
pca = decomposition.PCA()
pca.fit(X_train)
pca_var = pca.explained_variance_
plt.plot(pca_var, linewidth=2)
plt.axis('tight')
plt.xlabel('n_components')
plt.ylabel('explained_variance_')
plt.show()
plt.close()
Explanation: Plot the amount of explained variance for each component of the principal components analysis model.
End of explanation
for i in range(10, X.shape[1] + 1):
pca = decomposition.PCA(n_components=i)
pca.fit(X_train)
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
clf.fit(X_train_pca, y_train)
pred = clf.predict(X_test_pca)
print 'RF Model: %.2f%% accurate with %d PCA components' % ((metrics.accuracy_score(y_test, pred) * 100.0), i)
Explanation: Looking at the graph above, it looks like we could cut the number of features to 13 without losing much in the way of model accuracy. Let's try numbers of features from 10 to 19 to check this hypothesis.
End of explanation |
4,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Install dependencies
Step2: Clone, compile and set up Tesseract
Step3: Grab some things to scrape the RIA corpus
Step4: Scrape the RIA corpus
Step5: Get the raw corpus in a single text file
Step6: Compress the raw text; this can be downloaded through the file browser on the left, so the scraping steps can be skipped in future
Step7: ...and can be re-added using the upload feature in the file browser
Step8: This next part is so I can update the langdata files
Step9: Generate | Python Code:
!wget https://github.com/jimregan/tesseract-gle-uncial/releases/download/v0.1beta2/gle_uncial.traineddata
Explanation: <a href="https://colab.research.google.com/github/jimregan/tesseract-gle-uncial/blob/master/Update_gle_uncial_traineddata_for_Tesseract_4.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Grab this for later
End of explanation
!apt-get install libicu-dev libpango1.0-dev libcairo2-dev libleptonica-dev
Explanation: Install dependencies
End of explanation
!git clone https://github.com/tesseract-ocr/tesseract
import os
os.chdir('tesseract')
!sh autogen.sh
!./configure --disable-graphics
!make -j 8
!make install
!ldconfig
!make training
!make training-install
Explanation: Clone, compile and set up Tesseract
End of explanation
import os
os.chdir('/content')
!git clone https://github.com/jimregan/tesseract-gle-uncial/
!apt-get install lynx
Explanation: Grab some things to scrape the RIA corpus
End of explanation
! for i in A B C D E F G H I J K L M N O P Q R S T U V W X Y Z;do lynx -dump "http://corpas.ria.ie/index.php?fsg_function=1&fsg_page=$i" |grep http://corpas.ria.ie|awk '{print $NF}' >> list;done
!grep 'function=3' list |sort|uniq|grep corpas.ria|sed -e 's/function=3/function=5/' > input
!wget -x -c -i input
!mkdir text
!for i in corpas.ria.ie/*;do id=$(echo $i|awk -F'=' '{print $NF}');cat $i | perl /content/tesseract-gle-uncial/scripts/extract-ria.pl > text/$id.txt;done
Explanation: Scrape the RIA corpus
End of explanation
!cat text/*.txt|grep -v '^$' > ria-raw.txt
Explanation: Get the raw corpus in a single text file
End of explanation
!gzip ria-raw.txt
Explanation: Compress the raw text; this can be downloaded through the file browser on the left, so the scraping steps can be skipped in future
End of explanation
!gzip -d ria-raw.txt.gz
Explanation: ...and can be re-added using the upload feature in the file browser
End of explanation
import os
os.chdir('/content')
!git clone https://github.com/tesseract-ocr/langdata
!cat ria-raw.txt | perl /content/tesseract-gle-uncial/scripts/toponc.pl > ria-ponc.txt
!mkdir genwlout
!perl /content/tesseract-gle-uncial/scripts/genlangdata.pl -i ria-ponc.txt -d genwlout -p gle_uncial
import os
os.chdir('/content/genwlout')
#!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cat $i.unsorted | awk -F'\t' '{print $1}' | sort | uniq > $i.sorted;done
!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cat $i.sorted /content/langdata/gle_uncial/$i | sort | uniq > $i;done
!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cp $i /content/langdata/gle_uncial/;done
# Grab the fonts
import os
os.chdir('/content')
!mkdir fonts
os.chdir('fonts')
!wget -i /content/tesseract-gle-uncial/fonts.txt
!for i in *.zip; do unzip $i;done
Explanation: This next part is so I can update the langdata files
End of explanation
os.chdir('/content')
!mkdir unpack
!combine_tessdata -u /content/gle_uncial.traineddata unpack/gle_uncial.
os.chdir('unpack')
!for i in gle_uncial.word.bigrams gle_uncial.wordlist gle_uncial.numbers gle_uncial.punc; do cp /content/genwlout/$i .;done
!wordlist2dawg gle_uncial.numbers gle_uncial.lstm-number-dawg gle_uncial.lstm-unicharset
!wordlist2dawg gle_uncial.punc gle_uncial.lstm-punc-dawg gle_uncial.lstm-unicharset
!wordlist2dawg gle_uncial.wordlist gle_uncial.lstm-word-dawg gle_uncial.lstm-unicharset
!rm gle_uncial.numbers gle_uncial.word.bigrams gle_uncial.punc gle_uncial.wordlist
os.chdir('/content')
!mv gle_uncial.traineddata gle_uncial.traineddata.orig
!combine_tessdata unpack/gle_uncial.
os.chdir('/content')
!bash /content/tesseract/src/training/tesstrain.sh
!text2image --fonts_dir fonts --list_available_fonts
!cat genwlout/gle_uncial.wordlist.unsorted|awk -F'\t' '{print $2 "\t" $1'}|sort -nr > freqlist
!cat freqlist|awk -F'\t' '{print $2}'|grep -v '^$' > wordlist
!cat ria-ponc.txt|sort|uniq|head -n 400000 > gle_uncial.training_text
!cp unpack/gle_uncial.traineddata /usr/share/tesseract-ocr/4.00/tessdata
!cp gle_uncial.trainingtext langdata/gle_uncial/
!mkdir output
!bash tesseract/src/training/tesstrain.sh --fonts_dir fonts --lang gle_uncial --linedata_only --noextract_font_properties --langdata_dir langdata --tessdata_dir /usr/share/tesseract-ocr/4.00/tessdata --output_dir output
Explanation: Generate
End of explanation |
4,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Spectral Features
For classification, we're going to be using new features in our arsenal
Step1: librosa.feature.spectral_bandwidth
librosa.feature.spectral_bandwidth
Step2: librosa.feature.spectral_contrast
librosa.feature.spectral_contrast
Step3: librosa.feature.spectral_rolloff
librosa.feature.spectral_rolloff | Python Code:
x, fs = librosa.load('simple_loop.wav')
IPython.display.Audio(x, rate=fs)
spectral_centroids = librosa.feature.spectral_centroid(x, sr=fs)
plt.plot(spectral_centroids[0])
Explanation: ← Back to Index
Spectral Features
For classification, we're going to be using new features in our arsenal: spectral moments (centroid, bandwidth, skewness, kurtosis) and other spectral statistics.
[Moments](https://en.wikipedia.org/wiki/Moment_%28mathematics%29) is a term used in physics and statistics. There are raw moments and central moments.
You are probably already familiar with two examples of moments: mean and variance. The first raw moment is known as the mean. The second central moment is known as the variance.
librosa.feature.spectral_centroid
librosa.feature.spectral_centroid
End of explanation
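# A hedged cross-check of the "first moment" idea above (my addition): compute the
# centroid by hand as the magnitude-weighted mean frequency of each STFT frame and
# compare with librosa's built-in; the values should agree up to framing details.
import numpy as np
S = np.abs(librosa.stft(x))
freqs = librosa.fft_frequencies(sr=fs)
manual_centroid = np.sum(freqs[:, np.newaxis] * S, axis=0) / np.sum(S, axis=0)
print(manual_centroid[:5])
print(librosa.feature.spectral_centroid(S=S, sr=fs)[0, :5])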
spectral_bandwidth = librosa.feature.spectral_bandwidth(x, sr=fs)
plt.plot(spectral_bandwidth[0])
Explanation: librosa.feature.spectral_bandwidth
librosa.feature.spectral_bandwidth
End of explanation
spectral_contrast = librosa.feature.spectral_contrast(x, sr=fs)
# For visualization, scale each feature dimension to have zero mean and unit variance
spectral_contrast = sklearn.preprocessing.scale(spectral_contrast, axis=1)
librosa.display.specshow(spectral_contrast, x_axis='time')
Explanation: librosa.feature.spectral_contrast
librosa.feature.spectral_contrast
End of explanation
spectral_rolloff = librosa.feature.spectral_rolloff(x, sr=fs)
plt.plot(spectral_rolloff[0])
Explanation: librosa.feature.spectral_rolloff
librosa.feature.spectral_rolloff
End of explanation |
4,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make decision tree from iris data
Taken from Google's Visualizing a Decision Tree - Machine Learning Recipes #2
Step1: Tensorflow
Examples from http
Step3: Custom model with TensorFlowEstimator()
Step4: Recurrent neural network
See http
Step5: From https | Python Code:
import tensorflow.contrib.learn as skflow
from sklearn.datasets import load_iris
from sklearn import metrics
iris = load_iris()
iris.keys()
iris.feature_names
iris.target_names
# Withhold 3 for testing
test_idx = [0, 50, 100]
train_data = np.delete(iris.data, test_idx, axis=0)
train_target = np.delete(iris.target, test_idx)
test_target = iris.target[test_idx] # array([0, 1, 2])
test_data = iris.data[test_idx] # array([[ 5.1, 3.5, 1.4, 0.2], [ 7. , 3.2, 4.7, 1.4], ...])
Explanation: Make decision tree from iris data
Taken from Google's Visualizing a Decision Tree - Machine Learning Recipes #2
End of explanation
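# Hedged sketch of the plain decision-tree step from the Google recipe cited above
# (the remainder of this notebook uses TensorFlow Learn instead).
from sklearn import tree
dt_clf = tree.DecisionTreeClassifier()
dt_clf.fit(train_data, train_target)
print(dt_clf.predict(test_data))  # should recover the withheld labels [0 1 2]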
classifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=3)
classifier.fit(iris.data, iris.target)
metrics.accuracy_score(iris.target, classifier.predict(iris.data))
Explanation: Tensorflow
Examples from http://terrytangyuan.github.io/2016/03/14/scikit-flow-intro/
Deep neural network
3 layer deep neural network with 10, 20 and 10 hidden units in each layer, respectively.
End of explanation
def my_model(X, y):
    """This is a DNN with 10, 20 and 10 hidden units, and dropout of 0.5 probability."""
layers = skflow.ops.dnn(X, [10, 20, 10]) # keep_prob=0.5 causes error
return skflow.models.logistic_regression(layers, y)
classifier = skflow.TensorFlowEstimator(model_fn=my_model, n_classes=3)
classifier.fit(iris.data, iris.target)
metrics.accuracy_score(iris.target, classifier.predict(iris.data))
Explanation: Custom model with TensorFlowEstimator()
End of explanation
classifier = skflow.TensorFlowRNNClassifier(rnn_size=2, n_classes=15)
classifier.fit(iris.data, iris.target)
Explanation: Recurrent neural network
See http://terrytangyuan.github.io/2016/03/14/scikit-flow-intro/#recurrent-neural-network.
End of explanation
import numpy as np
from tensorflow.contrib.learn.python import learn
import tensorflow as tf
np.random.seed(42)
data = np.array(
list([[2, 1, 2, 2, 3], [2, 2, 3, 4, 5], [3, 3, 1, 2, 1], [2, 4, 5, 4, 1]
]),
dtype=np.float32)
# labels for classification
labels = np.array(list([1, 0, 1, 0]), dtype=np.float32)
# targets for regression
targets = np.array(list([10, 16, 10, 16]), dtype=np.float32)
test_data = np.array(list([[1, 3, 3, 2, 1], [2, 3, 4, 5, 6]]))
def input_fn(X):
return tf.split(1, 5, X)
# Classification
classifier = learn.TensorFlowRNNClassifier(rnn_size=2,
cell_type="lstm",
n_classes=2,
input_op_fn=input_fn)
classifier.fit(data, labels)
classifier.weights_
classifier.bias_
predictions = classifier.predict(test_data)
#assertAllClose(predictions, np.array([1, 0]))
classifier = learn.TensorFlowRNNClassifier(rnn_size=2,
cell_type="rnn",
n_classes=2,
input_op_fn=input_fn,
num_layers=2)
classifier.fit(data, labels)
classifier.predict(iris.data)
Explanation: From https://github.com/tensorflow/tensorflow/blob/17dcc5a176d152caec570452d28fb94920cceb8c/tensorflow/contrib/learn/python/learn/tests/test_nonlinear.py
End of explanation |
4,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Evoked data
In this tutorial we focus on the plotting functions of
Step1: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
Step2: Notice that evoked is a list of
Step3: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red. All plotting
functions of MNE-python return a handle to the figure instance. When we have
the handle, we can customise the plots to our liking.
Step4: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
Step5: Now we will make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with
Step6: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting the slopes with left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
Step7: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
Step8: Or we can automatically select the peaks.
Step9: You can take a look at the documentation of
Step10: Notice that we created five axes, but had only four categories. The fifth
axes was used for drawing the colorbar. You must provide room for it when you
create this kind of custom plots or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the three first function calls. This prevents the
showing of the figure prematurely. The behavior depends on the mode you are
using for your python session. See http
Step11: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function
Step12: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
the amplitudes from negative to positive translates to shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
Step13: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open them bigger.
Step14: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to the MEG helmet
and scalp surface.
To do this, we need coregistration information. See
tut_forward for more details. Here we just illustrate usage. | Python Code:
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
import mne
# sphinx_gallery_thumbnail_number = 9
Explanation: Visualize Evoked data
In this tutorial we focus on the plotting functions of :class:mne.Evoked.
End of explanation
data_path = mne.datasets.sample.data_path()
fname = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname, baseline=(None, 0), proj=True)
print(evoked)
Explanation: First we read the evoked object from a file. Check out
tut_epoching_and_averaging to get to this stage from raw data.
End of explanation
evoked_l_aud = evoked[0]
evoked_r_aud = evoked[1]
evoked_l_vis = evoked[2]
evoked_r_vis = evoked[3]
Explanation: Notice that evoked is a list of :class:evoked <mne.Evoked> instances.
You can read only one of the categories by passing the argument condition
to :func:mne.read_evokeds. To make things more simple for this tutorial, we
read each instance to a variable.
End of explanation
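# The same condition can also be read directly with the `condition` argument instead
# of indexing the list (condition names as stored in this sample file).
evoked_l_aud_alt = mne.read_evokeds(fname, condition='Left Auditory',
                                    baseline=(None, 0), proj=True)
print(evoked_l_aud_alt)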
fig = evoked_l_aud.plot(exclude=(), time_unit='s')
Explanation: Let's start with a simple one. We plot event related potentials / fields
(ERP/ERF). The bad channels are not plotted by default. Here we explicitly
set the exclude parameter to show the bad channels in red. All plotting
functions of MNE-python return a handle to the figure instance. When we have
the handle, we can customise the plots to our liking.
End of explanation
fig.tight_layout()
Explanation: All plotting functions of MNE-python return a handle to the figure instance.
When we have the handle, we can customise the plots to our liking. For
example, we can get rid of the empty space with a simple function call.
End of explanation
picks = mne.pick_types(evoked_l_aud.info, meg=True, eeg=False, eog=False)
evoked_l_aud.plot(spatial_colors=True, gfp=True, picks=picks, time_unit='s')
Explanation: Now we will make it a bit fancier and only use MEG channels. Many of the
MNE-functions include a picks parameter to include a selection of
channels. picks is simply a list of channel indices that you can easily
construct with :func:mne.pick_types. See also :func:mne.pick_channels and
:func:mne.pick_channels_regexp.
Using spatial_colors=True, the individual channel lines are color coded
to show the sensor positions - specifically, the x, y, and z locations of
the sensors are transformed into R, G and B values.
End of explanation
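# For comparison, the name-based helpers mentioned above: picking one known channel
# by name and a block of channels by regular expression.
picks_by_name = mne.pick_channels(evoked_l_aud.ch_names, include=['MEG 1811'])
picks_by_regexp = mne.pick_channels_regexp(evoked_l_aud.ch_names, 'MEG 18..')
print(len(picks_by_name), len(picks_by_regexp))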
evoked_l_aud.plot_topomap(time_unit='s')
Explanation: Notice the legend on the left. The colors would suggest that there may be two
separate sources for the signals. This wasn't obvious from the first figure.
Try painting the slopes with left mouse button. It should open a new window
with topomaps (scalp plots) of the average over the painted area. There is
also a function for drawing topomaps separately.
End of explanation
times = np.arange(0.05, 0.151, 0.05)
evoked_r_aud.plot_topomap(times=times, ch_type='mag', time_unit='s')
Explanation: By default the topomaps are drawn from evenly spread out points of time over
the evoked data. We can also define the times ourselves.
End of explanation
evoked_r_aud.plot_topomap(times='peaks', ch_type='mag', time_unit='s')
Explanation: Or we can automatically select the peaks.
End of explanation
fig, ax = plt.subplots(1, 5, figsize=(8, 2))
kwargs = dict(times=0.1, show=False, vmin=-300, vmax=300, time_unit='s')
evoked_l_aud.plot_topomap(axes=ax[0], colorbar=True, **kwargs)
evoked_r_aud.plot_topomap(axes=ax[1], colorbar=False, **kwargs)
evoked_l_vis.plot_topomap(axes=ax[2], colorbar=False, **kwargs)
evoked_r_vis.plot_topomap(axes=ax[3], colorbar=False, **kwargs)
for ax, title in zip(ax[:4], ['Aud/L', 'Aud/R', 'Vis/L', 'Vis/R']):
ax.set_title(title)
plt.show()
Explanation: You can take a look at the documentation of :func:mne.Evoked.plot_topomap
or simply write evoked_r_aud.plot_topomap? in your python console to
see the different parameters you can pass to this function. Most of the
plotting functions also accept axes parameter. With that, you can
customise your plots even further. First we create a set of matplotlib
axes in a single figure and plot all of our evoked categories next to each
other.
End of explanation
ts_args = dict(gfp=True, time_unit='s')
topomap_args = dict(sensors=False, time_unit='s')
evoked_r_aud.plot_joint(title='right auditory', times=[.09, .20],
ts_args=ts_args, topomap_args=topomap_args)
Explanation: Notice that we created five axes, but had only four categories. The fifth
axes was used for drawing the colorbar. You must provide room for it when you
create this kind of custom plots or turn the colorbar off with
colorbar=False. That's what the warnings are trying to tell you. Also, we
used show=False for the three first function calls. This prevents the
showing of the figure prematurely. The behavior depends on the mode you are
using for your python session. See http://matplotlib.org/users/shell.html for
more information.
We can combine the two kinds of plots in one figure using the
:func:mne.Evoked.plot_joint method of Evoked objects. Called as-is
(evoked.plot_joint()), this function should give an informative display
of spatio-temporal dynamics.
You can directly style the time series part and the topomap part of the plot
using the topomap_args and ts_args parameters. You can pass key-value
pairs as a python dictionary. These are then passed as parameters to the
topomaps (:func:mne.Evoked.plot_topomap) and time series
(:func:mne.Evoked.plot) of the joint plot.
For an example of specific styling using these topomap_args and
ts_args arguments, here, topomaps at specific time points
(90 and 200 ms) are shown, sensors are not plotted (via an argument
forwarded to plot_topomap), and the Global Field Power is shown:
End of explanation
conditions = ["Left Auditory", "Right Auditory", "Left visual", "Right visual"]
evoked_dict = dict()
for condition in conditions:
evoked_dict[condition.replace(" ", "/")] = mne.read_evokeds(
fname, baseline=(None, 0), proj=True, condition=condition)
print(evoked_dict)
colors = dict(Left="Crimson", Right="CornFlowerBlue")
linestyles = dict(Auditory='-', visual='--')
pick = evoked_dict["Left/Auditory"].ch_names.index('MEG 1811')
mne.viz.plot_compare_evokeds(evoked_dict, picks=pick, colors=colors,
linestyles=linestyles, split_legend=True)
Explanation: Sometimes, you may want to compare two or more conditions at a selection of
sensors, or e.g. for the Global Field Power. For this, you can use the
function :func:mne.viz.plot_compare_evokeds. The easiest way is to create
a Python dictionary, where the keys are condition names and the values are
:class:mne.Evoked objects. If you provide lists of :class:mne.Evoked
objects, such as those for multiple subjects, the grand average is plotted,
along with a confidence interval band - this can be used to contrast
conditions for a whole experiment.
First, we load in the evoked objects into a dictionary, setting the keys to
'/'-separated tags (as we can do with event_ids for epochs). Then, we plot
with :func:mne.viz.plot_compare_evokeds.
The plot is styled with dict arguments, again using "/"-separated tags.
We plot a MEG channel with a strong auditory response.
For more advanced plotting using :func:mne.viz.plot_compare_evokeds.
See also sphx_glr_auto_tutorials_plot_metadata_epochs.py.
End of explanation
evoked_r_aud.plot_image(picks=picks, time_unit='s')
Explanation: We can also plot the activations as images. The time runs along the x-axis
and the channels along the y-axis. The amplitudes are color coded so that
the amplitudes from negative to positive translates to shift from blue to
red. White means zero amplitude. You can use the cmap parameter to define
the color map yourself. The accepted values include all matplotlib colormaps.
End of explanation
title = 'MNE sample data\n(condition : %s)'
evoked_l_aud.plot_topo(title=title % evoked_l_aud.comment,
background_color='k', color=['white'])
mne.viz.plot_evoked_topo(evoked, title=title % 'Left/Right Auditory/Visual',
background_color='w')
Explanation: Finally we plot the sensor data as a topographical view. In the simple case
we plot only left auditory responses, and then we plot them all in the same
figure for comparison. Click on the individual plots to open them bigger.
End of explanation
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
maps = mne.make_field_map(evoked_l_aud, trans=trans_fname, subject='sample',
subjects_dir=subjects_dir, n_jobs=1)
# Finally, explore several points in time
field_map = evoked_l_aud.plot_field(maps, time=.1)
Explanation: Visualizing field lines in 3D
We now compute the field maps to project MEG and EEG data to the MEG helmet
and scalp surface.
To do this, we need coregistration information. See
tut_forward for more details. Here we just illustrate usage.
End of explanation |
4,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
You can use LPVisu to visualize Integer Linear Programming problems in Jupyter notebooks. First import the LPVisu class
Step1: You can then define a problem
Step2: To draw the polygon, create an object of type LPVisu. Do not forget to use the notebook parameter. Use the integers parameter to draw integers inside the polygon.
Step3: You can add cuts with the A_cuts and b_cuts parameters. | Python Code:
from lp_visu import LPVisu
Explanation: You can use LPVisu to visualize Integer Linear Programming problems in Jupyter notebooks. First import the LPVisu class:
End of explanation
# problem definition
A = [[1.0, 0.0], [1.0, 2.0], [2.0, 1.0]]
b = [8.0, 15.0, 18.0]
c = [-4.0, -3.0]
x1_bounds = (0, None)
x2_bounds = (0, None)
# GUI bounds
x1_gui_bounds = (-1, 16)
x2_gui_bounds = (-1, 10)
Explanation: You can then define a problem:
End of explanation
visu = LPVisu(A, b, c,
x1_bounds, x2_bounds,
x1_gui_bounds, x2_gui_bounds,
scale = 0.8,
integers = True)
Explanation: To draw the polygon, create an object of type LPVisu. Do not forget to use the notebook parameter. Use the integers parameter to draw integers inside the polygon.
End of explanation
visu = LPVisu(A, b, c,
x1_bounds, x2_bounds,
x1_gui_bounds, x2_gui_bounds,
scale = 0.8,
A_cuts = [[0.0, 1.0]], b_cuts = [6.0], integers = True)
Explanation: You can add cuts with the A_cuts and b_cuts parameters.
End of explanation |
4,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This may be more readable on NBViewer.
Step1: As we saw earlier the Dirichlet process describes the distribution of a random probability distribution. The Dirichlet process takes two parameters
Step2: Let's illustrate again with a standard normal base measure. We can construct a function base_measure that generates samples from it.
Step3: Because the normal distribution has continuous support, we can generate samples from it forever and we will never see the same sample twice (in theory). We can illustrate this by drawing from the distribution ten thousand times and seeing that we get ten thousand unique values.
Step4: However, when we feed the base measure through the stochastic memoization procedure and then sample, we get many duplicate samples. The number of unique samples goes up as $\alpha$ increases (and down as $\alpha$ decreases).
Step5: At this point, we have a function dp_draws that returns samples from a probability distribution (specifically, a probability distribution sampled from $\text{DP}(\alpha H_0)$). We can use dp_draws as a base distribution for another Dirichlet process!
Step6: How do we interpret this? norm_dp is a sampler from a probability distribution that looks like the standard normal distribution. norm_hdp is a sampler from a probability distribution that "looks like" the distribution norm_dp samples from.
Here is a histogram of samples drawn from norm_dp, our first sampler.
Step7: And here is a histogram for samples drawn from norm_hdp, our second sampler.
Step8: The second plot doesn't look very much like the first! The level to which a sample from a Dirichlet process approximates the base distribution is a function of the dispersion parameter $\alpha$. Because I set $\alpha=10$ (which is relatively small), the approximation is fairly coarse. In terms of memoization, a small $\alpha$ value means the stochastic memoizer will more frequently reuse values already seen instead of drawing new ones.
This nesting procedure, where a sample from one Dirichlet process is fed into another Dirichlet process as a base distribution, is more than just a curiosity. It is known as a Hierarchical Dirichlet Process, and it plays an important role in the study of Bayesian Nonparametrics (more on this in a future post).
Without the stochastic memoization framework, constructing a sampler for a hierarchical Dirichlet process is a daunting task. We want to be able to draw samples from a distribution drawn from the second level Dirichlet process. However, to be able to do that, we need to be able to draw samples from a distribution sampled from a base distribution of the second-level Dirichlet process
Step9: Since the Hierarchical DP is a Dirichlet Process inside of Dirichlet process, we must provide it with both a first and second level $\alpha$ value.
Step10: We can sample directly from the probability distribution drawn from the Hierarchical Dirichlet Process.
Step11: norm_hdp is not equivalent to the Hierarchical Dirichlet Process; it samples from a single distribution sampled from this HDP. Each time we instantiate the norm_hdp variable, we are getting a sampler for a unique distribution. Below we sample five times and get five different distributions. | Python Code:
%matplotlib inline
Explanation: This may be more readable on NBViewer.
End of explanation
from numpy.random import choice
from scipy.stats import beta
class DirichletProcessSample():
def __init__(self, base_measure, alpha):
self.base_measure = base_measure
self.alpha = alpha
self.cache = []
self.weights = []
self.total_stick_used = 0.
def __call__(self):
remaining = 1.0 - self.total_stick_used
i = DirichletProcessSample.roll_die(self.weights + [remaining])
if i is not None and i < len(self.weights) :
return self.cache[i]
else:
stick_piece = beta(1, self.alpha).rvs() * remaining
self.total_stick_used += stick_piece
self.weights.append(stick_piece)
new_value = self.base_measure()
self.cache.append(new_value)
return new_value
@staticmethod
def roll_die(weights):
if weights:
return choice(range(len(weights)), p=weights)
else:
return None
Explanation: As we saw earlier the Dirichlet process describes the distribution of a random probability distribution. The Dirichlet process takes two parameters: a base distribution $H_0$ and a dispersion parameter $\alpha$. A sample from the Dirichlet process is itself a probability distribution that looks like $H_0$. On average, the larger $\alpha$ is, the closer a sample from $\text{DP}(\alpha H_0)$ will be to $H_0$.
Suppose we're feeling masochistic and want to input a distribution sampled from a Dirichlet process as base distribution to a new Dirichlet process. (It will turn out that there are good reasons for this!) Conceptually this makes sense. But can we construct such a thing in practice? Said another way, can we build a sampler that will draw samples from a probability distribution drawn from these nested Dirichlet processes? We might initially try construct a sample (a probability distribution) from the first Dirichlet process before feeding it into the second.
But recall that fully constructing a sample (a probability distribution!) from a Dirichlet process would require drawing a countably infinite number of samples from $H_0$ and from the beta distribution to generate the weights. This would take forever, even with Hadoop!
Dan Roy, et al helpfully described a technique of using stochastic memoization to construct a distribution sampled from a Dirichlet process in a just-in-time manner. This process provides us with the equivalent of the Scipy rvs method for the sampled distribution. Stochastic memoization is equivalent to the Chinese restaurant process: sometimes you get seated an an occupied table (i.e. sometimes you're given a sample you've seen before) and sometimes you're put at a new table (given a unique sample).
Here is our memoization class again:
End of explanation
from scipy.stats import norm
base_measure = lambda: norm().rvs()
Explanation: Let's illustrate again with a standard normal base measure. We can construct a function base_measure that generates samples from it.
End of explanation
import pandas as pd
from pandas import Series
ndraws = 10000
print("Number of unique samples after {} draws:".format(ndraws), end=' ')
draws = Series([base_measure() for _ in range(ndraws)])
print(draws.unique().size)
Explanation: Because the normal distribution has continuous support, we can generate samples from it forever and we will never see the same sample twice (in theory). We can illustrate this by drawing from the distribution ten thousand times and seeing that we get ten thousand unique values.
End of explanation
norm_dp = DirichletProcessSample(base_measure, alpha=100)
print("Number of unique samples after {} draws:".format(ndraws), end=' ')
dp_draws = Series([norm_dp() for _ in range(ndraws)])
print(dp_draws.unique().size)
Explanation: However, when we feed the base measure through the stochastic memoization procedure and then sample, we get many duplicate samples. The number of unique samples goes up as $\alpha$ increases (and down as $\alpha$ decreases).
End of explanation
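To make the effect of the concentration parameter concrete, here is a small sketch (not part of the original post) that counts unique draws for a few values of alpha; larger alpha yields more unique values.
# illustrative only: count distinct values returned by the memoized sampler
for alpha in [1, 10, 100, 1000]:
    sampler = DirichletProcessSample(base_measure, alpha=alpha)
    n_unique = Series([sampler() for _ in range(1000)]).unique().size
    print("alpha = {:>4}: {} unique values in 1000 draws".format(alpha, n_unique))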
norm_hdp = DirichletProcessSample(norm_dp, alpha=10)
Explanation: At this point, we have a function dp_draws that returns samples from a probability distribution (specifically, a probability distribution sampled from $\text{DP}(\alpha H_0)$). We can use dp_draws as a base distribution for another Dirichlet process!
End of explanation
import matplotlib.pyplot as plt
pd.Series(norm_dp() for _ in range(10000)).hist()
_=plt.title("Histogram of Samples from norm_dp")
Explanation: How do we interpret this? norm_dp is a sampler from a probability distribution that looks like the standard normal distribution. norm_hdp is a sampler from a probability distribution that "looks like" the distribution norm_dp samples from.
Here is a histogram of samples drawn from norm_dp, our first sampler.
End of explanation
pd.Series(norm_hdp() for _ in range(10000)).hist()
_=plt.title("Histogram of Samples from norm_hdp")
Explanation: And here is a histogram for samples drawn from norm_hdp, our second sampler.
End of explanation
class HierarchicalDirichletProcessSample(DirichletProcessSample):
def __init__(self, base_measure, alpha1, alpha2):
first_level_dp = DirichletProcessSample(base_measure, alpha1)
self.second_level_dp = DirichletProcessSample(first_level_dp, alpha2)
def __call__(self):
return self.second_level_dp()
Explanation: The second plot doesn't look very much like the first! The level to which a sample from a Dirichlet process approximates the base distribution is a function of the dispersion parameter $\alpha$. Because I set $\alpha=10$ (which is relatively small), the approximation is fairly coarse. In terms of memoization, a small $\alpha$ value means the stochastic memoizer will more frequently reuse values already seen instead of drawing new ones.
This nesting procedure, where a sample from one Dirichlet process is fed into another Dirichlet process as a base distribution, is more than just a curiosity. It is known as a Hierarchical Dirichlet Process, and it plays an important role in the study of Bayesian Nonparametrics (more on this in a future post).
Without the stochastic memoization framework, constructing a sampler for a hierarchical Dirichlet process is a daunting task. We want to be able to draw samples from a distribution drawn from the second-level Dirichlet process. However, to be able to do that, we need to be able to draw samples from a distribution sampled from a base distribution of the second-level Dirichlet process: this base distribution is a distribution drawn from the first-level Dirichlet process.
It might appear that we would need to fully construct the first-level sample (by drawing a countably infinite number of samples from the first-level base distribution). However, stochastic memoization allows us to construct the first-level distribution just-in-time, as it is needed at the second level.
We can define a Python class that encapsulates the Hierarchical Dirichlet Process as a subclass of the Dirichlet process sampler.
End of explanation
norm_hdp = HierarchicalDirichletProcessSample(base_measure, alpha1=10, alpha2=20)
Explanation: Since the Hierarchical DP is a Dirichlet Process inside of Dirichlet process, we must provide it with both a first and second level $\alpha$ value.
End of explanation
pd.Series(norm_hdp() for _ in range(10000)).hist()
_=plt.title("Histogram of samples from distribution drawn from Hierarchical DP")
Explanation: We can sample directly from the probability distribution drawn from the Hierarchical Dirichlet Process.
End of explanation
for i in range(5):
norm_hdp = HierarchicalDirichletProcessSample(base_measure, alpha1=10, alpha2=10)
_=pd.Series(norm_hdp() for _ in range(100)).hist()
_=plt.title("Histogram of samples from distribution drawn from Hierarchical DP")
_=plt.figure()
Explanation: norm_hdp is not equivalent to the Hierarchical Dirichlet Process; it samples from a single distribution sampled from this HDP. Each time we instantiate the norm_hdp variable, we are getting a sampler for a unique distribution. Below we sample five times and get five different distributions.
End of explanation |
4,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
House Prices Estimator
Note
Step1: Here we have to find the 'NaN' values and fill them with the mean. It is probably not the best way to complete the missing information, but at least it keeps each feature's distribution roughly in place.
The mean is preserved with this method, although the standard deviation is slightly reduced.
Step2: The 'SalePrice' distribution is skewed. We can stabilize it by applying a logarithm, which is safe because all the values are positive.
Step3: There are a lot of features in this dataset, so we are going to select only the features most correlated with 'SalePrice'.
With the following query we can see that the first ten features have a good correlation. In case we want to change the number of correlated features to be retrieved, we'll create a function to customize that.
Step4: KFold
As usual, we are going to use 'KFold' to split the dataset into different folds.
Step5: We implement two methods to plot the PCA in case we need to visualize the information in a 2D graph. We'll need to reduce all the features to only one feature (component).
Step6: By removing 1% of the samples as anomalies we can get a more stable prediction, although that is not certain. It will probably help, and these anomalous points are left out of the final calculation.
Step7: Model
Two methods are created: one for training and the other for showing the calculated metrics.
This is still under study, so it may change in the future. So far the best regressor has been gradient boosting.
Step8: Running Models
Step9: Adding Categorical
Step10: Get Predictions | Python Code:
import numpy as np
import pandas as pd
#load the files
train = pd.read_csv('input/train.csv')
test = pd.read_csv('input/test.csv')
data = pd.concat([train, test])
#size of training dataset
train_samples = train.shape[0]
test_samples = test.shape[0]
# remove the Id feature
data.drop(['Id'],1, inplace=True);
#data.describe()
Explanation: House Prices Estimator
Note: It's a competition from Kaggle.com and the input data was retrieved from there.
Data Analysis
End of explanation
datanum = data.select_dtypes([np.number])
datanum = datanum.fillna(datanum.dropna().mean())
Explanation: Here we have to find the 'NaN' values and fill them with the mean. It is probably not the best way to complete the missing information, but at least it keeps each feature's distribution roughly in place.
The mean is preserved with this method, although the standard deviation is slightly reduced.
End of explanation
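As a rough, optional alternative to mean imputation (not in the original notebook), a median-based imputer could be used instead; the sketch below assumes scikit-learn's SimpleImputer is available and is only illustrative.
# hypothetical alternative: median imputation is more robust to skewed features
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='median')
datanum_median = pd.DataFrame(imputer.fit_transform(datanum),
                              columns=datanum.columns, index=datanum.index)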
import matplotlib.pyplot as plt
%matplotlib inline
# Transforming to non-skewed SalePrice
data.SalePrice = data.SalePrice.apply(np.log)
data.SalePrice.hist(bins=50)
Explanation: The 'SalePrice' distribution is skewed. We can stabilize it by applying a logarithm, which is safe because all the values are positive.
End of explanation
# Correlation features
datanum.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).head(10)
def getDataWithHighCorrFeatures(data, numberFeatures=10):
high_corr_feat_names = data.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).head(numberFeatures).axes[0].tolist()
#high_corr_feat_names.remove('SalePrice')
return data[high_corr_feat_names]
Explanation: There are a lot of features in this dataset, so we are going to select only the features most correlated with 'SalePrice'.
With the following query we can see that the first ten features have a good correlation. In case we want to change the number of correlated features to be retrieved, we'll create a function to customize that.
End of explanation
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, random_state=13)#, shuffle=True)
print(kf)
Explanation: KFold
As usual, we are going to use 'KFold' to split the dataset into different folds.
End of explanation
#plotting PCA
from sklearn.decomposition import PCA
def getX_PCA(X):
pca = PCA(n_components=1)
return pca.fit(X).transform(X)
def plotPCA(X, y):
pca = PCA(n_components=1)
X_r = pca.fit(X).transform(X)
plt.plot(X_r, y, 'x')
Explanation: We implement two methods to plot the PCA in case we need to visualize the information in a 2D graph. We'll need to reduce all the features to only one feature (component).
End of explanation
from sklearn.covariance import EllipticEnvelope
def removeAnomalies(X_train, y_train, verbose=False):
# fit the model
ee = EllipticEnvelope(contamination=0.01,
assume_centered=True,
random_state=13)
ee.fit(X_train)
pred = ee.predict(X_train)
X_anom = X_train[pred != 1]
y_anom = y_train[pred != 1]
X_no_anom = X_train[pred == 1]
y_no_anom = y_train[pred == 1]
if (verbose):
print("Number samples no anomalies: {}".format(X_no_anom.shape[0]))
#after removing anomalies
#plt.scatter(getX_PCA(X_no_anom), y_no_anom)
#plt.scatter(getX_PCA(X_anom), y_anom)
return X_no_anom, y_no_anom
def idxNotAnomalies(X):
ee = EllipticEnvelope(contamination=0.01,
assume_centered=True,
random_state=13)
ee.fit(X)
pred = ee.predict(X)
return [index[0] for index, x in np.ndenumerate(pred) if x == 1]
Explanation: By removing 1% of the samples as anomalies we can get a more stable prediction, although that is not certain. It will probably help, and these anomalous points are left out of the final calculation.
End of explanation
# Linear regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
def train(X_train, y_train, verbose=False):
#lr = LinearRegression()
import xgboost as xgb
lr = xgb.XGBRegressor(max_depth=5,
n_estimators=250,
min_child_weight=7,
n_jobs=4)
#
batch = 0
for train_idx, val_idx in kf.split(X_train, y_train):
X_t, X_v = X_train[train_idx], X_train[val_idx]
y_t, y_v = y_train[train_idx], y_train[val_idx]
#training
lr.fit(X_t, y_t)
#calculate costs
t_error = mean_squared_error(y_t, lr.predict(X_t))**0.5
v_error = mean_squared_error(y_v, lr.predict(X_v))**0.5
if verbose:
print("{}) Training error: {:.2f} Validation error: {:.2f} Score: {:.2f}"
.format(batch, t_error, v_error, lr.score(X_v, y_v)))
batch += 1
return lr
def metrics(model, X, y, verbose=False):
#Scores
if verbose:
print("Training score: {:.4f}".format(model.score(X, y)))
#RMSLE
#print(np.count_nonzero(~np.isfinite(model.predict(X))))
rmsle = mean_squared_error(y, model.predict(X))**0.5
if verbose:
print("RMSLE: {:.4f}".format(rmsle))
# Plotting the results
plt.scatter(model.predict(X), y)
return rmsle, model.score(X, y)
# Get polynomial features
from sklearn.preprocessing import PolynomialFeatures
def getPolynomial(X_train, X_no_anom, X_test):
poly = PolynomialFeatures(degree=2)
return poly.fit_transform(X_train), poly.fit_transform(X_no_anom), poly.fit_transform(X_test)
def getKeyWithMinError(X_train, X_no_amon, y_train, y_no_anom, verbose=False):
rmsles = {}
for f in range(1,X_train.shape[1]):
model = train(X_no_anom[:,:f], y_no_anom, verbose=False)
rmsles[f] = metrics(model, X_train[:,:f], y_train, verbose=False)
min_error_key = min(rmsles, key=rmsles.get)
if (verbose):
print("Min error (k={}):{}".format(min_error_key, rmsles[min_error_key]))
#model = train(X_train_pol[:,:min_error_key], y_train)
#metrics(model, X_train_orig_pol[:,:min_error_key], y_train_orig)
#pd.Series(rmsles).plot()
return min_error_key
Explanation: Model
Two methods are created: one for training and the other for showing the calculated metrics.
This is still under study, so it may change in the future. So far the best regressor has been gradient boosting.
End of explanation
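The train() function above plugs in an XGBoost regressor. As a hedged sketch (not from the original notebook), scikit-learn's GradientBoostingRegressor could be swapped in the same way; the hyperparameters below are illustrative guesses, not tuned values.
# hypothetical drop-in alternative for the regressor built inside train()
from sklearn.ensemble import GradientBoostingRegressor
def make_regressor():
    # illustrative settings; tune with cross-validation before relying on them
    return GradientBoostingRegressor(n_estimators=250, max_depth=5,
                                     learning_rate=0.05, random_state=13)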
import warnings
warnings.filterwarnings('ignore')
errors = []
for f in range(1,17):
#print("====Corr feat: {}====".format(f))
datanum_high_corr = getDataWithHighCorrFeatures(datanum, f)
y = np.array(data['SalePrice'])
X = np.array(datanum_high_corr)
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
#print("Shape X train: {}".format(X_train.shape))
X_no_anom, y_no_anom = removeAnomalies(X_train, y_train)
#print("Shape X train (no anom): {}".format(X_no_anom.shape))
X_train, X_no_anom, X_test = getPolynomial(X_train, X_no_anom, X_test)
#print("Shape X train (poly): {}".format(X_no_anom.shape))
key = 1000 #getKeyWithMinError(X_train, X_no_anom, y_train, y_no_anom)
model = train(X_no_anom[:,:key], y_no_anom)
error, score = metrics(model, X_train[:,:key], y_train)
print("f:{} err:{:.3f} score:{:.3f}".format(f, error, score))
errors.append(error)
# show graph
pd.Series(errors).plot()
Explanation: Running Models
End of explanation
features = data.select_dtypes(['object']).axes[1].tolist()
features.append('SalePrice')
datacat = pd.get_dummies(data[features])
datacat = datacat.fillna(datacat.dropna().mean())
datacat.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).head(10)
features_order_by_corr = datacat.corr()['SalePrice'].drop('SalePrice').sort_values(ascending=False).axes[0].tolist()
import warnings
warnings.filterwarnings('ignore')
datanum_high_corr = getDataWithHighCorrFeatures(datanum, 15)
Xn = np.array(datanum_high_corr)
#choosing the number of categorical
num_cat = 10
Xc = np.array(datacat[features_order_by_corr[:num_cat]])
y = np.array(data['SalePrice'])
poly = PolynomialFeatures(degree=2)
Xpn = poly.fit_transform(Xn)
X = np.concatenate([Xpn, Xc], axis=1)
no_anom_idx = idxNotAnomalies(Xn[:idx]) #only from numeric features
print(Xpn.shape)
print(Xc.shape)
print(X.shape)
#split by idx
idx = train_samples
X_train, X_test = X[:idx], X[idx:]
y_train, y_test = y[:idx], y[idx:]
print("Shape X train: {}".format(X_train.shape))
print("Shape X test: {}".format(X_test.shape))
print("Shape X train (no anom): {}".format(X_train[no_anom_idx].shape))
X_no_anom = X_train[no_anom_idx]
y_no_anom = y_train[no_anom_idx]
errors = {}
scores = {}
for f in range(15^2, X_train.shape[1]):
modelt = train(X_no_anom[:,:f], y_no_anom, verbose=False)
err, score = metrics(modelt, X_train[:,:f], y_train, verbose=False)
if err > 1e-7:
errors[f] = err
scores[f] = score
else:
break
min_error_key = min(errors, key=errors.get)
max_score_key = max(scores, key=scores.get)
print("Min error: {:.3f}".format(errors[min_error_key]))
print("Max score: {:.3f}".format(scores[max_score_key]))
modelt = train(X_no_anom[:,:235], y_no_anom, verbose=False)
err, score = metrics(modelt, X_train[:,:235], y_train, verbose=False)
print(err)
print(score)
pd.Series(errors).plot()
pd.Series(scores).plot()
print(min_error_key)
Explanation: Adding Categorical
End of explanation
import os
from sklearn.model_selection import cross_val_score
from sklearn.metrics import make_scorer
min_error_key = 235
def RMSE(y_true,y_pred):
rmse = mean_squared_error(y_true, y_pred)**0.5
return rmse
modelt = train(X_no_anom[:,:min_error_key], y_no_anom, verbose=False)
err, score = metrics(modelt, X_train[:,:min_error_key], y_train, verbose=False)
print("Err: {:.3f} | R2: {:.3f}".format(err, score))
scores = cross_val_score(modelt, X_train[:,:min_error_key], y_train,
scoring=make_scorer(RMSE, greater_is_better=True), cv=10)
print("Scores: {}".format(scores))
print("Score (mean): {:.3f}".format(scores.mean()))
predict = modelt.predict(X_test[:,:min_error_key])
#predictions are logs, return to the value
predict = np.exp(predict)
file = "Id,SalePrice" + os.linesep
startId = 1461
for i in range(len(X_test)):
file += "{},{}".format(startId, (int)(predict[i])) + os.linesep
startId += 1
# Save to file
with open('attempt.txt', 'w') as f:
f.write(file)
# Using XGRegressor?
#lr = XGBRegressor(max_depth=5, n_estimators=250,min_child_weight=10)
Explanation: Get Predictions
End of explanation |
4,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Raw data structure
Step1: Loading continuous data
.. sidebar
Step2: As you can see above,
Step3: By default, the
Step4: Querying the Raw object
.. sidebar
Step5: <div class="alert alert-info"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at
acquisition time, and should not be changed by the user. There are a few
exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but
in most cases there are dedicated MNE-Python functions or methods to
update the
Step6: Modifying Raw objects
.. sidebar
Step7: Similar to the
Step8: If you want the channels in a specific order (e.g., for plotting),
Step9: Changing channel name and type
.. sidebar
Step10: This next example replaces spaces in the channel names with underscores,
using a Python dict comprehension_
Step11: If for some reason the channel types in your
Step12: Selection in the time domain
If you want to limit the time domain of a
Step13:
Step14: Remember that sample times don't always align exactly with requested tmin
or tmax values (due to sampling), which is why the max values of the
cropped files don't exactly match the requested tmax (see
time-as-index for further details).
If you need to select discontinuous spans of a
Step15: <div class="alert alert-danger"><h4>Warning</h4><p>Be careful when concatenating
Step16: You can see that it contains 2 arrays. This combination of data and times
makes it easy to plot selections of raw data (although note that we're
transposing the data array so that each channel is a column instead of a row,
to match what matplotlib expects when plotting 2-dimensional y against
1-dimensional x)
Step17: Extracting channels by name
The
Step18: Extracting channels by type
There are several ways to select all channels of a given type from a
Step19: Some of the parameters of
Step20: If you want the array of times,
Step21: The
Step22: Summary of ways to extract data from Raw objects
The following table summarizes the various ways of extracting data from a
Step23: It is also possible to export the data to a | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import mne
Explanation: The Raw data structure: continuous data
This tutorial covers the basics of working with raw EEG/MEG data in Python. It
introduces the :class:~mne.io.Raw data structure in detail, including how to
load, query, subselect, export, and plot data from a :class:~mne.io.Raw
object. For more info on visualization of :class:~mne.io.Raw objects, see
tut-visualize-raw. For info on creating a :class:~mne.io.Raw object
from simulated data in a :class:NumPy array <numpy.ndarray>, see
tut-creating-data-structures.
As usual we'll start by importing the modules we need:
End of explanation
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
Explanation: Loading continuous data
.. sidebar:: Datasets in MNE-Python
There are ``data_path`` functions for several example datasets in
MNE-Python (e.g., :func:`mne.datasets.kiloword.data_path`,
:func:`mne.datasets.spm_face.data_path`, etc). All of them will check the
default download location first to see if the dataset is already on your
computer, and only download it if necessary. The default download
location is also configurable; see the documentation of any of the
``data_path`` functions for more information.
As mentioned in the introductory tutorial <tut-overview>,
MNE-Python data structures are based around
the :file:.fif file format from Neuromag. This tutorial uses an
example dataset <sample-dataset> in :file:.fif format, so here we'll
use the function :func:mne.io.read_raw_fif to load the raw data; there are
reader functions for a wide variety of other data formats
<data-formats> as well.
There are also several other example datasets
<datasets> that can be downloaded with just a few lines
of code. Functions for downloading example datasets are in the
:mod:mne.datasets submodule; here we'll use
:func:mne.datasets.sample.data_path to download the "sample-dataset"
dataset, which contains EEG, MEG, and structural MRI data from one subject
performing an audiovisual experiment. When it's done downloading,
:func:~mne.datasets.sample.data_path will return the folder location where
it put the files; you can navigate there with your file browser if you want
to examine the files yourself. Once we have the file path, we can load the
data with :func:~mne.io.read_raw_fif. This will return a
:class:~mne.io.Raw object, which we'll store in a variable called raw.
End of explanation
print(raw)
Explanation: As you can see above, :func:~mne.io.read_raw_fif automatically displays
some information about the file it's loading. For example, here it tells us
that there are three "projection items" in the file along with the recorded
data; those are :term:SSP projectors <projector> calculated to remove
environmental noise from the MEG signals, and are discussed in a the tutorial
tut-projectors-background.
In addition to the information displayed during loading, you can
get a glimpse of the basic details of a :class:~mne.io.Raw object by
printing it:
End of explanation
raw.crop(tmax=60)
Explanation: By default, the :samp:mne.io.read_raw_{*} family of functions will not
load the data into memory (instead the data on disk are memory-mapped_,
meaning the data are only read from disk as-needed). Some operations (such as
filtering) require that the data be copied into RAM; to do that we could have
passed the preload=True parameter to :func:~mne.io.read_raw_fif, but we
can also copy the data into RAM at any time using the
:meth:~mne.io.Raw.load_data method. However, since this particular tutorial
doesn't do any serious analysis of the data, we'll first
:meth:~mne.io.Raw.crop the :class:~mne.io.Raw object to 60 seconds so it
uses less memory and runs more smoothly on our documentation server.
End of explanation
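A minimal sketch of the two loading options mentioned above (kept out of the tutorial's main flow): pass preload=True at read time, or call load_data() later.
# load into RAM at read time ...
raw_preloaded = mne.io.read_raw_fif(sample_data_raw_file, preload=True)
# ... or load an existing memory-mapped Raw object on demand
raw.load_data()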
n_time_samps = raw.n_times
time_secs = raw.times
ch_names = raw.ch_names
n_chan = len(ch_names) # note: there is no raw.n_channels attribute
print('the (cropped) sample data object has {} time samples and {} channels.'
''.format(n_time_samps, n_chan))
print('The last time sample is at {} seconds.'.format(time_secs[-1]))
print('The first few channel names are {}.'.format(', '.join(ch_names[:3])))
print() # insert a blank line in the output
# some examples of raw.info:
print('bad channels:', raw.info['bads']) # chs marked "bad" during acquisition
print(raw.info['sfreq'], 'Hz') # sampling frequency
print(raw.info['description'], '\n') # miscellaneous acquisition info
print(raw.info)
Explanation: Querying the Raw object
.. sidebar:: Attributes vs. Methods
**Attributes** are usually static properties of Python objects — things
that are pre-computed and stored as part of the object's representation
in memory. Attributes are accessed with the ``.`` operator and do not
require parentheses after the attribute name (example: ``raw.ch_names``).
**Methods** are like specialized functions attached to an object.
Usually they require additional user input and/or need some computation
to yield a result. Methods always have parentheses at the end; additional
arguments (if any) go inside those parentheses (examples:
``raw.estimate_rank()``, ``raw.drop_channels(['EEG 030', 'MEG 2242'])``).
We saw above that printing the :class:~mne.io.Raw object displays some
basic information like the total number of channels, the number of time
points at which the data were sampled, total duration, and the approximate
size in memory. Much more information is available through the various
attributes and methods of the :class:~mne.io.Raw class. Some useful
attributes of :class:~mne.io.Raw objects include a list of the channel
names (:attr:~mne.io.Raw.ch_names), an array of the sample times in seconds
(:attr:~mne.io.Raw.times), and the total number of samples
(:attr:~mne.io.Raw.n_times); a list of all attributes and methods is given
in the documentation of the :class:~mne.io.Raw class.
The Raw.info attribute
There is also quite a lot of information stored in the raw.info
attribute, which stores an :class:~mne.Info object that is similar to a
:class:Python dictionary <dict> (in that it has fields accessed via named
keys). Like Python dictionaries, raw.info has a .keys() method that
shows all the available field names; unlike Python dictionaries, printing
raw.info will print a nicely-formatted glimpse of each field's data. See
tut-info-class for more on what is stored in :class:~mne.Info
objects, and how to interact with them.
End of explanation
print(raw.time_as_index(20))
print(raw.time_as_index([20, 30, 40]), '\n')
print(np.diff(raw.time_as_index([1, 2, 3])))
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Most of the fields of ``raw.info`` reflect metadata recorded at
acquisition time, and should not be changed by the user. There are a few
exceptions (such as ``raw.info['bads']`` and ``raw.info['projs']``), but
in most cases there are dedicated MNE-Python functions or methods to
update the :class:`~mne.Info` object safely (such as
:meth:`~mne.io.Raw.add_proj` to update ``raw.info['projs']``).</p></div>
Time, sample number, and sample index
.. sidebar:: Sample numbering in VectorView data
For data from VectorView systems, it is important to distinguish *sample
number* from *sample index*. See :term:`first_samp` for more information.
One method of :class:~mne.io.Raw objects that is frequently useful is
:meth:~mne.io.Raw.time_as_index, which converts a time (in seconds) into
the integer index of the sample occurring closest to that time. The method
can also take a list or array of times, and will return an array of indices.
It is important to remember that there may not be a data sample at exactly
the time requested, so the number of samples between time = 1 second and
time = 2 seconds may be different than the number of samples between
time = 2 and time = 3:
End of explanation
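As a quick, hedged illustration of the editable fields mentioned in the note above (the channel name is arbitrary and this copy is not used later in the tutorial):
# mark a channel as bad on a copy so the rest of the tutorial is unaffected
raw_marked = raw.copy()
raw_marked.info['bads'].append('EEG 050')
print(raw_marked.info['bads'])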
eeg_and_eog = raw.copy().pick_types(meg=False, eeg=True, eog=True)
print(len(raw.ch_names), '→', len(eeg_and_eog.ch_names))
Explanation: Modifying Raw objects
.. sidebar:: len(raw)
Although the :class:`~mne.io.Raw` object underlyingly stores data samples
in a :class:`NumPy array <numpy.ndarray>` of shape (n_channels,
n_timepoints), the :class:`~mne.io.Raw` object behaves differently from
:class:`NumPy arrays <numpy.ndarray>` with respect to the :func:`len`
function. ``len(raw)`` will return the number of timepoints (length along
data axis 1), not the number of channels (length along data axis 0).
Hence in this section you'll see ``len(raw.ch_names)`` to get the number
of channels.
:class:~mne.io.Raw objects have a number of methods that modify the
:class:~mne.io.Raw instance in-place and return a reference to the modified
instance. This can be useful for method chaining_
(e.g., raw.crop(...).pick_channels(...).filter(...).plot())
but it also poses a problem during interactive analysis: if you modify your
:class:~mne.io.Raw object for an exploratory plot or analysis (say, by
dropping some channels), you will then need to re-load the data (and repeat
any earlier processing steps) to undo the channel-dropping and try something
else. For that reason, the examples in this section frequently use the
:meth:~mne.io.Raw.copy method before the other methods being demonstrated,
so that the original :class:~mne.io.Raw object is still available in the
variable raw for use in later examples.
Selecting, dropping, and reordering channels
Altering the channels of a :class:~mne.io.Raw object can be done in several
ways. As a first example, we'll use the :meth:~mne.io.Raw.pick_types method
to restrict the :class:~mne.io.Raw object to just the EEG and EOG channels:
End of explanation
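The method-chaining pattern mentioned above could look like the following sketch (illustrative only; load_data() is needed before filtering because this Raw was read without preload):
# copy first so the original raw is untouched, then chain the processing steps
eeg_band = (raw.copy()
            .pick_types(meg=False, eeg=True)
            .load_data()
            .filter(l_freq=1., h_freq=40.))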
raw_temp = raw.copy()
print('Number of channels in raw_temp:')
print(len(raw_temp.ch_names), end=' → drop two → ')
raw_temp.drop_channels(['EEG 037', 'EEG 059'])
print(len(raw_temp.ch_names), end=' → pick three → ')
raw_temp.pick_channels(['MEG 1811', 'EEG 017', 'EOG 061'])
print(len(raw_temp.ch_names))
Explanation: Similar to the :meth:~mne.io.Raw.pick_types method, there is also the
:meth:~mne.io.Raw.pick_channels method to pick channels by name, and a
corresponding :meth:~mne.io.Raw.drop_channels method to remove channels by
name:
End of explanation
channel_names = ['EOG 061', 'EEG 003', 'EEG 002', 'EEG 001']
eog_and_frontal_eeg = raw.copy().reorder_channels(channel_names)
print(eog_and_frontal_eeg.ch_names)
Explanation: If you want the channels in a specific order (e.g., for plotting),
:meth:~mne.io.Raw.reorder_channels works just like
:meth:~mne.io.Raw.pick_channels but also reorders the channels; for
example, here we pick the EOG and frontal EEG channels, putting the EOG
first and the EEG in reverse order:
End of explanation
raw.rename_channels({'EOG 061': 'blink detector'})
Explanation: Changing channel name and type
.. sidebar:: Long channel names
Due to limitations in the :file:`.fif` file format (which MNE-Python uses
to save :class:`~mne.io.Raw` objects), channel names are limited to a
maximum of 15 characters.
You may have noticed that the EEG channel names in the sample data are
numbered rather than labelled according to a standard nomenclature such as
the 10-20 or 10-05 systems, or perhaps it
bothers you that the channel names contain spaces. It is possible to rename
channels using the :meth:~mne.io.Raw.rename_channels method, which takes a
Python dictionary to map old names to new names. You need not rename all
channels at once; provide only the dictionary entries for the channels you
want to rename. Here's a frivolous example:
End of explanation
print(raw.ch_names[-3:])
channel_renaming_dict = {name: name.replace(' ', '_') for name in raw.ch_names}
raw.rename_channels(channel_renaming_dict)
print(raw.ch_names[-3:])
Explanation: This next example replaces spaces in the channel names with underscores,
using a Python dict comprehension_:
End of explanation
raw.set_channel_types({'EEG_001': 'eog'})
print(raw.copy().pick_types(meg=False, eog=True).ch_names)
Explanation: If for some reason the channel types in your :class:~mne.io.Raw object are
inaccurate, you can change the type of any channel with the
:meth:~mne.io.Raw.set_channel_types method. The method takes a
:class:dictionary <dict> mapping channel names to types; allowed types are
ecg, eeg, emg, eog, exci, ias, misc, resp, seeg, dbs, stim, syst, ecog,
hbo, hbr. A common use case for changing channel type is when using frontal
EEG electrodes as makeshift EOG channels:
End of explanation
raw_selection = raw.copy().crop(tmin=10, tmax=12.5)
print(raw_selection)
Explanation: Selection in the time domain
If you want to limit the time domain of a :class:~mne.io.Raw object, you
can use the :meth:~mne.io.Raw.crop method, which modifies the
:class:~mne.io.Raw object in place (we've seen this already at the start of
this tutorial, when we cropped the :class:~mne.io.Raw object to 60 seconds
to reduce memory demands). :meth:~mne.io.Raw.crop takes parameters tmin
and tmax, both in seconds (here we'll again use :meth:~mne.io.Raw.copy
first to avoid changing the original :class:~mne.io.Raw object):
End of explanation
print(raw_selection.times.min(), raw_selection.times.max())
raw_selection.crop(tmin=1)
print(raw_selection.times.min(), raw_selection.times.max())
Explanation: :meth:~mne.io.Raw.crop also modifies the :attr:~mne.io.Raw.first_samp and
:attr:~mne.io.Raw.times attributes, so that the first sample of the cropped
object now corresponds to time = 0. Accordingly, if you wanted to re-crop
raw_selection from 11 to 12.5 seconds (instead of 10 to 12.5 as above)
then the subsequent call to :meth:~mne.io.Raw.crop should get tmin=1
(not tmin=11), and leave tmax unspecified to keep everything from
tmin up to the end of the object:
End of explanation
raw_selection1 = raw.copy().crop(tmin=30, tmax=30.1) # 0.1 seconds
raw_selection2 = raw.copy().crop(tmin=40, tmax=41.1) # 1.1 seconds
raw_selection3 = raw.copy().crop(tmin=50, tmax=51.3) # 1.3 seconds
raw_selection1.append([raw_selection2, raw_selection3]) # 2.5 seconds total
print(raw_selection1.times.min(), raw_selection1.times.max())
Explanation: Remember that sample times don't always align exactly with requested tmin
or tmax values (due to sampling), which is why the max values of the
cropped files don't exactly match the requested tmax (see
time-as-index for further details).
If you need to select discontinuous spans of a :class:~mne.io.Raw object —
or combine two or more separate :class:~mne.io.Raw objects — you can use
the :meth:~mne.io.Raw.append method:
End of explanation
sampling_freq = raw.info['sfreq']
start_stop_seconds = np.array([11, 13])
start_sample, stop_sample = (start_stop_seconds * sampling_freq).astype(int)
channel_index = 0
raw_selection = raw[channel_index, start_sample:stop_sample]
print(raw_selection)
Explanation: <div class="alert alert-danger"><h4>Warning</h4><p>Be careful when concatenating :class:`~mne.io.Raw` objects from different
recordings, especially when saving: :meth:`~mne.io.Raw.append` only
preserves the ``info`` attribute of the initial :class:`~mne.io.Raw`
object (the one outside the :meth:`~mne.io.Raw.append` method call).</p></div>
Extracting data from Raw objects
So far we've been looking at ways to modify a :class:~mne.io.Raw object.
This section shows how to extract the data from a :class:~mne.io.Raw object
into a :class:NumPy array <numpy.ndarray>, for analysis or plotting using
functions outside of MNE-Python. To select portions of the data,
:class:~mne.io.Raw objects can be indexed using square brackets. However,
indexing :class:~mne.io.Raw works differently than indexing a :class:NumPy
array <numpy.ndarray> in two ways:
Along with the requested sample value(s) MNE-Python also returns an array
of times (in seconds) corresponding to the requested samples. The data
array and the times array are returned together as elements of a tuple.
The data array will always be 2-dimensional even if you request only a
single time sample or a single channel.
Extracting data by index
To illustrate the above two points, let's select a couple seconds of data
from the first channel:
End of explanation
x = raw_selection[1]
y = raw_selection[0].T
plt.plot(x, y)
Explanation: You can see that it contains 2 arrays. This combination of data and times
makes it easy to plot selections of raw data (although note that we're
transposing the data array so that each channel is a column instead of a row,
to match what matplotlib expects when plotting 2-dimensional y against
1-dimensional x):
End of explanation
channel_names = ['MEG_0712', 'MEG_1022']
two_meg_chans = raw[channel_names, start_sample:stop_sample]
y_offset = np.array([5e-11, 0]) # just enough to separate the channel traces
x = two_meg_chans[1]
y = two_meg_chans[0].T + y_offset
lines = plt.plot(x, y)
plt.legend(lines, channel_names)
Explanation: Extracting channels by name
The :class:~mne.io.Raw object can also be indexed with the names of
channels instead of their index numbers. You can pass a single string to get
just one channel, or a list of strings to select multiple channels. As with
integer indexing, this will return a tuple of (data_array, times_array)
that can be easily plotted. Since we're plotting 2 channels this time, we'll
add a vertical offset to one channel so it's not plotted right on top
of the other one:
End of explanation
eeg_channel_indices = mne.pick_types(raw.info, meg=False, eeg=True)
eeg_data, times = raw[eeg_channel_indices]
print(eeg_data.shape)
Explanation: Extracting channels by type
There are several ways to select all channels of a given type from a
:class:~mne.io.Raw object. The safest method is to use
:func:mne.pick_types to obtain the integer indices of the channels you
want, then use those indices with the square-bracket indexing method shown
above. The :func:~mne.pick_types function uses the :class:~mne.Info
attribute of the :class:~mne.io.Raw object to determine channel types, and
takes boolean or string parameters to indicate which type(s) to retain. The
meg parameter defaults to True, and all others default to False,
so to get just the EEG channels, we pass eeg=True and meg=False:
End of explanation
data = raw.get_data()
print(data.shape)
Explanation: Some of the parameters of :func:mne.pick_types accept string arguments as
well as booleans. For example, the meg parameter can take values
'mag', 'grad', 'planar1', or 'planar2' to select only
magnetometers, all gradiometers, or a specific type of gradiometer. See the
docstring of :meth:mne.pick_types for full details.
The Raw.get_data() method
If you only want the data (not the corresponding array of times),
:class:~mne.io.Raw objects have a :meth:~mne.io.Raw.get_data method. Used
with no parameters specified, it will extract all data from all channels, in
a (n_channels, n_timepoints) :class:NumPy array <numpy.ndarray>:
End of explanation
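As a small illustration of the string values mentioned above (not in the original tutorial), gradiometer channels could be selected like this:
# integer indices of all gradiometer channels
grad_indices = mne.pick_types(raw.info, meg='grad')
print(len(grad_indices))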
data, times = raw.get_data(return_times=True)
print(data.shape)
print(times.shape)
Explanation: If you want the array of times, :meth:~mne.io.Raw.get_data has an optional
return_times parameter:
End of explanation
first_channel_data = raw.get_data(picks=0)
eeg_and_eog_data = raw.get_data(picks=['eeg', 'eog'])
two_meg_chans_data = raw.get_data(picks=['MEG_0712', 'MEG_1022'],
start=1000, stop=2000)
print(first_channel_data.shape)
print(eeg_and_eog_data.shape)
print(two_meg_chans_data.shape)
Explanation: The :meth:~mne.io.Raw.get_data method can also be used to extract specific
channel(s) and sample ranges, via its picks, start, and stop
parameters. The picks parameter accepts integer channel indices, channel
names, or channel types, and preserves the requested channel order given as
its picks parameter.
End of explanation
data = raw.get_data()
np.save(file='my_data.npy', arr=data)
Explanation: Summary of ways to extract data from Raw objects
The following table summarizes the various ways of extracting data from a
:class:~mne.io.Raw object.
.. cssclass:: table-bordered
.. rst-class:: midvalign
+-------------------------------------+-------------------------+
| Python code | Result |
| | |
| | |
+=====================================+=========================+
| raw.get_data() | :class:NumPy array |
| | <numpy.ndarray> |
| | (n_chans × n_samps) |
+-------------------------------------+-------------------------+
| raw[:] | :class:tuple of (data |
+-------------------------------------+ (n_chans × n_samps), |
| raw.get_data(return_times=True) | times (1 × n_samps)) |
+-------------------------------------+-------------------------+
| raw[0, 1000:2000] | |
+-------------------------------------+ |
| raw['MEG 0113', 1000:2000] | |
+-------------------------------------+ |
| raw.get_data(picks=0, | :class:`tuple` of |
| start=1000, stop=2000, | (data (1 × 1000), |
| return_times=True) | times (1 × 1000)) |
+-------------------------------------+ |
| raw.get_data(picks='MEG 0113', | |
| start=1000, stop=2000, | |
| return_times=True) | |
+-------------------------------------+-------------------------+
| raw[7:9, 1000:2000] | |
+-------------------------------------+ |
| raw[[2, 5], 1000:2000] | :class:tuple of |
+-------------------------------------+ (data (2 × 1000), |
| raw[['EEG 030', 'EOG 061'], | times (1 × 1000)) |
| 1000:2000] | |
+-------------------------------------+-------------------------+
Exporting and saving Raw objects
:class:~mne.io.Raw objects have a built-in :meth:~mne.io.Raw.save method,
which can be used to write a partially processed :class:~mne.io.Raw object
to disk as a :file:.fif file, such that it can be re-loaded later with its
various attributes intact (but see precision for an important
note about numerical precision when saving).
There are a few other ways to export just the sensor data from a
:class:~mne.io.Raw object. One is to use indexing or the
:meth:~mne.io.Raw.get_data method to extract the data, and use
:func:numpy.save to save the data array:
End of explanation
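For completeness, a minimal sketch of the save method mentioned above (the file name is arbitrary; MNE expects raw file names to end in raw.fif):
# write the cropped, modified Raw object to disk as a FIF file
raw.save('my_audvis_cropped_raw.fif', overwrite=True)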
sampling_freq = raw.info['sfreq']
start_end_secs = np.array([10, 13])
start_sample, stop_sample = (start_end_secs * sampling_freq).astype(int)
df = raw.to_data_frame(picks=['eeg'], start=start_sample, stop=stop_sample)
# then save using df.to_csv(...), df.to_hdf(...), etc
print(df.head())
Explanation: It is also possible to export the data to a :class:Pandas DataFrame
<pandas.DataFrame> object, and use the saving methods that :mod:Pandas
<pandas> affords. The :class:~mne.io.Raw object's
:meth:~mne.io.Raw.to_data_frame method is similar to
:meth:~mne.io.Raw.get_data in that it has a picks parameter for
restricting which channels are exported, and start and stop
parameters for restricting the time domain. Note that, by default, times will
be converted to milliseconds, rounded to the nearest millisecond, and used as
the DataFrame index; see the scaling_time parameter in the documentation
of :meth:~mne.io.Raw.to_data_frame for more details.
End of explanation |
4,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Built-in RNN layers
Step3: Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a big speedup when processing short sequences on a CPU), via the unroll argument
...and more.
For details, see the RNN API documentation.
Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units), where units corresponds to the units argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample) if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units).
Step4: In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder's final state is used as the initial state of the decoder.
To configure a RNN layer to return its internal state, set the return_state parameter to True when creating the layer. Note that LSTM has 2 state tensors, while GRU has only one.
To configure the layer's initial state, call the layer with the additional keyword argument initial_state. Note that the shape of the state needs to match the unit size of the layer, as in the example below.
Step5: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, an RNN cell only processes a single timestep.
The cell sits inside the for loop of an RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in an RNN layer. However, using the built-in GRU and LSTM layers enables the use of CuDNN, and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, makes it very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long (possibly infinite) sequences, you may want to use the pattern of cross-batch statefulness.
Normally, the internal state of a RNN layer is reset every time it sees a new batch (each sample seen by the layer is assumed to be independent of the past). The layer will only maintain state while processing a given sample.
If you have very long sequences though, you can break them into shorter sequences and feed these shorter sequences sequentially into the RNN layer without resetting the layer's state. That way, the layer can retain information about the entire sequence, even though it only sees one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it for example into:
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
Then you would process it via:
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences
Step6: RNN state reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in layer.weights(). If you would like to reuse the state of a RNN layer, you can retrieve the state values via layer.states and use them as the initial state of a new layer through the Keras functional API, e.g. new_layer(inputs, initial_state=layer.states), or through model subclassing.
Note that a Sequential model may not be usable in this case, since it only supports layers with a single input and output; the extra initial-state input makes that impossible here.
Step7: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that an RNN model performs better if it processes the sequence not only from start to end but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only the words that come before it.
Keras provides an easy API to build such bidirectional RNNs: the keras.layers.Bidirectional wrapper.
Step8: Under the hood, Bidirectional copies the RNN layer passed in and flips the go_backwards field of the newly copied layer, so that it processes the inputs in reverse order.
The output of the Bidirectional RNN is, by default, the concatenation of the forward layer output and the backward layer output. If you need different merging behavior (e.g. summation), change the merge_mode parameter in the Bidirectional wrapper constructor. For more details about Bidirectional, check the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the earlier keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
The CuDNN kernel is built with certain assumptions, and the layers will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers. Examples include:
Changing the activation function from tanh to something else.
Changing the recurrent_activation function from sigmoid to something else.
Using recurrent_dropout > 0.
Setting unroll to True, which forces LSTM/GRU to decompose the inner tf.while_loop into an unrolled for loop.
Setting use_bias to False.
Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case).
For the detailed list of constraints, see the documentation for the LSTM and GRU layers.
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label.
Step9: Let's load the MNIST dataset.
Step10: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has shape [batch_size, 10]. The target for the model is an integer vector, each of the integers being in the range 0 to 9.
Step11: Now, let's compare to a model that does not use the CuDNN kernel.
Step12: When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The tf.device annotation below just forces the device placement. The model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
Step13: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:
[batch, timestep, {"video"
Step14: Build a RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell we defined above.
Step15: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
Explanation: Recurrent Neural Networks (RNN) with Keras
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で実行</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colabで実行</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/guide/keras/rnn.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHubでソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/guide/keras/rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> ノートブックをダウンロード</a> </td>
</table>
Introduction
Recurrent neural networks (RNN) are a class of neural networks that is powerful for modeling sequence data such as time series or natural language.
Schematically, a RNN layer uses a for loop to iterate over the timesteps of a sequence, while maintaining an internal state that encodes information about the timesteps it has seen so far.
The Keras RNN API is designed with a focus on:
Ease of use: the built-in keras.layers.RNN, keras.layers.LSTM and keras.layers.GRU layers let you quickly build recurrent models without having to make difficult configuration choices.
Ease of customization: you can also define your own RNN cell layer (the inner part of the for loop) with custom behavior, and use it with the generic keras.layers.RNN layer (the for loop itself). This lets you quickly prototype different research ideas in a flexible way with minimal code.
Setup
End of explanation
model = keras.Sequential()
# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))
# Add a Dense layer with 10 units.
model.add(layers.Dense(10))
model.summary()
Explanation: Built-in RNN layers: a simple example
There are three built-in RNN layers in Keras:
keras.layers.SimpleRNN, a fully-connected RNN where the output of the previous timestep is fed to the next timestep.
keras.layers.GRU, first proposed in Cho et al., 2014.
keras.layers.LSTM, first proposed in Hochreiter & Schmidhuber, 1997.
In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU.
Here is a simple example of a Sequential model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using an LSTM layer.
End of explanation
model = keras.Sequential()
model.add(layers.Embedding(input_dim=1000, output_dim=64))
# The output of GRU will be a 3D tensor of shape (batch_size, timesteps, 256)
model.add(layers.GRU(256, return_sequences=True))
# The output of SimpleRNN will be a 2D tensor of shape (batch_size, 128)
model.add(layers.SimpleRNN(128))
model.add(layers.Dense(10))
model.summary()
Explanation: Built-in RNNs support a number of useful features:
Recurrent dropout, via the dropout and recurrent_dropout arguments
Ability to process an input sequence in reverse, via the go_backwards argument
Loop unrolling (which can lead to a big speedup when processing short sequences on a CPU), via the unroll argument
...and more.
For details, see the RNN API documentation.
Outputs and states
By default, the output of a RNN layer contains a single vector per sample. This vector is the RNN cell output corresponding to the last timestep, containing information about the entire input sequence. The shape of this output is (batch_size, units), where units corresponds to the units argument passed to the layer's constructor.
A RNN layer can also return the entire sequence of outputs for each sample (one vector per timestep per sample) if you set return_sequences=True. The shape of this output is (batch_size, timesteps, units).
End of explanation
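The dropout, go_backwards and unroll options listed above are not exercised later in this guide, so here is a minimal illustrative sketch (not part of the original guide) of how they are passed to a built-in layer:
# Illustrative only: standard LSTM constructor arguments mentioned above.
demo_layer = layers.LSTM(
    32,
    dropout=0.2,            # dropout on the layer inputs
    recurrent_dropout=0.2,  # dropout on the recurrent state
    go_backwards=True,      # process each sequence in reverse order
    unroll=False,           # set True only for short sequences on CPU
)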
encoder_vocab = 1000
decoder_vocab = 2000
encoder_input = layers.Input(shape=(None,))
encoder_embedded = layers.Embedding(input_dim=encoder_vocab, output_dim=64)(
encoder_input
)
# Return states in addition to output
output, state_h, state_c = layers.LSTM(64, return_state=True, name="encoder")(
encoder_embedded
)
encoder_state = [state_h, state_c]
decoder_input = layers.Input(shape=(None,))
decoder_embedded = layers.Embedding(input_dim=decoder_vocab, output_dim=64)(
decoder_input
)
# Pass the 2 states to a new LSTM layer, as initial state
decoder_output = layers.LSTM(64, name="decoder")(
decoder_embedded, initial_state=encoder_state
)
output = layers.Dense(10)(decoder_output)
model = keras.Model([encoder_input, decoder_input], output)
model.summary()
Explanation: In addition, a RNN layer can return its final internal state(s). The returned states can be used to resume the RNN execution later, or to initialize another RNN. This setting is commonly used in the encoder-decoder sequence-to-sequence model, where the encoder's final state is used as the initial state of the decoder.
To configure a RNN layer to return its internal state, set the return_state parameter to True when creating the layer. Note that LSTM has 2 state tensors, but GRU only has one.
To configure the initial state of the layer, just call the layer with the additional keyword argument initial_state. Note that the shape of the state needs to match the unit size of the layer, as in the example below.
End of explanation
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)
# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
Explanation: RNN layers and RNN cells
In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which process whole batches of input sequences, the RNN cell only processes a single timestep.
The cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)).
Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However, using the built-in GRU and LSTM layers enables the use of CuDNN and you may see better performance.
There are three built-in RNN cells, each of them corresponding to the matching RNN layer.
keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer.
keras.layers.GRUCell corresponds to the GRU layer.
keras.layers.LSTMCell corresponds to the LSTM layer.
The cell abstraction, together with the generic keras.layers.RNN class, makes it very easy to implement custom RNN architectures for your research.
Cross-batch statefulness
When processing very long (possibly infinite) sequences, you may want to use the pattern of cross-batch statefulness.
Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample.
If you have very long sequences though, it is useful to break them into shorter sequences and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it is only seeing one sub-sequence at a time.
You can do this by setting stateful=True in the constructor.
If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g.
s1 = [t0, t1, ... t100]
s2 = [t101, ... t201]
...
s16 = [t1501, ... t1547]
Then you would process it via:
python
lstm_layer = layers.LSTM(64, stateful=True)
for s in sub_sequences:
output = lstm_layer(s)
When you want to clear the state, you can use layer.reset_states().
Note: In this setup, sample i in a given batch is assumed to be the continuation of sample i in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100], the next batch should contain [sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200].
Here is a complete example:
End of explanation
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)
lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
existing_state = lstm_layer.states
new_lstm_layer = layers.LSTM(64)
new_output = new_lstm_layer(paragraph3, initial_state=existing_state)
Explanation: RNN state reuse
<a id="rnn_state_reuse"></a>
The recorded states of the RNN layer are not included in layer.weights(). If you would like to reuse the state from a RNN layer, you can retrieve the states value via layer.states and use it as the initial state for a new layer via the Keras functional API, like new_layer(inputs, initial_state=layer.states), or via model subclassing.
Please also note that a Sequential model might not be usable in this case, since it only supports layers with a single input and output; the extra input of the initial state makes it impossible to use here.
End of explanation
model = keras.Sequential()
model.add(
layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))
)
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))
model.summary()
Explanation: Bidirectional RNNs
For sequences other than time series (e.g. text), it is often the case that a RNN model performs better if it not only processes the sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only the words that come before it.
Keras provides an easy API for building such bidirectional RNNs: the keras.layers.Bidirectional wrapper.
End of explanation
batch_size = 64
# Each MNIST image batch is a tensor of shape (batch_size, 28, 28).
# Each input sequence will be of size (28, 28) (height is treated like time).
input_dim = 28
units = 64
output_size = 10 # labels are from 0 to 9
# Build the RNN model
def build_model(allow_cudnn_kernel=True):
# CuDNN is only available at the layer level, and not at the cell level.
# This means `LSTM(units)` will use the CuDNN kernel,
# while RNN(LSTMCell(units)) will run on non-CuDNN kernel.
if allow_cudnn_kernel:
# The LSTM layer with default options uses CuDNN.
lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
else:
# Wrapping a LSTMCell in a RNN layer will not use CuDNN.
lstm_layer = keras.layers.RNN(
keras.layers.LSTMCell(units), input_shape=(None, input_dim)
)
model = keras.models.Sequential(
[
lstm_layer,
keras.layers.BatchNormalization(),
keras.layers.Dense(output_size),
]
)
return model
Explanation: Under the hood, Bidirectional will copy the RNN layer passed in, and flip the go_backwards field of the newly copied layer, so that it will process the inputs in reverse order.
The output of the Bidirectional RNN will be, by default, the concatenation of the forward layer output and the backward layer output (merge_mode='concat'). If you need a different merging behavior, e.g. summation, change the merge_mode parameter in the Bidirectional wrapper constructor. For more details about Bidirectional, please check the API docs.
Performance optimization and CuDNN kernels
In TensorFlow 2.0, the built-in LSTM and GRU layers have been updated to leverage CuDNN kernels by default when a GPU is available. With this change, the prior keras.layers.CuDNNLSTM/CuDNNGRU layers have been deprecated, and you can build your model without worrying about the hardware it will run on.
Since the CuDNN kernel is built with certain assumptions, the layer will not be able to use the CuDNN kernel if you change the defaults of the built-in LSTM or GRU layers. Examples include:
Changing the activation function from tanh to something else.
Changing the recurrent_activation function from sigmoid to something else.
Using recurrent_dropout > 0.
Setting unroll to True, which forces LSTM/GRU to decompose the inner tf.while_loop into an unrolled for loop.
Setting use_bias to False.
Using masking when the input data is not strictly right-padded (if the mask corresponds to strictly right-padded data, CuDNN can still be used; this is the most common case).
For the detailed list of constraints, please see the documentation for the LSTM and GRU layers.
Using CuDNN kernels when available
Let's build a simple LSTM model to demonstrate the performance difference.
We'll use as input sequences the sequence of rows of MNIST digits (treating each row of pixels as a timestep), and we'll predict the digit's label.
End of explanation
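As a small aside, the merge_mode option mentioned above is not used anywhere in this guide, so here is a minimal illustrative sketch (not from the original guide) of changing it:
# Illustrative only: sum the forward and backward outputs instead of concatenating them.
bidir_sum = keras.Sequential([
    layers.Bidirectional(layers.LSTM(32), merge_mode="sum", input_shape=(5, 10)),
    layers.Dense(10),
])
bidir_sum.summary()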
mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
sample, sample_label = x_train[0], y_train[0]
Explanation: Let's load the MNIST dataset:
End of explanation
model = build_model(allow_cudnn_kernel=True)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
Explanation: Let's create a model instance and train it.
We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has shape [batch_size, 10]. The target for the model is an integer vector, each of the integers being in the range of 0 to 9.
End of explanation
noncudnn_model = build_model(allow_cudnn_kernel=False)
noncudnn_model.set_weights(model.get_weights())
noncudnn_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer="sgd",
metrics=["accuracy"],
)
noncudnn_model.fit(
x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
Explanation: Now, let's compare to a model that does not use the CuDNN kernel:
End of explanation
import matplotlib.pyplot as plt
with tf.device("CPU:0"):
cpu_model = build_model(allow_cudnn_kernel=True)
cpu_model.set_weights(model.get_weights())
result = tf.argmax(cpu_model.predict_on_batch(tf.expand_dims(sample, 0)), axis=1)
print(
"Predicted result is: %s, target result is: %s" % (result.numpy(), sample_label)
)
plt.imshow(sample, cmap=plt.get_cmap("gray"))
Explanation: When running on a machine with an NVIDIA GPU and CuDNN installed, the model built with CuDNN is much faster to train compared to the model that uses the regular TensorFlow kernel.
The same CuDNN-enabled model can also be used to run inference in a CPU-only environment. The tf.device annotation below is just forcing the device placement; the model will run on CPU by default if no GPU is available.
You simply don't have to worry about the hardware you're running on anymore. Isn't that pretty cool?
End of explanation
class NestedCell(keras.layers.Layer):
def __init__(self, unit_1, unit_2, unit_3, **kwargs):
self.unit_1 = unit_1
self.unit_2 = unit_2
self.unit_3 = unit_3
self.state_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
self.output_size = [tf.TensorShape([unit_1]), tf.TensorShape([unit_2, unit_3])]
super(NestedCell, self).__init__(**kwargs)
def build(self, input_shapes):
# expect input_shape to contain 2 items, [(batch, i1), (batch, i2, i3)]
i1 = input_shapes[0][1]
i2 = input_shapes[1][1]
i3 = input_shapes[1][2]
self.kernel_1 = self.add_weight(
shape=(i1, self.unit_1), initializer="uniform", name="kernel_1"
)
self.kernel_2_3 = self.add_weight(
shape=(i2, i3, self.unit_2, self.unit_3),
initializer="uniform",
name="kernel_2_3",
)
def call(self, inputs, states):
# inputs should be in [(batch, input_1), (batch, input_2, input_3)]
# state should be in shape [(batch, unit_1), (batch, unit_2, unit_3)]
input_1, input_2 = tf.nest.flatten(inputs)
s1, s2 = states
output_1 = tf.matmul(input_1, self.kernel_1)
output_2_3 = tf.einsum("bij,ijkl->bkl", input_2, self.kernel_2_3)
state_1 = s1 + output_1
state_2_3 = s2 + output_2_3
output = (output_1, output_2_3)
new_states = (state_1, state_2_3)
return output, new_states
def get_config(self):
        return {"unit_1": self.unit_1, "unit_2": self.unit_2, "unit_3": self.unit_3}
Explanation: RNNs with list/dict inputs, or nested inputs
Nested structures allow implementers to include more information within a single timestep. For example, a video frame could have audio and video input at the same time. The data shape in this case could be:
[batch, timestep, {"video": [height, width, channel], "audio": [frequency]}]
In another example, handwriting data could have both coordinates x and y for the current position of the pen, as well as pressure information. So the data representation could be:
[batch, timestep, {"location": [x, y], "pressure": [force]}]
The following code provides an example of how to build a custom RNN cell that accepts such structured inputs.
Define a custom cell that supports nested input/output
See Making new Layers & Models via subclassing for details on writing your own layers.
End of explanation
unit_1 = 10
unit_2 = 20
unit_3 = 30
i1 = 32
i2 = 64
i3 = 32
batch_size = 64
num_batches = 10
timestep = 50
cell = NestedCell(unit_1, unit_2, unit_3)
rnn = keras.layers.RNN(cell)
input_1 = keras.Input((None, i1))
input_2 = keras.Input((None, i2, i3))
outputs = rnn((input_1, input_2))
model = keras.models.Model([input_1, input_2], outputs)
model.compile(optimizer="adam", loss="mse", metrics=["accuracy"])
Explanation: Build a RNN model with nested input/output
Let's build a Keras model that uses a keras.layers.RNN layer and the custom cell we just defined.
End of explanation
input_1_data = np.random.random((batch_size * num_batches, timestep, i1))
input_2_data = np.random.random((batch_size * num_batches, timestep, i2, i3))
target_1_data = np.random.random((batch_size * num_batches, unit_1))
target_2_data = np.random.random((batch_size * num_batches, unit_2, unit_3))
input_data = [input_1_data, input_2_data]
target_data = [target_1_data, target_2_data]
model.fit(input_data, target_data, batch_size=batch_size)
Explanation: Train the model with randomly generated data
Since there isn't a good candidate dataset for this model, we use random Numpy data for demonstration.
End of explanation |
4,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting distributions
First, import relevant libraries
Step1: Then, load the data (takes a few moments)
Step2: The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.
Step3: Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
Step4: It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 of fewer, 33% have 10 or fewer, 50% have 17 of fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
Step5: We also want to look at the number of unique lat-long addresses, which will (roughly) correspond to either where cell phone towers are, and/or the level of truncation. This takes too long in pandas, so we use postgres, piping the results of the query,
\o towers_with_counts.txt
select lat, lon, count(*) as calls, count(distinct cust_id) as users, count(distinct date_trunc('day', date_time_m) ) as days from optourism.cdr_foreigners group by lat, lon order by calls desc;
\q
into the file towers_with_counts.txt. This is followed by the bash command
cat towers_with_counts.txt | sed s/\ \|\ /'\t'/g | sed s/\ //g | sed 2d > towers_with_counts2.txt
to clean up the postgres output format.
Step6: Do the same thing as above.
Step7: Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.
Step8: Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component. | Python Code:
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Plotting distributions
First, import relevant libraries:
End of explanation
# Load data
uda = pd.read_csv("./aws-data/user_dist.txt", sep="\t") # User distribution, all
udf = pd.read_csv("./aws-data/user_dist_fl.txt", sep="\t") # User distribution, Florence
dra = pd.read_csv("./aws-data/user_duration.txt", sep="\t") # Duration, all
drf = pd.read_csv("./aws-data/user_duration_fl.txt", sep="\t") # Duration, Florence
dra['min'] = pd.to_datetime(dra['min'], format='%Y-%m-%d%H:%M:%S')
dra['max'] = pd.to_datetime(dra['max'], format='%Y-%m-%d%H:%M:%S')
drf['min'] = pd.to_datetime(drf['min'], format='%Y-%m-%d%H:%M:%S')
drf['max'] = pd.to_datetime(drf['max'], format='%Y-%m-%d%H:%M:%S')
dra['duration'] = dra['max'] - dra['min']
drf['duration'] = drf['max'] - drf['min']
dra['days'] = dra['duration'].dt.days
drf['days'] = drf['duration'].dt.days
cda = pd.read_csv("./aws-data/calls_per_day.txt", sep="\t") # Calls per day, all
cdf = pd.read_csv("./aws-data/calls_per_day_fl.txt", sep="\t") # Calls per day, Florence
cda['day_'] = pd.to_datetime(cda['day_'], format='%Y-%m-%d%H:%M:%S').dt.date
cdf['day_'] = pd.to_datetime(cdf['day_'], format='%Y-%m-%d%H:%M:%S').dt.date
cda.head()
mcpdf = cdf.groupby('cust_id')['count'].mean().to_frame() # Mean calls per day, Florence
mcpdf.columns = ['mean_calls_per_day']
mcpdf = mcpdf.sort_values('mean_calls_per_day',ascending=False)
mcpdf.index.name = 'cust_id'
mcpdf.reset_index(inplace=True)
mcpdf.head()
# mcpdf.plot(y='mean_calls_per_day', style='.', logy=True, figsize=(10,10))
mcpdf.plot.hist(y='mean_calls_per_day', logy=True, figsize=(10,10), bins=100)
plt.ylabel('Number of customers with x average calls per day')
# plt.xlabel('Customer rank')
plt.title('Mean number of calls per day during days in Florence by foreign SIM cards')
cvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer') # Count versus days
cvd.plot.scatter(x='days', y='count', s=.1, figsize = (10, 10))
plt.ylabel('Number of calls')
plt.xlabel('Duration between first and last days active')
plt.title('Calls versus duration of records of foreign SIMs in Florence')
fr = drf['days'].value_counts().to_frame() # NOTE: FIGURE OUT HOW TO ROUND, NOT TRUNCATE
fr.columns = ['frequency']
fr.index.name = 'days'
fr.reset_index(inplace=True)
fr = fr.sort_values('days')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
Explanation: Then, load the data (takes a few moments):
End of explanation
fr.plot(x='days', y='frequency', style='o-', logy=True, figsize = (10, 10))
plt.ylabel('Number of people')
plt.axvline(14,ls='dotted')
plt.title('Foreign SIM days between first and last instances in Florence')
cvd = udf.merge(drf, left_on='cust_id', right_on='cust_id', how='outer') # Count versus days
cvd.plot.scatter(x='days', y='count', s=.1, figsize = (10, 10))
plt.ylabel('Number of calls')
plt.xlabel('Duration between first and last days active')
plt.title('Calls versus duration of records of foreign SIMs in Florence')
Explanation: The code below creates a calls-per-person frequency distribution, which is the first thing we want to see.
End of explanation
fr = udf['count'].value_counts().to_frame()
fr.columns = ['frequency']
fr.index.name = 'calls'
fr.reset_index(inplace=True)
fr = fr.sort_values('calls')
fr['cumulative'] = fr['frequency'].cumsum()/fr['frequency'].sum()
fr.head()
fr.plot(x='calls', y='frequency', style='o-', logx=True, figsize = (10, 10))
# plt.axvline(5,ls='dotted')
plt.ylabel('Number of people')
plt.title('Number of people placing or receiving x number of calls over 4 months')
Explanation: Plot this distribution. This shows that 19344 people made 1 call over the 4 months, 36466 people made 2 calls over the 4 months, 41900 people made 3 calls over the 4 months, etc.
End of explanation
fr.plot(x='calls', y='cumulative', style='o-', logx=True, figsize = (10, 10))
plt.axhline(1.0,ls='dotted',lw=.5)
plt.axhline(.90,ls='dotted',lw=.5)
plt.axhline(.75,ls='dotted',lw=.5)
plt.axhline(.67,ls='dotted',lw=.5)
plt.axhline(.50,ls='dotted',lw=.5)
plt.axhline(.33,ls='dotted',lw=.5)
plt.axhline(.25,ls='dotted',lw=.5)
plt.axhline(.10,ls='dotted',lw=.5)
plt.axhline(0.0,ls='dotted',lw=.5)
plt.axvline(max(fr['calls'][fr['cumulative']<.90]),ls='dotted',lw=.5)
plt.ylabel('Cumulative fraction of people')
plt.title('Cumulative fraction of people placing or receiving x number of calls over 4 months')
Explanation: It might be more helpful to look at a cumulative distribution curve, from which we can read off quantiles (e.g., this percentage of the people in the data set had x or more calls, x or fewer calls). Specifically, 10% of people have 3 or fewer calls over the entire period, 25% have 7 or fewer, 33% have 10 or fewer, 50% have 17 or fewer calls, etc., all the way up to 90% of people having 76 or fewer calls.
End of explanation
df2 = pd.read_table("./aws-data/towers_with_counts2.txt")
df2.head()
Explanation: We also want to look at the number of unique lat-long addresses, which will (roughly) correspond to either where cell phone towers are, and/or the level of truncation. This takes too long in pandas, so we use postgres, piping the results of the query,
\o towers_with_counts.txt
select lat, lon, count(*) as calls, count(distinct cust_id) as users, count(distinct date_trunc('day', date_time_m) ) as days from optourism.cdr_foreigners group by lat, lon order by calls desc;
\q
into the file towers_with_counts.txt. This is followed by the bash command
cat towers_with_counts.txt | sed s/\ \|\ /'\t'/g | sed s/\ //g | sed 2d > towers_with_counts2.txt
to clean up the postgres output format.
End of explanation
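As a rough pandas-only alternative to the sed cleanup (illustrative; it assumes psql's default '|'-separated aligned output with a dashed separator row and a trailing row-count line), one could do something like the sketch below. The sed-cleaned towers_with_counts2.txt is still what is actually used in this notebook.
# Sketch only, not part of the original workflow.
raw = pd.read_csv("./aws-data/towers_with_counts.txt", sep="|", skiprows=[1],
                  skipfooter=1, engine="python")
raw.columns = [c.strip() for c in raw.columns]
raw = raw.apply(lambda col: col.str.strip() if col.dtype == object else col)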
fr2 = df2['count'].value_counts().to_frame()
fr2.columns = ['frequency']
fr2.index.name = 'count'
fr2.reset_index(inplace=True)
fr2 = fr2.sort_values('count')
fr2['cumulative'] = fr2['frequency'].cumsum()/fr2['frequency'].sum()
fr2.head()
fr2.plot(x='count', y='frequency', style='o-', logx=True, figsize = (10, 10))
# plt.axvline(5,ls='dotted')
plt.ylabel('Number of cell towers')
plt.title('Number of towers with x number of calls placed or received over 4 months')
Explanation: Do the same thing as above.
End of explanation
fr2.plot(x='count', y='cumulative', style='o-', logx=True, figsize = (10, 10))
plt.axhline(0.1,ls='dotted',lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.10]),ls='dotted',lw=.5)
plt.axhline(0.5,ls='dotted',lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.50]),ls='dotted',lw=.5)
plt.axhline(0.9,ls='dotted',lw=.5)
plt.axvline(max(fr2['count'][fr2['cumulative']<.90]),ls='dotted',lw=.5)
plt.ylabel('Cumulative fraction of cell towers')
plt.title('Cumulative fraction of towers with x number of calls placed or received over 4 months')
Explanation: Unlike the previous plot, this is not very clean at all, making the cumulative distribution plot critical.
End of explanation
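# Note: `df` below is assumed to be the raw CDR DataFrame (one row per call) loaded
# elsewhere; it is not created by the cells shown above.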
df['datetime'] = pd.to_datetime(df['date_time_m'], format='%Y-%m-%d %H:%M:%S')
df['date'] = df['datetime'].dt.floor('d') # Faster than df['datetime'].dt.date
df2 = df.groupby(['cust_id','date']).size().to_frame()
df2.columns = ['count']
df2.index.name = 'date'
df2.reset_index(inplace=True)
df2.head(20)
df3 = (df2.groupby('cust_id')['date'].max() - df2.groupby('cust_id')['date'].min()).to_frame()
df3['calls'] = df2.groupby('cust_id')['count'].sum()
df3.columns = ['days','calls']
df3['days'] = df3['days'].dt.days
df3.head()
fr = df['cust_id'].value_counts().to_frame()['cust_id'].value_counts().to_frame()
fr.columns = ['freq']
fr.index.name = 'calls'
fr.reset_index(inplace=True)
# plt.scatter(np.log(df3['days']), np.log(df3['calls']))
# plt.show()
fr.plot(x='calls', y='freq', style='o', logx=True, logy=True)
x=np.log(fr['calls'])
y=np.log(1-fr['freq'].cumsum()/fr['freq'].sum())
plt.plot(x, y, 'r-')
# How many home_Regions
np.count_nonzero(data['home_region'].unique())
# How many customers
np.count_nonzero(data['cust_id'].unique())
# How many Nulls are there in the customer ID column?
df['cust_id'].isnull().sum()
# How many missing data are there in the customer ID?
len(df['cust_id']) - df['cust_id'].count()
df['cust_id'].unique()
data_italians = pd.read_csv("./aws-data/firence_italians_3days_past_future_sample_1K_custs.csv", header=None)
data_italians.columns = ['lat', 'lon', 'date_time_m', 'home_region', 'cust_id', 'in_florence']
regions = np.array(data_italians['home_region'].unique())
regions
'Sardegna' in data['home_region']
Explanation: Now, we want to look at temporal data. First, convert the categorical date_time_m to a datetime object; then, extract the date component.
End of explanation |
4,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step5: Turns out its the final review that has zero length. But that might not always be the case, so let's make it more general.
Step6: Exercise
Step7: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step8: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step9: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step10: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step11: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step12: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step13: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step14: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step15: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step16: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step17: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('./reviews.txt', 'r') as f:
reviews = f.read()
with open('./labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
labels = labels.split('\n')
labels = np.array([1 if each == 'positive' else 0 for each in labels])
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
len(non_zero_idx)
reviews_ints[-1]
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
Explanation: Turns out it's the final review that has zero length. But that might not always be the case, so let's make it more general.
End of explanation
seq_len = 200
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
features[:10,:100]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
split_frac = 0.8
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2500, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, your network will have better performance with more layers. That's sort of the magic of deep learning: adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output; we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
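A quick illustrative sanity check (not in the original notebook) of the shapes get_batches produces:
# With batch_size=500 and 200-step sequences, the first batch should be
# (500, 200) for x and (500,) for y.
x_batch, y_batch = next(get_batches(train_x, train_y, batch_size))
print(x_batch.shape, y_batch.shape)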
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
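As a final illustrative sketch (not part of the original notebook), the trained graph can also score a new piece of text; every name used below (punctuation, vocab_to_int, seq_len, batch_size, graph, saver, cell, inputs_, keep_prob, initial_state, predictions) comes from the cells above:
def predict_review(text):
    # Clean and encode the text the same way the training reviews were prepared.
    text = ''.join([c for c in text.lower() if c not in punctuation])
    ints = [vocab_to_int[w] for w in text.split() if w in vocab_to_int][:seq_len]
    feats = np.zeros((batch_size, seq_len), dtype=int)  # pad out to a full batch
    if ints:
        feats[0, -len(ints):] = ints
    with tf.Session(graph=graph) as sess:
        saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
        state = sess.run(cell.zero_state(batch_size, tf.float32))
        feed = {inputs_: feats, keep_prob: 1, initial_state: state}
        prob = sess.run(predictions, feed_dict=feed)[0, 0]
    return prob  # values above 0.5 suggest positive sentiment

# predict_review("this movie was wonderful")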
4,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Parser for Regular Expression
This notebook implements a parser for regular expressions. The parser that is implemented in the function parseExpr parses a regular expression
according to the following <em style="color
Step1: The function $\texttt{isWhiteSpace}(s)$ checks whether the string $s$ contains only blanks and tabulators.
Step2: The function tokenize(s) partitions the string s into a list of tokens.
It recognizes
- the operator symbols + and *,
- the parentheses (, ),
- single upper or lower case letters,
- 0,
- the empty string "".
All whitespace characters are discarded.
Step3: The function parse takes a string s and tries to parse it as a regular expression. The parse tree is returned as a nested tuple.
Step4: The function parseRegExp takes a token list TokenList and tries to interpret this list
as a regular expression. It returns the regular expression in the form of a nested tuple and
a list of those tokens that could not be parsed. It is implemented as a <em style="color
Step5: The function parseProduct implements the following grammar rule
Step6: The function parseFactor implements the following grammar rule
Step7: The function parseAtom implements the following grammar rule | Python Code:
import re
Explanation: A Parser for Regular Expression
This notebook implements a parser for regular expressions. The parser that is implemented in the function parseExpr parses a regular expression
according to the following <em style="color:blue">EBNF grammar</em>.
regExp -> product ('+' product)*
product -> factor factor*
factor -> atom '*'?
atom -> '(' expr ')' | CHAR | '""' | '0'
The parse tree is represented as a nested tuple.
- characters are represented by themselves,
- '0' is interpreted as $\emptyset$ and is represented as 0,
- "" is interpreted as the regular expression $\varepsilon$ and represented as '',
- $r_1 \cdot r_2$ is represented as ('cat', $r_1$, $r_2$),
- $r_1 + r_2$ is represented as ('or', $r_1$, $r_2$),
- $r^*$ is represented as ('star', $r$).
The parser is implemented as a recursive top-down parser.
In order to tokenize strings, we need regular expressions from the module re.
End of explanation
def isWhiteSpace(s):
whitespace = re.compile(r'[ \t]+')
return whitespace.fullmatch(s)
Explanation: The function $\texttt{isWhiteSpace}(s)$ checks whether the string $s$ contains only blanks and tabulators.
End of explanation
def tokenize(s):
regExp = r'''
[+*()] | # operators and parentheses
[ \t\n] | # white space
[a-zA-Z] | # single characters from the alphabet
0 | # empty regular expression
"" # epsilon
'''
return [t for t in re.findall(regExp, s, flags=re.VERBOSE) if not isWhiteSpace(t)]
Explanation: The function tokenize(s) partitions the string s into a list of tokens.
It recognizes
- the operator symbols + and *,
- the parentheses (, ),
- single upper or lower case letters,
- 0,
- the empty string "".
All whitespace characters are discarded.
End of explanation
def parse(s):
TokenList = tokenize(s)
regExp, Rest = parseRegExp(TokenList)
assert Rest == [], f'Parse Error: could not parse {TokenList}'
return regExp
Explanation: The function parse takes a string s and tries to parse it as a regular expression. The parse tree is returned as a nested tuple.
End of explanation
def parseRegExp(TokenList):
result, Rest = parseProduct(TokenList)
while len(Rest) > 1 and Rest[0] == '+':
arg, Rest = parseProduct(Rest[1:])
result = ('or', result, arg)
return result, Rest
Explanation: The function parseRegExp takes a token list TokenList and tries to interpret this list
as a regular expression. It returns the regular expression in the form of a nested tuple and
a list of those tokens that could not be parsed. It is implemented as a <em style="color:blue">top-down-parser.</em>
The function parseRegExp implements the following grammar rule:
regExp -> product ('+' product)*
End of explanation
def parseProduct(TokenList):
result, Rest = parseFactor(TokenList)
while len(Rest) > 0 and not (Rest[0] in ["+", "*", ")"]):
arg, Rest = parseFactor(Rest)
result = ('cat', result, arg)
return result, Rest
Explanation: The function parseProduct implements the following grammar rule:
product -> factor factor*
End of explanation
def parseFactor(TokenList):
atom, Rest = parseAtom(TokenList)
if len(Rest) > 0 and Rest[0] == "*":
return ('star', atom), Rest[1:]
return atom, Rest
Explanation: The function parseFactor implements the following grammar rule:
factor -> atom '*'?
End of explanation
def parseAtom(TokenList):
if TokenList[0] == '0':
return 0, TokenList[1:]
if TokenList[0] == '(':
regExp, Rest = parseRegExp(TokenList[1:])
assert Rest[0] == ")", "Parse Error"
return regExp, Rest[1:]
if TokenList[0] == '""':
return '', TokenList[1:]
s = TokenList[0]
assert len(s) <= 1, f'parse error: {TokenList}'
return s, TokenList[1:]
parse('a*b + b*a')
Explanation: The function parseAtom implements the following grammar rule:
atom -> '0'
| '(' expr ')'
| '""'
| CHAR
End of explanation |
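A few more illustrative calls (the expected results follow the tuple encoding described above):
parse('(a+b)*')   # ('star', ('or', 'a', 'b'))
parse('a b c')    # ('cat', ('cat', 'a', 'b'), 'c')  -- concatenation associates to the left
parse('"" + 0*')  # ('or', '', ('star', 0))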
4,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IMDB Predictive Analytics
This notebook explores using data science techniques on a data set of 5000+ movies, and predicting whether a movie will be highly rated on IMDb.
The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development.
This notebook is adapted from "Titanic Data Science Solutions" by Manav Sehgal
Workflow stages
This workflow goes through seven stages.
Question or problem definition.
Acquire training and testing data.
Wrangle, prepare, cleanse the data.
Analyze, identify patterns, and explore the data.
Model, predict and solve the problem.
Visualize, report, and present the problem solving steps and final solution.
Supply or submit the results.
The workflow indicates general sequence of how each stage may follow the other. However there are use cases with exceptions.
We may combine mulitple workflow stages. We may analyze by visualizing data.
Perform a stage earlier than indicated. We may analyze data before and after wrangling.
Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.
Question and problem definition
The original data set used in this notebook can be found here at Kaggle.
Knowing from a training set of samples listing movies and their IMDb scores, can our model determine based on a given test dataset not containing the scores, if the movies in the test dataset scored highly or not?
Workflow goals
The data science solutions workflow solves for seven major goals.
Classifying. We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.
Correlating. One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking is there a correlation among a feature and solution goal? As the feature values change does the solution state change as well, and visa-versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than survival for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.
Converting. For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.
Completing. Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.
Correcting. We may also analyze the given training dataset for errors or possibly innacurate values within features and try to corrent these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contribting to the analysis or may significantly skew the results.
Creating. Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.
Charting. How to select the right visualization plots and charts depending on nature of the data and the solution goals. A good start is to read the Tableau paper on Which chart or graph is right for you?.
Step1: Acquire data
The Python Pandas packages helps us work with our datasets. We start by acquiring the datasets into a Pandas DataFrame.
~~We will partition off 80% as our training data and 20% of the data as our test data. We also combine these datasets to run certain operations on both datasets together.~~
Let's move the partioning to after the data wrangling. Makes the code simpler, and doesn't make a real difference. Also removes any differences in the banding portions between runs.
Step2: Analyze by describing data
Pandas also helps describe the datasets answering following questions early in our project.
Which features are available in the dataset?
Noting the feature names for directly manipulating or analyzing these. These feature names are described on the Kaggle page here.
Step3: Which features are categorical?
Color, Director name, Actor 1 name, Actor 2 name, Actor 3 name, Genres, Language, Country, Content Rating, Movie title, Plot keywords, Movie IMDb link
Which features are numerical?
Number of critics for reviews, Duration, Director Facebook likes, Actor 1 Facebook likes, Actor 2 Facebook likes, Actor 3 Facebook likes, Gross, Number of voted users, Cast total Facebook likes, Number of faces in poster, Number of users for reviews, Budget, Title year, IMDb score, Aspect ratio, Movie Facebook likes
Step4: Which features contain blank, null or empty values?
These will require correcting.
color
director_name
num_critic_for_reviews
duration
director_facebook_likes
actor_3_facebook_likes
actor_2_name
actor_1_facebook_likes
gross
actor_1_name
actor_3_name
facenumber_in_poster
plot_keywords
num_user_for_reviews
language
country
content_rating
budget
title_year
actor_2_facebook_likes
aspect_ratio
What are the data types for various features?
Helping us during converting goal.
Twelve features are floats.
Nine features are strings (object).
Step5: What is the distribution of categorical features?
Step6: Transformation of IMDb score
Let's simplify this problem into a binary classification/regression. Let us treat all movies with an IMDb score of 7.0 or higher as "good" (with a value of '1') and all below as "bad" (with a value of '0').
Step7: Assumtions based on data analysis
We arrive at following assumptions based on data analysis done so far. We may validate these assumptions further before taking appropriate actions.
Correlating.
We want to know how well does each feature correlate with IMDb score. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.
Completing.
Correcting.
Creating.
Classifying.
Analyze by pivoting features
To confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have any empty values. It also makes sense doing so only for features which are categorical (Sex), ordinal (Pclass) or discrete (SibSp, Parch) type.
Pclass We observe significant correlation (>0.5) among Pclass=1 and Survived (classifying #3). We decide to include this feature in our model.
Sex We confirm the observation during problem definition that Sex=female had very high survival rate at 74% (classifying #1).
country We compare average scores across countries to see whether a film's origin carries any signal worth keeping.
Step8: Analyze by visualizing data
Now we can continue confirming some of our assumptions using visualizations for analyzing the data.
Correlating numerical features
Let us start by understanding correlations between numerical features and our solution goal (IMDb score).
Observations.
Decisions.
Step9: Correlating numerical and ordinal features
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.
Observations.
Decisions.
Step10: Correlating categorical features
Now we can correlate categorical features with our solution goal.
Observations.
Decisions.
Step11: Correlating categorical and numerical features
Observations.
Decisions.
Step12: Wrangle data
We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.
Correcting by dropping features
This is a good starting goal to execute. By dropping features we are dealing with fewer data points, which speeds up our notebook and eases the analysis.
Based on our assumptions and decisions we want to drop the genres, movie_title, plot_keywords and movie_imdb_link features now; the director and actor name features are kept for the moment and dropped later, after we engineer new features from them.
Note that because we partition the data after wrangling, these operations are applied once to the full dataset, which keeps the training and testing sets consistent.
Step13: Creating new feature extracting from existing
We want to analyze whether the Director and Actor name features can be engineered to extract the number of films each person directed or starred in, and test the correlation between number of films and score, before dropping the name features.
In the following code we count films per director and per actor by iterating over the dataframe; the director counts become the num_of_films_director feature, and the actor counts are used a few cells further down.
If the director name field is empty, we fill the film count with 1.
Observations.
When we plot the number of films directed, and number of films acted in, we note the following observations.
Directors with more films under their belt tend to have a higher success rate. It seems practice does make perfect.
Films where the actors have a higher number of combined films have a higher success rate. A more experienced cast, a better movie.
Decision.
We decide to band the directors into groups by number of films directed.
We decide to band the "total combined films acted in" into groups.
Step14: We can now remove the director_name and NumFilmsBand features.
Step15: Now, let's examine actors by number of films acted in. Since we have three actors listed per film, we'll need to combine these numbers. Let's fill any empty fields with the median value for that field, and sum the columns.
Step16: Now we can remove actor_1_name, actor_2_name, actor_3_name and ActorSumBand
Step17: Converting a categorical feature
Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.
Let us start by converting the Color feature to a numerical value, where black and white=1 and color=0.
Since some values are null, let's fill them with the most common value, Color
Step18: Next, let's look at the language and country features
Step19: The bulk of the films are in English. Let's convert this field to 1 for Non-English, and 0 for English
First, let's fill any null values with English
Step20: Next, let's explore country
Step21: Again, most films are from USA. Taking the same approach, we'll fill NaNs with USA, and transform USA to 0, all others to 1
Step22: Next up is content rating. Let's look at that
Step23: The majority of the films use the standard MPAA ratings
Step24: Aspect ratio may seem like a numerical feature, but it's somewhat of a categorial one. First, what values do we find in the dataset?
Step25: Some of these values seem to be in the wrong format, 16.00 is most likely 16
Step26: The above banding looks good. It separates out the two predominant aspect ratios (2.35 and 1.85), and also has two bands below and above these ratios. Let's use that.
Step27: Completing a numerical continuous feature
Now we should start estimating and completing features with missing or null values. We will first do this for the Duration feature.
We can consider four methods to complete a numerical continuous feature.
The easiest way is to use the median value.
Another simple way is to generate random numbers between mean and standard deviation.
A more accurate way of guessing missing values is to use other correlated features, and take the median value within groups defined by those features.
Combine methods 2 and 3: instead of a single global estimate, draw random numbers between the mean and standard deviation computed within groups of correlated features.
We will use method 2.
Step28: Let us create Duration bands and determine correlations with IMDb score.
Step29: Let us replace Duration with ordinals based on these bands.
Step30: Let's apply the same techniques to the following features
Step31: director_facebook_likes
Step32: Since the standard deviation for this field is ~4x the mean, we'll just stick to using the mean value for nulls
Step33: actor_1_facebook_likes
Step34: actor_2_facebook_likes
Step35: actor_3_facebook_likes
Step36: gross
Step37: facenumber_in_poster
Step38: num_user_for_reviews
Step39: budget
Step40: title_year
Step41: num_voted_users
Step42: cast_total_facebook_likes
Step43: movie_facebook_likes
Step44: num_of_films_director
Step45: Create new feature combining existing features
Completing a categorical feature
Converting categorical feature to numeric
Quick completing and converting a numeric feature
Partition Data
Now, we randomly partition our dataset into two DataFrames. 80% of the data will be our training set, the rest will become our test set
Step46: Model, predict and solve
Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (a good or bad IMDb score) and the other features (color, duration, budget, country and so on). We are also performing a category of machine learning which is called supervised learning, as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression - we can narrow down our choice of models to a few. These include
Step47: Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.
Note the confidence score generated by the model based on our training dataset.
Step48: We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
Country has the highest positive coefficient, implying that as the Country value increases (USA: 0 to Foreign: 1), the probability of IMDb score = 1 increases the most.
Step49: Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.
Note that the model generates a confidence score which is higher than the Logistic Regression model.
Step50: In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia.
The KNN confidence score is better than Logistic Regression and SVM.
Step51: In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.
The model generated confidence score is the lowest among the models evaluated so far.
Step52: The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference Wikipedia.
Step53: This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.
The model confidence score is the highest among models evaluated so far.
Step54: The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.
The model confidence score is the highest among the models evaluated so far. We decide to use this model's output (Y_pred) for our final evaluation on the held-out test set.
Step55: Model evaluation
We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set. | Python Code:
# data analysis and wrangling
import pandas as pd
import numpy as np
import random as rnd
from scipy.stats import truncnorm
# visualization
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# machine learning
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import Perceptron
from sklearn.linear_model import SGDClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
Explanation: IMDB Predictive Analytics
This notebook explores using data science techniques on a data set of 5000+ movies, and predicting whether a movie will be highly rated on IMDb.
The objective of this notebook is to follow a step-by-step workflow, explaining each step and rationale for every decision we take during solution development.
This notebook is adapted from "Titanic Data Science Solutions" by Manav Sehgal
Workflow stages
This workflow goes through seven stages.
Question or problem definition.
Acquire training and testing data.
Wrangle, prepare, cleanse the data.
Analyze, identify patterns, and explore the data.
Model, predict and solve the problem.
Visualize, report, and present the problem solving steps and final solution.
Supply or submit the results.
The workflow indicates the general sequence of how each stage may follow the other. However, there are use cases with exceptions.
We may combine multiple workflow stages. We may analyze by visualizing data.
Perform a stage earlier than indicated. We may analyze data before and after wrangling.
Perform a stage multiple times in our workflow. Visualize stage may be used multiple times.
Question and problem definition
The original data set used in this notebook can be found here at Kaggle.
Knowing from a training set of samples listing movies and their IMDb scores, can our model determine, based on a given test dataset not containing the scores, whether the movies in the test dataset scored highly or not?
Workflow goals
The data science solutions workflow solves for seven major goals.
Classifying. We may want to classify or categorize our samples. We may also want to understand the implications or correlation of different classes with our solution goal.
Correlating. One can approach the problem based on available features within the training dataset. Which features within the dataset contribute significantly to our solution goal? Statistically speaking, is there a correlation between a feature and the solution goal? As the feature values change, does the solution state change as well, and vice-versa? This can be tested both for numerical and categorical features in the given dataset. We may also want to determine correlation among features other than the IMDb score for subsequent goals and workflow stages. Correlating certain features may help in creating, completing, or correcting features.
Converting. For modeling stage, one needs to prepare the data. Depending on the choice of model algorithm one may require all features to be converted to numerical equivalent values. So for instance converting text categorical values to numeric values.
Completing. Data preparation may also require us to estimate any missing values within a feature. Model algorithms may work best when there are no missing values.
Correcting. We may also analyze the given training dataset for errors or possibly inaccurate values within features and try to correct these values or exclude the samples containing the errors. One way to do this is to detect any outliers among our samples or features. We may also completely discard a feature if it is not contributing to the analysis or may significantly skew the results.
Creating. Can we create new features based on an existing feature or a set of features, such that the new feature follows the correlation, conversion, completeness goals.
Charting. How to select the right visualization plots and charts depending on nature of the data and the solution goals. A good start is to read the Tableau paper on Which chart or graph is right for you?.
End of explanation
df = pd.read_csv('../input/movie_metadata.csv')
# train_df, test_df = train_test_split(df, test_size = 0.2)
# test_actual = test_df['imdb_score']
# test_df = test_df.drop('imdb_score', axis=1)
# combine = [train_df, test_df]
Explanation: Acquire data
The Python Pandas package helps us work with our datasets. We start by acquiring the datasets into a Pandas DataFrame.
~~We will partition off 80% as our training data and 20% of the data as our test data. We also combine these datasets to run certain operations on both datasets together.~~
Let's move the partitioning to after the data wrangling. It makes the code simpler and doesn't make a real difference; it also removes any differences in the banding portions between runs.
End of explanation
print(df.columns.values)
Explanation: Analyze by describing data
Pandas also helps describe the datasets, answering the following questions early in our project.
Which features are available in the dataset?
Noting the feature names for directly manipulating or analyzing these. These feature names are described on the Kaggle page here.
End of explanation
pd.set_option('display.max_columns', 50)
# preview the data
df.head()
incomplete = df.columns[pd.isnull(df).any()].tolist()
df[incomplete].info()
Explanation: Which features are categorical?
Color, Director name, Actor 1 name, Actor 2 name, Actor 3 name, Genres, Language, Country, Content Rating, Movie title, Plot keywords, Movie IMDb link
Which features are numerical?
Number of critics for reviews, Duration, Director Facebook likes, Actor 1 Facebook likes, Actor 2 Facebook likes, Actor 3 Facebook likes, Gross, Number of voted users, Cast total Facebook likes, Number of faces in poster, Number of users for reviews, Budget, Title year, IMDb score, Aspect ratio, Movie Facebook likes
End of explanation
df.info()
df.describe()
Explanation: Which features contain blank, null or empty values?
These will require correcting.
color
director_name
num_critic_for_reviews
duration
director_facebook_likes
actor_3_facebook_likes
actor_2_name
actor_1_facebook_likes
gross
actor_1_name
actor_3_name
facenumber_in_poster
plot_keywords
num_user_for_reviews
language
country
content_rating
budget
title_year
actor_2_facebook_likes
aspect_ratio
What are the data types for various features?
Helping us during converting goal.
Twelve features are floats.
Nine features are strings (object).
End of explanation
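# Illustrative sketch (an addition to the original flow, not in the source
# notebook): a direct count of missing values per feature, sorted so the most
# incomplete features come first. It assumes `df` is the DataFrame loaded above.
df.isnull().sum().sort_values(ascending=False)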
df.describe(include=['O'])
Explanation: What is the distribution of categorical features?
End of explanation
df.loc[ df['imdb_score'] < 7.0, 'imdb_score'] = 0
df.loc[ df['imdb_score'] >= 7.0, 'imdb_score'] = 1
df.head()
Explanation: Transformation of IMDb score
Let's simplify this problem into a binary classification/regression. Let us treat all movies with an IMDb score of 7.0 or higher as "good" (with a value of '1') and all below as "bad" (with a value of '0').
End of explanation
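# Illustrative sketch (an addition): check the class balance after binarising
# the target, since a strong imbalance would affect how we read accuracy later.
df['imdb_score'].value_counts(normalize=True)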
df[['content_rating', 'imdb_score']].groupby(['content_rating'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
df[["color", "imdb_score"]].groupby(['color'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
df[["director_name", "imdb_score"]].groupby(['director_name'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
df[["country", "imdb_score"]].groupby(['country'], as_index=False).mean().sort_values(by='imdb_score', ascending=False)
Explanation: Assumptions based on data analysis
We arrive at the following assumptions based on the data analysis done so far. We may validate these assumptions further before taking appropriate actions.
Correlating.
We want to know how well each feature correlates with IMDb score. We want to do this early in our project and match these quick correlations with modelled correlations later in the project.
Completing.
Correcting.
Creating.
Classifying.
Analyze by pivoting features
To confirm some of our observations and assumptions, we can quickly analyze our feature correlations by pivoting features against each other. We can only do so at this stage for features which do not have too many empty values, and it makes the most sense for categorical features such as content_rating, color, director_name and country.
content_rating and color We inspect the mean IMDb outcome for each category to see whether either feature separates good films from bad ones.
director_name We check whether individual directors are associated with consistently high scores, which motivates the film-count feature we engineer from this column later.
country We compare average scores across countries to see whether a film's origin carries any signal worth keeping.
End of explanation
g = sns.FacetGrid(df, col='imdb_score')
g.map(plt.hist, 'title_year', bins=20)
Explanation: Analyze by visualizing data
Now we can continue confirming some of our assumptions using visualizations for analyzing the data.
Correlating numerical features
Let us start by understanding correlations between numerical features and our solution goal (IMDb score).
Observations.
Decisions.
End of explanation
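# Illustrative sketch (an addition to the original flow): rank the numerical
# features by their linear correlation with the binarised IMDb score, as a
# quick numeric complement to the plots below.
numeric_corr = df.select_dtypes(include=[np.number]).corr()['imdb_score']
numeric_corr.drop('imdb_score').sort_values(ascending=False)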
# grid = sns.FacetGrid(df, col='Pclass', hue='Survived')
grid = sns.FacetGrid(df, col='imdb_score', row='color', size=2.2, aspect=1.6)
grid.map(plt.hist, 'title_year', alpha=.5, bins=20)
grid.add_legend();
Explanation: Correlating numerical and ordinal features
We can combine multiple features for identifying correlations using a single plot. This can be done with numerical and categorical features which have numeric values.
Observations.
Decisions.
End of explanation
# grid = sns.FacetGrid(df, col='Embarked')
# grid = sns.FacetGrid(df, row='Embarked', size=2.2, aspect=1.6)
# grid.map(sns.pointplot, 'Pclass', 'Survived', 'Sex', palette='deep')
# grid.add_legend()
Explanation: Correlating categorical features
Now we can correlate categorical features with our solution goal.
Observations.
Decisions.
End of explanation
# grid = sns.FacetGrid(df, col='Embarked', hue='Survived', palette={0: 'k', 1: 'w'})
# grid = sns.FacetGrid(df, row='Embarked', col='Survived', size=2.2, aspect=1.6)
# grid.map(sns.barplot, 'Sex', 'Fare', alpha=.5, ci=None)
# grid.add_legend()
Explanation: Correlating categorical and numerical features
Observations.
Decisions.
End of explanation
print("Before", df.shape)
df = df.drop(['genres', 'movie_title', 'plot_keywords', 'movie_imdb_link'], axis=1)
"After", df.shape
Explanation: Wrangle data
We have collected several assumptions and decisions regarding our datasets and solution requirements. So far we did not have to change a single feature or value to arrive at these. Let us now execute our decisions and assumptions for correcting, creating, and completing goals.
Correcting by dropping features
This is a good starting goal to execute. By dropping features we are dealing with fewer data points, which speeds up our notebook and eases the analysis.
Based on our assumptions and decisions we want to drop the genres, movie_title, plot_keywords and movie_imdb_link features now; the director and actor name features are kept for the moment and dropped later, after we engineer new features from them.
Note that because we partition the data after wrangling, these operations are applied once to the full dataset, which keeps the training and testing sets consistent.
End of explanation
actors = {}
directors = {}
for index, row in df.iterrows():
for actor in row[['actor_1_name', 'actor_2_name', 'actor_3_name']]:
if actor is not np.nan:
if actor not in actors:
actors[actor] = 0
actors[actor] += 1
director = row['director_name']
if director is not np.nan:
if director not in directors:
directors[director] = 0
directors[director] += 1
df['num_of_films_director'] = df["director_name"].dropna().map(directors).astype(int)
df['num_of_films_director'] = df['num_of_films_director'].fillna(1)
df['NumFilmsBand'] = pd.cut(df['num_of_films_director'], 4)
df[['NumFilmsBand', 'imdb_score']].groupby(['NumFilmsBand'], as_index=False).mean().sort_values(by='NumFilmsBand', ascending=True)
df.loc[ df['num_of_films_director'] <= 7, 'num_of_films_director'] = 0
df.loc[(df['num_of_films_director'] > 7) & (df['num_of_films_director'] <= 13), 'num_of_films_director'] = 1
df.loc[(df['num_of_films_director'] > 13) & (df['num_of_films_director'] <= 19), 'num_of_films_director'] = 2
df.loc[ df['num_of_films_director'] > 19, 'num_of_films_director'] = 3
df.head()
Explanation: Creating new feature extracting from existing
We want to analyze whether the Director and Actor name features can be engineered to extract the number of films each person directed or starred in, and test the correlation between number of films and score, before dropping the name features.
In the following code we count films per director and per actor by iterating over the dataframe; the director counts become the num_of_films_director feature, and the actor counts are used a few cells further down.
If the director name field is empty, we fill the film count with 1.
Observations.
When we plot the number of films directed, and number of films acted in, we note the following observations.
Directors with more films under their belt tend to have a higher success rate. It seems practice does make perfect.
Films where the actors have a higher number of combined films have a higher success rate. A more experienced cast, a better movie.
Decision.
We decide to band the directors into groups by number of films directed.
We decide to band the "total combined films acted in" into groups.
End of explanation
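# Illustrative sketch (an equivalent, more concise way to obtain the same counts
# as the loop above): value_counts() gives films per director and per actor.
director_counts = df['director_name'].value_counts()
actor_counts = pd.concat([df['actor_1_name'], df['actor_2_name'],
                          df['actor_3_name']]).value_counts()
print(director_counts.head())
print(actor_counts.head())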
df = df.drop(['director_name', 'NumFilmsBand'], axis=1)
Explanation: We can now remove the director_name and NumFilmsBand features.
End of explanation
df["actor_1_name"].dropna().map(actors).describe()
df['num_of_films_actor_1'] = df["actor_1_name"].dropna().map(actors).astype(int)
df['num_of_films_actor_1'] = df['num_of_films_actor_1'].fillna(8)
df["actor_2_name"].dropna().map(actors).describe()
df['num_of_films_actor_2'] = df["actor_2_name"].dropna().map(actors).astype(int)
df['num_of_films_actor_2'] = df['num_of_films_actor_2'].fillna(4)
df["actor_3_name"].dropna().map(actors).describe()
df['num_of_films_actor_3'] = df["actor_3_name"].dropna().map(actors).astype(int)
df['num_of_films_actor_3'] = df['num_of_films_actor_3'].fillna(2)
df['actor_sum'] = df["num_of_films_actor_1"] + df["num_of_films_actor_2"] + df["num_of_films_actor_3"]
df['ActorSumBand'] = pd.cut(df['actor_sum'], 5)
df[['ActorSumBand', 'imdb_score']].groupby(['ActorSumBand'], as_index=False).mean().sort_values(by='ActorSumBand', ascending=True)
df.loc[ df['actor_sum'] <= 24, 'actor_sum'] = 0
df.loc[(df['actor_sum'] > 24) & (df['actor_sum'] <= 46), 'actor_sum'] = 1
df.loc[(df['actor_sum'] > 46) & (df['actor_sum'] <= 67), 'actor_sum'] = 2
df.loc[(df['actor_sum'] > 67) & (df['actor_sum'] <= 89), 'actor_sum'] = 3
df.loc[ df['actor_sum'] > 89, 'actor_sum'] = 4
df.head()
Explanation: Now, let's examine actors by number of films acted in. Since we have three actors listed per film, we'll need to combine these numbers. Let's fill any empty fields with the median value for that field, and sum the columns.
End of explanation
df = df.drop(['actor_1_name', 'num_of_films_actor_1', 'actor_2_name', 'num_of_films_actor_2', 'actor_3_name', 'num_of_films_actor_3', 'ActorSumBand'], axis=1)
df.head()
Explanation: Now we can remove actor_1_name, actor_2_name, actor_3_name and ActorSumBand
End of explanation
df['color'] = df['color'].fillna("Color")
df['color'] = df['color'].map( {' Black and White': 1, 'Color': 0} ).astype(int)
df.head()
Explanation: Converting a categorical feature
Now we can convert features which contain strings to numerical values. This is required by most model algorithms. Doing so will also help us in achieving the feature completing goal.
Let us start by converting the Color feature to a numerical value, where black and white=1 and color=0.
Since some values are null, let's fill them with the most common value, Color
End of explanation
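# Illustrative sketch (an alternative not used in this notebook): one-hot
# encoding with pd.get_dummies is another common way to turn a categorical
# feature into numeric columns; here we instead map categories to ordinals.
pd.get_dummies(df['content_rating'], prefix='rating').head()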
df['language'].value_counts()
Explanation: Next, let's look at the language and country features
End of explanation
df['language'] = df['language'].fillna("English")
df['language'] = df['language'].map(lambda l: 0 if l == 'English' else 1)
df.head()
Explanation: The bulk of the films are in English. Let's convert this field to 1 for Non-English, and 0 for English
First, let's fill any null values with English
End of explanation
df['country'].value_counts()
Explanation: Next, let's explore country
End of explanation
df['country'] = df['country'].fillna("USA")
df['country'] = df['country'].map(lambda c: 0 if c == 'USA' else 1)
df.head()
Explanation: Again, most films are from USA. Taking the same approach, we'll fill NaNs with USA, and transform USA to 0, all others to 1
End of explanation
df['content_rating'].value_counts()
Explanation: Next up is content rating. Let's look at that
End of explanation
df['content_rating'] = df['content_rating'].map({'G':0, 'PG':1, 'PG-13': 2, 'R': 3}).fillna(4).astype(int)
df.head()
Explanation: The majority of the films use the standard MPAA ratings: G, PG, PG-13, and R
Let's group the rest of the films (and null values) into the 'Not Rated' category, and then transform them to integers
End of explanation
df['aspect_ratio'].value_counts()
Explanation: Aspect ratio may seem like a numerical feature, but it's somewhat of a categorial one. First, what values do we find in the dataset?
End of explanation
df['aspect_ratio'] = df['aspect_ratio'].fillna(2.35)
df['aspect_ratio'] = df['aspect_ratio'].map(lambda ar: 1.33 if ar == 4.00 else ar)
df['aspect_ratio'] = df['aspect_ratio'].map(lambda ar: 1.78 if ar == 16.00 else ar)
df[['aspect_ratio', 'imdb_score']].groupby(pd.cut(df['aspect_ratio'], 4)).mean()
Explanation: Some of these values seem to be in the wrong format, 16.00 is most likely 16:9 (1.78) and 4.00 is more likely 4:3 (1.33). Let's fix those.
End of explanation
df.loc[ df['aspect_ratio'] <= 1.575, 'aspect_ratio'] = 0
df.loc[(df['aspect_ratio'] > 1.575) & (df['aspect_ratio'] <= 1.97), 'aspect_ratio'] = 1
df.loc[(df['aspect_ratio'] > 1.97) & (df['aspect_ratio'] <= 2.365), 'aspect_ratio'] = 2
df.loc[ df['aspect_ratio'] > 2.365, 'aspect_ratio'] = 3
df.head()
Explanation: The above banding looks good. It separates out the two predominant aspect ratios (2.35 and 1.85), and also has two bands below and above these ratios. Let's use that.
End of explanation
mean = df['duration'].mean()
std = df['duration'].std()
mean, std
df['duration'] = df['duration'].map(lambda v: truncnorm.rvs(-1, 1, loc=mean, scale=std) if pd.isnull(v) else v)
Explanation: Completing a numerical continuous feature
Now we should start estimating and completing features with missing or null values. We will first do this for the Duration feature.
We can consider four methods to complete a numerical continuous feature.
The easiest way is to use the median value.
Another simple way is to generate random numbers between mean and standard deviation.
A more accurate way of guessing missing values is to use other correlated features, and take the median value within groups defined by those features.
Combine methods 2 and 3: instead of a single global estimate, draw random numbers between the mean and standard deviation computed within groups of correlated features.
We will use method 2.
End of explanation
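# Illustrative sketch of method 3 (not the approach used above): a missing
# duration could be imputed from the median duration of films with the same
# content rating. Duration has already been filled at this point, so this is
# only shown for reference.
df.groupby('content_rating')['duration'].median()
# e.g. df['duration'] = df['duration'].fillna(
#          df.groupby('content_rating')['duration'].transform('median'))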
df[['duration', 'imdb_score']].groupby(pd.qcut(df['duration'], 5)).mean()
Explanation: Let us create Duration bands and determine correlations with IMDb score.
End of explanation
df.loc[ df['duration'] <= 91, 'duration'] = 0
df.loc[(df['duration'] > 91) & (df['duration'] <= 99), 'duration'] = 1
df.loc[(df['duration'] > 99) & (df['duration'] <= 108), 'duration'] = 2
df.loc[(df['duration'] > 108) & (df['duration'] <= 122), 'duration'] = 3
df.loc[ df['duration'] > 122, 'duration'] = 4
df.head()
Explanation: Let us replace Duration with ordinals based on these bands.
End of explanation
mean = df['num_critic_for_reviews'].mean()
std = df['num_critic_for_reviews'].std()
mean, std
df['num_critic_for_reviews'] = df['num_critic_for_reviews'].map(lambda v: truncnorm.rvs(-1, 1, loc=mean, scale=std) if pd.isnull(v) else v)
df[['num_critic_for_reviews', 'imdb_score']].groupby(pd.qcut(df['num_critic_for_reviews'], 5)).mean()
df.loc[ df['num_critic_for_reviews'] <= 40, 'num_critic_for_reviews'] = 0
df.loc[(df['num_critic_for_reviews'] > 40) & (df['num_critic_for_reviews'] <= 84), 'num_critic_for_reviews'] = 1
df.loc[(df['num_critic_for_reviews'] > 84) & (df['num_critic_for_reviews'] <= 140), 'num_critic_for_reviews'] = 2
df.loc[(df['num_critic_for_reviews'] > 140) & (df['num_critic_for_reviews'] <= 222), 'num_critic_for_reviews'] = 3
df.loc[ df['num_critic_for_reviews'] > 222, 'num_critic_for_reviews'] = 4
df.head()
Explanation: Let's apply the same techniques to the following features:
num_critic_for_reviews
director_facebook_likes
actor_1_facebook_likes
actor_2_facebook_likes
actor_3_facebook_likes
gross
facenumber_in_poster
num_user_for_reviews
budget
title_year
num_voted_users
cast_total_facebook_likes
movie_facebook_likes
num_of_films_director
num_critic_for_reviews
End of explanation
mean = df['director_facebook_likes'].mean()
std = df['director_facebook_likes'].std()
mean, std
Explanation: director_facebook_likes
End of explanation
df['director_facebook_likes'] = df['director_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df[['director_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['director_facebook_likes'], 5)).mean()
df.loc[ df['director_facebook_likes'] <= 3, 'director_facebook_likes'] = 0
df.loc[(df['director_facebook_likes'] > 3) & (df['director_facebook_likes'] <= 27.8), 'director_facebook_likes'] = 1
df.loc[(df['director_facebook_likes'] > 27.8) & (df['director_facebook_likes'] <= 91), 'director_facebook_likes'] = 2
df.loc[(df['director_facebook_likes'] > 91) & (df['director_facebook_likes'] <= 309), 'director_facebook_likes'] = 3
df.loc[ df['director_facebook_likes'] > 309, 'director_facebook_likes'] = 4
df.head()
Explanation: Since the standard deviation for this field is ~4x the mean, we'll just stick to using the mean value for nulls
End of explanation
mean = df['actor_1_facebook_likes'].mean()
std = df['actor_1_facebook_likes'].std()
mean, std
df['actor_1_facebook_likes'] = df['actor_1_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['actor_1_facebook_likes'].describe()
df[['actor_1_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['actor_1_facebook_likes'], 5)).mean()
df.loc[ df['actor_1_facebook_likes'] <= 523, 'actor_1_facebook_likes'] = 0
df.loc[(df['actor_1_facebook_likes'] > 523) & (df['actor_1_facebook_likes'] <= 865), 'actor_1_facebook_likes'] = 1
df.loc[(df['actor_1_facebook_likes'] > 865) & (df['actor_1_facebook_likes'] <= 2000), 'actor_1_facebook_likes'] = 2
df.loc[(df['actor_1_facebook_likes'] > 2000) & (df['actor_1_facebook_likes'] <= 13000), 'actor_1_facebook_likes'] = 3
df.loc[ df['actor_1_facebook_likes'] > 13000, 'actor_1_facebook_likes'] = 4
df.head()
Explanation: actor_1_facebook_likes
End of explanation
mean = df['actor_2_facebook_likes'].mean()
std = df['actor_2_facebook_likes'].std()
mean, std
df['actor_2_facebook_likes'] = df['actor_2_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df[['actor_2_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['actor_2_facebook_likes'], 5)).mean()
df.loc[ df['actor_2_facebook_likes'] <= 218, 'actor_2_facebook_likes'] = 0
df.loc[(df['actor_2_facebook_likes'] > 218) & (df['actor_2_facebook_likes'] <= 486), 'actor_2_facebook_likes'] = 1
df.loc[(df['actor_2_facebook_likes'] > 486) & (df['actor_2_facebook_likes'] <= 726.2), 'actor_2_facebook_likes'] = 2
df.loc[(df['actor_2_facebook_likes'] > 726.2) & (df['actor_2_facebook_likes'] <= 979), 'actor_2_facebook_likes'] = 3
df.loc[ df['actor_2_facebook_likes'] > 979, 'actor_2_facebook_likes'] = 4
df.head()
Explanation: actor_2_facebook_likes
End of explanation
mean = df['actor_3_facebook_likes'].mean()
std = df['actor_3_facebook_likes'].std()
mean, std
df['actor_3_facebook_likes'] = df['actor_3_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['actor_3_facebook_likes'].describe()
df[['actor_3_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['actor_3_facebook_likes'], 5)).mean()
df.loc[ df['actor_3_facebook_likes'] <= 97, 'actor_3_facebook_likes'] = 0
df.loc[(df['actor_3_facebook_likes'] > 97) & (df['actor_3_facebook_likes'] <= 265), 'actor_3_facebook_likes'] = 1
df.loc[(df['actor_3_facebook_likes'] > 265) & (df['actor_3_facebook_likes'] <= 472), 'actor_3_facebook_likes'] = 2
df.loc[(df['actor_3_facebook_likes'] > 472) & (df['actor_3_facebook_likes'] <= 700), 'actor_3_facebook_likes'] = 3
df.loc[ df['actor_3_facebook_likes'] > 700, 'actor_3_facebook_likes'] = 4
df.head()
Explanation: actor_3_facebook_likes
End of explanation
mean = df['gross'].mean()
std = df['gross'].std()
mean, std
df['gross'] = df['gross'].map(lambda v: mean if pd.isnull(v) else v)
df['gross'].describe()
df[['gross', 'imdb_score']].groupby(pd.qcut(df['gross'], 5)).mean()
df.loc[ df['gross'] <= 4909758.4, 'gross'] = 0
df.loc[(df['gross'] > 4909758.4) & (df['gross'] <= 24092475.2), 'gross'] = 1
df.loc[(df['gross'] > 24092475.2) & (df['gross'] <= 48468407.527), 'gross'] = 2
df.loc[(df['gross'] > 48468407.527) & (df['gross'] <= 64212162.4), 'gross'] = 3
df.loc[ df['gross'] > 64212162.4, 'gross'] = 4
df.head()
Explanation: gross
End of explanation
mean = df['facenumber_in_poster'].mean()
std = df['facenumber_in_poster'].std()
mean, std
df['facenumber_in_poster'].value_counts()
df['facenumber_in_poster'].median()
df['facenumber_in_poster'] = df['facenumber_in_poster'].map(lambda v: 1 if pd.isnull(v) else v)
df['facenumber_in_poster'].describe()
df[['facenumber_in_poster', 'imdb_score']].groupby(pd.cut(df['facenumber_in_poster'], [-1,0,1,2,100])).mean()
df.loc[ df['facenumber_in_poster'] <= 0, 'facenumber_in_poster'] = 0
df.loc[(df['facenumber_in_poster'] > 0) & (df['facenumber_in_poster'] <= 1), 'facenumber_in_poster'] = 1
df.loc[(df['facenumber_in_poster'] > 1) & (df['facenumber_in_poster'] <= 2), 'facenumber_in_poster'] = 2
df.loc[ df['facenumber_in_poster'] > 2, 'facenumber_in_poster'] = 3
df.head()
Explanation: facenumber_in_poster
End of explanation
mean = df['num_user_for_reviews'].mean()
std = df['num_user_for_reviews'].std()
mean, std
df['num_user_for_reviews'] = df['num_user_for_reviews'].map(lambda v: mean if pd.isnull(v) else v)
df['num_user_for_reviews'].describe()
df[['num_user_for_reviews', 'imdb_score']].groupby(pd.qcut(df['num_user_for_reviews'], 5)).mean()
df.loc[ df['num_user_for_reviews'] <= 48, 'num_user_for_reviews'] = 0
df.loc[(df['num_user_for_reviews'] > 48) & (df['num_user_for_reviews'] <= 116), 'num_user_for_reviews'] = 1
df.loc[(df['num_user_for_reviews'] > 116) & (df['num_user_for_reviews'] <= 210), 'num_user_for_reviews'] = 2
df.loc[(df['num_user_for_reviews'] > 210) & (df['num_user_for_reviews'] <= 389), 'num_user_for_reviews'] = 3
df.loc[ df['num_user_for_reviews'] > 389, 'num_user_for_reviews'] = 4
df.head()
Explanation: num_user_for_reviews
End of explanation
mean = df['budget'].mean()
std = df['budget'].std()
mean, std
df['budget'] = df['budget'].map(lambda v: mean if pd.isnull(v) else v)
df['budget'].describe()
df[['budget', 'imdb_score']].groupby(pd.qcut(df['budget'], 3)).mean()
df.loc[ df['budget'] <= 12000000, 'budget'] = 0
df.loc[(df['budget'] > 12000000) & (df['budget'] <= 39752620.436), 'budget'] = 1
df.loc[ df['budget'] > 39752620.436, 'budget'] = 2
df.head()
Explanation: budget
End of explanation
mean = df['title_year'].mean()
std = df['title_year'].std()
mean, std
df['title_year'] = df['title_year'].map(lambda v: truncnorm.rvs(-1, 1, loc=mean, scale=std) if pd.isnull(v) else v)
df[['title_year', 'imdb_score']].groupby(pd.cut(df['title_year'], 5)).mean()
df.loc[ df['title_year'] <= 1936, 'title_year'] = 0
df.loc[(df['title_year'] > 1936) & (df['title_year'] <= 1956), 'title_year'] = 1
df.loc[(df['title_year'] > 1956) & (df['title_year'] <= 1976), 'title_year'] = 2
df.loc[(df['title_year'] > 1976) & (df['title_year'] <= 1996), 'title_year'] = 3
df.loc[ df['title_year'] > 1996, 'title_year'] = 4
df.head()
Explanation: title_year
End of explanation
mean = df['num_voted_users'].mean()
std = df['num_voted_users'].std()
mean, std
df['num_voted_users'] = df['num_voted_users'].map(lambda v: mean if pd.isnull(v) else v)
df['num_voted_users'].describe()
df[['num_voted_users', 'imdb_score']].groupby(pd.qcut(df['num_voted_users'], 5)).mean()
df.loc[ df['num_voted_users'] <= 5623.8, 'num_voted_users'] = 0
df.loc[(df['num_voted_users'] > 5623.8) & (df['num_voted_users'] <= 21478.4), 'num_voted_users'] = 1
df.loc[(df['num_voted_users'] > 21478.4) & (df['num_voted_users'] <= 53178.2), 'num_voted_users'] = 2
df.loc[(df['num_voted_users'] > 53178.2) & (df['num_voted_users'] <= 1.24e+05), 'num_voted_users'] = 3
df.loc[ df['num_voted_users'] > 1.24e+05, 'num_voted_users'] = 4
df.head()
Explanation: num_voted_users
End of explanation
mean = df['cast_total_facebook_likes'].mean()
std = df['cast_total_facebook_likes'].std()
mean, std
df['cast_total_facebook_likes'] = df['cast_total_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['cast_total_facebook_likes'].describe()
df[['cast_total_facebook_likes', 'imdb_score']].groupby(pd.qcut(df['cast_total_facebook_likes'], 5)).mean()
df.loc[ df['cast_total_facebook_likes'] <= 1136, 'cast_total_facebook_likes'] = 0
df.loc[(df['cast_total_facebook_likes'] > 1136) & (df['cast_total_facebook_likes'] <= 2366.6), 'cast_total_facebook_likes'] = 1
df.loc[(df['cast_total_facebook_likes'] > 2366.6) & (df['cast_total_facebook_likes'] <= 4369.2), 'cast_total_facebook_likes'] = 2
df.loc[(df['cast_total_facebook_likes'] > 4369.2) & (df['cast_total_facebook_likes'] <= 16285.8), 'cast_total_facebook_likes'] = 3
df.loc[ df['cast_total_facebook_likes'] > 16285.8, 'cast_total_facebook_likes'] = 4
df.head()
Explanation: cast_total_facebook_likes
End of explanation
mean = df['movie_facebook_likes'].mean()
std = df['movie_facebook_likes'].std()
mean, std
df['movie_facebook_likes'] = df['movie_facebook_likes'].map(lambda v: mean if pd.isnull(v) else v)
df['movie_facebook_likes'].describe()
df[df['movie_facebook_likes'] > 0][['movie_facebook_likes', 'imdb_score']].groupby(pd.qcut(df[df['movie_facebook_likes'] > 0]['movie_facebook_likes'], 4)).mean()
df.loc[ df['movie_facebook_likes'] <= 0, 'movie_facebook_likes'] = 0
df.loc[(df['movie_facebook_likes'] > 0) & (df['movie_facebook_likes'] <= 401), 'movie_facebook_likes'] = 1
df.loc[(df['movie_facebook_likes'] > 401) & (df['movie_facebook_likes'] <= 1000), 'movie_facebook_likes'] = 2
df.loc[(df['movie_facebook_likes'] > 1000) & (df['movie_facebook_likes'] <= 17000), 'movie_facebook_likes'] = 3
df.loc[ df['movie_facebook_likes'] > 17000, 'movie_facebook_likes'] = 4
df.head()
Explanation: movie_facebook_likes
End of explanation
mean = df['num_of_films_director'].mean()
std = df['num_of_films_director'].std()
mean, std
df['num_of_films_director'].value_counts()
df['num_of_films_director'] = df['num_of_films_director'].map(lambda v: 1 if pd.isnull(v) else v)
df[['num_of_films_director', 'imdb_score']].groupby(pd.cut(df['num_of_films_director'], 3)).mean()
df.loc[ df['num_of_films_director'] <= 1, 'num_of_films_director'] = 0
df.loc[(df['num_of_films_director'] > 1) & (df['num_of_films_director'] <= 2), 'num_of_films_director'] = 1
df.loc[ df['num_of_films_director'] > 2, 'num_of_films_director'] = 2
df.head()
incomplete = df.columns[pd.isnull(df).any()].tolist()
df[incomplete].info()
Explanation: num_of_films_director
End of explanation
train_df, test_df = train_test_split(df, test_size = 0.2)
Explanation: Create new feature combining existing features
Completing a categorical feature
Converting categorical feature to numeric
Quick completing and converting a numeric feature
Partition Data
Now, we randomly partition our dataset into two DataFrames. 80% of the data will be our training set, the rest will become our test set
End of explanation
X_train = train_df.drop("imdb_score", axis=1)
Y_train = train_df["imdb_score"]
X_test = test_df.drop("imdb_score", axis=1)
Y_test = test_df["imdb_score"]
X_train.shape, Y_train.shape, X_test.shape, Y_test.shape
Explanation: Model, predict and solve
Now we are ready to train a model and predict the required solution. There are 60+ predictive modelling algorithms to choose from. We must understand the type of problem and solution requirement to narrow down to a select few models which we can evaluate. Our problem is a classification and regression problem. We want to identify the relationship between the output (a good or bad IMDb score) and the other features (color, duration, budget, country and so on). We are also performing a category of machine learning which is called supervised learning, as we are training our model with a given dataset. With these two criteria - Supervised Learning plus Classification and Regression - we can narrow down our choice of models to a few. These include:
Logistic Regression
KNN or k-Nearest Neighbors
Support Vector Machines
Naive Bayes classifier
Decision Tree
Random Forrest
Perceptron
Artificial neural network
RVM or Relevance Vector Machine
End of explanation
# Logistic Regression
logreg = LogisticRegression()
logreg.fit(X_train, Y_train)
Y_pred = logreg.predict(X_test)
acc_log = round(logreg.score(X_train, Y_train) * 100, 2)
acc_log
Explanation: Logistic Regression is a useful model to run early in the workflow. Logistic regression measures the relationship between the categorical dependent variable (feature) and one or more independent variables (features) by estimating probabilities using a logistic function, which is the cumulative logistic distribution. Reference Wikipedia.
Note the confidence score generated by the model based on our training dataset.
End of explanation
for i, value in enumerate(train_df.columns):
    print(i, value)
coeff_df = pd.DataFrame(train_df.columns.delete(17))
coeff_df.columns = ['Feature']
coeff_df["Correlation"] = pd.Series(logreg.coef_[0])
coeff_df.sort_values(by='Correlation', ascending=False)
Explanation: We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals. This can be done by calculating the coefficient of the features in the decision function.
Positive coefficients increase the log-odds of the response (and thus increase the probability), and negative coefficients decrease the log-odds of the response (and thus decrease the probability).
Country has the highest positive coefficient, implying that as the Country value increases (USA: 0 to Foreign: 1), the probability of IMDb score = 1 increases the most.
Inversely, as Aspect Ratio increases, the probability of IMDb score = 1 decreases the most.
Director Number of Films is a good artificial feature to model, as it has a 0.2 positive correlation with IMDb score.
So is Color, which has the second highest positive correlation.
End of explanation
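# Illustrative sketch (an addition): the logistic (sigmoid) function maps a
# log-odds shift back to a probability, which is what the coefficient table
# above describes. Starting from even odds (p = 0.5), each coefficient shows
# roughly how a one-unit feature increase would move P(imdb_score = 1).
def log_odds_to_prob(z):
    return 1.0 / (1.0 + np.exp(-z))

for feature, coef in zip(X_train.columns, logreg.coef_[0]):
    print('%s: 0.50 -> %.3f' % (feature, log_odds_to_prob(coef)))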
# Support Vector Machines
svc = SVC()
svc.fit(X_train, Y_train)
Y_pred = svc.predict(X_test)
acc_svc = round(svc.score(X_train, Y_train) * 100, 2)
acc_svc
Explanation: Next we model using Support Vector Machines which are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis. Given a set of training samples, each marked as belonging to one or the other of two categories, an SVM training algorithm builds a model that assigns new test samples to one category or the other, making it a non-probabilistic binary linear classifier. Reference Wikipedia.
Note that the model generates a confidence score which is higher than the Logistic Regression model.
End of explanation
knn = KNeighborsClassifier(n_neighbors = 3)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
acc_knn = round(knn.score(X_train, Y_train) * 100, 2)
acc_knn
Explanation: In pattern recognition, the k-Nearest Neighbors algorithm (or k-NN for short) is a non-parametric method used for classification and regression. A sample is classified by a majority vote of its neighbors, with the sample being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. Reference Wikipedia.
The KNN confidence score is better than Logistic Regression and SVM.
End of explanation
# Gaussian Naive Bayes
gaussian = GaussianNB()
gaussian.fit(X_train, Y_train)
Y_pred = gaussian.predict(X_test)
acc_gaussian = round(gaussian.score(X_train, Y_train) * 100, 2)
acc_gaussian
Explanation: In machine learning, naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. Reference Wikipedia.
The model generated confidence score is the lowest among the models evaluated so far.
End of explanation
# Perceptron
perceptron = Perceptron()
perceptron.fit(X_train, Y_train)
Y_pred = perceptron.predict(X_test)
acc_perceptron = round(perceptron.score(X_train, Y_train) * 100, 2)
acc_perceptron
# Linear SVC
linear_svc = LinearSVC()
linear_svc.fit(X_train, Y_train)
Y_pred = linear_svc.predict(X_test)
acc_linear_svc = round(linear_svc.score(X_train, Y_train) * 100, 2)
acc_linear_svc
# Stochastic Gradient Descent
sgd = SGDClassifier()
sgd.fit(X_train, Y_train)
Y_pred = sgd.predict(X_test)
acc_sgd = round(sgd.score(X_train, Y_train) * 100, 2)
acc_sgd
Explanation: The perceptron is an algorithm for supervised learning of binary classifiers (functions that can decide whether an input, represented by a vector of numbers, belongs to some specific class or not). It is a type of linear classifier, i.e. a classification algorithm that makes its predictions based on a linear predictor function combining a set of weights with the feature vector. The algorithm allows for online learning, in that it processes elements in the training set one at a time. Reference Wikipedia.
End of explanation
# Decision Tree
decision_tree = DecisionTreeClassifier()
decision_tree.fit(X_train, Y_train)
Y_pred = decision_tree.predict(X_test)
acc_decision_tree = round(decision_tree.score(X_train, Y_train) * 100, 2)
acc_decision_tree
Explanation: This model uses a decision tree as a predictive model which maps features (tree branches) to conclusions about the target value (tree leaves). Tree models where the target variable can take a finite set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. Reference Wikipedia.
The model confidence score is the highest among models evaluated so far.
End of explanation
# Random Forest
random_forest = RandomForestClassifier(n_estimators=100)
random_forest.fit(X_train, Y_train)
Y_pred = random_forest.predict(X_test)
random_forest.score(X_train, Y_train)
acc_random_forest = round(random_forest.score(X_train, Y_train) * 100, 2)
acc_random_forest
Explanation: The next model Random Forests is one of the most popular. Random forests or random decision forests are an ensemble learning method for classification, regression and other tasks, that operate by constructing a multitude of decision trees (n_estimators=100) at training time and outputting the class that is the mode of the classes (classification) or mean prediction (regression) of the individual trees. Reference Wikipedia.
The model confidence score is the highest among the models evaluated so far. We decide to use this model's output (Y_pred) for our final evaluation on the held-out test set.
End of explanation
models = pd.DataFrame({
'Model': ['Support Vector Machines', 'KNN', 'Logistic Regression',
'Random Forest', 'Naive Bayes', 'Perceptron',
'Stochastic Gradient Decent', 'Linear SVC',
'Decision Tree'],
'Score': [acc_svc, acc_knn, acc_log,
acc_random_forest, acc_gaussian, acc_perceptron,
acc_sgd, acc_linear_svc, acc_decision_tree]})
models.sort_values(by='Score', ascending=False)
print(accuracy_score(Y_test, Y_pred, normalize=False), '/', len(Y_test))
print(accuracy_score(Y_test, Y_pred))
Explanation: Model evaluation
We can now rank our evaluation of all the models to choose the best one for our problem. While both Decision Tree and Random Forest score the same, we choose to use Random Forest as they correct for decision trees' habit of overfitting to their training set.
End of explanation |
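# Illustrative sketch (an addition, not part of the original workflow): training
# accuracy is optimistic for tree ensembles, so a quick k-fold cross-validation
# on the training set gives a less biased estimate of generalisation.
from sklearn.model_selection import cross_val_score
cv_scores = cross_val_score(RandomForestClassifier(n_estimators=100),
                            X_train, Y_train, cv=5)
print('CV accuracy: %.3f +/- %.3f' % (cv_scores.mean(), cv_scores.std()))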
4,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al's mTRF toolbox in
matlab
Step1: Load the data from the publication
First we will load the data collected in
Step2: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
Step3: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from
Step4: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the
Step5: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
Step6: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5_ from | Python Code:
# Authors: Chris Holdgraf <[email protected]>
# Eric Larson <[email protected]>
# Nicolas Barascud <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import loadmat
from os.path import join
import mne
from mne.decoding import ReceptiveField
from sklearn.model_selection import KFold
from sklearn.preprocessing import scale
Explanation: Receptive Field Estimation and Prediction
This example reproduces figures from Lalor et al's mTRF toolbox in
matlab :footcite:CrosseEtAl2016. We will show how the
:class:mne.decoding.ReceptiveField class
can perform a similar function along with scikit-learn. We will first fit a
linear encoding model using the continuously-varying speech envelope to predict
activity of a 128 channel EEG system. Then, we will take the reverse approach
and try to predict the speech envelope from the EEG (known in the literature
as a decoding model, or simply stimulus reconstruction).
End of explanation
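# A minimal sketch (illustrative only, not part of the original example) of the
# time-lagged design matrix idea behind this kind of encoding model: each column
# holds the stimulus delayed by one extra sample, and a linear fit across the
# columns estimates a temporal response function.
demo_stim = np.arange(6, dtype=float)  # toy stimulus
n_demo_lags = 3
lagged = np.zeros((demo_stim.size, n_demo_lags))
for lag in range(n_demo_lags):
    lagged[lag:, lag] = demo_stim[:demo_stim.size - lag]
print(lagged)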
path = mne.datasets.mtrf.data_path()
decim = 2
data = loadmat(join(path, 'speech_data.mat'))
raw = data['EEG'].T
speech = data['envelope'].T
sfreq = float(data['Fs'])
sfreq /= decim
speech = mne.filter.resample(speech, down=decim, npad='auto')
raw = mne.filter.resample(raw, down=decim, npad='auto')
# Read in channel positions and create our MNE objects from the raw data
montage = mne.channels.make_standard_montage('biosemi128')
info = mne.create_info(montage.ch_names, sfreq, 'eeg').set_montage(montage)
raw = mne.io.RawArray(raw, info)
n_channels = len(raw.ch_names)
# Plot a sample of brain and stimulus activity
fig, ax = plt.subplots()
lns = ax.plot(scale(raw[:, :800][0].T), color='k', alpha=.1)
ln1 = ax.plot(scale(speech[0, :800]), color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['EEG', 'Speech Envelope'], frameon=False)
ax.set(title="Sample activity", xlabel="Time (s)")
mne.viz.tight_layout()
Explanation: Load the data from the publication
First we will load the data collected in :footcite:CrosseEtAl2016.
In this experiment subjects
listened to natural speech. Raw EEG and the speech stimulus are provided.
We will load these below, downsampling the data in order to speed up
computation since we know that our features are primarily low-frequency in
nature. Then we'll visualize both the EEG and speech envelope.
End of explanation
# Define the delays that we will use in the receptive field
tmin, tmax = -.2, .4
# Initialize the model
rf = ReceptiveField(tmin, tmax, sfreq, feature_names=['envelope'],
estimator=1., scoring='corrcoef')
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Prepare model data (make time the first dimension)
speech = speech.T
Y, _ = raw[:] # Outputs for the model
Y = Y.T
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
scores = np.zeros((n_splits, n_channels))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
rf.fit(speech[train], Y[train])
scores[ii] = rf.score(speech[test], Y[test])
# coef_ is shape (n_outputs, n_features, n_delays). we only have 1 feature
coefs[ii] = rf.coef_[:, 0, :]
times = rf.delays_ / float(rf.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_scores = scores.mean(axis=0)
# Plot mean prediction scores across all channels
fig, ax = plt.subplots()
ix_chs = np.arange(n_channels)
ax.plot(ix_chs, mean_scores)
ax.axhline(0, ls='--', color='r')
ax.set(title="Mean prediction score", xlabel="Channel", ylabel="Score ($r$)")
mne.viz.tight_layout()
Explanation: Create and fit a receptive field model
We will construct an encoding model to find the linear relationship between
a time-delayed version of the speech envelope and the EEG signal. This allows
us to make predictions about the response to new stimuli.
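For reference, the model being fitted here can be written as (a sketch for the single envelope feature used in this example; the exact lag convention follows the ReceptiveField documentation):
$$\mathrm{EEG}_c(t) \;\approx\; \sum_{\tau=t_{\min}}^{t_{\max}} w_c(\tau)\,\mathrm{envelope}(t-\tau)$$
with one set of weights $w_c(\tau)$ per channel $c$, estimated by ridge regression (a float passed as estimator is interpreted as the ridge regularization strength).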
End of explanation
# Print mean coefficients across all time delays / channels (see Fig 1)
time_plot = 0.180 # For highlighting a specific time.
fig, ax = plt.subplots(figsize=(4, 8))
max_coef = mean_coefs.max()
ax.pcolormesh(times, ix_chs, mean_coefs, cmap='RdBu_r',
vmin=-max_coef, vmax=max_coef, shading='gouraud')
ax.axvline(time_plot, ls='--', color='k', lw=2)
ax.set(xlabel='Delay (s)', ylabel='Channel', title="Mean Model\nCoefficients",
xlim=times[[0, -1]], ylim=[len(ix_chs) - 1, 0],
xticks=np.arange(tmin, tmax + .2, .2))
plt.setp(ax.get_xticklabels(), rotation=45)
mne.viz.tight_layout()
# Make a topographic map of coefficients for a given delay (see Fig 2C)
ix_plot = np.argmin(np.abs(time_plot - times))
fig, ax = plt.subplots()
mne.viz.plot_topomap(mean_coefs[:, ix_plot], pos=info, axes=ax, show=False,
vmin=-max_coef, vmax=max_coef)
ax.set(title="Topomap of model coefficients\nfor delay %s" % time_plot)
mne.viz.tight_layout()
Explanation: Investigate model coefficients
Finally, we will look at how the linear coefficients (sometimes
referred to as beta values) are distributed across time delays as well as
across the scalp. We will recreate figure 1 and figure 2 from
:footcite:CrosseEtAl2016.
End of explanation
# We use the same lags as in :footcite:`CrosseEtAl2016`. Negative lags now
# index the relationship
# between the neural response and the speech envelope earlier in time, whereas
# positive lags would index how a unit change in the amplitude of the EEG would
# affect later stimulus activity (obviously this should have an amplitude of
# zero).
tmin, tmax = -.2, 0.
# Initialize the model. Here the features are the EEG data. We also specify
# ``patterns=True`` to compute inverse-transformed coefficients during model
# fitting (cf. next section and :footcite:`HaufeEtAl2014`).
# We'll use a ridge regression estimator with an alpha value similar to
# Crosse et al.
sr = ReceptiveField(tmin, tmax, sfreq, feature_names=raw.ch_names,
estimator=1e4, scoring='corrcoef', patterns=True)
# We'll have (tmax - tmin) * sfreq delays
# and an extra 2 delays since we are inclusive on the beginning / end index
n_delays = int((tmax - tmin) * sfreq) + 2
n_splits = 3
cv = KFold(n_splits)
# Iterate through splits, fit the model, and predict/test on held-out data
coefs = np.zeros((n_splits, n_channels, n_delays))
patterns = coefs.copy()
scores = np.zeros((n_splits,))
for ii, (train, test) in enumerate(cv.split(speech)):
print('split %s / %s' % (ii + 1, n_splits))
sr.fit(Y[train], speech[train])
scores[ii] = sr.score(Y[test], speech[test])[0]
# coef_ is shape (n_outputs, n_features, n_delays). We have 128 features
coefs[ii] = sr.coef_[0, :, :]
patterns[ii] = sr.patterns_[0, :, :]
times = sr.delays_ / float(sr.sfreq)
# Average scores and coefficients across CV splits
mean_coefs = coefs.mean(axis=0)
mean_patterns = patterns.mean(axis=0)
mean_scores = scores.mean(axis=0)
max_coef = np.abs(mean_coefs).max()
max_patterns = np.abs(mean_patterns).max()
Explanation: Create and fit a stimulus reconstruction model
We will now demonstrate another use case for the
:class:mne.decoding.ReceptiveField class as we try to predict the stimulus
activity from the EEG data. This is known in the literature as a decoding, or
stimulus reconstruction model :footcite:CrosseEtAl2016.
A decoding model aims to find the
relationship between the speech signal and a time-delayed version of the EEG.
This can be useful as we exploit all of the available neural data in a
multivariate context, compared to the encoding case which treats each M/EEG
channel as an independent feature. Therefore, decoding models might provide a
better quality of fit (at the expense of not controlling for stimulus
covariance), especially for low SNR stimuli such as speech.
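In the same spirit as the encoding model above, the decoding model has the general form (again only a sketch; see the ReceptiveField documentation for the precise lag convention):
$$\widehat{\mathrm{envelope}}(t) \;\approx\; \sum_{c=1}^{n_{\mathrm{channels}}}\,\sum_{\tau=t_{\min}}^{t_{\max}} w_c(\tau)\,\mathrm{EEG}_c(t-\tau)$$
i.e. a single output is reconstructed jointly from all channels and all lags.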
End of explanation
y_pred = sr.predict(Y[test])
time = np.linspace(0, 2., 5 * int(sfreq))
fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(time, speech[test][sr.valid_samples_][:int(5 * sfreq)],
color='grey', lw=2, ls='--')
ax.plot(time, y_pred[sr.valid_samples_][:int(5 * sfreq)], color='r', lw=2)
ax.legend([lns[0], ln1[0]], ['Envelope', 'Reconstruction'], frameon=False)
ax.set(title="Stimulus reconstruction")
ax.set_xlabel('Time (s)')
mne.viz.tight_layout()
Explanation: Visualize stimulus reconstruction
To get a sense of our model performance, we can plot the actual and predicted
stimulus envelopes side by side.
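As an optional numeric complement to the plot, the held-out reconstruction can also be checked directly (a small sketch, not part of the original example, reusing the variables defined above):
r = np.corrcoef(speech[test][sr.valid_samples_].ravel(),
                y_pred[sr.valid_samples_].ravel())[0, 1]
print('Reconstruction correlation on the last fold: r = %.3f' % r)
This value should agree with the last entry of scores computed during cross-validation.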
End of explanation
time_plot = (-.140, -.125) # To average between two timepoints.
ix_plot = np.arange(np.argmin(np.abs(time_plot[0] - times)),
np.argmin(np.abs(time_plot[1] - times)))
fig, ax = plt.subplots(1, 2)
mne.viz.plot_topomap(np.mean(mean_coefs[:, ix_plot], axis=1),
pos=info, axes=ax[0], show=False,
vmin=-max_coef, vmax=max_coef)
ax[0].set(title="Model coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.plot_topomap(np.mean(mean_patterns[:, ix_plot], axis=1),
pos=info, axes=ax[1],
show=False, vmin=-max_patterns, vmax=max_patterns)
ax[1].set(title="Inverse-transformed coefficients\nbetween delays %s and %s"
% (time_plot[0], time_plot[1]))
mne.viz.tight_layout()
plt.show()
Explanation: Investigate model coefficients
Finally, we will look at how the decoding model coefficients are distributed
across the scalp. We will attempt to recreate figure 5 from
:footcite:CrosseEtAl2016. The
decoding model weights reflect the channels that contribute most toward
reconstructing the stimulus signal, but are not directly interpretable in a
neurophysiological sense. Here we also look at the coefficients obtained
via an inversion procedure :footcite:HaufeEtAl2014, which have a more
straightforward
interpretation as their value (and sign) directly relates to the stimulus
signal's strength (and effect direction).
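For reference, the inversion has the general form $A = \Sigma_X\, W\, \Sigma_{\hat{s}}^{-1}$ (a sketch of the relationship from :footcite:HaufeEtAl2014, where $W$ are the decoding weights, $\Sigma_X$ the covariance of the EEG features and $\Sigma_{\hat{s}}$ the covariance of the reconstructed stimulus); ReceptiveField computes this when patterns=True is set, as done above.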
End of explanation |
4,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons Callbacks
Step2: Import and normalize the data
Step3: Build a simple MNIST model
Step4: Simple TimeStopping usage | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import tensorflow_addons as tfa
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
Explanation: TensorFlow Addons Callbacks: TimeStopping
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/time_stopping"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/addons/tutorials/time_stopping.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/addons/tutorials/time_stopping.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/addons/tutorials/time_stopping.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Overview
This notebook demonstrates how to use the TimeStopping callback in TensorFlow Addons.
Setup
End of explanation
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# normalize data
x_train, x_test = x_train / 255.0, x_test / 255.0
Explanation: Import and normalize the data
End of explanation
# build the model using the Sequential API
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: Build a simple MNIST model
End of explanation
# initialize TimeStopping callback
time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5, verbose=1)
# train the model with the TimeStopping callback; training stops once the
# 5 second wall-clock budget set above is exceeded, even if the requested
# number of epochs has not been reached.
model.fit(x_train, y_train,
batch_size=64,
epochs=100,
callbacks=[time_stopping_callback],
validation_data=(x_test, y_test))
Explanation: Simple TimeStopping usage
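A hypothetical variant (not from the original tutorial): TimeStopping composes with other Keras callbacks, for example combining a fixed wall-clock budget with early stopping on validation loss.
import tensorflow as tf
callbacks = [
    tfa.callbacks.TimeStopping(seconds=600, verbose=1),  # 10 minute budget
    tf.keras.callbacks.EarlyStopping(patience=3, restore_best_weights=True),
]
model.fit(x_train, y_train, epochs=100, callbacks=callbacks,
          validation_data=(x_test, y_test))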
End of explanation |
4,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises Electric Machinery Fundamentals
Chapter 5
Problem 5-10
Step1: Description
A synchronous machine has a synchronous reactance of $1.0\,\Omega$ per phase and an armature resistance of $0.1\,\Omega$ per phase.
If $\vec{E}_A = 460\,V\angle-10°$ and $\vec{V}_\phi = 480\,V\angle0°$, is this machine a motor or a generator?
How much power P is this machine consuming from or supplying to the electrical system?
How much reactive power Q is this machine consuming from or supplying to the electrical system?
Step2: SOLUTION
This machine is a motor, consuming power from the power system, because $\vec{E}_A$ is lagging $\vec{V}_\phi$.
It is also consuming reactive power, because $E_A \cos{\delta} < V_\phi$. The current flowing in this machine is
Step3: Therefore the real power consumed by this motor is
Step4: and the reactive power consumed by this motor is | Python Code:
%pylab notebook
Explanation: Exercises Electric Machinery Fundamentals
Chapter 5
Problem 5-10
End of explanation
Ea = 460 # [V]
EA_angle = -10/180*pi # [rad]
EA = Ea * (cos(EA_angle) + 1j*sin(EA_angle))
Vphi = 480 # [V]
VPhi_angle = 0/180*pi # [rad]
VPhi = Vphi*exp(1j*VPhi_angle)
Ra = 0.1 # [Ohm]
Xs = 1.0 # [Ohm]
Explanation: Description
A synchronous machine has a synchronous reactance of $1.0\,\Omega$ per phase and an armature resistance of $0.1\,\Omega$ per phase.
If $\vec{E}_A = 460\,V\angle-10°$ and $\vec{V}_\phi = 480\,V\angle0°$, is this machine a motor or a generator?
How much power P is this machine consuming from or supplying to the electrical system?
How much reactive power Q is this machine consuming from or supplying to the electrical system?
End of explanation
IA = (VPhi - EA) / (Ra + Xs*1j)
IA_angle = arctan(IA.imag/IA.real)
print('IA = {:.1f} A ∠ {:.2f}°'.format(abs(IA), IA_angle/pi*180))
Explanation: SOLUTION
This machine is a motor, consuming power from the power system, because $\vec{E}_A$ is lagging $\vec{V}_\phi$.
It is also consuming reactive power, because $E_A \cos{\delta} < V_\phi$. The current flowing in this machine is:
$$\vec{I}_A = \frac{\vec{V}_\phi - \vec{E}_A}{R_A + jX_s}$$
End of explanation
theta = abs(IA_angle)
P = 3* abs(VPhi)* abs(IA)* cos(theta)
print('''
P = {:.1f} kW
============'''.format(P/1e3))
Explanation: Therefore the real power consumed by this motor is:
$$P = 3V_\phi I_A \cos{\theta}$$
End of explanation
Q = 3* abs(VPhi)* abs(IA)* sin(theta)
print('''
Q = {:.1f} kvar
============='''.format(Q/1e3))
Explanation: and the reactive power consumed by this motor is:
$$Q = 3V_\phi I_A \sin{\theta}$$
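As an optional cross-check (not part of the original solution), the complex power $S = 3\,\vec{V}_\phi \vec{I}_A^* = P + jQ$ gives both results at once under the consumption convention used here:
S = 3 * VPhi * conj(IA)  # complex power drawn from the system
print('P = {:.1f} kW , Q = {:.1f} kvar'.format(S.real/1e3, S.imag/1e3))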
End of explanation |
4,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Map
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right
Step2: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Map in multiple ways to transform every element in the PCollection.
Map accepts a function that returns a single element for every input element in the PCollection.
Example 1
Step3: <table align="left" style="margin-right
Step4: <table align="left" style="margin-right
Step5: <table align="left" style="margin-right
Step6: <table align="left" style="margin-right
Step7: <table align="left" style="margin-right
Step8: <table align="left" style="margin-right
Step9: <table align="left" style="margin-right | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License")
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
Explanation: <a href="https://colab.research.google.com/github/apache/beam/blob/master//Users/dcavazos/src/beam/examples/notebooks/documentation/transforms/python/elementwise/map-py.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"/></a>
<table align="left"><td><a target="_blank" href="https://beam.apache.org/documentation/transforms/python/elementwise/map"><img src="https://beam.apache.org/images/logos/full-color/name-bottom/beam-logo-full-color-name-bottom-100.png" width="32" height="32" />View the docs</a></td></table>
End of explanation
!pip install --quiet -U apache-beam
Explanation: Map
<script type="text/javascript">
localStorage.setItem('language', 'language-py')
</script>
<table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://beam.apache.org/releases/pydoc/current/apache_beam.transforms.core.html#apache_beam.transforms.core.Map"><img src="https://beam.apache.org/images/logos/sdks/python.png" width="32px" height="32px" alt="Pydoc"/> Pydoc</a>
</td>
</table>
<br/><br/><br/>
Applies a simple 1-to-1 mapping function over each element in the collection.
Setup
To run a code cell, you can click the Run cell button at the top left of the cell,
or select it and press Shift+Enter.
Try modifying a code cell and re-running it to see what happens.
To learn more about Colab, see
Welcome to Colaboratory!.
First, let's install the apache-beam module.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
' 🍓Strawberry \n',
' 🥕Carrot \n',
' 🍆Eggplant \n',
' 🍅Tomato \n',
' 🥔Potato \n',
])
| 'Strip' >> beam.Map(str.strip)
| beam.Map(print)
)
Explanation: Examples
In the following examples, we create a pipeline with a PCollection of produce with their icon, name, and duration.
Then, we apply Map in multiple ways to transform every element in the PCollection.
Map accepts a function that returns a single element for every input element in the PCollection.
Example 1: Map with a predefined function
We use the function str.strip which takes a single str element and outputs a str.
It strips the input element's whitespaces, including newlines and tabs.
End of explanation
import apache_beam as beam
def strip_header_and_newline(text):
return text.strip('# \n')
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(strip_header_and_newline)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 2: Map with a function
We define a function strip_header_and_newline which strips any '#', ' ', and '\n' characters from each element.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(lambda text: text.strip('# \n'))
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 3: Map with a lambda function
We can also use lambda functions to simplify Example 2.
End of explanation
import apache_beam as beam
def strip(text, chars=None):
return text.strip(chars)
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(strip, chars='# \n')
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 4: Map with multiple arguments
You can pass functions with multiple arguments to Map.
They are passed as additional positional arguments or keyword arguments to the function.
In this example, strip takes text and chars as arguments.
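The extra argument can equally be passed positionally; a minimal sketch, equivalent to the keyword form used above:
import apache_beam as beam

def strip(text, chars=None):
  return text.strip(chars)

with beam.Pipeline() as pipeline:
  plants = (
      pipeline
      | 'Gardening plants' >> beam.Create(['# 🍓Strawberry\n', '# 🥕Carrot\n'])
      | 'Strip header' >> beam.Map(strip, '# \n')  # positional extra argument
      | beam.Map(print)
  )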
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
('🍓', 'Strawberry'),
('🥕', 'Carrot'),
('🍆', 'Eggplant'),
('🍅', 'Tomato'),
('🥔', 'Potato'),
])
| 'Format' >> beam.MapTuple(
lambda icon, plant: '{}{}'.format(icon, plant))
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 5: MapTuple for key-value pairs
If your PCollection consists of (key, value) pairs,
you can use MapTuple to unpack them into different function arguments.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
chars = pipeline | 'Create chars' >> beam.Create(['# \n'])
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(
lambda text, chars: text.strip(chars),
chars=beam.pvalue.AsSingleton(chars),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 6: Map with side inputs as singletons
If the PCollection has a single value, such as the average from another computation,
passing the PCollection as a singleton accesses that value.
In this example, we pass a PCollection the value '# \n' as a singleton.
We then use that value as the characters for the str.strip method.
End of explanation
import apache_beam as beam
with beam.Pipeline() as pipeline:
chars = pipeline | 'Create chars' >> beam.Create(['#', ' ', '\n'])
plants = (
pipeline
| 'Gardening plants' >> beam.Create([
'# 🍓Strawberry\n',
'# 🥕Carrot\n',
'# 🍆Eggplant\n',
'# 🍅Tomato\n',
'# 🥔Potato\n',
])
| 'Strip header' >> beam.Map(
lambda text, chars: text.strip(''.join(chars)),
chars=beam.pvalue.AsIter(chars),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Example 7: Map with side inputs as iterators
If the PCollection has multiple values, pass the PCollection as an iterator.
This accesses elements lazily as they are needed,
so it is possible to iterate over large PCollections that won't fit into memory.
End of explanation
import apache_beam as beam
def replace_duration(plant, durations):
plant['duration'] = durations[plant['duration']]
return plant
with beam.Pipeline() as pipeline:
durations = pipeline | 'Durations' >> beam.Create([
(0, 'annual'),
(1, 'biennial'),
(2, 'perennial'),
])
plant_details = (
pipeline
| 'Gardening plants' >> beam.Create([
{'icon': '🍓', 'name': 'Strawberry', 'duration': 2},
{'icon': '🥕', 'name': 'Carrot', 'duration': 1},
{'icon': '🍆', 'name': 'Eggplant', 'duration': 2},
{'icon': '🍅', 'name': 'Tomato', 'duration': 0},
{'icon': '🥔', 'name': 'Potato', 'duration': 2},
])
| 'Replace duration' >> beam.Map(
replace_duration,
durations=beam.pvalue.AsDict(durations),
)
| beam.Map(print)
)
Explanation: <table align="left" style="margin-right:1em">
<td>
<a class="button" target="_blank" href="https://github.com/apache/beam/blob/master/sdks/python/apache_beam/examples/snippets/transforms/elementwise/map.py"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" width="32px" height="32px" alt="View source code"/> View source code</a>
</td>
</table>
<br/><br/><br/>
Note: You can pass the PCollection as a list with beam.pvalue.AsList(pcollection),
but this requires that all the elements fit into memory.
Example 8: Map with side inputs as dictionaries
If a PCollection is small enough to fit into memory, then that PCollection can be passed as a dictionary.
Each element must be a (key, value) pair.
Note that all the elements of the PCollection must fit into memory for this.
If the PCollection won't fit into memory, use beam.pvalue.AsIter(pcollection) instead.
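For reference, a sketch of the AsList variant mentioned in the note above; the side input is materialised as an in-memory list, so all of its elements must fit into memory:
import apache_beam as beam

with beam.Pipeline() as pipeline:
  chars = pipeline | 'Create chars' >> beam.Create(['#', ' ', '\n'])
  plants = (
      pipeline
      | 'Gardening plants' >> beam.Create(['# 🍓Strawberry\n', '# 🥕Carrot\n'])
      | 'Strip header' >> beam.Map(
          lambda text, chars: text.strip(''.join(chars)),
          chars=beam.pvalue.AsList(chars),
      )
      | beam.Map(print)
  )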
End of explanation |
4,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
North Atlantic Winter Weather Regimes from a Self-Organizing Map Perspective
The four weather regimes typically found over the North Atlantic in winter are identified as
* NAO+ (positive NAO)
* NAO- (negative NAO)
* Blocking
* Atlantic Ridge.
Each weather regime is associated with different climatic conditions over Europe and North America (Cassou, 2008). In particular, the negative NAO and the blocking regimes are generally associated with cold extreme temperatures over Europe and the eastern United States (US) (Yiou and Nogaj, 2004). As we know, the North Atlantic winter weather(DJF) regimes could be computed using a k-mean clustering algorithm applied to the monthly anomalies of the 500 hPa geopotential height (Z500) on the NCEP/NCAR reanalysis. The monthly anomalies are with respect to the 1979–2010 climatology and are computed over the [90W/60E; 20/80N] domain. However, it is not the target of this notebook.
Here we will apply another machine learning algorithm of Self-Organizing Maps(SOMs) to study the transitions among these typical weather regimes. SOMs are a nonlinear tool to optimally extract a user-specified number of patterns or icons from an input data set and to uniquely relate any input data field to an icon, allowing analyses of occurrence frequencies and transitions (Reusch et al., 2007). SOM-based analysis differs from more traditional linear analysis in a number of ways that provide additional power over nonlinear data sets. SOM-based analysis thus complements linear techniques without replacing them.
1. Load all needed libraries
Step1: 2. Load data
Step2: 3. Perform SOMs clustering to identify weather regimes
It is worth noting that sklearn.cluster.KMeans only supports 2-D input (samples x features), so we have to convert the 3D (time|lat|lon) data into 2D (time|lat*lon) using numpy.reshape. When visualizing the final identified cluster_centers (i.e., the weather regimes), we have to convert them back from 1D to the 2D spatial format (lat|lon).
Here we use 5x5 = 25 maps
Step3: 4. Visualize
4.1 weather regimes
Step4: 4.2 Check the counts of each weather regime using a hits map
Step5: 4.3 K-Means clustering on weather regimes
As we know, there are four typical weather regimes over the North Atlantic, so we use 4 clusters. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import xarray as xr
import cartopy.crs as ccrs
from sompy.sompy import SOMFactory
Explanation: North Atlantic Winter Weather Regimes from a Self-Organizing Map Perspective
The four weather regimes typically found over the North Atlantic in winter are identified as
* NAO+ (positive NAO)
* NAO- (negative NAO)
* Blocking
* Atlantic Ridge.
Each weather regime is associated with different climatic conditions over Europe and North America (Cassou, 2008). In particular, the negative NAO and the blocking regimes are generally associated with cold extreme temperatures over Europe and the eastern United States (US) (Yiou and Nogaj, 2004). As we know, the North Atlantic winter weather(DJF) regimes could be computed using a k-mean clustering algorithm applied to the monthly anomalies of the 500 hPa geopotential height (Z500) on the NCEP/NCAR reanalysis. The monthly anomalies are with respect to the 1979–2010 climatology and are computed over the [90W/60E; 20/80N] domain. However, it is not the target of this notebook.
Here we will apply another machine learning algorithm of Self-Organizing Maps(SOMs) to study the transitions among these typical weather regimes. SOMs are a nonlinear tool to optimally extract a user-specified number of patterns or icons from an input data set and to uniquely relate any input data field to an icon, allowing analyses of occurrence frequencies and transitions (Reusch et al., 2007). SOM-based analysis differs from more traditional linear analysis in a number of ways that provide additional power over nonlinear data sets. SOM-based analysis thus complements linear techniques without replacing them.
1. Load all needed libraries
End of explanation
z500 = xr.open_dataset('data/z500.DJF.anom.1979.2010.nc', decode_times=False)
print(z500)
da = z500.sel(P=500).phi.load()
Explanation: 2. Load data
End of explanation
data = da.values
nt,ny,nx = data.shape
data = np.reshape(data, [nt, ny*nx], order='F')
sm = SOMFactory().build(data, mapsize=(5,5), normalization=None, initialization='pca')
sm.train(n_job=-1, verbose=False, train_rough_len=20, train_finetune_len=10)
Explanation: 3. Perform SOMs clustering to identify weather regimes
It is worth noting that sklearn.cluster.KMeans only supports 2-D input (samples x features), so we have to convert the 3D (time|lat|lon) data into 2D (time|lat*lon) using numpy.reshape. When visualizing the final identified cluster_centers (i.e., the weather regimes), we have to convert them back from 1D to the 2D spatial format (lat|lon).
Here we use 5x5 = 25 maps
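A minimal numpy sketch of the reshape round trip described above (illustrative shapes only, independent of the notebook's data):
import numpy as np
nt, ny, nx = 4, 3, 5
cube = np.random.rand(nt, ny, nx)             # (time, lat, lon)
flat = cube.reshape(nt, ny * nx, order='F')   # (time, lat*lon), as fed to the SOM
one_map = flat[0].reshape(ny, nx, order='F')  # back to (lat, lon) for plotting
assert np.allclose(one_map, cube[0])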
End of explanation
codebook = sm.codebook.matrix
print(codebook.shape)
x,y = np.meshgrid(da.X, da.Y)
proj = ccrs.Orthographic(0,45)
fig, axes = plt.subplots(5,5, figsize=(15,15), subplot_kw=dict(projection=proj))
for i in range(sm.codebook.nnodes):
onecen = codebook[i,:].reshape(ny,nx, order='F')
cs = axes.flat[i].contourf(x, y, onecen,
levels=np.arange(-150, 151, 30),
transform=ccrs.PlateCarree(),
cmap='RdBu_r')
cb=fig.colorbar(cs, ax=axes.flat[i], shrink=0.8, aspect=20)
cb.set_label('[unit: m]',labelpad=-7)
axes.flat[i].coastlines()
axes.flat[i].set_global()
Explanation: 4. Visualize
4.1 weather regimes
End of explanation
from sompy.visualization.bmuhits import BmuHitsView
vhts = BmuHitsView(5, 5, "Amount of each regime",text_size=12)
vhts.show(sm, anotate=True, onlyzeros=False, labelsize=12, cmap="RdBu_r", logaritmic=False)
Explanation: 4.2 Check the counts of each weather regime using a hits map
End of explanation
from sompy.visualization.hitmap import HitMapView
sm.cluster(4)
hits = HitMapView(8,8,"Weather regimes clustering",text_size=12)
a = hits.show(sm)
Explanation: 4.3 K-Means clustering on weather regimes
As we know, there are four typical weather regimes over the North Atlantic, so we use 4 clusters.
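For reference, a rough sketch of what the 4-cluster step amounts to, assuming sompy clusters the SOM codebook vectors with k-means internally (an assumption about its implementation):
from sklearn.cluster import KMeans
labels = KMeans(n_clusters=4, n_init=10).fit_predict(codebook)  # one label per SOM node
print(labels.reshape(5, 5))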
End of explanation |
4,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: NERC
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
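As a purely illustrative aside (not part of the generated template): for an ENUM property with cardinality 1.N such as 13.4 above, each applicable choice would presumably be recorded with its own DOC.set_value call, for example
# Hypothetical illustration only - replace with the processes your model actually includes
# DOC.set_value("Ridging")
# DOC.set_value("Rafting")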
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
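Purely as an illustration of the expected form for this optional FLOAT property (the number below is hypothetical):
# DOC.set_value(4.0)   # e.g. a fixed sea ice salinity of 4.0 PSU - replace with your model's value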
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
4,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
MT3D-USGS Example
Demonstrates functionality of the flopy MT3D-USGS module using the 'Crank-Nicolson' example distributed with MT3D-USGS.
Problem description
Step1: Set up model discretization
Step2: Instantiate output control (oc) package for MODFLOW-NWT
Step3: Instantiate solver package for MODFLOW-NWT
Step4: Instantiate discretization (DIS) package for MODFLOW-NWT
Step5: Instantiate upstream weighting (UPW) flow package for MODFLOW-NWT
Step6: Instantiate basic (BAS or BA6) package for MODFLOW-NWT
Step7: Instantiate streamflow routing (SFR2) package for MODFLOW-NWT
Step8: Instantiate gage package for use with MODFLOW-NWT package
Step9: Instantiate Link-MT3DMS (LMT) package for MODFLOW-NWT
Step10: Write the MODFLOW input files
Step11: Now draft up MT3D-USGS input files.
Step12: Instantiate basic transport (BTN) package for MT3D-USGS
Step13: Instantiate advection (ADV) package for MT3D-USGS
Step14: Instantiate generalized conjugate gradient solver (GCG) package for MT3D-USGS
Step15: Instantiate source-sink mixing (SSM) package for MT3D-USGS
Step16: Instantiate streamflow transport (SFT) package for MT3D-USGS
Step17: Write the MT3D-USGS input files
Step18: Compare mt3d-usgs results to an analytical solution
Step19: Load output from SFT as well as from the OTIS solution
Step20: Set up some plotting functions
Step21: Compare output | Python Code:
%matplotlib inline
import sys
import os
import platform
import string
from io import StringIO, BytesIO
import math
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
modelpth = os.path.join('data')
modelname = 'CrnkNic'
mfexe = 'mfnwt'
mtexe = 'mt3dusgs'
if platform.system() == 'Windows':
mfexe += '.exe'
mtexe += '.exe'
# Make sure modelpth directory exists
if not os.path.exists(modelpth):
os.mkdir(modelpth)
# Instantiate MODFLOW object in flopy
mf = flopy.modflow.Modflow(modelname=modelname, exe_name=mfexe, model_ws=modelpth, version='mfnwt')
Explanation: FloPy
MT3D-USGS Example
Demonstrates functionality of the flopy MT3D-USGS module using the 'Crank-Nicolson' example distributed with MT3D-USGS.
Problem description:
Grid dimensions: 1 Layer, 3 Rows, 650 Columns
Stress periods: 3
Units are in seconds and meters
Flow package: UPW
Stress packages: SFR, GHB
Solvers: NWT, GCG
End of explanation
Lx = 650.0
Ly = 15
nrow = 3
ncol = 650
nlay = 1
delr = Lx / ncol
delc = Ly / nrow
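# With Lx = 650 m spread over 650 columns and Ly = 15 m over 3 rows,
# the cell sizes work out to delr = 1.0 m and delc = 5.0 m.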
xmax = ncol * delr
ymax = nrow * delc
X, Y = np.meshgrid(np.linspace(delr / 2, xmax - delr / 2, ncol),
np.linspace(ymax - delc / 2, 0 + delc / 2, nrow))
Explanation: Set up model discretization
End of explanation
# Output Control: Create a flopy output control object
oc = flopy.modflow.ModflowOc(mf)
Explanation: Instantiate output control (oc) package for MODFLOW-NWT
End of explanation
# Newton-Raphson Solver: Create a flopy nwt package object
headtol = 1.0E-4
fluxtol = 5
maxiterout = 5000
thickfact = 1E-06
linmeth = 2
iprnwt = 1
ibotav = 1
nwt = flopy.modflow.ModflowNwt(mf, headtol=headtol, fluxtol=fluxtol, maxiterout=maxiterout,
thickfact=thickfact, linmeth=linmeth, iprnwt=iprnwt, ibotav=ibotav,
options='SIMPLE')
Explanation: Instantiate solver package for MODFLOW-NWT
End of explanation
# The equations for calculating the ground elevation in the 1 Layer CrnkNic model.
# Although Y isn't used, keeping it here for symmetry
def topElev(X, Y):
return 100. - (np.ceil(X)-1) * 0.03
grndElev = topElev(X, Y)
bedRockElev = grndElev - 3.
Steady = [False, False, False]
nstp = [1, 1, 1]
tsmult = [1., 1., 1.]
# Stress periods extend from (12AM-8:29:59AM); (8:30AM-11:30:59AM); (11:31AM-23:59:59PM)
perlen = [30600, 10800, 45000]
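# Sanity check on the stress-period lengths (model time unit is seconds):
# 30600 s = 8.5 h (12:00AM-8:30AM), 10800 s = 3 h (8:30AM-11:30AM), 45000 s = 12.5 h (11:30AM-12:00AM),
# for a total of 86400 s = 1 full day.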
# Create the discretization object
# itmuni = 1 (seconds); lenuni = 2 (meters)
dis = flopy.modflow.ModflowDis(mf, nlay, nrow, ncol, nper=3, delr=delr, delc=delc,
top=grndElev, botm=bedRockElev, laycbd=0, itmuni=1, lenuni=2,
steady=Steady, nstp=nstp, tsmult=tsmult, perlen=perlen)
Explanation: Instantiate discretization (DIS) package for MODFLOW-NWT
End of explanation
# UPW parameters
# UPW must be instantiated after DIS. Otherwise, during the mf.write_input() procedures,
# flopy will crash.
laytyp = 1
layavg = 2
chani = 1.0
layvka = 1
iphdry = 0
hk = 0.1
hani = 1
vka = 1.
ss = 0.000001
sy = 0.20
hdry = -888
upw = flopy.modflow.ModflowUpw(mf, laytyp=laytyp, layavg=layavg, chani=chani, layvka=layvka,
ipakcb=53, hdry=hdry, iphdry=iphdry, hk=hk, hani=hani,
vka=vka, ss=ss, sy=sy)
Explanation: Instantiate upstream weighting (UPW) flow package for MODFLOW-NWT
End of explanation
# Create a flopy basic package object
def calc_strtElev(X, Y):
return 99.5 - (np.ceil(X)-1) * 0.0001
ibound = np.ones((nlay, nrow, ncol))
ibound[:,0,:] *= -1
ibound[:,2,:] *= -1
strtElev = calc_strtElev(X, Y)
bas = flopy.modflow.ModflowBas(mf, ibound=ibound, hnoflo=hdry, strt=strtElev)
Explanation: Instantiate basic (BAS or BA6) package for MODFLOW-NWT
End of explanation
# Streamflow Routing Package: Try and set up with minimal options in use
# 9 11 IFACE # Data Set 1: ISTCB1 ISTCB2
nstrm = ncol
nss = 6
const = 1.0
dleak = 0.0001
istcb1 = -10
istcb2 = 11
isfropt = 1
segment_data = None
channel_geometry_data = None
channel_flow_data = None
dataset_5 = None
reachinput = True
# The next couple of lines set up the reach_data for the 3-row by 650-column model used here.
# Will need to adjust the row based on which grid discretization we're doing.
# Ensure that the stream goes down one of the middle rows of the model.
strmBed_Elev = 98.75 - (np.ceil(X[1,:])-1) * 0.0001
s1 = 'k,i,j,iseg,ireach,rchlen,strtop,slope,strthick,strhc1\n'
iseg = 0
irch = 0
for y in range(ncol):
if y <= 37:
if iseg == 0:
irch = 1
else:
irch += 1
iseg = 1
strhc1 = 1.0e-10
elif y <= 104:
if iseg == 1:
irch = 1
else:
irch += 1
iseg = 2
strhc1 = 1.0e-10
elif y <= 280:
if iseg == 2:
irch = 1
else:
irch += 1
iseg = 3
strhc1 = 2.946219199e-6
elif y <= 432:
if iseg == 3:
irch = 1
else:
irch += 1
iseg = 4
strhc1 = 1.375079882e-6
elif y <= 618:
if iseg == 4:
irch = 1
else:
irch += 1
iseg = 5
strhc1 = 1.764700062e-6
else:
if iseg == 5:
irch = 1
else:
irch += 1
iseg = 6
strhc1 = 1e-10
# remember that lay, row, col need to be zero-based and are adjusted accordingly by flopy
# layer + row + col + iseg + irch + rchlen + strtop + slope + strthick + strmbed K
s1 += '0,{}'.format(1)
s1 += ',{}'.format(y)
s1 += ',{}'.format(iseg)
s1 += ',{}'.format(irch)
s1 += ',{}'.format(delr)
s1 += ',{}'.format(strmBed_Elev[y])
s1 += ',{}'.format(0.0001)
s1 += ',{}'.format(0.50)
s1 += ',{}\n'.format(strhc1)
if not os.path.exists('temp'):
os.mkdir('temp')
fpth = os.path.join('temp', 's1.csv')
f = open(fpth, 'w')
f.write(s1)
f.close()
dtype = [('k', '<i4'), ('i', '<i4'), ('j', '<i4'), ('iseg', '<i4'),
('ireach', '<f8'), ('rchlen', '<f8'), ('strtop', '<f8'),
('slope', '<f8'), ('strthick', '<f8'), ('strhc1', '<f8')]
if (sys.version_info > (3, 0)):
f = open(fpth, 'rb')
else:
f = open(fpth, 'r')
reach_data = np.genfromtxt(f, delimiter=',', names=True, dtype=dtype)
f.close()
s2 = "nseg,icalc,outseg,iupseg,nstrpts, flow,runoff,etsw,pptsw, roughch, roughbk,cdpth,fdpth,awdth,bwdth,width1,width2\n \
1, 1, 2, 0, 0, 0.0125, 0.0, 0.0, 0.0, 0.082078856000, 0.082078856000, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
2, 1, 3, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.143806300000, 0.143806300000, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
3, 1, 4, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.104569661821, 0.104569661821, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
4, 1, 5, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.126990045841, 0.126990045841, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
5, 1, 6, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.183322283828, 0.183322283828, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5\n \
6, 1, 0, 0, 0, 0.0, 0.0, 0.0, 0.0, 0.183322283828, 0.183322283828, 0.0, 0.0, 0.0, 0.0, 1.5, 1.5"
fpth = os.path.join('temp', 's2.csv')
f = open(fpth, 'w')
f.write(s2)
f.close()
if (sys.version_info > (3, 0)):
f = open(fpth, 'rb')
else:
f = open(fpth, 'r')
segment_data = np.genfromtxt(f, delimiter=',',names=True)
f.close()
# Be sure to convert segment_data to a dictionary keyed on stress period.
segment_data = np.atleast_1d(segment_data)
segment_data = {0: segment_data,
1: segment_data,
2: segment_data}
# There are 3 stress periods
dataset_5 = {0: [nss, 0, 0],
1: [nss, 0, 0],
2: [nss, 0, 0]}
sfr = flopy.modflow.ModflowSfr2(mf, nstrm=nstrm, nss=nss, const=const, dleak=dleak, isfropt=isfropt, istcb2=0,
reachinput=True, reach_data=reach_data, dataset_5=dataset_5,
segment_data=segment_data, channel_geometry_data=channel_geometry_data)
Explanation: Instantiate streamflow routing (SFR2) package for MODFLOW-NWT
End of explanation
gages = [[1,38,61,1],[2,67,62,1], [3,176,63,1], [4,152,64,1], [5,186,65,1], [6,31,66,1]]
files = ['CrnkNic.gage','CrnkNic.gag1','CrnkNic.gag2','CrnkNic.gag3','CrnkNic.gag4','CrnkNic.gag5',
'CrnkNic.gag6']
gage = flopy.modflow.ModflowGage(mf, numgage=6, gage_data=gages, filenames = files)
Explanation: Instantiate gage package for use with MODFLOW-NWT package
End of explanation
lmt = flopy.modflow.ModflowLmt(mf, output_file_name='CrnkNic.ftl', output_file_header='extended',
output_file_format='formatted', package_flows = ['sfr'])
Explanation: Instantiate Link-MT3DMS (LMT) package for MODFLOW-NWT
End of explanation
pth = os.getcwd()
print(pth)
mf.write_input()
# run the model
mf.run_model()
Explanation: Write the MODFLOW input files
End of explanation
# Instantiate MT3D-USGS object in flopy
mt = flopy.mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=modelpth,
version='mt3d-usgs', namefile_ext='mtnam', exe_name=mtexe,
ftlfilename='CrnkNic.ftl', ftlfree=True)
Explanation: Now draft up MT3D-USGS input files.
End of explanation
btn = flopy.mt3d.Mt3dBtn(mt, sconc=3.7, ncomp=1, prsity=0.2, cinact=-1.0,
thkmin=0.001, nprs=-1, nprobs=10, chkmas=True,
nprmas=10, dt0=180, mxstrn=2500)
Explanation: Instantiate basic transport (BTN) package for MT3D-USGS
End of explanation
adv = flopy.mt3d.Mt3dAdv(mt, mixelm=0, percel=1.00, mxpart=5000, nadvfd=1)
Explanation: Instantiate advection (ADV) package for MT3D-USGS
End of explanation
gcg = flopy.mt3d.Mt3dGcg(mt, mxiter=10, iter1=50, isolve=3, ncrs=0,
accl=1, cclose=1e-6, iprgcg=1)
Explanation: Instantiate generalized conjugate gradient solver (GCG) package for MT3D-USGS
End of explanation
# For SSM, need to set the constant head boundary conditions to the ambient concentration
# for all 1,300 constant head boundary cells.
itype = flopy.mt3d.Mt3dSsm.itype_dict()
ssm_data = {}
ssm_data[0] = [(0, 0, 0, 3.7, itype['CHD'])]
ssm_data[0].append((0, 2, 0, 3.7, itype['CHD']))
for i in [0,2]:
for j in range(1, ncol):
ssm_data[0].append((0, i, j, 3.7, itype['CHD']))
ssm = flopy.mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)
Explanation: Instantiate source-sink mixing (SSM) package for MT3D-USGS
End of explanation
dispsf = []
for y in range(ncol):
if y <= 37:
dispsf.append(0.12)
elif y <= 104:
dispsf.append(0.15)
elif y <= 280:
dispsf.append(0.24)
elif y <= 432:
dispsf.append(0.31)
elif y <= 618:
dispsf.append(0.40)
else:
dispsf.append(0.40)
# Enter a list of the observation points
# Each observation is taken as the last reach within the first 5 segments
seg_len = np.unique(reach_data['iseg'], return_counts=True)
obs_sf = np.cumsum(seg_len[1])
obs_sf = obs_sf.tolist()
# The last reach is not an observation point, therefore drop
obs_sf.pop(-1)
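# With the segmentation defined above, obs_sf now holds the last reach of segments 1-5,
# i.e. [38, 105, 281, 433, 619] - the same nodes that are read back below with nd=38, 105, 281, 433 and 619.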
# In the first and last stress periods, concentration at the headwater is 3.7
sf_stress_period_data = {0: [0, 0, 3.7],
1: [0, 0, 11.4],
2: [0, 0, 3.7]}
gage_output = [None, None, 'CrnkNic.sftobs']
sft = flopy.mt3d.Mt3dSft(mt, nsfinit=650, mxsfbc=650, icbcsf=81, ioutobs=82,
isfsolv=1, cclosesf=1.0E-6, mxitersf=10, crntsf=1.0, iprtxmd=0,
coldsf=3.7, dispsf=dispsf, nobssf=5, obs_sf=obs_sf,
sf_stress_period_data = sf_stress_period_data,
filenames=gage_output)
sft.dispsf.format.fortran = "(10E15.6)"
Explanation: Instantiate streamflow transport (SFT) package for MT3D-USGS
End of explanation
mt.write_input()
# run the model
mt.run_model()
Explanation: Write the MT3D-USGS input files
End of explanation
# Define a function to read SFT output file
def load_ts_from_SFT_output(fname, nd=1):
f=open(fname, 'r')
iline=0
lst = []
for line in f:
if line.strip().split()[0].replace(".", "", 1).isdigit():
l = line.strip().split()
t = float(l[0])
loc = int(l[1])
conc = float(l[2])
if(loc == nd):
lst.append( [t,conc] )
ts = np.array(lst)
f.close()
return ts
# Also define a function to read OTIS output file
def load_ts_from_otis(fname, iobs=1):
f = open(fname,'r')
iline = 0
lst = []
for line in f:
l = line.strip().split()
t = float(l[0])
val = float(l[iobs])
lst.append( [t, val] )
ts = np.array(lst)
f.close()
return ts
Explanation: Compare mt3d-usgs results to an analytical solution
End of explanation
# Model output
fname_SFTout = os.path.join('data', 'CrnkNic.sftcobs.out')
# Loading MT3D-USGS output
ts1_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=38)
ts2_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=105)
ts3_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=281)
ts4_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=433)
ts5_mt3d = load_ts_from_SFT_output(fname_SFTout, nd=619)
# OTIS results located here
fname_OTIS = os.path.join('..', 'data', 'mt3d_test', 'mfnwt_mt3dusgs', 'sft_crnkNic', 'OTIS_solution.out')
# Loading OTIS output
ts1_Otis = load_ts_from_otis(fname_OTIS, 1)
ts2_Otis = load_ts_from_otis(fname_OTIS, 2)
ts3_Otis = load_ts_from_otis(fname_OTIS, 3)
ts4_Otis = load_ts_from_otis(fname_OTIS, 4)
ts5_Otis = load_ts_from_otis(fname_OTIS, 5)
Explanation: Load output from SFT as well as from the OTIS solution
End of explanation
def set_plot_params():
import matplotlib as mpl
from matplotlib.font_manager import FontProperties
mpl.rcParams['font.sans-serif'] = 'Arial'
mpl.rcParams['font.serif'] = 'Times'
mpl.rcParams['font.cursive'] = 'Zapf Chancery'
mpl.rcParams['font.fantasy'] = 'Comic Sans MS'
mpl.rcParams['font.monospace'] = 'Courier New'
mpl.rcParams['pdf.compression'] = 0
mpl.rcParams['pdf.fonttype'] = 42
ticksize = 10
mpl.rcParams['legend.fontsize'] = 7
mpl.rcParams['axes.labelsize'] = 12
mpl.rcParams['xtick.labelsize'] = ticksize
mpl.rcParams['ytick.labelsize'] = ticksize
return
def set_sizexaxis(a,fmt,sz):
success = 0
x = a.get_xticks()
# print x
xc = np.chararray(len(x), itemsize=16)
for i in range(0,len(x)):
text = fmt % ( x[i] )
xc[i] = string.strip(string.ljust(text,16))
# print xc
a.set_xticklabels(xc, size=sz)
success = 1
return success
def set_sizeyaxis(a,fmt,sz):
success = 0
y = a.get_yticks()
# print y
yc = np.chararray(len(y), itemsize=16)
for i in range(0,len(y)):
text = fmt % ( y[i] )
yc[i] = string.strip(string.ljust(text,16))
# print yc
a.set_yticklabels(yc, size=sz)
success = 1
return success
Explanation: Set up some plotting functions
End of explanation
#set up figure
try:
plt.close('all')
except:
pass
set_plot_params()
fig = plt.figure(figsize=(6, 4), facecolor='w')
ax = fig.add_subplot(1, 1, 1)
ax.plot(ts1_Otis[:,0], ts1_Otis[:,1], 'k-', linewidth=1.0)
ax.plot(ts2_Otis[:,0], ts2_Otis[:,1], 'b-', linewidth=1.0)
ax.plot(ts3_Otis[:,0], ts3_Otis[:,1], 'r-', linewidth=1.0)
ax.plot(ts4_Otis[:,0], ts4_Otis[:,1], 'g-', linewidth=1.0)
ax.plot(ts5_Otis[:,0], ts5_Otis[:,1], 'c-', linewidth=1.0)
ax.plot((ts1_mt3d[:,0])/3600, ts1_mt3d[:,1], 'kD', markersize=2.0, mfc='none',mec='k')
ax.plot((ts2_mt3d[:,0])/3600, ts2_mt3d[:,1], 'b*', markersize=3.0, mfc='none',mec='b')
ax.plot((ts3_mt3d[:,0])/3600, ts3_mt3d[:,1], 'r+', markersize=3.0)
ax.plot((ts4_mt3d[:,0])/3600, ts4_mt3d[:,1], 'g^', markersize=2.0, mfc='none',mec='g')
ax.plot((ts5_mt3d[:,0])/3600, ts5_mt3d[:,1], 'co', markersize=2.0, mfc='none',mec='c')
#customize plot
ax.set_xlabel('Time, hours')
ax.set_ylabel('Concentration, mg L-1')
ax.set_ylim([3.5,13])
ticksize = 10
#legend
leg = ax.legend(
(
'Otis, Site 1', 'Otis, Site 2', 'Otis, Site 3', 'Otis, Site 4', 'Otis, Site 5',
'MT3D-USGS, Site 1', 'MT3D-USGS, Site 2', 'MT3D-USGS, Site 3', 'MT3D-USGS, Site 4', 'MT3D-USGS, Site 5',
),
loc='upper right', labelspacing=0.25, columnspacing=1,
handletextpad=0.5, handlelength=2.0, numpoints=1)
leg._drawFrame = False
plt.show()
Explanation: Compare output:
End of explanation |
4,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xarray with MetPy Tutorial
xarray <http
Step1: ...and opening some sample data to work with.
Step2: While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with gridded model output. Such
model data can be obtained from a THREDDS Data Server using the siphon package
<https
Step3: This is a DataArray, which stores just a single data variable with its associated
coordinates and attributes. These individual DataArray\s are the kinds of objects that
MetPy's calculations take as input (more on that in Calculations_ section below).
If you are more interested in learning about xarray's terminology and data structures, see
the terminology section <http
Step4: When accessing multiple coordinate types simultaneously, you can use the .coordinates()
method to yield a generator for the respective coordinates
Step5: These coordinate type aliases can also be used in MetPy's wrapped .sel and .loc
for indexing and selecting on DataArray\s. For example, to access 500 hPa heights at
1800Z,
Step6: (Notice how we specified 50000 here without units...we'll go over a better alternative in
the next section on units.)
One point of warning
Step7: If your dataset doesn't have a CF-conforming grid mapping variable, you can manually specify
the CRS using the .assign_crs() method
Step8: Notice the newly added metpy_crs non-dimension coordinate. Now how can we use this in
practice? For individual DataArray\s, we can access the cartopy and pyproj objects
corresponding to this CRS
Step9: Finally, there are times when a certain horizontal coordinate type is missing from your
dataset, and you need the other, that is, you have latitude/longitude and need y/x, or vice
versa. This is where the .assign_y_x and .assign_latitude_longitude methods come in
handy. Our current GFS sample won't work to demonstrate this (since, on its
latitude-longitude grid, y is latitude and x is longitude), so for more information, take
a look at the Non-Compliant Dataset Example_ below, or view the accessor documentation.
Units
Since unit-aware calculations are a major part of the MetPy library, unit support is a major
part of MetPy's xarray integration!
One very important point of consideration is that xarray data variables (in both
Dataset\s and DataArray\s) can store both unit-aware and unit-naive array types.
Unit-naive array types will be used by default in xarray, so we need to convert to a
unit-aware type if we want to use xarray operations while preserving unit correctness. MetPy
provides the .quantify() method for this (named since we are turning the data stored
inside the xarray object into a Pint Quantity object)
Step10: Notice how the units are now represented in the data itself, rather than as a text
attribute. Now, even if we perform some kind of xarray operation (such as taking the zonal
mean), the units are preserved
Step11: However, this "quantification" is not without its consequences. By default, xarray loads its
data lazily to conserve memory usage. Unless your data is chunked into a Dask array (using
the chunks argument), this .quantify() method will load data into memory, which
could slow your script or even cause your process to run out of memory. And so, we recommend
subsetting your data before quantifying it.
Also, these Pint Quantity data objects are not properly handled by xarray when writing
to disk. And so, if you want to safely export your data, you will need to undo the
quantification with the .dequantify() method, which converts your data back to a
unit-naive array with the unit as a text attribute
Step12: Other useful unit integration features include
Step13: Unit conversion
Step14: Unit conversion for coordinates
Step15: Accessing just the underlying unit array
Step16: Accessing just the underlying units
Step17: Calculations
MetPy's xarray integration extends to its calculation suite as well. Most grid-capable
calculations (such as thermodynamics, kinematics, and smoothers) fully support xarray
DataArray\s by accepting them as inputs, returning them as outputs, and automatically
using the attached coordinate data/metadata to determine grid arguments
Step18: For profile-based calculations (and most remaining calculations in the metpy.calc
module), xarray DataArray\s are accepted as inputs, but the outputs remain Pint
Quantities (typically scalars)
Step19: A few remaining portions of MetPy's calculations (mainly the interpolation module and a few
other functions) do not fully support xarray, and so, use of .values may be needed to
convert to a bare NumPy array. For full information on xarray support for your function of
interest, see the
Step20: Non-Compliant Dataset Example
When CF metadata (such as grid mapping, coordinate attributes, etc.) are missing, a bit more
work is required to manually supply the required information, for example, | Python Code:
import numpy as np
import xarray as xr
# Any import of metpy will activate the accessors
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.units import units
Explanation: xarray with MetPy Tutorial
xarray <http://xarray.pydata.org/>_ is a powerful Python package that provides N-dimensional
labeled arrays and datasets following the Common Data Model. MetPy's suite of meteorological
calculations are designed to integrate with xarray DataArrays as one of its two primary data
models (the other being Pint Quantities). MetPy also provides DataArray and Dataset
accessors (collections of methods and properties attached to the .metpy property) for
coordinate/CRS and unit operations.
Full information on MetPy's accessors is available in the :doc:appropriate section of the
reference guide </api/generated/metpy.xarray>, otherwise, continue on in this
tutorial for a demonstration of the three main components of MetPy's integration with xarray
(coordinates/coordinate reference systems, units, and calculations), as well as instructive
examples for both CF-compliant and non-compliant datasets.
First, some general imports...
End of explanation
# Open the netCDF file as a xarray Dataset
data = xr.open_dataset(get_test_data('irma_gfs_example.nc', False))
# View a summary of the Dataset
data
Explanation: ...and opening some sample data to work with.
End of explanation
temperature = data['Temperature_isobaric']
temperature
Explanation: While xarray can handle a wide variety of n-dimensional data (essentially anything that can
be stored in a netCDF file), a common use case is working with gridded model output. Such
model data can be obtained from a THREDDS Data Server using the siphon package
<https://unidata.github.io/siphon/>_, but here we've used an example subset of GFS data
from Hurricane Irma (September 5th, 2017) included in MetPy's test suite. Generally,
a local file (or remote file via OPeNDAP) can be opened with xr.open_dataset("path").
Going back to the above object, this Dataset consists of dimensions and their
associated coordinates, which in turn make up the axes along which the data variables
are defined. The dataset also has a dictionary-like collection of attributes. What happens
if we look at just a single data variable?
End of explanation
temperature.metpy.time
Explanation: This is a DataArray, which stores just a single data variable with its associated
coordinates and attributes. These individual DataArray\s are the kinds of objects that
MetPy's calculations take as input (more on that in Calculations_ section below).
If you are more interested in learning about xarray's terminology and data structures, see
the terminology section <http://xarray.pydata.org/en/stable/terminology.html>_ of xarray's
documentation.
Coordinates and Coordinate Reference Systems
MetPy's first set of helpers comes with identifying coordinate types. In a given dataset,
coordinates can have a variety of different names and yet refer to the same type (such as
"isobaric1" and "isobaric3" both referring to vertical isobaric coordinates). Following
CF conventions, as well as using some fall-back regular expressions, MetPy can
systematically identify coordinates of the following types:
time
vertical
latitude
y
longitude
x
When identifying a single coordinate, it is best to use the property directly associated
with that type
End of explanation
x, y = temperature.metpy.coordinates('x', 'y')
Explanation: When accessing multiple coordinate types simultaneously, you can use the .coordinates()
method to yield a generator for the respective coordinates
End of explanation
heights = data['Geopotential_height_isobaric'].metpy.sel(
time='2017-09-05 18:00',
vertical=50000.
)
Explanation: These coordinate type aliases can also be used in MetPy's wrapped .sel and .loc
for indexing and selecting on DataArray\s. For example, to access 500 hPa heights at
1800Z,
End of explanation
# Parse full dataset
data_parsed = data.metpy.parse_cf()
# Parse subset of dataset
data_subset = data.metpy.parse_cf([
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric',
'Vertical_velocity_pressure_isobaric'
])
# Parse single variable
relative_humidity = data.metpy.parse_cf('Relative_humidity_isobaric')
Explanation: (Notice how we specified 50000 here without units...we'll go over a better alternative in
the next section on units.)
One point of warning: xarray's selection and indexing only works if these coordinates are
dimension coordinates, meaning that they are 1D and share the name of their associated
dimension. In practice, this means that you can't index a dataset that has 2D latitude and
longitude coordinates by latitudes and longitudes, instead, you must index by the 1D y and x
dimension coordinates. (What if these coordinates are missing, you may ask? See the final
subsection on .assign_y_x for more details.)
Beyond just the coordinates themselves, a common need for both calculations with and plots
of geospatial data is knowing the coordinate reference system (CRS) on which the horizontal
spatial coordinates are defined. MetPy follows the CF Conventions
<http://cfconventions.org/Data/cf-conventions/cf-conventions-1.8/cf-conventions.html#grid-mappings-and-projections>_
for its CRS definitions, which it then caches on the metpy_crs coordinate in order for
it to persist through calculations and other array operations. There are two ways to do so
in MetPy:
First, if your dataset is already conforming to the CF Conventions, it will have a grid
mapping variable that is associated with the other data variables by the grid_mapping
attribute. This is automatically parsed via the .parse_cf() method:
End of explanation
temperature = data['Temperature_isobaric'].metpy.assign_crs(
grid_mapping_name='latitude_longitude',
earth_radius=6371229.0
)
temperature
Explanation: If your dataset doesn't have a CF-conforming grid mapping variable, you can manually specify
the CRS using the .assign_crs() method:
End of explanation
# Cartopy CRS, useful for plotting
relative_humidity.metpy.cartopy_crs
# pyproj CRS, useful for projection transformations and forward/backward azimuth and great
# circle calculations
temperature.metpy.pyproj_crs
Explanation: Notice the newly added metpy_crs non-dimension coordinate. Now how can we use this in
practice? For individual DataArray\s, we can access the cartopy and pyproj objects
corresponding to this CRS:
End of explanation
heights = heights.metpy.quantify()
heights
Explanation: Finally, there are times when a certain horizontal coordinate type is missing from your
dataset, and you need the other, that is, you have latitude/longitude and need y/x, or vice
versa. This is where the .assign_y_x and .assign_latitude_longitude methods come in
handy. Our current GFS sample won't work to demonstrate this (since, on its
latitude-longitude grid, y is latitude and x is longitude), so for more information, take
a look at the Non-Compliant Dataset Example_ below, or view the accessor documentation.
Units
Since unit-aware calculations are a major part of the MetPy library, unit support is a major
part of MetPy's xarray integration!
One very important point of consideration is that xarray data variables (in both
Dataset\s and DataArray\s) can store both unit-aware and unit-naive array types.
Unit-naive array types will be used by default in xarray, so we need to convert to a
unit-aware type if we want to use xarray operations while preserving unit correctness. MetPy
provides the .quantify() method for this (named since we are turning the data stored
inside the xarray object into a Pint Quantity object)
End of explanation
heights_mean = heights.mean('longitude')
heights_mean
Explanation: Notice how the units are now represented in the data itself, rather than as a text
attribute. Now, even if we perform some kind of xarray operation (such as taking the zonal
mean), the units are preserved
End of explanation
heights_mean_str_units = heights_mean.metpy.dequantify()
heights_mean_str_units
Explanation: However, this "quantification" is not without its consequences. By default, xarray loads its
data lazily to conserve memory usage. Unless your data is chunked into a Dask array (using
the chunks argument), this .quantify() method will load data into memory, which
could slow your script or even cause your process to run out of memory. And so, we recommend
subsetting your data before quantifying it.
Also, these Pint Quantity data objects are not properly handled by xarray when writing
to disk. And so, if you want to safely export your data, you will need to undo the
quantification with the .dequantify() method, which converts your data back to a
unit-naive array with the unit as a text attribute
End of explanation
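As a rough sketch of the "subset before quantifying" advice above (combining the unit-based selection demonstrated next with .quantify(); the variable name is just for illustration):
heights_500 = data['Geopotential_height_isobaric'].metpy.sel(vertical=500 * units.hPa).metpy.quantify()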
heights_at_45_north = data['Geopotential_height_isobaric'].metpy.sel(
latitude=45 * units.degrees_north,
vertical=300 * units.hPa
)
heights_at_45_north
Explanation: Other useful unit integration features include:
Unit-based selection/indexing:
End of explanation
temperature_degC = temperature[0].metpy.convert_units('degC')
temperature_degC
Explanation: Unit conversion:
End of explanation
heights_on_hPa_levels = heights.metpy.convert_coordinate_units('isobaric3', 'hPa')
heights_on_hPa_levels['isobaric3']
Explanation: Unit conversion for coordinates:
End of explanation
heights_unit_array = heights.metpy.unit_array
heights_unit_array
Explanation: Accessing just the underlying unit array:
End of explanation
height_units = heights.metpy.units
height_units
Explanation: Accessing just the underlying units:
End of explanation
heights = data_parsed.metpy.parse_cf('Geopotential_height_isobaric').metpy.sel(
time='2017-09-05 18:00',
vertical=500 * units.hPa
)
u_g, v_g = mpcalc.geostrophic_wind(heights)
u_g
Explanation: Calculations
MetPy's xarray integration extends to its calcuation suite as well. Most grid-capable
calculations (such as thermodynamics, kinematics, and smoothers) fully support xarray
DataArray\s by accepting them as inputs, returning them as outputs, and automatically
using the attached coordinate data/metadata to determine grid arguments
End of explanation
data_at_point = data.metpy.sel(
time1='2017-09-05 12:00',
latitude=40 * units.degrees_north,
longitude=260 * units.degrees_east
)
dewpoint = mpcalc.dewpoint_from_relative_humidity(
data_at_point['Temperature_isobaric'],
data_at_point['Relative_humidity_isobaric']
)
cape, cin = mpcalc.surface_based_cape_cin(
data_at_point['isobaric3'],
data_at_point['Temperature_isobaric'],
dewpoint
)
cape
Explanation: For profile-based calculations (and most remaining calculations in the metpy.calc
module), xarray DataArray\s are accepted as inputs, but the outputs remain Pint
Quantities (typically scalars)
End of explanation
# Load data, parse it for a CF grid mapping, and promote lat/lon data variables to coordinates
data = xr.open_dataset(
get_test_data('narr_example.nc', False)
).metpy.parse_cf().set_coords(['lat', 'lon'])
# Subset to only the data you need to save on memory usage
subset = data.metpy.sel(isobaric=500 * units.hPa)
# Quantify if you plan on performing xarray operations that need to maintain unit correctness
subset = subset.metpy.quantify()
# Perform calculations
heights = mpcalc.smooth_gaussian(subset['Geopotential_height'], 5)
subset['u_geo'], subset['v_geo'] = mpcalc.geostrophic_wind(heights)
# Plot
heights.plot()
# Save output
subset.metpy.dequantify().drop_vars('metpy_crs').to_netcdf('500hPa_analysis.nc')
Explanation: A few remaining portions of MetPy's calculations (mainly the interpolation module and a few
other functions) do not fully support xarray, and so, use of .values may be needed to
convert to a bare NumPy array. For full information on xarray support for your function of
interest, see the :doc:/api/index.
CF-Compliant Dataset Example
The GFS sample used throughout this tutorial so far has been an example of a CF-compliant
dataset. These kinds of datasets are easiest to work with in MetPy, since most of the
"xarray magic" uses CF metadata. For this kind of dataset, a typical workflow looks like the
following
End of explanation
nonstandard = xr.Dataset({
'temperature': (('y', 'x'), np.arange(0, 9).reshape(3, 3) * units.degC),
'y': ('y', np.arange(0, 3) * 1e5, {'units': 'km'}),
'x': ('x', np.arange(0, 3) * 1e5, {'units': 'km'})
})
# Add both CRS and then lat/lon coords using chained methods
data = nonstandard.metpy.assign_crs(
grid_mapping_name='lambert_conformal_conic',
latitude_of_projection_origin=38.5,
longitude_of_central_meridian=262.5,
standard_parallel=38.5,
earth_radius=6371229.0
).metpy.assign_latitude_longitude()
# Preview the changes
data
Explanation: Non-Compliant Dataset Example
When CF metadata (such as grid mapping, coordinate attributes, etc.) are missing, a bit more
work is required to manually supply the required information, for example,
End of explanation |
4,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Datasets and Compute Options
Step3: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
Step4: Now we add compute options for the 'legacy' backend.
Step5: And set the two RV datasets to use the correct methods (for both compute options)
Step6: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step7: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
Step8: Let's also disable other special effects such as heating, gravity, and light-time effects.
Step9: Finally, let's compute all of our models
Step10: Plotting
Light Curve
Step11: Now let's plot the residuals between these two models
Step12: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
Step13: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
Step14: Numerical (flux-weighted) RVs | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Comparing PHOEBE 2 vs PHOEBE Legacy
NOTE: PHOEBE 1.0 legacy is an alternate backend and is not installed with PHOEBE 2.0. In order to run this backend, you'll need to have PHOEBE 1.0 installed.
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u
import numpy as np
import matplotlib.pyplot as plt
phoebe.devel_on() # needed to use WD-style meshing, which isn't fully supported yet
logger = phoebe.logger()
b = phoebe.default_binary()
b['q'] = 0.7
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvdyn')
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rvnum')
Explanation: Adding Datasets and Compute Options
End of explanation
b.add_compute(compute='phoebe2marching', irrad_method='none', mesh_method='marching')
b.add_compute(compute='phoebe2wd', irrad_method='none', mesh_method='wd', eclipse_method='graham')
Explanation: Let's add compute options for phoebe using both the new (marching) method for creating meshes as well as the WD method which imitates the format of the mesh used within legacy.
End of explanation
b.add_compute('legacy', compute='phoebe1')
Explanation: Now we add compute options for the 'legacy' backend.
End of explanation
b.set_value_all('rv_method', dataset='rvdyn', value='dynamical')
b.set_value_all('rv_method', dataset='rvnum', value='flux-weighted')
Explanation: And set the two RV datasets to use the correct methods (for both compute options)
End of explanation
b.set_value_all('atm', 'extern_planckint')
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
b.set_value_all('gridsize', 30)
Explanation: Let's make sure both 'phoebe1' and 'phoebe2wd' use the same value for gridsize
End of explanation
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.,0.])
b.set_value_all('irrad_method', 'none') # phoebe
b.set_value('refl_num', 0) # legacy
b.set_value_all('rv_grav', False)
b.set_value_all('ltte', False)
Explanation: Let's also disable other special effects such as heating, gravity, and light-time effects.
End of explanation
b.run_compute(compute='phoebe2marching', model='phoebe2marchingmodel')
b.run_compute(compute='phoebe2wd', model='phoebe2wdmodel')
b.run_compute(compute='phoebe1', model='phoebe1model')
Explanation: Finally, let's compute all of our models
End of explanation
axs, artists = b['lc01@phoebe2marchingmodel'].plot(color='g')
axs, artists = b['lc01@phoebe1model'].plot(color='r')
leg = plt.legend(loc=4)
Explanation: Plotting
Light Curve
End of explanation
artist, = plt.plot(b.get_value('fluxes@lc01@phoebe2marchingmodel') - b.get_value('fluxes@lc01@phoebe1model'), 'g-')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
Explanation: Now let's plot the residuals between these two models
End of explanation
axs, artists = b['rvdyn@phoebe1model'].plot(color='r')
Explanation: Dynamical RVs
Since the dynamical RVs don't depend on the mesh, there should be no difference between the 'phoebe2marching' and 'phoebe2wd' synthetic models. Here we'll just choose one to plot.
End of explanation
artist, = plt.plot(b.get_value('rvs@rvdyn@primary@phoebe2marchingmodel') - b.get_value('rvs@rvdyn@primary@phoebe1model'), color='b', ls=':')
artist, = plt.plot(b.get_value('rvs@rvdyn@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvdyn@secondary@phoebe1model'), color='b', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-12, 1e-12)
Explanation: And also plot the residuals of both the primary and secondary RVs (notice the scale on the y-axis)
End of explanation
axs, artists = b['rvnum@phoebe2marchingmodel'].plot(color='g')
axs, artists = b['rvnum@phoebe1model'].plot(color='r')
artist, = plt.plot(b.get_value('rvs@rvnum@primary@phoebe2marchingmodel', ) - b.get_value('rvs@rvnum@primary@phoebe1model'), color='g', ls=':')
artist, = plt.plot(b.get_value('rvs@rvnum@secondary@phoebe2marchingmodel') - b.get_value('rvs@rvnum@secondary@phoebe1model'), color='g', ls='-.')
artist = plt.axhline(0.0, linestyle='dashed', color='k')
ylim = plt.ylim(-1e-2, 1e-2)
Explanation: Numerical (flux-weighted) RVs
End of explanation |
4,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clustered Multitask GP (w/ Pyro/GPyTorch High-Level Interface)
Introduction
In this example, we use the Pyro integration for a GP model with additional latent variables.
We are modelling a multitask GP in this example. Rather than assuming a linear correlation among the different tasks, we assume that there is cluster structure for the different tasks. Let's assume there are $k$ different clusters of tasks. The generative model for task $i$ is
Step1: Adding additional latent variables to the likelihood
The standard GPyTorch variational objects will take care of inferring the latent functions $f_1 \ldots f_k$. However, we do need to add the additional latent variables $z_i$ to the models. We will do so by creating a custom likelihood that models
Step2: Constructing the PyroGP model
The PyroGP model is essentially the same as the model we used in the simple example, except for two changes
We now will use our more complicated ClusterGaussianLikelihood
The latent function should be vector valued to correspond to the k latent functions. As a result, we will learn a batched variational distribution, and use a IndependentMultitaskVariationalStrategy to convert the batched variational distribution into a MultitaskMultivariateNormal distribution. | Python Code:
import math
import time  # used below for a unique name_prefix when constructing the PyroGP model
import torch
import pyro
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
Explanation: Clustered Multitask GP (w/ Pyro/GPyTorch High-Level Interface)
Introduction
In this example, we use the Pyro integration for a GP model with additional latent variables.
We are modelling a multitask GP in this example. Rather than assuming a linear correlation among the different tasks, we assume that there is cluster structure for the different tasks. Let's assume there are $k$ different clusters of tasks. The generative model for task $i$ is:
$$
p(\mathbf y_i \mid \mathbf x_i) = \int \sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i) \: p(\mathbf f (\mathbf x_i) ) \: d \mathbf f
$$
where $z_i$ is the cluster assignment for task $i$. There are therefore $k$ latent functions $\mathbf f = [f_1 \ldots f_k]$, each modelled by a GP, representing each cluster.
Our goal is therefore to infer:
The latent functions $f_1 \ldots f_k$
The cluster assignments $z_i$ for each task
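To make this generative story concrete, here is a small simulation sketch (not part of the original notebook; the task count, cluster count and inputs below are made-up illustration values, and fixed sinusoids stand in for the GP draws):
```python
import math
import torch
import pyro

n, t, k = 100, 8, 3                      # points per task, number of tasks, number of clusters
train_x = torch.linspace(0, 1, n)

# One latent function per cluster (sinusoids standing in for GP samples f_1 ... f_k)
latent_f = torch.stack([torch.sin(2 * math.pi * (i + 1) * train_x) for i in range(k)])  # k x n

# Sample a cluster assignment z_i for each task, then noisy observations y_i
z = pyro.distributions.Categorical(probs=torch.ones(k) / k).sample(torch.Size([t]))     # t
train_y = latent_f[z].t() + 0.1 * torch.randn(n, t)                                     # n x t
```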
End of explanation
class ClusterGaussianLikelihood(gpytorch.likelihoods.Likelihood):
def __init__(self, num_tasks, num_clusters):
super().__init__()
# These are parameters/buffers for the cluster assignment latent variables
self.register_buffer("prior_cluster_logits", torch.zeros(num_tasks, num_clusters))
self.register_parameter("variational_cluster_logits", torch.nn.Parameter(torch.randn(num_tasks, num_clusters)))
# The Gaussian observational noise
self.register_parameter("raw_noise", torch.nn.Parameter(torch.tensor(0.0)))
# Other info
self.num_tasks = num_tasks
self.num_clusters = num_clusters
self.max_plate_nesting = 1
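    # NOTE (assumption): forward() below calls self._cluster_dist, which is not defined
    # anywhere else in this cell, so a minimal helper is sketched here. It wraps the same
    # OneHotCategorical distribution already used in pyro_guide / pyro_model.
    def _cluster_dist(self, logits):
        return pyro.distributions.OneHotCategorical(logits=logits).to_event(1)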
def pyro_guide(self, function_dist, target):
# Here we add the extra variational distribution for the cluster latent variable
pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
)
return super().pyro_guide(function_dist, target)
def pyro_model(self, function_dist, target):
# Here we add the extra prior distribution for the cluster latent variable
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
)
return super().pyro_model(function_dist, target, cluster_assignment_samples=cluster_assignment_samples)
def forward(self, function_samples, cluster_assignment_samples=None):
# For inference, cluster_assignment_samples will be passed in
# This bit of code is for when we use the likelihood in the predictive mode
if cluster_assignment_samples is None:
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits)
)
# Now we return the observational distribution, based on the function_samples and cluster_assignment_samples
res = pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
return res
Explanation: Adding additional latent variables to the likelihood
The standard GPyTorch variational objects will take care of inferring the latent functions $f_1 \ldots f_k$. However, we do need to add the additional latent variables $z_i$ to the models. We will do so by creating a custom likelihood that models:
$$
\sum_{z_i=1}^k p(\mathbf y_i \mid \mathbf f (\mathbf x_i), z_i) \: p(z_i)
$$
GPyTorch's likelihoods are capable of modeling additional latent variables. Our custom likelihood needs to define the following three functions:
pyro_model (needs to call through to super().pyro_model at the end), which defines the prior distribution for additional latent variables
pyro_guide (needs to call through to super().pyro_guide at the end), which defines the variational (guide) distribution for additional latent variables
forward, which defines the observation distributions conditioned on \mathbf f (\mathbf x_i) and any additional latent variables.
The pyro_model function
For each task, we will model the cluster assignment with a OneHotCategorical variable, where each cluster has equal probability. The pyro_model function will make a pyro.sample call to this prior distribution and then call the super method:
```python
# self.prior_cluster_logits = torch.zeros(num_tasks, num_clusters)
def pyro_model(self, function_dist, target):
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.prior_cluster_logits).to_event(1)
)
return super().pyro_model(
function_dist,
target,
cluster_assignment_samples=cluster_assignment_samples
)
```
Note that we are adding an additional argument cluster_assignment_samples to the super().pyro_model call. This will pass the cluster assignment samples to the forward call, which is necessary for inference.
The pyro_guide function
For each task, the variational (guide) distribution will also be a OneHotCategorical variable, which will be defined by the parameter self.variational_cluster_logits. The pyro_guide function will make a pyro.sample call to this variational distribution and then call the super method:
```python
def pyro_guide(self, function_dist, target):
pyro.sample(
self.name_prefix + ".cluster_logits", # self.name_prefix is added by PyroGP
pyro.distributions.OneHotCategorical(logits=self.variational_cluster_logits).to_event(1)
)
    return super().pyro_guide(function_dist, target)
```
Note that, unlike pyro_model, pyro_guide does not pass any extra arguments through to the super().pyro_guide call; it only registers the variational distribution for the cluster assignment latent variables.
The forward function
The pyro_model function passes the additional keyword argument cluster_assignment_samples to the forward call. Therefore, our forward method will define the conditional probability $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$, where $\mathbf f(\mathbf x)$ corresponds to the variable function_samples and $z_i$ corresponds to the variable cluster_assignment_samples.
In our example $p(\mathbf y_i \mid \mathbf f(\mathbf x), z_i)$ corresponds to a Gaussian noise model.
```python
# self.raw_noise is the Gaussian noise parameter
# function_samples is `n x k`
# cluster_assignment_samples is `k x t`, where `t` is the number of tasks
def forward(self, function_samples, cluster_assignment_samples):
return pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
# The to_event call is necessary because we are returning a multitask distribution,
# where each task dimension corresponds to each of the `t` tasks
```
This is all we need for inference! However, if we want to use this model to make predictions, the cluster_assignment_samples keyword argument will not be passed into the function. Therefore, we need to make sure that forward can handle both inference and predictions:
```python
def forward(self, function_samples, cluster_assignment_samples=None):
if cluster_assignment_samples is None:
# We'll get here at prediction time
# We'll use the variational distribution when making predictions
cluster_assignment_samples = pyro.sample(
self.name_prefix + ".cluster_logits", self._cluster_dist(self.variational_cluster_logits)
)
return pyro.distributions.Normal(
loc=(function_samples.unsqueeze(-2) * cluster_assignment_samples).sum(-1),
scale=torch.nn.functional.softplus(self.raw_noise).sqrt()
).to_event(1)
```
End of explanation
class ClusterMultitaskGPModel(gpytorch.models.pyro.PyroGP):
def __init__(self, train_x, train_y, num_functions=2, reparam=False):
num_data = train_y.size(-2)
# Define all the variational stuff
inducing_points = torch.linspace(0, 1, 64).unsqueeze(-1)
variational_distribution = gpytorch.variational.CholeskyVariationalDistribution(
num_inducing_points=inducing_points.size(-2),
batch_shape=torch.Size([num_functions])
)
        # Here we're using an IndependentMultitaskVariationalStrategy - so that the output of the
# GP latent function is a MultitaskMultivariateNormal
variational_strategy = gpytorch.variational.IndependentMultitaskVariationalStrategy(
gpytorch.variational.VariationalStrategy(self, inducing_points, variational_distribution),
num_tasks=num_functions,
)
        # Standard initialization
likelihood = ClusterGaussianLikelihood(train_y.size(-1), num_functions)
super().__init__(variational_strategy, likelihood, num_data=num_data, name_prefix=str(time.time()))
self.likelihood = likelihood
self.num_functions = num_functions
# Mean, covar
self.mean_module = gpytorch.means.ZeroMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
res = gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
return res
Explanation: Constructing the PyroGP model
The PyroGP model is essentially the same as the model we used in the simple example, except for two changes:
We now will use our more complicated ClusterGaussianLikelihood
The latent function should be vector valued to correspond to the k latent functions. As a result, we will learn a batched variational distribution, and use an IndependentMultitaskVariationalStrategy to convert the batched variational distribution into a MultitaskMultivariateNormal distribution.
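Neither the markdown above nor the class itself shows the training step. As a rough sketch only, assuming the usual Pyro SVI pattern used with gpytorch.models.pyro.PyroGP (which exposes .model and .guide) and random placeholder data:
```python
# Placeholder data purely for illustration: 100 points, 8 tasks
train_x = torch.linspace(0, 1, 100)
train_y = torch.randn(100, 8)

model = ClusterMultitaskGPModel(train_x, train_y, num_functions=3)

optimizer = pyro.optim.Adam({"lr": 0.01})
elbo = pyro.infer.Trace_ELBO(num_particles=32, vectorize_particles=True, retain_graph=True)
svi = pyro.infer.SVI(model.model, model.guide, optimizer, elbo)

model.train()
for i in range(200):
    loss = svi.step(train_x, train_y)
```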
End of explanation |
4,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Static tumbling neural network
Imports
Step1: Load and prepare the data
Import data from static tumbling csv file
Step2: Separate the data into features and targets
Step3: Generate global vocabulary
Step4: Create dictionary to map each element to an index
Step5: Text to vector function
It will convert the elements to a vector of words
Step6: Convert all static tumbling passes to vectors
Step7: Train, validation, test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data.
Step8: Building the network
Step9: Initializing the model
Step10: Training the network
Step11: I know the total loss is still too high, but not that bad for the first round of hyperparameters; there is still room for improvement
Saving the model
Step12: Testing
Step13: Now we check the accuracy of the model. This test checks which static tumbling line is more difficult; the second one is not even in the data we trained the neural network on.
First we compare two static tumbling passes that have the same elements but a different transition cost or effort;
comparing flick mortal and mortal flick, it is harder to execute mortal flick
Step14: Now test the model with data that wasn't in the data set
in this complex example the second element is a lot harder to execute
Step15: Test data validation
Now the test values we separated from the beginning are going to be compared with the actual values to check model accuracy | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
import matplotlib.pyplot as plt
from itertools import product
Explanation: Static tumbling neural network
Imports
End of explanation
static_tumbling = pd.read_csv('static-tumbling.csv')
Explanation: Load and prepare the data
Import data from static tumbling csv file
End of explanation
elements, score = static_tumbling['elements'], static_tumbling['score']
Explanation: Separate the data into features and targets
End of explanation
#Main vocabulary, based on the data set elements
main_vocab = set()
for line in elements:
for element in line.split(" "):
main_vocab.add(element)
main_vocab = list(main_vocab)
# Expanded vocabulary based on the 49 permutations of the possible transitions (ordered pairs of elements)
vocab = list(main_vocab)
for roll in product(main_vocab, repeat = 2 ):
vocab.append("{} {}".format(roll[0],roll[1]))
Explanation: Generate global vocabulary
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
word2idx
Explanation: Create dictionary to map each element to an index
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
text_vector = text.split(' ')
#basic vocab matching
for element in text_vector:
idx = word2idx.get(element, None)
if idx is None:
continue
else:
word_vector[idx] += 1
#Check for transition order
for x in range(len(text_vector) -1 ):
pair = "{} {}".format(text_vector[x],text_vector[x+1])
idx2 = word2idx.get(pair, None)
if idx2 is None:
continue
else:
word_vector[idx2]+=1
return np.array(word_vector)
text_to_vector("flick flick flick mortal")
Explanation: Text to vector function
It will convert the elements to a vector of words
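For example, the call above increments the following slots (a quick sketch; it assumes 'flick' and 'mortal' occur in the dataset and so made it into the vocabulary):
```python
v = text_to_vector("flick flick flick mortal")
assert v[word2idx["flick"]] == 3          # three base 'flick' elements
assert v[word2idx["mortal"]] == 1         # one base 'mortal' element
assert v[word2idx["flick flick"]] == 2    # two 'flick flick' transitions
assert v[word2idx["flick mortal"]] == 1   # one 'flick mortal' transition
```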
End of explanation
word_vectors = np.zeros((len(elements), len(vocab)), dtype=np.int_)
for ii, text in enumerate(elements):
word_vectors[ii] = text_to_vector(text)
word_vectors
Explanation: Convert all static tumbling passes to vectors
End of explanation
Y = (score).astype(np.float_)
records = len(score)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
# Keep 90% of the records for training; the rest is held out for testing
train_fraction = 0.9
# Y values are a one-dimensional array of shape (1, N); in order to line the targets up with the
# feature rows we need them as (N, 1), which is why `Y.values[train_split,None]` is used below
train_split, test_split = shuffle[:int(records*train_fraction)], shuffle[int(records*train_fraction):]
trainX, trainY = word_vectors[train_split,:], Y.values[train_split,None]
testX, testY = word_vectors[test_split,:], Y.values[test_split,None]
trainX
Explanation: Train, validation, test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data.
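Note that the cell above only carves out train and test indices; the validation data mentioned here is taken later by tflearn via validation_set=0.1 in model.fit. If you wanted an explicit validation split instead, one possible sketch (reusing the same shuffled indices) is:
```python
val_fraction = 0.1
n_train = int(records * (1 - 2 * val_fraction))   # e.g. 80% train
n_val = int(records * val_fraction)               # 10% validation, 10% test
train_idx = shuffle[:n_train]
val_idx = shuffle[n_train:n_train + n_val]
test_idx = shuffle[n_train + n_val:]
valX, valY = word_vectors[val_idx, :], Y.values[val_idx, None]
```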
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#Input
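    # 56 = len(vocab): the base elements plus every ordered transition pair (here 7 + 7*7)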
net = tflearn.input_data([None, 56])
#Hidden
net = tflearn.fully_connected(net, 350, activation='sigmoid')
net = tflearn.fully_connected(net, 150, activation='sigmoid')
net = tflearn.fully_connected(net, 25, activation='sigmoid')
#output layer as a linear activation function
net = tflearn.fully_connected(net, 1, activation='linear')
net = tflearn.regression(net, optimizer='sgd', loss='mean_square',metric='R2', learning_rate=0.01)
model = tflearn.DNN(net)
return model
Explanation: Building the network
End of explanation
model = build_model()
Explanation: Initializing the model
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=2000)
Explanation: Training the network
End of explanation
# Load model
# model.load('Checkpoints/model-with-transitions-with-3-layers.tfl')
# Manually save model
model.save("Checkpoints/model-with-transitions-with-3-layers.tfl")
Explanation: I know the total loss is still too high, but not that bad for the first round of hyperparameters; there is still room for improvement
Saving the model
End of explanation
# Helper function that uses our model to get the score for the static tumbling pass
def test_score(sentence):
score = model.predict([text_to_vector(sentence.lower())])
print('Gym pass: {}'.format(sentence))
print('Score: {}'.format(score))
print()
return score
# Helper function that uses our model to compare static tumbling passes
def test_compare(pass1, pass2):
score1 = test_score(pass1)
score2 = test_score(pass2)
if score1 > score2:
print('Gym pass 1: {}'.format(pass1))
elif score2 > score1:
print('Gym pass 2: {}'.format(pass2))
else:
print('same difficulty')
Explanation: Testing
End of explanation
element1 = "flick mortal"
element2 = "mortal flick"
test_compare(element1,element2)
Explanation: Now we check the accuracy of the model. This test checks which static tumbling line is more difficult; the second one is not even in the data we trained the neural network on.
First we compare two static tumbling passes that have the same elements but a different transition cost or effort;
comparing flick mortal and mortal flick, it is harder to execute mortal flick
End of explanation
test_element1 = "flick flick flick flick flick mortal giro giro giro2"
test_element2 = "mortal flick giro flick giro mortal giro2 giro2 giro2"
test_compare(test_element1,test_element2)
Explanation: Now test the model with data that wasn't in the data set
in this complex example the second element is a lot harder to execute
End of explanation
fig, ax = plt.subplots(figsize=(15,6))
predictions = model.predict(testX)
ax.plot(predictions,label='Prediction')
ax.plot(testY, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
Explanation: Test data validation
Now the test values we separated from the beginning are going to be compared with the actual values to check model accuracy
End of explanation |
4,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using python's cartopy package to georeference EASE-Grid 2.0 cube data
This notebook demonstrates the following typical tasks you might want to do with CETB EASE-Grid 2.0 cube data
Step1: Read in a CETB cube file and use it to get the projected extent
Step2: Note that mouse functions work for pan/zoom on either subplot | Python Code:
%matplotlib notebook
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
from netCDF4 import Dataset
import numpy as np
geod = ccrs.Geodetic()
e2n = ccrs.LambertAzimuthalEqualArea(central_latitude=90.0)
Explanation: Using python's cartopy package to georeference EASE-Grid 2.0 cube data
This notebook demonstrates the following typical tasks you might want to do with CETB EASE-Grid 2.0 cube data:
<ol>
<li> Display a couple of time steps of a cube on the same plot</li>
<li> Use the mouse to get the coordinates and values of the data in the subplots </li>
</ol>
You will need to be working in a python environment with the following packages installed. I think the cartopy features here require it to be a python 3 environment:
<code>
cartopy
matplotlib
netCDF4
</code>
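The geod (Geodetic) and e2n (EASE-Grid 2.0 North, Lambert Azimuthal Equal-Area) CRS objects created in the first cell can also be used to convert point coordinates between longitude/latitude and projected meters; a small sketch (the point below is a made-up location):
```python
lon, lat = -105.27, 40.01                                  # illustrative point only
x_m, y_m = e2n.transform_point(lon, lat, geod)             # lon/lat -> projected meters
lon_back, lat_back = geod.transform_point(x_m, y_m, e2n)   # and back again
```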
End of explanation
file = "/Users/brodzik/cetb_data/v1.3/F16_SSMIS/N/cubes_WesternUS/CETB.cubefile.WesternUS.F16_SSMIS-19H-SIR-CSU-v1.3.2005.TB.nc"
f = Dataset(file, 'r', 'netCDF4')
x = f.variables['x'][:]
y = f.variables['y'][:]
# Define extent in projected coordinates.
# Use the x, y coordinate variables in the file, which give the centers of the pixels,
# and adjust these by 1/2 pixel to get the corners of the corner pixels
# We want extent as [x_min, x_max, y_min, y_max]
extent = [x[0], x[-1], y[-1], y[0]]
extent
x_res_m = np.fabs(x[1] - x[0])
y_res_m = np.fabs(y[1] - y[0])
x_res_m, y_res_m
extent = [x[0] - (x_res_m / 2.), x[-1] + (x_res_m / 2.),
y[-1] - (y_res_m / 2.), y[0] + (y_res_m / 2.)]
extent
tb = f.variables['TB'][:]
tb.shape
f.close()
for t in np.arange(730):
print(t, np.min(tb[t,:,:]), np.max(tb[t,:,:]))
Explanation: Read in a CETB cube file and use it to get the projected extent
End of explanation
fig = plt.figure(figsize=(8,4))
axes = fig.subplots(1, 2, subplot_kw=dict(projection=e2n))
for ax in axes:
ax.set_extent(extent, crs=e2n)
axes[0].imshow(tb[645,:,:], extent=extent, transform=e2n, origin='upper')
axes[1].imshow(tb[646,:,:], extent=extent, transform=e2n, origin='upper')
for ax in axes:
ax.gridlines(color='gray', linestyle='--')
ax.coastlines()
fig.tight_layout()
Explanation: Note that mouse functions work for pan/zoom on either subplot
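Reading off coordinates and values (item 2 in the task list above) needs a small callback; a sketch, not in the original notebook, reusing the x, y, tb and fig variables defined earlier (the time index 645 matches the left panel):
```python
def on_click(event):
    # event.xdata / event.ydata are in the projected (EASE-Grid 2.0 North) coordinates of the clicked axes
    if event.inaxes is None or event.xdata is None:
        return
    col = int(np.argmin(np.abs(x - event.xdata)))
    row = int(np.argmin(np.abs(y - event.ydata)))
    print("x=%.1f m, y=%.1f m, TB=%s" % (event.xdata, event.ydata, tb[645, row, col]))

cid = fig.canvas.mpl_connect('button_press_event', on_click)
```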
End of explanation |
4,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
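BOOLEAN properties take the Python literals True or False, unquoted. A minimal sketch with a placeholder value, left commented out because the real answer depends on the model:
# Placeholder only
# DOC.set_value(True)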
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
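FLOAT properties such as this effective order take a plain, unquoted floating point number. The value below is invented purely for illustration:
# Hypothetical example only
# DOC.set_value(3.0)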
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
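For a 0.N ENUM property, the "PROPERTY VALUE(S)" header suggests that each selected choice goes into its own DOC.set_value call; that repeated-call convention is an assumption here, and the tracers named below are arbitrary placeholders:
# Illustration only - one call per selected choice (assumed convention)
# DOC.set_value("CFC 11")
# DOC.set_value("CFC 12")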
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
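Free-text STRING properties simply take a quoted description. The sentence below is an invented placeholder that only illustrates the call, not a statement about this model's background viscosity:
# Placeholder text only
# DOC.set_value("Constant background value of 1.0e4 m2/s")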
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
4,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
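A commented-out sketch of the author call, with an obviously fictitious name and address standing in for the real ones:
# Placeholder only - substitute the real document author(s)
# DOC.set_author("Jane Doe", "jane.doe@example.org")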
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
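Because this is a 1.N ENUM, at least one choice must be supplied; one DOC.set_value call per selected choice is assumed here, mirroring the "PROPERTY VALUE(S)" header. The two entries below are arbitrary picks from the list, not a description of this model:
# Illustration only (assumed one call per value)
# DOC.set_value("Sea ice temperature")
# DOC.set_value("Sea ice thickness")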
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
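If a constant freezing point were used, the value would be given as an unquoted float in degrees C. The -1.8 below is a commonly quoted approximate seawater freezing point, shown here only as a placeholder:
# Placeholder only - use the value actually coded in the model
# DOC.set_value(-1.8)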
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
4,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scraping Webpages with BeautifulSoup
Let's try to get a list of the years of all of Amitabh Bachchan's movies! If you don't know, he's kind of the Sean Connery of India.
BeautifulSoup lets you parse downloaded webpages and search them for specific HTML entities. You can use this ability to scrape data out of a webpage, or a series of webpages. It is fast and works well. Their documentation is a handy reference.
Getting the Content
First you gotta grab the content (I like to use requests for this)
Step1: Now you can make your "beautiful soup"! This turns the HTML into a DOM tree that you can navigate with code.
Step2: Scraping the Info You Want
Now there are a few ways to get content out. For instance, to get the title you could treat it like an object
Step3: Or you can search for specific tags. This would get all the links (as DOM elements)
Step4: Or you can use good old CSS selectors, to actually find all the years his movies were made in
Step5: Of course, we really want to turn this into a list of years... not DOM elements
Step6: Cleaning and Analyzing the Data
So we can check if he made any films in a particular year
Step7: And we can look for messy data
Step8: And we can remove these messy entries (even though that isn't the best thing to do) | Python Code:
import requests
r = requests.get('http://www.imdb.com/name/nm0000821') # lets look at Amitabh Bachchan's list of movies
Explanation: Scraping Webpages with BeautifulSoup
Lets try to get a list of all the years of all of Amitabh Bachchan movies! If you don't know, he's kind of the Sean Connery of India.
BeautifulSoup lets you download webpages and search them for specific HTML entities. You can use this ability to scrape data out of the webpage, or a series of webpages. It is fast and works well. Their documentation is a handy reference.
Getting the Content
First you gotta grab the content (I like to use requests for this)
End of explanation
from bs4 import BeautifulSoup
webpage = BeautifulSoup(r.text, "html.parser")
Explanation: Now you can make your "beautiful soup"! This turns the HTML into a DOM tree that you can navigate with code.
End of explanation
webpage.title.text
Explanation: Scraping the Info You Want
Now there are a few ways to get content out. For instance, to get the title you could treat it like an object:
End of explanation
len(webpage.find_all('a'))
Explanation: Or you can search for specific tags. This would get all the links (as DOM elements):
End of explanation
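# An aside, not in the original notebook: each element returned by find_all is
# a Tag object, so you can pull out the actual link targets too:
link_targets = [a.get('href') for a in webpage.find_all('a')]
link_targets[:5]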
len(webpage.select('div.filmo-row span.year_column'))
Explanation: Or you can use good old CSS selectors, to actually find all the years his movies were made in:
End of explanation
raw_year_list = [e.text.strip() for e in webpage.select('div.filmo-row span.year_column')]
Explanation: Of course, we really want to turn this into a list of years... not DOM elements
End of explanation
'1972' in raw_year_list
Explanation: Cleaning and Analyzing the Data
So we can check if he made any films in a particular year
End of explanation
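# Since raw_year_list is an ordinary Python list, counting how many films fall
# in a given year is just list.count (illustrative):
raw_year_list.count('1972')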
[year for year in raw_year_list if not year.isnumeric()]
Explanation: And we can look for messy data:
End of explanation
year_list = [year for year in raw_year_list if year.isnumeric()]
','.join(year_list)
import collections
year_freq = collections.Counter(year_list)
for year in sorted(year_freq.keys()):
print str(year)+': '+('+'*year_freq[year])
Explanation: And we can remove these messy entries (even though that isn't the best thing to do):
End of explanation |
4,872 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Use the erosion operation first and then dilation on the image (i.e. morphological opening).
| Python Code::
import cv2
import numpy as np
%matplotlib notebook
%matplotlib inline
from matplotlib import pyplot as plt
img = cv2.imread("hsv_ball.jpg",cv2.IMREAD_GRAYSCALE)
_,mask = cv2.threshold(img, 220,255,cv2.THRESH_BINARY_INV)
kernal = np.ones((5,5),np.uint8)
dilation = cv2.dilate(mask,kernal,iterations = 3)
erosion = cv2.erode(mask,kernal,iterations=1)
opening = cv2.morphologyEx(mask,cv2.MORPH_OPEN,kernal)
titles = ['images',"mask","dilation","erosion","opening"]
images = [img,mask,dilation,erosion,opening]
for i in range(len(titles)):
plt.subplot(2,3,i+1)
plt.imshow(images[i],"gray")
plt.title(titles[i])
plt.show()
|
4,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will tests if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape
Step7: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step10: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
courses | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# Denis Engemannn <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test whether the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
src_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(50, npad='auto')
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
End of explanation
# Read the source space we are morphing to (just left hemisphere)
src = mne.read_source_spaces(src_fname)
fsave_vertices = [src[0]['vertno'], []]
morph_mat = mne.compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately, but here since all estimates are on
'sample' we can use one morph matrix for all the heavy lifting.
End of explanation
X = np.transpose(X, [2, 1, 0, 3])  # now shape: (subjects, times, vertices, conditions)
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples
(subjects) x time x space.
First we permute dimensions, then split the array into a list of conditions
and discard the empty dimension resulting from the split using numpy squeeze.
End of explanation
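# Quick shape check (illustrative): X should now be a list of 4 arrays, one per
# condition, each of shape (n_subjects, n_times, n_vertices_fsave).
print([x.shape for x in X])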
factor_levels = [2, 2]
Explanation: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
End of explanation
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
Explanation: Finally we will pick the interaction effect by passing 'A:B'.
(this notation is borrowed from the R formula language). Without this also
the main effects will be returned.
End of explanation
def stat_fun(*args):
# get f-values only.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects X conditions X observations (optional).
The following function catches the list input and swaps the first and the
second dimension, and finally calls ANOVA.
Note: For further details on this ANOVA function, see the corresponding time-frequency repeated-measures ANOVA tutorial (plot_stats_cluster_time_frequency_repeated_measures_anova).
End of explanation
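# Shape illustration with dummy sizes: the clustering routine passes one
# flattened (n_subjects, n_observations) array per condition, so
# np.asarray(args) stacks them to (n_conditions, n_subjects, n_observations)
# and swapaxes(1, 0) yields the (n_subjects, n_conditions, n_observations)
# layout that f_mway_rm expects.
demo_args = [np.zeros((7, 12)) for _ in range(4)]
print(np.swapaxes(demo_args, 1, 0).shape)  # (7, 4, 12)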
# as we only have one hemisphere we need only need half the connectivity
print('Computing connectivity.')
connectivity = mne.spatial_src_connectivity(src[:1])
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.0005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 128 # ... run fewer permutations (reduces sensitivity)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, connectivity=connectivity, n_jobs=1,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial connectivity matrix (instead of spatio-temporal).
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, colormap='mne',
views='lateral',
time_label='Duration significant (ms)')
brain.save_image('cluster-lh.png')
brain.show_view('medial')
Explanation: Visualize the clusters
End of explanation
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
Explanation: Finally, let's investigate interaction effect by reconstructing the time
courses
End of explanation |
4,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
20160110-etl-census-with-python
Related post
Step1: Globals
File sources
Step2: Extract, transform, and load
Data dictionary
Step4: PUMS data
Step5: PUMS estimates for user verification
Step6: Export ipynb to html | Python Code:
cd ~
# Import standard packages.
import collections
import functools
import os
import pdb # Debug with pdb.
import subprocess
import sys
import time
# Import installed packages.
import numpy as np
import pandas as pd
# Import local packages.
# Insert current directory into module search path.
# Autoreload local packages after editing.
# `dsdemos` version: https://github.com/stharrold/dsdemos/releases/tag/v0.0.3
sys.path.insert(0, os.path.join(os.path.curdir, r'dsdemos'))
%reload_ext autoreload
%autoreload 2
import dsdemos as dsd
print("Timestamp:")
print(time.strftime(r'%Y-%m-%dT%H:%M:%S%Z', time.gmtime()))
print()
print("Versions:")
print("Python:", sys.version_info)
print("numpy:", np.__version__)
print("pandas:", pd.__version__)
Explanation: 20160110-etl-census-with-python
Related post:
https://stharrold.github.io/20160110-etl-census-with-python.html
Data documentation:
https://www.census.gov/programs-surveys/acs/technical-documentation/pums/documentation.2013.html
Initialization
Imports
End of explanation
# File paths
path_static = os.path.join(os.path.expanduser(r'~'), r'stharrold.github.io/content/static')
basename = r'20160110-etl-census-with-python'
filename = basename
path_ipynb = os.path.join(path_static, basename, filename+'.ipynb')
path_disk = os.path.abspath(r'/mnt/disk-20151227t211000z/')
path_acs = os.path.join(path_disk, r'www2-census-gov/programs-surveys/acs/')
path_pcsv = os.path.join(path_acs, r'data/pums/2013/5-Year/ss13pdc.csv') # 'pdc' = 'person DC'
path_hcsv = os.path.join(path_acs, r'data/pums/2013/5-Year/ss13hdc.csv') # 'hdc' = 'housing DC'
path_ecsv = os.path.join(path_acs, r'tech_docs/pums/estimates/pums_estimates_9_13.csv')
path_dtxt = os.path.join(path_acs, r'tech_docs/pums/data_dict/PUMS_Data_Dictionary_2009-2013.txt')
# Weights
pwt = 'PWGTP' # person weight
pwts = [pwt+str(inum) for inum in range(1, 81)]
hwt = 'WGTP' # housing weight
hwts = [hwt+str(inum) for inum in range(1, 81)]
Explanation: Globals
File sources:
* 2013 5-year PUMS data dictionary: PUMS_Data_Dictionary_2009-2013.txt (<1 MB)
* 2013 5-year PUMS person and housing records for Washington DC:
* Person records: csv_pdc.zip (5 MB compressed, 30 MB decompressed)
* Housing records: csv_hdc.zip (2 MB compressed, 13 MB decompressed)
* 2013 5-year PUMS estimates for user verification: pums_estimates_9_13.csv (<1 MB)
End of explanation
print("`ddict`: Load the data dictionary and display the hierarchical structure.")
# Only `ddict` is used below.
# The hierarchical data frame is only for display.
ddict = dsd.census.parse_pumsdatadict(path=path_dtxt)
tmp = dict()
for record_type in ddict['record_types']:
tmp[record_type] = pd.DataFrame.from_dict(ddict['record_types'][record_type], orient='index')
pd.concat(tmp, names=['record_type', 'var_name']).head()
print("`ddict`: First 10 unstructured notes from end of file.")
ddict['notes'][:10]
Explanation: Extract, transform, and load
Data dictionary
End of explanation
print("`dfp`, `dfh`: Load person and housing records.")
time_start = time.perf_counter()
for path in [path_pcsv, path_hcsv]:
with open(path) as fobj:
nlines = sum(1 for _ in fobj)
with open(path) as fobj:
first_line = fobj.readline()
ncols = first_line.count(',')+1
print("{path}:".format(path=path))
print(" size (MB) = {size:.1f}".format(size=os.path.getsize(path)/1e6))
print(" num lines = {nlines}".format(nlines=nlines))
print(" num columns = {ncols}".format(ncols=ncols))
print()
# For ss13pdc.csv, low_memory=False since otherwise pandas raises DtypeWarning.
dfp = pd.read_csv(path_pcsv, low_memory=False)
dfh = pd.read_csv(path_hcsv, low_memory=True)
for (name, df) in [('dfp', dfp), ('dfh', dfh)]:
print("{name} RAM usage (MB) = {mem:.1f}".format(
name=name, mem=df.memory_usage().sum()/1e6))
time_stop = time.perf_counter()
print()
print("Time elapsed (sec) = {diff:.1f}".format(diff=time_stop-time_start))
print("`dfp`: First 5 person records.")
dfp.head()
print("`dfp`: First 5 housing records.")
dfh.head()
print(
r"""`dfp`, `dfh`, `ddict`: Describe all columns ('variables') that aren't weights or flags.
Printed format:
[PERSON, HOUSING] RECORD
COL: Column name.
Column description.
Multi-line optional column notes.
1-3 line description of value meanings ('variable codes').
Multi-line statistical description and data type.
...
num columns described = ncols""")
print()
records_dfs = collections.OrderedDict([
('PERSON RECORD', {'dataframe': dfp, 'weight': pwt, 'replicate_weights': pwts}),
('HOUSING RECORD', {'dataframe': dfh, 'weight': hwt, 'replicate_weights': hwts})])
for record_type in records_dfs:
print(record_type)
df = records_dfs[record_type]['dataframe']
ncols_desc = 0 # number of columns described
for col in df.columns:
if col in ddict['record_types'][record_type]:
col_dict = ddict['record_types'][record_type][col]
desc = col_dict['description']
else:
col_dict = None
desc = 'Column not in data dictionary.'
if not (
(col.startswith('F') and (desc.endswith(' flag') or desc.endswith(' edit')))
or ('WGTP' in col and "Weight replicate" in desc)):
print("{col}: {desc}".format(col=col, desc=desc))
ncols_desc += 1
if col_dict is not None:
if 'notes' in col_dict:
print(" {notes}".format(notes=col_dict['notes']))
for (inum, var_code) in enumerate(col_dict['var_codes']):
var_code_desc = col_dict['var_codes'][var_code]
print(" {vc}: {vcd}".format(vc=var_code, vcd=var_code_desc))
if inum >= 2:
print(" ...")
break
print(' '+repr(df[col].describe()).replace('\n', '\n '))
print("num columns described = {ncd}".format(ncd=ncols_desc))
print()
Explanation: PUMS data
End of explanation
print("`dfe`: Estimates for user verification filtered for 'District of Columbia'.")
dfe = pd.read_csv(path_ecsv)
tfmask_dc = dfe['state'] == 'District of Columbia'
dfe_dc = dfe.loc[tfmask_dc]
dfe_dc
print("`dfe`: Verify characteristic estimates, direct standard errors, and margin of error.")
# Verify the estimates following
# https://www.census.gov/programs-surveys/acs/
# technical-documentation/pums/documentation.2013.html
# tech_docs/pums/accuracy/2009_2013AccuracyPUMS.pdf
print()
tfmask_test_strs = collections.OrderedDict([
('PERSON RECORD', collections.OrderedDict([
('Total population', "np.asarray([True]*len(dfp))"),
('Housing unit population (RELP=0-15)',"np.logical_and(0 <= dfp['RELP'], dfp['RELP'] <= 15)"),
('GQ population (RELP=16-17)', "np.logical_and(16 <= dfp['RELP'], dfp['RELP'] <= 17)"),
('GQ institutional population (RELP=16)', "dfp['RELP'] == 16"),
('GQ noninstitutional population (RELP=17)', "dfp['RELP'] == 17"),
('Total males (SEX=1)', "dfp['SEX'] == 1"),
('Total females (SEX=2)', "dfp['SEX'] == 2"),
('Age 0-4', "np.logical_and(0 <= dfp['AGEP'], dfp['AGEP'] <= 4)"),
('Age 5-9', "np.logical_and(5 <= dfp['AGEP'], dfp['AGEP'] <= 9)"),
('Age 10-14', "np.logical_and(10 <= dfp['AGEP'], dfp['AGEP'] <= 14)"),
('Age 15-19', "np.logical_and(15 <= dfp['AGEP'], dfp['AGEP'] <= 19)"),
('Age 20-24', "np.logical_and(20 <= dfp['AGEP'], dfp['AGEP'] <= 24)"),
('Age 25-34', "np.logical_and(25 <= dfp['AGEP'], dfp['AGEP'] <= 34)"),
('Age 35-44', "np.logical_and(35 <= dfp['AGEP'], dfp['AGEP'] <= 44)"),
('Age 45-54', "np.logical_and(45 <= dfp['AGEP'], dfp['AGEP'] <= 54)"),
('Age 55-59', "np.logical_and(55 <= dfp['AGEP'], dfp['AGEP'] <= 59)"),
('Age 60-64', "np.logical_and(60 <= dfp['AGEP'], dfp['AGEP'] <= 64)"),
('Age 65-74', "np.logical_and(65 <= dfp['AGEP'], dfp['AGEP'] <= 74)"),
('Age 75-84', "np.logical_and(75 <= dfp['AGEP'], dfp['AGEP'] <= 84)"),
('Age 85 and over', "85 <= dfp['AGEP']")])),
('HOUSING RECORD', collections.OrderedDict([
('Total housing units (TYPE=1)', "dfh['TYPE'] == 1"),
('Total occupied units', "dfh['TEN'].notnull()"),
('Owner occupied units (TEN in 1,2)', "np.logical_or(dfh['TEN'] == 1, dfh['TEN'] == 2)"),
('Renter occupied units (TEN in 3,4)', "np.logical_or(dfh['TEN'] == 3, dfh['TEN'] == 4)"),
('Owned with a mortgage (TEN=1)', "dfh['TEN'] == 1"),
('Owned free and clear (TEN=2)', "dfh['TEN'] == 2"),
('Rented for cash (TEN=3)', "dfh['TEN'] == 3"),
('No cash rent (TEN=4)', "dfh['TEN'] == 4"),
('Total vacant units', "dfh['TEN'].isnull()"),
('For rent (VACS=1)', "dfh['VACS'] == 1"),
('For sale only (VACS=3)', "dfh['VACS'] == 3"),
('All Other Vacant (VACS in 2,4,5,6,7)',
"functools.reduce(np.logical_or, (dfh['VACS'] == vacs for vacs in [2,4,5,6,7]))")]))])
for record_type in records_dfs:
print("'{rt}'".format(rt=record_type))
df = records_dfs[record_type]['dataframe']
wt = records_dfs[record_type]['weight']
wts = records_dfs[record_type]['replicate_weights']
for char in tfmask_test_strs[record_type]:
print(" '{char}'".format(char=char))
# Select the reference verification data
# and the records for the characteristic.
tfmask_ref = dfe_dc['characteristic'] == char
tfmask_test = eval(tfmask_test_strs[record_type][char])
# Calculate and verify the estimate ('est') for the characteristic.
# The estimate is the sum of the sample weights 'WGTP'.
col = 'pums_est_09_to_13'
print(" '{col}':".format(col=col), end=' ')
ref_est = int(dfe_dc.loc[tfmask_ref, col].values[0].replace(',', ''))
test_est = df.loc[tfmask_test, wt].sum()
assert np.isclose(ref_est, test_est, rtol=0, atol=1)
print("(ref, test) = {tup}".format(tup=(ref_est, test_est)))
# Calculate and verify the "direct standard error" ('se') of the estimate.
# The direct standard error is a modified root-mean-square deviation
# using the "replicate weights" 'WGTP[1-80]'.
col = 'pums_se_09_to_13'
print(" '{col}' :".format(col=col), end=' ')
ref_se = dfe_dc.loc[tfmask_ref, col].values[0]
test_se = ((4/80)*((df.loc[tfmask_test, wts].sum() - test_est)**2).sum())**0.5
assert np.isclose(ref_se, test_se, rtol=0, atol=1)
print("(ref, test) = {tup}".format(tup=(ref_se, test_se)))
# Calculate and verify the margin of error ('moe') at the
# 90% confidence level (+/- 1.645 standard errors).
col = 'pums_moe_09_to_13'
print(" '{col}':".format(col=col), end=' ')
ref_moe = dfe_dc.loc[tfmask_ref, col].values[0]
test_moe = 1.645*test_se
assert np.isclose(ref_moe, test_moe, rtol=0, atol=1)
print("(ref, test) = {tup}".format(tup=(ref_moe, test_moe)))
Explanation: PUMS estimates for user verification
End of explanation
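# For reference, the replicate-weight formulas used inline in the loop above,
# wrapped as small helpers (the helper names are ours, not from the Census docs):
# se = sqrt((4/80) * sum_r (estimate_r - estimate)^2) and moe_90 = 1.645 * se.
def direct_standard_error(estimate, replicate_estimates):
    return ((4/80)*((np.asarray(replicate_estimates) - estimate)**2).sum())**0.5
def margin_of_error_90(estimate, replicate_estimates):
    return 1.645*direct_standard_error(estimate, replicate_estimates)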
# Export ipynb to html
for template in ['basic', 'full']:
path_html = os.path.splitext(path_ipynb)[0]+'-'+template+'.html'
cmd = ['jupyter', 'nbconvert', '--to', 'html', '--template', template, path_ipynb, '--output', path_html]
print(' '.join(cmd))
subprocess.run(args=cmd, check=True)
print()
Explanation: Export ipynb to html
End of explanation |
4,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: M-layer experiments
This notebook trains M-layers on the problems discussed in "Intelligent Matrix Exponentiation".
If you are running this locally, the m_layer Python module ships alongside this colab and should already be present.
The code of the m_layer Python module can be downloaded from the google-research GitHub repository.
Step2: Generate a spiral and show extrapolation
Step3: Train an M-layer on multivariate polynomials such as the determinant
Step4: Permanents
Step5: Determinants
Step6: Train an M-layer on periodic data
Step7: Train an M-layer on CIFAR-10 | Python Code:
# Copyright 2020 The Google Research Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import os.path
if os.path.isfile('m_layer.py'):
from m_layer import MLayer
else:
!if ! type "svn" > /dev/null; then sudo apt-get install subversion; fi
!svn export https://github.com/google-research/google-research/trunk/m_layer
from m_layer.m_layer import MLayer
GLOBAL_SEED = 1
import numpy as np
np.random.seed(GLOBAL_SEED)
import itertools
import functools
import operator
import logging
logging.getLogger('tensorflow').disabled = True
import pandas as pd  # needed later for pd.read_csv in the periodic-data section
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
from matplotlib import pylab
print(tf.__version__)
print(tf.config.experimental.list_physical_devices('GPU'))
Explanation: M-layer experiments
This notebook trains M-layers on the problems discussed in "Intelligent Matrix Exponentiation".
If you are running this locally, the m_layer Python module ships alongside this colab and should already be present.
The code of the m_layer Python module can be downloaded from the google-research GitHub repository.
End of explanation
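# Conceptual sketch only, not the library's MLayer implementation: the core idea
# is to map the input linearly to a dim_m x dim_m matrix and then apply the
# matrix exponential. All names and sizes below are illustrative.
def toy_m_layer(x, dim_rep=10, dim_m=5, seed=0):
    rng = np.random.RandomState(seed)
    to_matrix = rng.normal(size=(dim_rep, dim_m, dim_m)).astype(np.float32)
    mat = tf.einsum('bi,imn->bmn', tf.cast(x, tf.float32), to_matrix)
    return tf.reshape(tf.linalg.expm(mat), (-1, dim_m * dim_m))
# e.g. toy_m_layer(np.ones((2, 10))) has shape (2, 25)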
SPIRAL_DIM_REP = 10
SPIRAL_DIM_MATRIX = 10
SPIRAL_LAYER_SIZE = 20
SPIRAL_LR = 0.01
SPIRAL_EPOCHS = 1000
SPIRAL_BATCH_SIZE = 16
def spiral_m_layer_model():
return tf.keras.models.Sequential(
[tf.keras.layers.Dense(SPIRAL_DIM_REP,
input_shape=(2,)),
MLayer(dim_m=SPIRAL_DIM_MATRIX,
with_bias=True,
matrix_squarings_exp=None,
matrix_init='normal'),
tf.keras.layers.ActivityRegularization(l2=1e-3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1, activation='sigmoid')]
)
def spiral_dnn_model(activation_type):
return tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(2,)),
tf.keras.layers.Dense(SPIRAL_LAYER_SIZE,
activation=activation_type),
tf.keras.layers.Dense(SPIRAL_LAYER_SIZE,
activation=activation_type),
tf.keras.layers.Dense(1, activation='sigmoid'),
])
def spiral_generate(n_points, noise=0.5, rng=None, extra_rotation=False):
if rng is None:
rng = np.random.RandomState()
if not extra_rotation:
n = np.sqrt(0.001 + (.25)*rng.rand(n_points, 1)) * 6 * (2 * np.pi)
else:
n = np.sqrt((7.0/36)*rng.rand(n_points, 1)+.25) * 6 * (2 * np.pi)
x = 0.5 * (np.sin(n) * n + (2 * rng.rand(n_points, 1) - 1) * noise)
y = 0.5 * (np.cos(n) * n + (2 * rng.rand(n_points, 1) - 1) * noise)
return (np.vstack((np.hstack((x, y)), np.hstack((-x, -y)))),
np.hstack((np.zeros(n_points), np.ones(n_points))))
def spiral_run(model_type, fig=None, activation_type=None, ):
if fig is None:
fig = pylab.figure(figsize=(8,8), dpi=144)
model = spiral_dnn_model(activation_type) if model_type=="dnn" else\
spiral_m_layer_model()
x_train, y_train = spiral_generate(1000)
x_test, y_test = spiral_generate(333, extra_rotation=True)
model.summary()
opt = tf.keras.optimizers.RMSprop(lr=SPIRAL_LR)
model.compile(loss='binary_crossentropy',
optimizer=opt,
metrics=['accuracy'])
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
monitor='loss', factor=0.2, patience=5, min_lr=1e-5)
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='loss',
patience=30,
min_delta=0.0001,
restore_best_weights=True)
result = model.fit(x_train, y_train, epochs=SPIRAL_EPOCHS,
batch_size=SPIRAL_BATCH_SIZE, verbose=2,
callbacks=[reduce_lr, early_stopping])
n_epochs = len(result.history['loss'])
delta = 0.5 ** 3
xs = np.arange(-14, 14.01, delta)
ys = np.arange(-14, 14.01, delta)
num_samples = len(xs)
a = []
for x in xs:
for y in ys:
a.append([x, y])
t_nn_gen = model.predict(np.array(a))
axes = fig.gca()
XX, YY = np.meshgrid(xs, ys)
axes.contourf(XX, YY, np.arcsinh(t_nn_gen.reshape(XX.shape)),
levels=[0.0, 0.5, 1.0],
colors=[(0.41, 0.67, 0.81, 0.2), (0.89, 0.51, 0.41, 0.2)])
axes.contour(XX, YY, np.arcsinh(t_nn_gen.reshape(XX.shape)),
levels=[0.5])
axes.set_aspect(1)
axes.grid()
axes.plot(x_train[y_train==0, 1], x_train[y_train==0, 0], '.', ms = 2,
label='Class 1')
axes.plot(x_train[y_train==1, 1], x_train[y_train==1, 0], '.', ms = 2,
label='Class 2')
plt.plot(x_test[y_test==1, 1], x_test[y_test==1, 0], '.', ms = .5,
label='Class 2')
plt.plot(x_test[y_test==0, 1], x_test[y_test==0, 0], '.', ms = .5,
label='Class 1')
return fig, n_epochs, result.history['loss'][-1]
fig, n_epochs, loss = spiral_run('m_layer')
Explanation: Generate a spiral and show extrapolation
End of explanation
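# spiral_run can also train the plain feed-forward baseline defined above by
# switching the model type; the activation choice here is just an example:
# fig, n_epochs, loss = spiral_run('dnn', activation_type='relu')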
POLY_BATCH_SIZE = 32
POLY_DIM_MATRIX = 8
POLY_DIM_INPUT_MATRIX = 3
POLY_EPOCHS = 150
POLY_SEED = 123
POLY_LOW = -1
POLY_HIGH = 1
POLY_NUM_SAMPLES = 8192
POLY_LR = 1e-3
POLY_DECAY = 1e-6
def poly_get_model():
return tf.keras.models.Sequential(
[tf.keras.layers.Flatten(input_shape=(POLY_DIM_INPUT_MATRIX,
POLY_DIM_INPUT_MATRIX)),
MLayer(dim_m=POLY_DIM_MATRIX, matrix_init='normal'),
tf.keras.layers.ActivityRegularization(l2=1e-4),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1)]
)
def poly_fun(x, permanent=False):
if permanent:
return sum(
functools.reduce(
operator.mul,
(x[i, pi] for i, pi in enumerate(perm)),
1)
for perm in itertools.permutations(range(x.shape[0])))
return np.linalg.det(x)
def poly_run(permanent=False):
rng = np.random.RandomState(seed=POLY_SEED)
num_train = POLY_NUM_SAMPLES * 5 // 4
x_train = rng.uniform(size=(num_train, POLY_DIM_INPUT_MATRIX,
POLY_DIM_INPUT_MATRIX), low=POLY_LOW,
high=POLY_HIGH)
x_test = rng.uniform(size=(100000, POLY_DIM_INPUT_MATRIX,
POLY_DIM_INPUT_MATRIX), low=POLY_LOW,
high=POLY_HIGH)
y_train = np.array([poly_fun(x, permanent=permanent) for x in x_train])
y_test = np.array([poly_fun(x, permanent=permanent) for x in x_test])
model = poly_get_model()
model.summary()
opt = tf.keras.optimizers.RMSprop(lr=POLY_LR, decay=POLY_DECAY)
model.compile(loss='mse', optimizer=opt)
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
monitor='val_loss', factor=0.2, patience=5, min_lr=1e-5)
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=30,
restore_best_weights=True)
model.fit(x_train, y_train, batch_size=POLY_BATCH_SIZE,
epochs=POLY_EPOCHS,
validation_split=0.2,
shuffle=True,
verbose=2,
callbacks=[reduce_lr, early_stopping])
score_train = model.evaluate(x=x_train, y=y_train)
score_test = model.evaluate(x=x_test, y=y_test)
print('Train, range %s - %s: %s' % (POLY_LOW, POLY_HIGH, score_train))
print('Test, range %s - %s: %s' % (POLY_LOW, POLY_HIGH, score_test))
Explanation: Train an M-layer on multivariate polynomials such as the determinant
End of explanation
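# Sanity check of poly_fun on a 2x2 example (illustrative): for [[1, 2], [3, 4]]
# the permanent is 1*4 + 2*3 = 10 and the determinant is 1*4 - 2*3 = -2.
print(poly_fun(np.array([[1., 2.], [3., 4.]]), permanent=True))
print(poly_fun(np.array([[1., 2.], [3., 4.]]), permanent=False))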
poly_run(permanent=True)
Explanation: Permanents
End of explanation
poly_run(permanent=False)
Explanation: Determinants
End of explanation
PERIODIC_EPOCHS = 1000
PERIODIC_BATCH_SIZE = 128
PERIODIC_LR = 0.00001
PERIODIC_DIM_MATRIX = 10
PERIODIC_INIT_SCALE = 0.01
PERIODIC_DIAG_INIT = 10
PERIODIC_SEED = 123
def periodic_matrix_init(shape, rng=None, **kwargs):
if rng is None:
rng = np.random.RandomState()
data = np.float32(rng.normal(loc=0, scale=PERIODIC_INIT_SCALE, size=shape))
for i in range(shape[1]):
data[:, i, i] -= PERIODIC_DIAG_INIT
return data
def periodic_get_model(rng=None):
if rng is None:
rng = np.random.RandomState()
return tf.keras.models.Sequential([
tf.keras.layers.Dense(
2, input_shape=(1,),
kernel_initializer=tf.keras.initializers.RandomNormal()),
MLayer(PERIODIC_DIM_MATRIX, with_bias=True, matrix_squarings_exp=None,
matrix_init=lambda shape, **kwargs:
periodic_matrix_init(shape, rng=rng, **kwargs)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(1)
])
def periodic_dist2(y_true, y_pred):
return tf.nn.l2_loss(y_true - y_pred)
def periodic_run(get_model):
rng = np.random.RandomState(seed=PERIODIC_SEED)
# See README file for information about this dataset.
with gfile.Open('daily-min-temperatures.csv', 'r') as f:
data = pd.read_csv(f)
dates = data['Date']
y = data['Temp']
temperatures = data['Temp']
y = list(np.convolve(temperatures - np.mean(temperatures), np.full(7, 1 / 7),
mode='valid'))
num_train = 9 * len(y) // 10
num_test = len(y) - num_train
x_all = np.arange(len(y)).tolist()
x_train = x_all[:num_train]
y_train = y[:num_train]
x_test = x_all[num_train:]
y_targets = y[num_train:]
model_to_train = get_model(rng=rng)
input = tf.keras.layers.Input(shape=(1,))
output = model_to_train(input)
model = tf.keras.models.Model(inputs=input, outputs=output)
opt = tf.keras.optimizers.RMSprop(lr=PERIODIC_LR, decay=0)
early_stopping = tf.keras.callbacks.EarlyStopping(restore_best_weights=True)
model.compile(
loss='mean_squared_error', optimizer=opt,
metrics=[periodic_dist2])
history = model.fit(x_train, y_train,
batch_size=PERIODIC_BATCH_SIZE, epochs=PERIODIC_EPOCHS,
shuffle=True, verbose=1, callbacks=[early_stopping])
y_predictions = model.predict(x_all)
plt.plot(x_train, y_train, linewidth=1, alpha=0.7)
plt.plot(x_test, y_targets, linewidth=1, alpha=0.7)
plt.plot(x_all, y_predictions, color='magenta')
plt.legend(['y_train', 'y_targets', 'y_predictions'])
plt.xlim([0, 3650])
plt.ylabel('Temperature (Celsius)')
plt.grid(True, which='major', axis='both')
plt.grid(True, which='minor', axis='both')
xtick_index = [i for i, date in enumerate(dates) if date.endswith('-01-01')]
plt.xticks(ticks=xtick_index,
labels=[x[:4] for x in dates[xtick_index].to_list()],
rotation=30)
plt.show()
periodic_run(periodic_get_model)
Explanation: Train an M-layer on periodic data
End of explanation
CIFAR_DIM_REP = 35
CIFAR_DIM_MAT = 30
CIFAR_LR = 1e-3
CIFAR_DECAY = 1e-6
CIFAR_MOMENTUM = 0.9
CIFAR_BATCH_SIZE = 32
CIFAR_EPOCHS = 150
CIFAR_NAME = 'cifar10'
CIFAR_NUM_CLASSES = 10
def cifar_load_dataset():
train = tfds.load(CIFAR_NAME, split='train', with_info=False, batch_size=-1)
test = tfds.load(CIFAR_NAME, split='test', with_info=False, batch_size=-1)
train_np = tfds.as_numpy(train)
test_np = tfds.as_numpy(test)
x_train, y_train = train_np['image'], train_np['label']
x_test, y_test = test_np['image'], test_np['label']
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
y_train = tf.keras.utils.to_categorical(y_train, CIFAR_NUM_CLASSES)
y_test = tf.keras.utils.to_categorical(y_test, CIFAR_NUM_CLASSES)
x_train_range01 = x_train.astype('float32') / 255
x_test_range01 = x_test.astype('float32') / 255
return (x_train_range01, y_train), (x_test_range01, y_test)
def cifar_get_model():
return tf.keras.models.Sequential(
[
tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
tf.keras.layers.Dense(CIFAR_DIM_REP),
MLayer(dim_m=CIFAR_DIM_MAT, with_bias=True, matrix_squarings_exp=3),
tf.keras.layers.ActivityRegularization(1e-3),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(CIFAR_NUM_CLASSES, activation='softmax')
])
def cifar_run():
(x_train, y_train), (x_test, y_test) = cifar_load_dataset()
model = cifar_get_model()
model.summary()
opt = tf.keras.optimizers.SGD(lr=CIFAR_LR, momentum=CIFAR_MOMENTUM,
decay=CIFAR_DECAY)
model.compile(loss='categorical_crossentropy', optimizer=opt,
metrics=['accuracy'])
reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(
monitor='val_acc', factor=0.2, patience=5, min_lr=1e-5)
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_acc',
patience=15,
restore_best_weights=True)
history = model.fit(
x_train,
y_train,
batch_size=CIFAR_BATCH_SIZE,
epochs=CIFAR_EPOCHS,
validation_split=0.1,
shuffle=True,
verbose=2,
callbacks=[reduce_lr, early_stopping])
scores = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', scores[0])
print('Test accuracy:', scores[1])
cifar_run()
Explanation: Train an M-layer on CIFAR-10
End of explanation |
4,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Structured data prediction using Vertex AI Platform
Learning Objectives
Create a BigQuery Dataset and Google Cloud Storage Bucket
Export from BigQuery to CSVs in GCS
Training on Cloud AI Platform
Deploy trained model
Introduction
In this notebook, you train, evaluate, and deploy a machine learning model to predict a baby's weight.
Step1: Note
Step2: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight. We'll do the same for a GCS bucket for our project too.
Step3: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note
Step4: Augment dataset to simulate missing data
Now we want to augment our dataset with our simulated babyweight data by setting all gender information to Unknown and setting plurality of all non-single births to Multiple(2+).
Step5: Split augmented dataset into train and eval sets
Using hashmonth, apply a modulo to get approximately a 75/25 train-eval split.
Split augmented dataset into train dataset
Step6: Split augmented dataset into eval dataset
Step7: Verify table creation
Verify that you created the dataset and training data table.
Step8: Export from BigQuery to CSVs in GCS
Use BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
Step9: Verify CSV creation
Verify that we correctly created the CSV files in our bucket.
Step10: Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation.
Step11: Training on Cloud AI Platform
Now that we see everything is working locally, it's time to train on the cloud!
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service
Step12: The training job should complete within 15 to 20 minutes. You do not need to wait for this training job to finish before moving forward in the notebook, but will need a trained model.
Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported to. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
Step13: Deploy trained model
Deploying the trained model to act as a REST web service is a simple gcloud call. | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==2.26.0
Explanation: Structured data prediction using Vertex AI Platform
Learning Objectives
Create a BigQuery Dataset and Google Cloud Storage Bucket
Export from BigQuery to CSVs in GCS
Training on Cloud AI Platform
Deploy trained model
Introduction
In this notebook, you train, evaluate, and deploy a machine learning model to predict a baby's weight.
End of explanation
# change these to try this notebook out
BUCKET = 'qwiklabs-gcp-02-2372fbdc4b9d' # Replace with the your bucket name
PROJECT = 'qwiklabs-gcp-02-2372fbdc4b9d' # Replace with your project-id
REGION = 'us-central1'
import os
from google.cloud import bigquery
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = "2.6"
os.environ["PYTHONVERSION"] = "3.7"
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
Set up environment variables and load necessary libraries
Set environment variables so that we can use them throughout the entire notebook. We will be using our project name for our bucket, so you only need to change your project and region.
End of explanation
%%bash
# Create a BigQuery dataset for babyweight if it doesn't exist
datasetexists=$(bq ls -d | grep -w babyweight)
if [ -n "$datasetexists" ]; then
echo -e "BigQuery dataset already exists, let's not recreate it."
else
echo "Creating BigQuery dataset titled: babyweight"
bq --location=US mk --dataset \
--description "Babyweight" \
$PROJECT:babyweight
echo "Here are your current datasets:"
bq ls
fi
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "Here are your current buckets:"
gsutil ls
fi
Explanation: The source dataset
Our dataset is hosted in BigQuery. The CDC's Natality data has details on US births from 1969 to 2008 and is a publicly available dataset, meaning anyone with a GCP account has access. Click here to access the dataset.
The natality dataset is relatively large at almost 138 million rows and 31 columns, but simple to understand. weight_pounds is the target, the continuous value we’ll train a model to predict.
Create a BigQuery Dataset and Google Cloud Storage Bucket
A BigQuery dataset is a container for tables, views, and models built with BigQuery ML. Let's create one called babyweight. We'll do the same for a GCS bucket for our project too.
End of explanation
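If you would like to eyeball the raw source table before transforming it, a small preview query such as the one below works; this cell is not part of the original lab and simply selects the same fields used later.
%%bigquery
-- Preview a handful of rows from the public natality table.
SELECT
    weight_pounds,
    is_male,
    mother_age,
    plurality,
    gestation_weeks
FROM
    publicdata.samples.natality
WHERE
    year > 2000
LIMIT 10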
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data AS
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
mother_age,
CASE
WHEN plurality = 1 THEN "Single(1)"
WHEN plurality = 2 THEN "Twins(2)"
WHEN plurality = 3 THEN "Triplets(3)"
WHEN plurality = 4 THEN "Quadruplets(4)"
WHEN plurality = 5 THEN "Quintuplets(5)"
END AS plurality,
gestation_weeks,
FARM_FINGERPRINT(
CONCAT(
CAST(year AS STRING),
CAST(month AS STRING)
)
) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
Explanation: Create the training and evaluation data tables
Since there is already a publicly available dataset, we can simply create the training and evaluation data tables using this raw input data. First we are going to create a subset of the data limiting our columns to weight_pounds, is_male, mother_age, plurality, and gestation_weeks as well as some simple filtering and a column to hash on for repeatable splitting.
Note: The dataset in the create table code below is the one created previously, e.g. "babyweight".
Preprocess and filter dataset
We have some preprocessing and filtering we would like to do to get our data in the right format for training.
Preprocessing:
* Cast is_male from BOOL to STRING
* Cast plurality from INTEGER to STRING where [1, 2, 3, 4, 5] becomes ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)"]
* Add hashcolumn hashing on year and month
Filtering:
* Only want data for years later than 2000
* Only want baby weights greater than 0
* Only want mothers whose age is greater than 0
* Only want plurality to be greater than 0
* Only want the number of weeks of gestation to be greater than 0
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_augmented_data AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
UNION ALL
SELECT
weight_pounds,
"Unknown" AS is_male,
mother_age,
CASE
WHEN plurality = "Single(1)" THEN plurality
ELSE "Multiple(2+)"
END AS plurality,
gestation_weeks,
hashmonth
FROM
babyweight.babyweight_data
Explanation: Augment dataset to simulate missing data
Now we want to augment our dataset with our simulated babyweight data by setting all gender information to Unknown and setting plurality of all non-single births to Multiple(2+).
End of explanation
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_train AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) < 3
Explanation: Split augmented dataset into train and eval sets
Using hashmonth, apply a modulo to get approximately a 75/25 train-eval split.
Split augmented dataset into train dataset
End of explanation
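Hash-based splitting is repeatable because the same key always lands in the same bucket, so re-running the queries reproduces the same partition. Below is a tiny Python illustration of the idea; it uses hashlib only as a stand-in for BigQuery's FARM_FINGERPRINT and is not part of the original notebook.
import hashlib

# Hash a year-month key into 4 buckets: buckets 0-2 -> train (~75%), bucket 3 -> eval (~25%).
for key in ["2003-1", "2003-2", "2004-7", "2005-11"]:
    bucket = int(hashlib.md5(key.encode()).hexdigest(), 16) % 4
    print(key, "-> bucket", bucket, "->", "train" if bucket < 3 else "eval")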
%%bigquery
CREATE OR REPLACE TABLE
babyweight.babyweight_data_eval AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
babyweight.babyweight_augmented_data
WHERE
ABS(MOD(hashmonth, 4)) = 3
Explanation: Split augmented dataset into eval dataset
End of explanation
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
Explanation: Verify table creation
Verify that you created the dataset and training data table.
End of explanation
# Construct a BigQuery client object.
client = bigquery.Client()
dataset_name = "babyweight"
# Create dataset reference object
dataset_ref = client.dataset(
dataset_id=dataset_name, project=client.project)
# Export both train and eval tables
for step in ["train", "eval"]:
destination_uri = os.path.join(
"gs://", BUCKET, dataset_name, "data", "{}*.csv".format(step))
table_name = "babyweight_data_{}".format(step)
table_ref = dataset_ref.table(table_name)
extract_job = client.extract_table(
table_ref,
destination_uri,
# Location must match that of the source table.
location="US",
) # API request
extract_job.result() # Waits for job to complete.
print("Exported {}:{}.{} to {}".format(
client.project, dataset_name, table_name, destination_uri))
Explanation: Export from BigQuery to CSVs in GCS
Use BigQuery Python API to export our train and eval tables to Google Cloud Storage in the CSV format to be used later for TensorFlow/Keras training. We'll want to use the dataset we've been using above as well as repeat the process for both training and evaluation data.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*.csv
Explanation: Verify CSV creation
Verify that we correctly created the CSV files in our bucket.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/data/*000000000000.csv
Explanation: Check data exists
Verify that you previously created CSV files we'll be using for training and evaluation.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
gcloud ai-platform jobs submit training ${JOBID} \
--region=${REGION} \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=${OUTDIR} \
--staging-bucket=gs://${BUCKET} \
--master-machine-type=n1-standard-8 \
--scale-tier=CUSTOM \
--runtime-version=${TFVERSION} \
--python-version=${PYTHONVERSION} \
-- \
--train_data_path=gs://${BUCKET}/babyweight/data/train*.csv \
--eval_data_path=gs://${BUCKET}/babyweight/data/eval*.csv \
--output_dir=${OUTDIR} \
--num_epochs=10 \
--train_examples=10000 \
--eval_steps=100 \
--batch_size=32 \
--nembeds=8
Explanation: Training on Cloud AI Platform
Now that we see everything is working locally, it's time to train on the cloud!
To submit to the Cloud we use gcloud ai-platform jobs submit training [jobname] and simply specify some additional parameters for AI Platform Training Service:
- jobname: A unique identifier for the Cloud job. We usually append system time to ensure uniqueness
- job-dir: A GCS location to upload the Python package to
- runtime-version: Version of TF to use.
- python-version: Version of Python to use.
- region: Cloud region to train in. See here for supported AI Platform Training Service regions
Below the -- \ we add in the arguments for our task.py file.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/babyweight/trained_model
%%bash
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1)
gsutil ls ${MODEL_LOCATION}
Explanation: The training job should complete within 15 to 20 minutes. You do not need to wait for this training job to finish before moving forward in the notebook, but will need a trained model.
Check our trained model files
Let's check the directory structure of the outputs of our trained model in the folder we exported to. We'll want to deploy the saved_model.pb within the timestamped directory as well as the variable values in the variables folder. Therefore, we need the path of the timestamped directory so that everything within it can be found by Cloud AI Platform's model deployment service.
End of explanation
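While the job runs, you can check on it from the notebook. The cell below is an optional sketch, not part of the original lab; substitute the job name that was printed when the job was submitted to inspect a specific job.
%%bash
# List recent AI Platform training jobs and their states.
gcloud ai-platform jobs list --limit=5
# For details (and error messages) on a specific job:
# gcloud ai-platform jobs describe <JOB_NAME>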
%%bash
gcloud config set ai_platform/region global
%%bash
MODEL_NAME="babyweight"
MODEL_VERSION="ml_on_gcp"
MODEL_LOCATION=$(gsutil ls -ld -- gs://${BUCKET}/babyweight/trained_model/2* \
| tail -1 | tr -d '[:space:]')
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION"
# gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
# gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions ${REGION}
gcloud ai-platform versions create ${MODEL_VERSION} \
--model=${MODEL_NAME} \
--origin=${MODEL_LOCATION} \
--runtime-version=2.6 \
--python-version=3.7
Explanation: Deploy trained model
Deploying the trained model to act as a REST web service is a simple gcloud call.
End of explanation |
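With the model version deployed, it can serve online predictions. The original lab stops at deployment, so the cell below is only a sketch: the JSON field names and value formats must match the serving input signature defined in the trainer's task.py, which is assumed here to accept the four raw features.
%%bash
# Sketch: request an online prediction from the deployed version (field names are assumptions).
cat > /tmp/instances.json <<EOF
{"is_male": "Unknown", "mother_age": 26.0, "plurality": "Single(1)", "gestation_weeks": 39}
EOF
gcloud ai-platform predict \
    --model=babyweight \
    --version=ml_on_gcp \
    --json-instances=/tmp/instances.json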
4,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
One interesting question for open source communities is whether they are growing. Often the founding members of a community would like to see new participants join and become active in the community. This is important for community longevity; ultimately new members are required to take leadership roles if a project is to sustain itself over time.
The data available for community participation is very granular, as it can include the exact traces of the messages sent by participants over a long history. One way of summarizing this information to get a sense of overall community growth is a cohort visualization.
In this notebook, we will produce a visualization of changing participation over time.
Step1: Archive objects have a method that reports for each user how many emails they sent each day.
Step2: This plot will show when each sender sent their first post. A slow ascent means a period where many people joined.
Step3: This is the same data, but plotted as a histogram. It's easier to see the trends here.
Step4: While this is interesting, what if we are interested in how much different "cohorts" of participants stick around and continue to participate in the community over time?
What we want to do is divide the participants into N cohorts based on the percentile of when they joined the mailing list. I.e., the first 1/N people to participate in the mailing list are the first cohort. The second 1/N people are in the second cohort. And so on.
Then we can combine the activities of each cohort and do a stackplot of how each cohort has participated over time.
Step5: This gives us a sense of when new members are taking the lead in the community. But what if the old members are just changing their email addresses? To test that case, we should clean our data with entity resolution techniques. | Python Code:
url = "6lo"
arx = Archive(url,archive_dir="../archives")
arx.data[:1]
Explanation: One interesting question for open source communities is whether they are growing. Often the founding members of a community would like to see new participants join and become active in the community. This is important for community longevity; ultimately new members are required to take leadership roles if a project is to sustain itself over time.
The data available for community participation is very granular, as it can include the exact traces of the messages sent by participants over a long history. One way of summarizing this information to get a sense of overall community growth is a cohort visualization.
In this notebook, we will produce a visualization of changing participation over time.
End of explanation
act = arx.get_activity()
Explanation: Archive objects have a method that reports for each user how many emails they sent each day.
End of explanation
fig = plt.figure(figsize=(12.5, 7.5))
#act.idxmax().order().T.plot()
(act > 0).idxmax().order().plot()
fig.axes[0].yaxis_date()
Explanation: This plot will show when each sender sent their first post. A slow ascent means a period where many people joined.
End of explanation
fig = plt.figure(figsize=(12.5, 7.5))
(act > 0).idxmax().order().hist()
fig.axes[0].xaxis_date()
Explanation: This is the same data, but plotted as a histogram. It's easier to see the trends here.
End of explanation
n = 5
from bigbang import plot
# A series, indexed by users, of the day of their first post
# This series is ordered by time
first_post = (act > 0).idxmax().order()
# Splitting the previous series into five equal parts,
# each representing a chronological quintile of list members
cohorts = np.array_split(first_post,n)
cohorts = [c.keys() for c in cohorts]
plot.stack(act,partition=cohorts,smooth=10)
Explanation: While this is interesting, what if we are interested in how much different "cohorts" of participants stick around and continue to participate in the community over time?
What we want to do is divide the participants into N cohorts based on the percentile of when they joined the mailing list. I.e., the first 1/N people to participate in the mailing list are the first cohort. The second 1/N people are in the second cohort. And so on.
Then we can combine the activities of each cohort and do a stackplot of how each cohort has participated over time.
End of explanation
cohorts[1].index.values
Explanation: This gives us a sense of when new members are taking the lead in the community. But what if the old members are just changing their email addresses? To test that case, we should clean our data with entity resolution techniques.
End of explanation |
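As a rough first pass at such cleaning (not part of this notebook), one could at least merge activity columns whose sender addresses normalize to the same string; the sketch below assumes the columns of act are plain email addresses, and anything more robust needs real entity resolution on names and addresses.
import re

def normalize_address(addr):
    # Lowercase and drop any '+tag' before the @, so 'Jane+ietf@x.org' and 'jane@x.org' merge.
    local, _, domain = addr.lower().partition('@')
    return re.sub(r'\+.*$', '', local) + '@' + domain

# Sum the daily activity of columns that normalize to the same address.
act_merged = act.groupby(normalize_address, axis=1).sum()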
4,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
QuTiP example
Step1: Energy spectrum of three coupled qubits
Step2: Versions | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from numpy import pi
from qutip import *
Explanation: QuTiP example: Energy-levels of a quantum system as a function of a single parameter
J.R. Johansson and P.D. Nation
For more information about QuTiP see http://qutip.org
End of explanation
def compute(w1list, w2, w3, g12, g13):
# Pre-compute operators for the hamiltonian
sz1 = tensor(sigmaz(), qeye(2), qeye(2))
sx1 = tensor(sigmax(), qeye(2), qeye(2))
sz2 = tensor(qeye(2), sigmaz(), qeye(2))
sx2 = tensor(qeye(2), sigmax(), qeye(2))
sz3 = tensor(qeye(2), qeye(2), sigmaz())
sx3 = tensor(qeye(2), qeye(2), sigmax())
idx = 0
evals_mat = np.zeros((len(w1list),2*2*2))
for w1 in w1list:
# evaluate the Hamiltonian
H = w1 * sz1 + w2 * sz2 + w3 * sz3 + g12 * sx1 * sx2 + g13 * sx1 * sx3
# find the energy eigenvalues of the composite system
evals, ekets = H.eigenstates()
evals_mat[idx,:] = np.real(evals)
idx += 1
return evals_mat
w1 = 1.0 * 2 * pi # atom 1 frequency: sweep this one
w2 = 0.9 * 2 * pi # atom 2 frequency
w3 = 1.1 * 2 * pi # atom 3 frequency
g12 = 0.05 * 2 * pi # atom1-atom2 coupling strength
g13 = 0.05 * 2 * pi # atom1-atom3 coupling strength
w1list = np.linspace(0.75, 1.25, 50) * 2 * pi # atom 1 frequency range
evals_mat = compute(w1list, w2, w3, g12, g13)
fig, ax = plt.subplots(figsize=(12,6))
for n in [1,2,3]:
ax.plot(w1list / (2*pi), (evals_mat[:,n]-evals_mat[:,0]) / (2*pi), 'b')
ax.set_xlabel('Energy splitting of atom 1')
ax.set_ylabel('Eigenenergies')
ax.set_title('Energy spectrum of three coupled qubits');
Explanation: Energy spectrum of three coupled qubits
End of explanation
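The avoided crossings in the plot come from the qubit-qubit couplings. As an optional check (not in the original example), rerunning the same sweep with g12 = g13 = 0 makes the levels simply cross:
# Same sweep without coupling: the transition energies now cross instead of repelling.
evals_uncoupled = compute(w1list, w2, w3, 0.0, 0.0)

fig, ax = plt.subplots(figsize=(12, 6))
for n in [1, 2, 3]:
    ax.plot(w1list / (2*pi), (evals_uncoupled[:, n] - evals_uncoupled[:, 0]) / (2*pi), 'r--')
ax.set_xlabel('Energy splitting of atom 1')
ax.set_ylabel('Eigenenergies')
ax.set_title('Energy spectrum without qubit-qubit coupling');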
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation |
4,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to TensorFlow, now leveraging tensors!
In this notebook, we modify our intro to TensorFlow notebook to use tensors in place of our for loop. This is a derivation of Jared Ostmeyer's Naked Tensor code.
The initial steps are identical to the earlier notebook
Step1: Define the cost as a tensor -- more elegant than a for loop and enables distributed computing in TensorFlow
Step2: The remaining steps are also identical to the earlier notebook! | Python Code:
import numpy as np
np.random.seed(42)
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
tf.set_random_seed(42)
xs = [0., 1., 2., 3., 4., 5., 6., 7.]
ys = [-.82, -.94, -.12, .26, .39, .64, 1.02, 1.]
fig, ax = plt.subplots()
_ = ax.scatter(xs, ys)
m = tf.Variable(-0.5)
b = tf.Variable(1.0)
Explanation: Introduction to TensorFlow, now leveraging tensors!
In this notebook, we modify our intro to TensorFlow notebook to use tensors in place of our for loop. This is a derivation of Jared Ostmeyer's Naked Tensor code.
The initial steps are identical to the earlier notebook
End of explanation
ys_model = m*xs+b
total_error = tf.reduce_sum((ys-ys_model)**2) # use an op to calculate SSE across all values instead of one by one
Explanation: Define the cost as a tensor -- more elegant than a for loop and enables distributed computing in TensorFlow
End of explanation
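For contrast, the earlier notebook accumulated the squared errors in a Python loop, one data point at a time. The snippet below is a reconstruction of that style for comparison only (it is not taken from the other notebook); the optimizer that follows still uses total_error.
# For-loop version of the same cost, shown only for comparison.
total_error_loop = 0.0
for x, y in zip(xs, ys):
    y_model = m*x + b
    total_error_loop += (y - y_model)**2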
optimizer_operation = tf.train.GradientDescentOptimizer(learning_rate=0.001).minimize(total_error)
initializer_operation = tf.global_variables_initializer()
with tf.Session() as session:
session.run(initializer_operation)
n_epochs = 1000 # 10, then 1000
for iteration in range(n_epochs):
session.run(optimizer_operation)
slope, intercept = session.run([m, b])
slope
intercept
y_hat = intercept + slope*np.array(xs)
pd.DataFrame(list(zip(ys, y_hat)), columns=['y', 'y_hat'])
fig, ax = plt.subplots()
ax.scatter(xs, ys)
x_min, x_max = ax.get_xlim()
y_min, y_max = intercept, intercept + slope*(x_max-x_min)
ax.plot([x_min, x_max], [y_min, y_max])
_ = ax.set_xlim([x_min, x_max])
Explanation: The remaining steps are also identical to the earlier notebook!
End of explanation |
4,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Interactive Monty Hall Simulation
nbinteract was designed to make interactive explanations easy to create. In this tutorial, we will show the process of writing a simulation from scratch and visualizing the results interactively.
In this section, you will create an interactive simulation of the Monty Hall Problem. You may continue writing code in the notebook from the previous section or create a new one for this section.
The Monty Hall Problem
The Monty Hall Problem (Wikipedia) is a famous probability problem that has stumped many, mathematicians included. The problem goes something like this
Step1: Note that the example_num argument is passed in but not used in the monty_hall function. Although it's unneeded for the function, it is easier to use interact to call functions when they have arguments to manipulate
Step2: By interacting with the function above, we are able to informally verify that the function never allows the host to open a door with a car behind it. Even though the function is random we are able to use interaction to examine its long-term behavior!
We'll continue by defining a function to simulate a game of Monty Hall and output the winning strategy for that game
Step3: Again, a bit of interaction lets us quickly examine the behavior of winner. We can see that switch appears more often than stay.
Brief Introduction to Plotting with nbinteract
Let's create an interactive bar chart of the number of times each strategy wins. We'll use nbinteract's plotting functionality.
nbi.bar creates a bar chart
Step4: To make an interactive chart, pass a response function in place of one or both of bar's arguments.
Step5: Visualizing the Winners
Now, let's turn back to our original goal
Step6: Note that by default the plot will adjust its y-axis to match the limits of the data. We can manually set the y-axis limits to better visualize this plot being built up. We will also add labels to our plot
Step7: We can get even fancy and use the Play widget from ipywidgets to animate the plot.
Step8: Now we have an interactive, animated bar plot showing the distribution of wins over time for both Monty Hall strategies. This is a convincing argument that switching is better than staying. In fact, the bar plot above suggests that switching is about twice as likely to win as staying!
Simulating Sets of Games
Is switching actually twice as likely to win? We can again use simulation to answer this question by simulating sets of 50 games at a time, recording the proportion of times switch wins.
Step9: We can then define a function to play sets of games and generate a list of win proportions for each set
Step10: Interacting with generate_proportions shows the relationship between its arguments sample_size and repetitions more quickly than reading the function itself!
Visualizing Proportions
We can then use nbi.hist to show these proportions being computed over runs.
Again, we pre-compute the simulations and interact with a function that takes a slice of the simulations to make the interaction faster.
Step11: As with last time, it's illustrative to specify the limits of the axes
Step12: We can see that the distribution of wins is centered at roughly 0.66 but the distribution almost spans the entire x-axis. Will increasing the sample size make our distribution more narrow? Will increasing repetitions do the trick? Or both? We can find out through simulation and interaction.
We'll start with increasing the sample size
Step13: So increasing the sample size makes the distribution narrower. We can now see more clearly that the distribution is centered at 0.66.
We can repeat the process for the number of repetitions | Python Code:
from ipywidgets import interact
import numpy as np
import random
PRIZES = ['Car', 'Goat 1', 'Goat 2']
def monty_hall(example_num=0):
'''
Simulates one round of the Monty Hall Problem. Outputs a tuple of
(result if stay, result if switch, result behind opened door) where
each result is one of PRIZES.
'''
pick = random.choice(PRIZES)
opened = random.choice(
[p for p in PRIZES if p != pick and p != 'Car']
)
remainder = next(p for p in PRIZES if p != pick and p != opened)
return (pick, remainder, opened)
Explanation: An Interactive Monty Hall Simulation
nbinteract was designed to make interactive explanations easy to create. In this tutorial, we will show the process of writing a simulation from scratch and visualizing the results interactively.
In this section, you will create an interactive simulation of the Monty Hall Problem. You may continue writing code in the notebook from the previous section or create a new one for this section.
The Monty Hall Problem
The Monty Hall Problem (Wikipedia) is a famous probability problem that has stumped many, mathematicians included. The problem goes something like this:
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to switch your choice to door No. 2?" Is it to your advantage to switch your choice?
Perhaps unintuitively, you will win the prize about twice as often if you switch doors. We can show this through simulation.
Simulating a Game
One way to write an interactive explanation is to write functions and create interactions for each one as applicable. Composing the functions allows you to create more complicated processes. nbinteract also provides tools for interactive visualizations as we will soon see.
Let's start with defining a function to simulate one round of the Monty Hall Problem.
End of explanation
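As an aside (not part of the tutorial text), the two-thirds claim can also be checked without simulation by conditioning on the first pick: staying wins only when the first pick is the car, and switching wins whenever it is a goat, because the host then opens the other goat.
# Brute-force the three equally likely first picks.
stay_wins = sum(1 for first_pick in PRIZES if first_pick == 'Car')
switch_wins = sum(1 for first_pick in PRIZES if first_pick != 'Car')
print(f'P(win | stay) = {stay_wins}/{len(PRIZES)}, P(win | switch) = {switch_wins}/{len(PRIZES)}')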
interact(monty_hall, example_num=(0, 100));
Explanation: Note that the example_num argument is passed in but not used in the monty_hall function. Although it's unneeded for the function, it is easier to use interact to call functions when they have arguments to manipulate:
End of explanation
def winner(example_num=0):
'''
Plays a game of Monty Hall. If staying with the original door wins
a car, return 'stay'. Otherwise, the remaining door contains the car
so 'switch' would have won.
'''
picked, _, _ = monty_hall()
return 'stay' if picked == 'Car' else 'switch'
interact(winner, example_num=(0, 100));
Explanation: By interacting with the function above, we are able to informally verify that the function never allows the host to open a door with a car behind it. Even though the function is random we are able to use interaction to examine its long-term behavior!
We'll continue by defining a function to simulate a game of Monty Hall and output the winning strategy for that game:
End of explanation
import nbinteract as nbi
nbi.bar(['a', 'b'], [4, 6])
Explanation: Again, a bit of interaction lets us quickly examine the behavior of winner. We can see that switch appears more often than stay.
Brief Introduction to Plotting with nbinteract
Let's create an interactive bar chart of the number of times each strategy wins. We'll use nbinteract's plotting functionality.
nbi.bar creates a bar chart:
End of explanation
# This function generates the x-values
def categories(n):
return list('abcdefg')[:n]
# This function generates the y-values (heights of bars)
# The y response function always takes in the x-values as its
# first argument
def offset_y(xs, offset):
num_categories = len(xs)
return np.arange(num_categories) + offset
# Each argument of the response functions is passed in as a keyword
# argument to `nbi.bar` in the same format as `interact`
nbi.bar(categories, offset_y, n=(1, 7), offset=(0, 10))
Explanation: To make an interactive chart, pass a response function in place of one or both of bar's arguments.
End of explanation
categories = ['stay', 'switch']
winners = [winner() for _ in range(1000)]
# Note that the the first argument to the y response function
# will be the x-values which we don't need
def won(_, num_games):
'''
Outputs a 2-tuple of the number of times each strategy won
after num_games games.
'''
return (winners[:num_games].count('stay'),
winners[:num_games].count('switch'))
nbi.bar(categories, won, num_games=(1, 1000))
Explanation: Visualizing the Winners
Now, let's turn back to our original goal: plotting the winners as games are played.
We can call winner many times and use nbi.bar to show the bar chart as it's built over the trials.
Note that we compute the results before defining our function won. This has two benefits over running the simulation directly in won:
It gives us consistency in our interaction. If we run a random simulation in won, moving the slider from 500 to a different number back to 500 will give us a slightly different bar chart.
It makes the interaction smoother since less work is being done each time the slider is moved.
End of explanation
options = {
'title': 'Number of times each strategy wins',
'xlabel': 'Strategy',
'ylabel': 'Number of wins',
'ylim': (0, 700),
}
nbi.bar(categories, won, options=options, num_games=(1, 1000))
Explanation: Note that by default the plot will adjust its y-axis to match the limits of the data. We can manually set the y-axis limits to better visualize this plot being built up. We will also add labels to our plot:
End of explanation
from ipywidgets import Play
nbi.bar(categories, won, options=options,
num_games=Play(min=0, max=1000, step=10, value=0, interval=17))
Explanation: We can get even fancy and use the Play widget from ipywidgets to animate the plot.
End of explanation
def prop_wins(sample_size):
'''Returns proportion of times switching wins after sample_size games.'''
return sum(winner() == 'switch' for _ in range(sample_size)) / sample_size
interact(prop_wins, sample_size=(10, 100));
Explanation: Now we have an interactive, animated bar plot showing the distribution of wins over time for both Monty Hall strategies. This is a convincing argument that switching is better than staying. In fact, the bar plot above suggests that switching is about twice as likely to win as staying!
Simulating Sets of Games
Is switching actually twice as likely to win? We can again use simulation to answer this question by simulating sets of 50 games at a time, recording the proportion of times switch wins.
End of explanation
def generate_proportions(sample_size, repetitions):
'''
Returns an array of length repetitions. Each element in the list is the
proportion of times switching won in sample_size games.
'''
return np.array([prop_wins(sample_size) for _ in range(repetitions)])
interact(generate_proportions, sample_size=(10, 100), repetitions=(10, 100));
Explanation: We can then define a function to play sets of games and generate a list of win proportions for each set:
End of explanation
# Play the game 10 times, recording the proportion of times switching wins.
# Repeat 100 times to record 100 proportions
proportions = generate_proportions(sample_size=10, repetitions=100)
def props_up_to(num_sets):
return proportions[:num_sets]
nbi.hist(props_up_to, num_sets=Play(min=0, max=100, value=0, interval=50))
Explanation: Interacting with generate_proportions shows the relationship between its arguments sample_size and repetitions more quickly than reading the function itself!
Visualizing Proportions
We can then use nbi.hist to show these proportions being computed over runs.
Again, we pre-compute the simulations and interact with a function that takes a slice of the simulations to make the interaction faster.
End of explanation
options = {
'title': 'Distribution of win proportion over 100 sets of 10 games when switching',
'xlabel': 'Proportions',
'ylabel': 'Percent per area',
'xlim': (0.3, 1),
'ylim': (0, 3),
'bins': 7,
}
nbi.hist(props_up_to, options=options, num_sets=Play(min=0, max=100, value=0, interval=50))
Explanation: As with last time, it's illustrative to specify the limits of the axes:
End of explanation
varying_sample_size = [generate_proportions(sample_size, repetitions=100)
for sample_size in range(10, 101)]
def props_for_sample_size(sample_size):
return varying_sample_size[sample_size - 10]
changed_options = {
'title': 'Distribution of win proportions as sample size increases',
'ylim': (0, 6),
'bins': 20,
}
nbi.hist(props_for_sample_size,
options={**options, **changed_options},
sample_size=Play(min=10, max=100, value=10, interval=50))
Explanation: We can see that the distribution of wins is centered at roughly 0.66 but the distribution almost spans the entire x-axis. Will increasing the sample size make our distribution more narrow? Will increasing repetitions do the trick? Or both? We can find out through simulation and interaction.
We'll start with increasing the sample size:
End of explanation
varying_reps = [generate_proportions(sample_size=10, repetitions=reps) for reps in range(10, 101)]
def props_for_reps(reps):
return varying_reps[reps - 10]
changed_options = {
'title': 'Distribution of win proportions as repetitions increase',
'ylim': (0, 5),
}
nbi.hist(props_for_reps,
options={**options, **changed_options},
reps=Play(min=10, max=100, value=10, interval=50))
Explanation: So increasing the sample size makes the distribution narrower. We can now see more clearly that the distribution is centered at 0.66.
We can repeat the process for the number of repetitions:
End of explanation |
4,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step5: Tutorial
Step6: 2. Harmonic Oscillator Example
In this first example we apply the GPFA to spike train data derived from dynamics of a harmonic oscillator defined in a 2-dimensional latent variable space. The aim is to extract these 2-dimensional latent dynamics from the spike train data.
2.1. Generate synthetic spike train data
Here we generate 50-dimensional synthetic spike train data based on a trajectory of a 2-dimensional harmonic oscillator, as described in Section 1.1.
Step7: Let's see how the trajectory and the spike trains look like.
Step8: Thus, we have generated 50-dimensional spike train data, derived from 2-dimensional latent dynamics, i.e., two cycles of circular rotation.
2.2. Apply GPFA to the generated data
Now we try to extract the original latent dynamics from the generated spike train data, by means of GPFA.
We first initialize an instance of the gpfa.GPFA() class.
One can specify some parameters for model fitting at this point.
Here we set the size of the bin for spike train binning to 20 ms, and the dimensionality of latent variables to 2.
Step9: Then we call the fit() method of the class, with the generated spike train data as input.
This fits a GPFA model to the given data, yielding estimates of the model parameters that best explain the data, which are stored in the params_estimated attribute of the class.
Here we use the first half of the trials for fitting.
Step10: Then we transform the spike trains from the remaining half of the trials into tranjectories in the latent variable space, using the transform() method.
Step11: Let's see how the extracted trajectories look like.
Step12: GPFA successfuly exatracted, as the trial averaged trajectory, the two cycles of rotation in 2-dimensional latent space from the 50-dimensional spike train data.
In the above application we split the trials into two halves and performed fitting and transforming separately on these two sets of trials.
One can also simply perform fitting and transforming on the whole dataset to obtain latent trajectories for all trials.
In such a scenario, the fit_transform() method can be used to perform the fitting and transforming at once, as shown in the example below.
Step13: We obtain almost the same latent dynamics, but single trial trajectories are slightly modified owing to an increased amount of the data used for fitting.
3. Lorentz System Example
3.1. Generate synthetic spike train data
Now we move on to the next example.
Here we generate 50-dimensional synthetic spike train data based on a trajectory of a 3-dimensional Lorentz system.
Note that, as we want to adopt a part of the trajectory where the double-wing structure of the attractor is fully developed as "trial", we drop off an initial part of the trajectory as "transient".
Step14: Let's plot the obtained trajectory and the spike trains.
Step15: The 3-dimensional latent trajectory exhibit a charactistic structure of the Lorentz attractor
Step16: Let's see how well the method worked in this case.
Step17: Again, the characteristic structure of the original latent dynamics was successfully extracted by GPFA.
Let's take a closer look into the time series of the extracted latent variables, and compare them with the x, y, and z time series of the original Lorentz system.
Step18: Any of the extracted dimension does not correspond solely to a single dimension of the original latent dynamics. In addition, the amplitude of Dim 3 is much smaller than the other two, reflecting the fact that the dimensionality of the original latent dynamics is close to 2, evident from the very similar time series of $x$ and $y$ of the original latent dynamics.
4. Cross-validation
The gpfa.GPFA() class is compatible with the cross-validation functions of sklearn.model_selection, such that users can perform cross-validation to search for a set of parameters yielding best performance using these functions.
Here we demonstrate a use of the sklearn.model_selection.cross_val_score() function to search for an optimal dimension of the latent variables for the spike train data derived from the Lorentz system.
We vary the dimensionality between 1 and 5, and perform 3-fold cross-validation for each dimensionality value, to obtain an estimate of the log-likelihood of the data under the GPFA model with the given dimensionality.
Note
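For reference, a minimal sketch of the cross-validation loop described here is shown below; it is not the notebook's own code (which appears later), and it assumes the Lorentz-derived spike trains from Section 3.1 are stored in a variable named spiketrains_lorentz.
from sklearn.model_selection import cross_val_score
# cross_val_score relies on GPFA's score() method, i.e. the data log-likelihood.
x_dims = [1, 2, 3, 4, 5]
log_likelihoods = []
for x_dim in x_dims:
    gpfa_cv = GPFA(bin_size=20*pq.ms, x_dim=x_dim)
    cv_scores = cross_val_score(gpfa_cv, spiketrains_lorentz, cv=3)
    log_likelihoods.append(np.mean(cv_scores))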
Step19: Let's plot the obtained log-likeliyhood as a function of the dimensionality.
Step20: The red cross denotes the maximum log-likelihood, which is taken at the dimensionality of 2.
This means that the best-fitting GPFA model is the one with a latent dimensionality of 2, which does not match the ground-truth dimensionality of 3 in this example.
This "underestimate" of dimensionality would possibly be becouse the dimensionality of the Lorentz attractor is very close to 2 (to be precise, its Hausdorff dimension is estimated to be 2.06... [3]), and the stochastic "encoding" of the trajectory by spike trains would not allow for reprsenting such a subtle excess of dimensionality above 2.
References
[1] Yu MB, Cunningham JP, Santhanam G, Ryu SI, Shenoy K V, Sahani M (2009) Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 102 | Python Code:
import numpy as np
from scipy.integrate import odeint
import quantities as pq
import neo
from elephant.spike_train_generation import inhomogeneous_poisson_process
def integrated_oscillator(dt, num_steps, x0=0, y0=1, angular_frequency=2*np.pi*1e-3):
Parameters
----------
dt : float
Integration time step in ms.
num_steps : int
Number of integration steps -> max_time = dt*(num_steps-1).
x0, y0 : float
Initial values in three dimensional space.
angular_frequency : float
Angular frequency in 1/ms.
Returns
-------
t : (num_steps) np.ndarray
Array of timepoints
(2, num_steps) np.ndarray
Integrated two-dimensional trajectory (x, y, z) of the harmonic oscillator
assert isinstance(num_steps, int), "num_steps has to be integer"
t = dt*np.arange(num_steps)
x = x0*np.cos(angular_frequency*t) + y0*np.sin(angular_frequency*t)
y = -x0*np.sin(angular_frequency*t) + y0*np.cos(angular_frequency*t)
return t, np.array((x, y))
def integrated_lorenz(dt, num_steps, x0=0, y0=1, z0=1.05,
sigma=10, rho=28, beta=2.667, tau=1e3):
Parameters
----------
dt :
Integration time step in ms.
num_steps : int
Number of integration steps -> max_time = dt*(num_steps-1).
x0, y0, z0 : float
Initial values in three dimensional space
sigma, rho, beta : float
Parameters defining the lorenz attractor
tau : characteristic timescale in ms
Returns
-------
t : (num_steps) np.ndarray
Array of timepoints
(3, num_steps) np.ndarray
Integrated three-dimensional trajectory (x, y, z) of the Lorenz attractor
def _lorenz_ode(point_of_interest, timepoint, sigma, rho, beta, tau):
Compute the partial derivatives of the Lorenz attractor at the point (x, y, z).
Parameters
----------
point_of_interest : tuple
Tupel containing coordinates (x,y,z) in three dimensional space.
timepoint : a point of interest in time
dt :
Integration time step in ms.
num_steps : int
Number of integration steps -> max_time = dt*(num_steps-1).
sigma, rho, beta : float
Parameters defining the lorenz attractor
tau : characteristic timescale in ms
Returns
-------
x_dot, y_dot, z_dot : float
Values of the lorenz attractor's partial derivatives
at the point x, y, z.
x, y, z = point_of_interest
x_dot = (sigma*(y - x)) / tau
y_dot = (rho*x - y - x*z) / tau
z_dot = (x*y - beta*z) / tau
return x_dot, y_dot, z_dot
assert isinstance(num_steps, int), "num_steps has to be integer"
t = dt*np.arange(num_steps)
poi = (x0, y0, z0)
return t, odeint(_lorenz_ode, poi, t, args=(sigma, rho, beta, tau)).T
def random_projection(data, embedding_dimension, loc=0, scale=None):
Parameters
----------
data : np.ndarray
Data to embed, shape=(M, N)
embedding_dimension : int
Embedding dimension, dimensionality of the space to project to.
loc : float or array_like of floats
Mean (“centre”) of the distribution.
scale : float or array_like of floats
Standard deviation (spread or “width”) of the distribution.
Returns
-------
np.ndarray
Random (normal) projection of input data, shape=(dim, N)
See Also
--------
np.random.normal()
if scale is None:
scale = 1 / np.sqrt(data.shape[0])
projection_matrix = np.random.normal(loc, scale, (embedding_dimension, data.shape[0]))
return np.dot(projection_matrix, data)
def generate_spiketrains(instantaneous_rates, num_trials, timestep):
Parameters
----------
instantaneous_rates : np.ndarray
Array containing time series.
timestep :
Sample period.
num_steps : int
Number of timesteps -> max_time = timestep*(num_steps-1).
Returns
-------
spiketrains : list of neo.SpikeTrains
List containing spiketrains of inhomogeneous Poisson
processes based on given instantaneous rates.
spiketrains = []
for _ in range(num_trials):
spiketrains_per_trial = []
for inst_rate in instantaneous_rates:
anasig_inst_rate = neo.AnalogSignal(inst_rate, sampling_rate=1/timestep, units=pq.Hz)
spiketrains_per_trial.append(inhomogeneous_poisson_process(anasig_inst_rate))
spiketrains.append(spiketrains_per_trial)
return spiketrains
Explanation: Tutorial: GPFA (Gaussian Process Factor Analysis)
Gaussian-process factor analysis (GPFA) is a dimensionality reduction method
[1] for neural trajectory visualization of parallel spike trains. GPFA applies
factor analysis (FA) to time-binned spike count data to reduce the
dimensionality and at the same time smoothes the resulting low-dimensional
trajectories by fitting a Gaussian process (GP) model to them.
The input consists of a set of trials ($Y$), each containing a list of spike
trains (N neurons). The output is the projection ($X$) of the data in a space
of pre-chosen dimensionality $x_{dim} < N$.
Under the assumption of a linear relation (transform matrix $C$) between the
latent variable $X$ following a Gaussian process and the spike train data $Y$ with
a bias $d$ and a noise term of zero mean and (co)variance $R$ (i.e.,
$Y = C X + d + \mathcal{N}(0,R)$), the projection corresponds to the
conditional probability $E[X|Y]$.
The parameters $(C, d, R)$ as well as the time scales and variances of the
Gaussian process are estimated from the data using an expectation-maximization
(EM) algorithm.
Internally, the analysis consists of the following steps:
bin the spike train data to get a sequence of $N$ dimensional vectors of spike counts in respective time bins, and choose the reduced dimensionality $x_{dim}$
expectation-maximization for fitting of the parameters $C, d, R$ and the time-scales and variances of the Gaussian process, using all the trials provided as input (c.f., gpfa_core.em())
projection of single trials in the low dimensional space (c.f., gpfa_core.exact_inference_with_ll())
orthonormalization of the matrix $C$ and the corresponding subspace, for visualization purposes: (c.f., gpfa_core.orthonormalize())
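To make the observation model above concrete, here is a toy numpy sketch (purely illustrative, independent of the GPFA implementation) that draws high-dimensional binned activity Y from a known 2-dimensional latent X via Y = CX + d + noise; GPFA solves the inverse problem of recovering E[X|Y].
import numpy as np

rng = np.random.RandomState(0)
x_dim, n_neurons, n_bins = 2, 50, 100
X = np.vstack([np.sin(np.linspace(0, 4*np.pi, n_bins)),   # latent dimension 1
               np.cos(np.linspace(0, 4*np.pi, n_bins))])  # latent dimension 2
C = rng.normal(size=(n_neurons, x_dim))                   # loading matrix
d = rng.uniform(1.0, 2.0, size=(n_neurons, 1))            # per-neuron offset
Y = C @ X + d + rng.normal(scale=0.3, size=(n_neurons, n_bins))  # observations with Gaussian noise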
1. Idea of This Tutorial
This tutorial illustrates the usage of the gpfa.GPFA() class implemented in elephant, through its applications to synthetic spike train data, of which the ground truth low-dimensional structure is known.
The examples were inspired by the supplementary material of [2]
1.1. Generation of synthetic spike trains
A set of spike trains are generated as follows.
First, a time series of either a 2-dimensional harmonic oscillator (Section 2) or a 3-dimensional Lorentz system (Section 3; the "standard" parameter values as seen in https://en.wikipedia.org/wiki/Lorenz_system are used) is projected into a high-dimensional space (as high dimension as the desired number of parallel spike trains) via a random projection.
Then the resulting high-dimensional time series serves as time-dependent rates for an inhomogeneous multivariate Poisson process.
Finally, multiple realizations of this Poisson process, which mimic spike trains from multiple trials, serve as input data to the GPFA.
Below are the functions used for spike train generation:
End of explanation
# set parameters for the integration of the harmonic oscillator
timestep = 1 * pq.ms
trial_duration = 2 * pq.s
num_steps = int((trial_duration.rescale('ms')/timestep).magnitude)
# set parameters for spike train generation
max_rate = 70 * pq.Hz
np.random.seed(42) # for visualization purposes, we want to get identical spike trains at any run
# specify data size
num_trials = 20
num_spiketrains = 50
# generate a low-dimensional trajectory
times_oscillator, oscillator_trajectory_2dim = integrated_oscillator(
timestep.magnitude, num_steps=num_steps, x0=0, y0=1)
times_oscillator = (times_oscillator*timestep.units).rescale('s')
# random projection to high-dimensional space
oscillator_trajectory_Ndim = random_projection(
oscillator_trajectory_2dim, embedding_dimension=num_spiketrains)
# convert to instantaneous rate for Poisson process
normed_traj = oscillator_trajectory_Ndim / oscillator_trajectory_Ndim.max()
instantaneous_rates_oscillator = np.power(max_rate.magnitude, normed_traj)
# generate spike trains
spiketrains_oscillator = generate_spiketrains(
instantaneous_rates_oscillator, num_trials, timestep)
Explanation: 2. Harmonic Oscillator Example
In this first example we apply the GPFA to spike train data derived from dynamics of a harmonic oscillator defined in a 2-dimensional latent variable space. The aim is to extract these 2-dimensional latent dynamics from the spike train data.
2.1. Generate synthetic spike train data
Here we generate 50-dimensional synthetic spike train data based on a trajectory of a 2-dimensional harmonic oscillator, as described in Section 1.1.
End of explanation
import matplotlib.pyplot as plt
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(15, 10))
ax1.set_title('2-dim Harmonic Oscillator')
ax1.set_xlabel('time [s]')
for i, y in enumerate(oscillator_trajectory_2dim):
ax1.plot(times_oscillator, y, label=f'dimension {i}')
ax1.legend()
ax2.set_title('Trajectory in 2-dim space')
ax2.set_xlabel('Dim 1')
ax2.set_ylabel('Dim 2')
ax2.set_aspect(1)
ax2.plot(oscillator_trajectory_2dim[0], oscillator_trajectory_2dim[1])
ax3.set_title(f'Projection to {num_spiketrains}-dim space')
ax3.set_xlabel('time [s]')
y_offset = oscillator_trajectory_Ndim.std() * 3
for i, y in enumerate(oscillator_trajectory_Ndim):
ax3.plot(times_oscillator, y + i*y_offset)
trial_to_plot = 0
ax4.set_title(f'Raster plot of trial {trial_to_plot}')
ax4.set_xlabel('Time (s)')
ax4.set_ylabel('Spike train index')
for i, spiketrain in enumerate(spiketrains_oscillator[trial_to_plot]):
ax4.plot(spiketrain, np.ones_like(spiketrain) * i, ls='', marker='|')
plt.tight_layout()
plt.show()
Explanation: Let's see what the trajectory and the spike trains look like.
End of explanation
from elephant.gpfa import GPFA
# specify fitting parameters
bin_size = 20 * pq.ms
latent_dimensionality = 2
gpfa_2dim = GPFA(bin_size=bin_size, x_dim=latent_dimensionality)
Explanation: Thus, we have generated 50-dimensional spike train data, derived from 2-dimensional latent dynamics, i.e., two cycles of circular rotation.
2.2. Apply GPFA to the generated data
Now we try to extract the original latent dynamics from the generated spike train data, by means of GPFA.
We first initialize an instance of the gpfa.GPFA() class.
One can specify some parameters for model fitting at this point.
Here we set the size of the bin for spike train binning to 20 ms, and the dimensionality of latent variables to 2.
End of explanation
gpfa_2dim.fit(spiketrains_oscillator[:num_trials//2])
print(gpfa_2dim.params_estimated.keys())
Explanation: Then we call the fit() method of the class, with the generated spike train data as input.
This fits a GPFA model to the given data, yielding estimates of the model parameters that best explain the data, which are stored in the params_estimated attribute of the class.
Here we use the first half of the trials for fitting.
End of explanation
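The entries of params_estimated follow the notation of the GPFA paper; for instance, the loading matrix maps the latent space to the neuron space. The key names used below ('C', 'd') are assumptions based on that convention -- compare them against the keys() output printed above before using them:
# hypothetical inspection of the fitted parameters (key names assumed, see keys() above)
C = gpfa_2dim.params_estimated['C']   # loading matrix: neurons x latent dimensions
d = gpfa_2dim.params_estimated['d']   # mean offset per neuron
print('loading matrix shape:', C.shape)
print('offset vector shape:', d.shape)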
trajectories = gpfa_2dim.transform(spiketrains_oscillator[num_trials//2:])
Explanation: Then we transform the spike trains from the remaining half of the trials into trajectories in the latent variable space, using the transform() method.
End of explanation
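What transform() returned can be checked quickly: one latent trajectory per held-out trial, each stored as an array with one row per latent dimension (a small sanity-check sketch):
# sanity check on the transform() output: one (x_dim, n_bins) array per held-out trial
print('number of transformed trials:', len(trajectories))
print('shape of the first single-trial trajectory:', trajectories[0].shape)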
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
linewidth_single_trial = 0.5
color_single_trial = 'C0'
alpha_single_trial = 0.5
linewidth_trial_average = 2
color_trial_average = 'C1'
ax1.set_title('Original latent dynamics')
ax1.set_xlabel('Dim 1')
ax1.set_ylabel('Dim 2')
ax1.set_aspect(1)
ax1.plot(oscillator_trajectory_2dim[0], oscillator_trajectory_2dim[1])
ax2.set_title('Latent dynamics extracted by GPFA')
ax2.set_xlabel('Dim 1')
ax2.set_ylabel('Dim 2')
ax2.set_aspect(1)
# single trial trajectories
for single_trial_trajectory in trajectories:
ax2.plot(single_trial_trajectory[0], single_trial_trajectory[1], '-', lw=linewidth_single_trial, c=color_single_trial, alpha=alpha_single_trial)
# trial averaged trajectory
average_trajectory = np.mean(trajectories, axis=0)
ax2.plot(average_trajectory[0], average_trajectory[1], '-', lw=linewidth_trial_average, c=color_trial_average, label='Trial averaged trajectory')
ax2.legend()
plt.tight_layout()
plt.show()
Explanation: Let's see what the extracted trajectories look like.
End of explanation
# here we just reuse the existing instance of the GPFA() class as we use the same fitting parameters as before
trajectories_all = gpfa_2dim.fit_transform(spiketrains_oscillator)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
ax1.set_title('Latent dynamics extracted by GPFA')
ax1.set_xlabel('Dim 1')
ax1.set_ylabel('Dim 2')
ax1.set_aspect(1)
for single_trial_trajectory in trajectories_all:
ax1.plot(single_trial_trajectory[0], single_trial_trajectory[1], '-', lw=linewidth_single_trial, c=color_single_trial, alpha=alpha_single_trial)
average_trajectory = np.mean(trajectories_all, axis=0)
ax1.plot(average_trajectory[0], average_trajectory[1], '-', lw=linewidth_trial_average, c=color_trial_average, label='Trial averaged trajectory')
ax1.legend()
trial_to_plot = 0
ax2.set_title(f'Trajectory for trial {trial_to_plot}')
ax2.set_xlabel('Time [s]')
times_trajectory = np.arange(len(trajectories_all[trial_to_plot][0])) * bin_size.rescale('s')
ax2.plot(times_trajectory, trajectories_all[0][0], c='C0', label="Dim 1, fitting with all trials")
ax2.plot(times_trajectory, trajectories[0][0], c='C0', alpha=0.2, label="Dim 1, fitting with a half of trials")
ax2.plot(times_trajectory, trajectories_all[0][1], c='C1', label="Dim 2, fitting with all trials")
ax2.plot(times_trajectory, trajectories[0][1], c='C1', alpha=0.2, label="Dim 2, fitting with a half of trials")
ax2.legend()
plt.tight_layout()
plt.show()
Explanation: GPFA successfully extracted, as the trial-averaged trajectory, the two cycles of rotation in the 2-dimensional latent space from the 50-dimensional spike train data.
In the above application we split the trials into two halves, fitting the model on one half and transforming the other.
One can also simply perform fitting and transforming on the whole dataset to obtain latent trajectories for all trials.
In such a scenario, the fit_transform() method can be used to perform the fitting and transforming at once, as shown in the example below.
End of explanation
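A quick check (sketch) that the all-trial run behaved as described -- one latent trajectory per trial, with the requested dimensionality:
# fit_transform() on the full dataset yields one trajectory per trial
assert len(trajectories_all) == num_trials
print('trials transformed:', len(trajectories_all),
      '| latent dimensions per trial:', trajectories_all[0].shape[0])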
# set parameters for the integration of the Lorentz attractor
timestep = 1 * pq.ms
transient_duration = 10 * pq.s
trial_duration = 30 * pq.s
num_steps_transient = int((transient_duration.rescale('ms')/timestep).magnitude)
num_steps = int((trial_duration.rescale('ms')/timestep).magnitude)
# set parameters for spike train generation
max_rate = 70 * pq.Hz
np.random.seed(42) # for visualization purposes, we want to get identical spike trains at any run
# specify data
num_trials = 20
num_spiketrains = 50
# calculate the oscillator
times, lorentz_trajectory_3dim = integrated_lorenz(
timestep, num_steps=num_steps_transient+num_steps, x0=0, y0=1, z0=1.25)
times = (times - transient_duration).rescale('s').magnitude
times_trial = times[num_steps_transient:]
# random projection
lorentz_trajectory_Ndim = random_projection(
lorentz_trajectory_3dim[:, num_steps_transient:], embedding_dimension=num_spiketrains)
# calculate instantaneous rate
normed_traj = lorentz_trajectory_Ndim / lorentz_trajectory_Ndim.max()
instantaneous_rates_lorentz = np.power(max_rate.magnitude, normed_traj)
# generate spiketrains
spiketrains_lorentz = generate_spiketrains(
instantaneous_rates_lorentz, num_trials, timestep)
Explanation: We obtain almost the same latent dynamics, but the single-trial trajectories differ slightly, owing to the increased amount of data used for fitting.
3. Lorentz System Example
3.1. Generate synthetic spike train data
Now we move on to the next example.
Here we generate 50-dimensional synthetic spike train data based on a trajectory of a 3-dimensional Lorentz system.
Note that, since we want to use as the "trial" a part of the trajectory where the double-wing structure of the attractor is fully developed, we drop the initial part of the trajectory as a "transient".
End of explanation
from mpl_toolkits.mplot3d import Axes3D
f = plt.figure(figsize=(15, 10))
ax1 = f.add_subplot(2, 2, 1)
ax2 = f.add_subplot(2, 2, 2, projection='3d')
ax3 = f.add_subplot(2, 2, 3)
ax4 = f.add_subplot(2, 2, 4)
ax1.set_title('Lorentz system')
ax1.set_xlabel('Time [s]')
labels = ['x', 'y', 'z']
for i, x in enumerate(lorentz_trajectory_3dim):
ax1.plot(times, x, label=labels[i])
ax1.axvspan(-transient_duration.rescale('s').magnitude, 0, color='gray', alpha=0.1)
ax1.text(-5, -20, 'Initial transient', ha='center')
ax1.legend()
ax2.set_title(f'Trajectory in 3-dim space')
ax2.set_xlabel('x')
ax2.set_ylabel('y')
ax2.set_zlabel('z')
ax2.plot(lorentz_trajectory_3dim[0, :num_steps_transient],
lorentz_trajectory_3dim[1, :num_steps_transient],
lorentz_trajectory_3dim[2, :num_steps_transient], c='C0', alpha=0.3)
ax2.plot(lorentz_trajectory_3dim[0, num_steps_transient:],
lorentz_trajectory_3dim[1, num_steps_transient:],
lorentz_trajectory_3dim[2, num_steps_transient:], c='C0')
ax3.set_title(f'Projection to {num_spiketrains}-dim space')
ax3.set_xlabel('Time [s]')
y_offset = lorentz_trajectory_Ndim.std() * 3
for i, y in enumerate(lorentz_trajectory_Ndim):
ax3.plot(times_trial, y + i*y_offset)
trial_to_plot = 0
ax4.set_title(f'Raster plot of trial {trial_to_plot}')
ax4.set_xlabel('Time (s)')
ax4.set_ylabel('Neuron id')
for i, spiketrain in enumerate(spiketrains_lorentz[trial_to_plot]):
ax4.plot(spiketrain, np.ones(len(spiketrain)) * i, ls='', marker='|')
plt.tight_layout()
plt.show()
Explanation: Let's plot the obtained trajectory and the spike trains.
End of explanation
# specify fitting parameters
bin_size = 20 * pq.ms
latent_dimensionality = 3
gpfa_3dim = GPFA(bin_size=bin_size, x_dim=latent_dimensionality)
trajectories = gpfa_3dim.fit_transform(spiketrains_lorentz)
Explanation: The 3-dimensional latent trajectory exhibits a characteristic structure of the Lorentz attractor: intermittent switching between rotations around two foci, which is difficult to recognize in the spike train data derived from it.
3.2. Apply GPFA to the generated data
Now we apply the GPFA to the data, with the same bin size as before but with a latent dimensionality of 3 this time. We fit and transform all trials at once using the fit_transform() method.
End of explanation
f = plt.figure(figsize=(15, 5))
ax1 = f.add_subplot(1, 2, 1, projection='3d')
ax2 = f.add_subplot(1, 2, 2, projection='3d')
linewidth_single_trial = 0.5
color_single_trial = 'C0'
alpha_single_trial = 0.5
linewidth_trial_average = 2
color_trial_average = 'C1'
ax1.set_title('Original latent dynamics')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_zlabel('z')
ax1.plot(lorentz_trajectory_3dim[0, num_steps_transient:],
lorentz_trajectory_3dim[1, num_steps_transient:],
lorentz_trajectory_3dim[2, num_steps_transient:])
ax2.set_title('Latent dynamics extracted by GPFA')
ax2.set_xlabel('Dim 1')
ax2.set_ylabel('Dim 2')
ax2.set_zlabel('Dim 3')
# single trial trajectories
for single_trial_trajectory in trajectories:
ax2.plot(single_trial_trajectory[0], single_trial_trajectory[1], single_trial_trajectory[2],
lw=linewidth_single_trial, c=color_single_trial, alpha=alpha_single_trial)
# trial averaged trajectory
average_trajectory = np.mean(trajectories, axis=0)
ax2.plot(average_trajectory[0], average_trajectory[1], average_trajectory[2], lw=linewidth_trial_average, c=color_trial_average, label='Trial averaged trajectory')
ax2.legend()
ax2.view_init(azim=-5, elev=60) # an optimal viewing angle for the trajectory extracted from our fixed spike trains
plt.tight_layout()
plt.show()
Explanation: Let's see how well the method worked in this case.
End of explanation
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
ax1.set_title('Original latent dynamics')
ax1.set_xlabel('Time [s]')
labels = ['x', 'y', 'z']
for i, x in enumerate(lorentz_trajectory_3dim[:, num_steps_transient:]):
ax1.plot(times_trial, x, label=labels[i])
ax1.legend()
ax2.set_title('Latent dynamics extracted by GPFA')
ax2.set_xlabel('Time [s]')
for i, x in enumerate(average_trajectory):
ax2.plot(np.arange(len(x))*0.02, x, label=f'Dim {i+1}')
ax2.legend()
plt.tight_layout()
plt.show()
Explanation: Again, the characteristic structure of the original latent dynamics was successfully extracted by GPFA.
Let's take a closer look at the time series of the extracted latent variables, and compare them with the x, y, and z time series of the original Lorentz system.
End of explanation
from sklearn.model_selection import cross_val_score
x_dims = [1, 2, 3, 4, 5]
log_likelihoods = []
for x_dim in x_dims:
gpfa_cv = GPFA(x_dim=x_dim)
# estimate the log-likelihood for the given dimensionality as the mean of the log-likelihoods from 3 cross-validation folds
cv_log_likelihoods = cross_val_score(gpfa_cv, spiketrains_lorentz, cv=3, n_jobs=3, verbose=True)
log_likelihoods.append(np.mean(cv_log_likelihoods))
Explanation: None of the extracted dimensions corresponds solely to a single dimension of the original latent dynamics. In addition, the amplitude of Dim 3 is much smaller than that of the other two, reflecting the fact that the dimensionality of the original latent dynamics is close to 2, as is evident from the very similar time series of $x$ and $y$ in the original latent dynamics.
4. Cross-validation
The gpfa.GPFA() class is compatible with the cross-validation functions of sklearn.model_selection, so users can use these functions to search for the set of parameters that yields the best performance.
Here we demonstrate the use of the sklearn.model_selection.cross_val_score() function to search for an optimal dimensionality of the latent variables for the spike train data derived from the Lorentz system.
We vary the dimensionality between 1 and 5, and perform 3-fold cross-validation for each dimensionality value, to obtain an estimate of the log-likelihood of the data under the GPFA model with the given dimensionality.
Note: The following step is time consuming.
End of explanation
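The best dimensionality can also be read off programmatically from the lists built above (a one-line sketch):
# dimensionality with the highest mean cross-validated log-likelihood
best_x_dim = x_dims[np.argmax(log_likelihoods)]
print('best latent dimensionality according to 3-fold CV:', best_x_dim)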
f = plt.figure(figsize=(7, 5))
plt.xlabel('Dimensionality of latent variables')
plt.ylabel('Log-likelihood')
plt.plot(x_dims, log_likelihoods, '.-')
plt.plot(x_dims[np.argmax(log_likelihoods)], np.max(log_likelihoods), 'x', markersize=10, color='r')
plt.tight_layout()
plt.show()
Explanation: Let's plot the obtained log-likelihood as a function of the dimensionality.
End of explanation
import scipy
print(scipy.__version__)
Explanation: The red cross denotes the maximum log-likelihood, which is attained at a dimensionality of 2.
This means that the best-fitting GPFA model is the one with a latent dimensionality of 2, which does not match the ground-truth dimensionality of 3 in this example.
This "underestimate" of the dimensionality is possibly because the dimensionality of the Lorentz attractor is very close to 2 (to be precise, its Hausdorff dimension is estimated to be 2.06... [3]), and the stochastic "encoding" of the trajectory by spike trains does not allow such a subtle excess of dimensionality above 2 to be represented.
References
[1] Yu MB, Cunningham JP, Santhanam G, Ryu SI, Shenoy K V, Sahani M (2009) Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. J. Neurophysiol. 102:614-635.
[2] Pandarinath, C. et al. (2018) Inferring single-trial neural population dynamics using sequential auto-encoders. Nat. Methods 15:805–815.
[3] Viswanath, D (2004) The fractal property of the Lorenz attractor. Physica D 190(1-2):115-128.
End of explanation |
4,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MNE inverse solution on evoked data in a mixed source space
Create a mixed source space and compute MNE inverse solution on evoked dataset.
Step1: Set up our source space.
Step2: Export source positions to nifti file | Python Code:
# Author: Annalisa Pascarella <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne import setup_volume_source_space
from mne import make_forward_solution
from mne.minimum_norm import make_inverse_operator, apply_inverse
from nilearn import plotting
# Set dir
data_path = sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
Explanation: Compute MNE inverse solution on evoked data in a mixed source space
Create a mixed source space and compute MNE inverse solution on evoked dataset.
End of explanation
# List substructures we are interested in. We select only the
# sub structures we want to include in the source space
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
# Get a surface-based source space. We could set one up like this::
#
# >>> src = setup_source_space(subject, fname=None, spacing='oct6',
# add_dist=False, subjects_dir=subjects_dir)
#
# But we already have one saved:
src = mne.read_source_spaces(op.join(bem_dir, 'sample-oct-6-src.fif'))
# Now we create a mixed src space by adding the volume regions specified in the
# list labels_vol. First, read the aseg file and the source space bounds
# using the inner skull surface (here using 10mm spacing to save time):
vol_src = setup_volume_source_space(
subject, mri=fname_aseg, pos=7.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir, verbose=True)
# Generate the mixed source space
src += vol_src
# Visualize the source space.
src.plot(subjects_dir=subjects_dir)
n = sum(src[i]['nuse'] for i in range(len(src)))
print('the src space contains %d spaces and %d points' % (len(src), n))
# We could write the mixed source space with::
#
# >>> write_source_spaces(fname_mixed_src, src, overwrite=True)
#
Explanation: Set up our source space.
End of explanation
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True)
plotting.plot_img(nii_fname, cmap=plt.cm.spectral)
plt.show()
# Compute the fwd matrix
fwd = make_forward_solution(fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=1)
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
src_fwd = fwd['src']
n = sum(src_fwd[i]['nuse'] for i in range(len(src_fwd)))
print('the fwd src space contains %d spaces and %d points' % (len(src_fwd), n))
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
# Compute inverse solution
snr = 3.0 # use smaller SNR for raw data
inv_method = 'MNE' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
lambda2 = 1.0 / snr ** 2
# Compute inverse operator
inverse_operator = make_inverse_operator(evoked.info, fwd, noise_cov,
depth=None, fixed=False)
stcs = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(subject, parc=parc,
subjects_dir=subjects_dir)
# Average the source estimates within each label of the cortical parcellation
# and each sub structure contained in the src space
# If mode = 'mean_flip' this option is used only for the surface cortical label
src = inverse_operator['src']
label_ts = mne.extract_label_time_course([stcs], labels_parc, src,
mode='mean',
allow_empty=True,
return_generator=False)
# plot the time series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stcs.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stcs.times, label_ts[0][71, :].T, 'r',
label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
plt.show()
Explanation: Export source positions to nifti file:
End of explanation |
4,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebook -- it's convenient!
The code is organized into separate blocks. Code blocks can be executed in arbitrary order. It combines the advantages of full-fledged scripts and an interactive shell. The execution order of the blocks is shown to the left of each cell.
Step1: Explanations can be added to the code in Markdown cells. Various kinds of formatting are supported
Step2: Keyboard shortcuts
Using keyboard shortcuts greatly simplifies and speeds up work with any program, especially one involving text input. The vim text editor is very advanced in this respect: it has a separate mode dedicated to keyboard commands. Jupyter notebook supports the same concept.
There are 2 working modes
Step3: Measuring code execution time
Measuring the execution time of a code cell
Step4: Execution time of a specific line within a block
Step5: Measuring execution time in a loop
Step6: Plotting graphs and displaying images
Step7: Interactive plots | Python Code:
print(math.sqrt(4))
import math
Explanation: Jupyter Notebook -- it's convenient!
The code is organized into separate blocks. Code blocks can be executed in arbitrary order. It combines the advantages of full-fledged scripts and an interactive shell. The execution order of the blocks is shown to the left of each cell.
End of explanation
import numpy as np
np.array([[1,2,3],[4,5,6],[7,8,9]])
Explanation: Explanations can be added to the code in Markdown cells. Various kinds of formatting are supported: bold, italic, $\LaTeX$
headings
of various
levels
lists
including
nested
ones
<a href=https://github.com/Erhil/PythonNpCourse><strong><i>HTML</i></strong></a>
<img src=lena.jpg>images</img>
And much more...
Some objects get a special rich display.
End of explanation
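Rich display also works for objects you construct yourself, for example rendered Markdown (a small sketch using IPython's display machinery):
from IPython.display import Markdown
# objects with a rich representation are rendered specially by the notebook
Markdown("**bold**, *italic* and $\\LaTeX$: $e^{i\\pi} + 1 = 0$")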
!dir
Explanation: Keyboard shortcuts
Using keyboard shortcuts greatly simplifies and speeds up work with any program, especially one involving text input. The vim text editor is very advanced in this respect: it has a separate mode dedicated to keyboard commands. Jupyter notebook supports the same concept.
There are 2 working modes: edit mode and command mode. Edit mode is indicated by a green frame around the active cell, command mode by a blue one. To switch from edit mode to command mode press Esc; switching back to edit mode is done by pressing Enter.
In command mode the following shortcuts are used frequently:
* b -- insert an empty cell below the current one
* a -- insert an empty cell above the current one
c -- copy the current cell
x -- cut the current cell
v -- paste the cell from the clipboard below the current one
dd -- delete the current cell
z -- undo the previous action (in older versions of Jupyter ONLY ONE action can be undone)
m -- switch the cell to Markdown mode
y -- switch the cell to code mode
l -- toggle line numbers
o -- hide/show the output of the current cell
ii -- interrupt the execution of the current cell
00 -- restart the kernel
s -- save the notebook
h -- show the keyboard shortcut help
Some combinations are used in edit mode:
* Tab (at the beginning of a line or for selected lines) -- add indentation
* Shift+Tab (at the beginning of a line or for selected lines) -- remove indentation
Tab -- autocomplete the command being typed
Shift+Tab -- show the docstring of the typed command
Ctrl+/ -- comment out the selected lines
Shift+Enter -- run the current code cell and move to the next one
Ctrl+Enter -- run the current code cell and stay on it
Alt+Enter -- run the current code cell and create a new one
Jupyter Notebook also has a large number of built-in "magic" functions. We will get to know some of them during this course; others are worth mentioning right away.
Running bat/bash commands:
End of explanation
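Besides shell commands, the full list of available magic commands can be printed with the %lsmagic magic:
# list all available line (%) and cell (%%) magic commands
%lsmagic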
%%time
a = 352e+11
for i in range(10000):
a = math.sin(a)
Explanation: Measuring code execution time
Measuring the execution time of a code cell:
End of explanation
%time a = 352e+10
for i in range(1000):
a = math.sin(a)
Explanation: Execution time of a specific line within a block
End of explanation
%timeit a = 352e+10
for i in range(1000):
a = math.sin(a)
Explanation: Measuring execution time in a loop
End of explanation
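There is also a cell-level counterpart, %%timeit, which times the entire cell body in the same looped fashion (a minimal sketch):
%%timeit
# same measurement as above, but applied to the whole cell
a = 352e+10
for i in range(1000):
    a = math.sin(a)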
import matplotlib.pyplot as plt
%matplotlib inline
a = np.linspace(-50, 50, 10000)
b = np.sin(a)/a
plt.figure(figsize=(10,8))
plt.plot(a, b, label=r"$\frac{sin(x)}{x}$")
plt.legend()
plt.title(r'$f(x) = \frac{sin(x)}{x}$')
plt.xlabel('x')
plt.ylabel('f(x)');
from sklearn.datasets import make_classification
X, y = make_classification(n_features=2, n_redundant=0)
plt.figure(figsize=(10,8))
plt.scatter(X[:,0], X[:,1], c=y, marker='+')
plt.title("Classification problem");
lenna = plt.imread("lena.jpg")
plt.figure(figsize=(10,10))
plt.imshow(lenna)
plt.title("image")
plt.figure(figsize=(10,10))
plt.title("histograms")
plt.subplot(2,2,1)
plt.hist((lenna[:,:,0]*0.3+lenna[:,:,1]*0.59+lenna[:,:,2]*0.11).reshape(-1),
bins = 256, color="black")
plt.subplot(2,2,2)
plt.hist(lenna[:,:,0].reshape(-1), color="red", bins = 256)
plt.subplot(2,2,3)
plt.hist(lenna[:,:,1].reshape(-1), color="green", bins = 256)
plt.subplot(2,2,4)
plt.hist(lenna[:,:,2].reshape(-1), color="blue", bins = 256);
Explanation: Plotting graphs and displaying images
End of explanation
from ipywidgets import interactive, widgets
def func(a,b,delta):
plt.figure(figsize=(8,8))
plt.title('Lissajous curves')
t = np.linspace(0,100,10000)
x = np.sin(a*t+delta)
y = np.sin(b*t)
plt.plot(x,y)
plt.show()
interactive(func,a=(0., 3., .01), b=(0., 3., .01), delta=(0., np.pi, .01))
Explanation: Interactive plots
End of explanation |
4,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of Dataset Variance
Data which are collected differently look different. This principle extends to all data (that I can think of), and of course MRI is no exception. In the case of MRI, batch effects exist across studies due to such minor differences as gradient amplitudes, technician working the machine, and time of day, as well as much larger differences such as imaging sequence used, and manufacturer of scanner. Here, we investigate these batch effect differences and illustrate where we believe we can find the true "signal" in the acquired data.
Step1: KKI2009
Step2: BNU1
Step3: BNU3
Step4: NKI1
Step5: MRN114
Step6: NKIENH
Step7: SWU4
Step8: MRN1313
Step9: (Old method) | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import nibabel as nb
import os
from histogram_window import histogram_windowing
Explanation: Analysis of Dataset Variance
Data which are collected differently look different. This principle extends to all data (that I can think of), and of course MRI is no exception. In the case of MRI, batch effects exist across studies due to such minor differences as gradient amplitudes, technician working the machine, and time of day, as well as much larger differences such as imaging sequence used, and manufacturer of scanner. Here, we investigate these batch effect differences and illustrate where we believe we can find the true "signal" in the acquired data.
End of explanation
kki2009 = histogram_windowing('./data/KKI2009_b0_pdfs.pkl','KKI2009')
Explanation: KKI2009
End of explanation
bnu1 = histogram_windowing('./data/BNU1_b0_pdfs.pkl', 'BNU1')
Explanation: BNU1
End of explanation
bnu3 = histogram_windowing('./data/BNU3_b0_pdfs.pkl', 'BNU3')
Explanation: BNU3
End of explanation
nki1 = histogram_windowing('./data/NKI1_b0_pdfs.pkl', 'NKI1')
Explanation: NKI1
End of explanation
mrn114 = histogram_windowing('./data/MRN114_b0_pdfs.pkl', 'MRN114')
Explanation: MRN114
End of explanation
nkienh = histogram_windowing('./data/NKIENH_b0_pdfs.pkl', 'NKIENH')
Explanation: NKIENH
End of explanation
swu4 = histogram_windowing('./data/SWU4_b0_pdfs.pkl', 'SWU4')
Explanation: SWU4
End of explanation
mrn1313 = histogram_windowing('./data/MRN1313_b0_pdfs.pkl', 'MRN1313')
Explanation: MRN1313
End of explanation
datasets = list(('./data/BNU1', './data/BNU3', './data/HCP500',
'./data/Jung2015', './data/KKI2009', './data/MRN114',
'./data/NKI1', './data/NKIENH', './data/SWU4'))
files = list()
for f in datasets:
files.append([f + '/' + single for single in os.listdir(f)])
for scan in files:
bval = np.loadtxt(scan[0])
bval[np.where(bval==np.min(bval))] = 0
im = nb.load(scan[2])
b0_loc = np.where(bval==0)[0][0]
dti = im.get_data()[:,:,:,b0_loc]
print "----------"
print "Scan: " + os.path.basename(scan[2])
print "Shape of B0 volume: " + str(dti.shape)
print "Datatype: " + str(dti.dtype)
try:
print "Min: " + str(dti.min()) + " (" + str(np.iinfo(dti.dtype).min) + ")"
print "Max: " + str(dti.max()) + " (" + str(np.iinfo(dti.dtype).max) + ")"
except ValueError:
print "Min: " + str(dti.min()) + " (" + str(np.finfo(dti.dtype).min) + ")"
print "Max: " + str(dti.max()) + " (" + str(np.finfo(dti.dtype).max) + ")"
plt.hist(np.ravel(dti), bins=2000)
plt.title('Histogram for: ' + os.path.basename(scan[2]))
plt.xscale('log')
plt.xlabel("Value (log scale)")
plt.ylabel("Frequency")
plt.show()
Explanation: (Old method)
End of explanation |
4,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building operators
Step1: Looks like exactly what we wanted! We can even check that the anticommutation relation holds
Step2: It was instructive to build it ourselves, but dynamite actually has a Majorana operator built-in, for ease of use. It is the same as ours
Step3: Definition of the SYK Hamiltonian
We want to build the model
$$H_{\text{SYK}} = \sum_{i,j,k,l} J_{ijkl} \cdot \chi_i \chi_j \chi_k \chi_l$$
where the $\chi_i$ represent a Majorana creation/annihilation operator for particle index $i$, and the $J_{ijkl}$ are some random coefficients.
First we must import the things we need
Step4: We need to generate all combinations of indices for i,j,k,l, without repeats. Sounds like a task for Python's itertools
Step5: Looks good! Now let's use that to build the Hamiltonian
Step6: Let's try it for a (very) small system!
Step7: Neat, looks good! Why don't we build it for a bigger system, say 16 Majoranas? (which lives on 8 spins)
Step8: Improving operator build performance
Yikes, that was awfully slow for such a small system size. The problem is that the individual Majorana operators are being rebuilt for every term of the sum, and there are a lot of terms. Maybe we can do better by precomputing the Majorana operators. We also use op_product and operator.scale to avoid making unnecessary copies.
Step9: That's a huge speedup!
One last thing to note. It may seem odd that we've never actually specified a spin chain length during this whole process. Don't we need to tell dynamite how many spins we need, and thus how big to make our matrices? If the spin chain length is not specified, dynamite just assumes it to extend to the position of the last non-trivial Pauli operator
Step10: We can use operator.table() to take a look at it | Python Code:
from dynamite.operators import sigmax, sigmay, sigmaz, index_product
# product of sigmaz along the spin chain up to index k
k = 4
index_product(sigmaz(), size=k)
# with that, we can easily build our operator
def majorana(i):
k = i//2
edge_op = sigmay(k) if (i%2) else sigmax(k)
bulk = index_product(sigmaz(), size=k)
return edge_op*bulk
# let's check it out!
majorana(8)
Explanation: Building operators: the Sachdev-Ye-Kitaev model on Majoranas
dynamite can be used for not just the obvious spin chain problems, but anything that can be mapped onto a set of spins. Here we will build a model of interacting Majoranas.
Defining Majoranas on a spin chain
There are multiple ways to define a Majorana creation/annihilation operator in a spin basis. In particular, we want to satisfy the anticommutation relation
$$\{ \chi_i, \chi_j \} = 2 \delta_{ij}.$$
It turns out we can do so with the following mapping:
$$\chi_i = \frac{1}{2} \sigma_{\lfloor i/2 \rfloor}^{x/y} \prod_{k=0}^{\lfloor i/2 \rfloor - 1} \sigma^z_k$$
where that first Pauli matrix is $\sigma^x$ if $i$ is even, and $\sigma^y$ if $i$ is odd.
This basis can be shown fairly easily to satisfy the anticommutation relation we desired. Now let's implement it in dynamite!
Implementation
We need just a couple tools for this: the Pauli matrices and the product operator.
End of explanation
from dynamite.operators import zero, identity
def anticommutator(a, b):
return a*b + b*a
def check_anticom():
print('i', 'j', 'correct', sep='\t')
print('=======================')
for i in range(3):
for j in range(3):
if i == j:
correct_val = 2*identity()
else:
correct_val = zero()
print(i, j, anticommutator(majorana(i), majorana(j)) == correct_val, sep='\t')
check_anticom()
Explanation: Looks like exactly what we wanted! We can even check that the anticommutation relation holds:
End of explanation
# rename our function, so that we can set majorana to be the dynamite one
my_majorana = majorana
from dynamite.extras import majorana
majorana(8)
majorana(8) == my_majorana(8)
Explanation: It was instructive to build it ourselves, but dynamite actually has a Majorana operator built-in, for ease of use. It is the same as ours:
End of explanation
from dynamite.operators import op_sum, op_product, index_sum
Explanation: Definition of the SYK Hamiltonian
We want to build the model
$$H_{\text{SYK}} = \sum_{i,j,k,l} J_{ijkl} \cdot \chi_i \chi_j \chi_k \chi_l$$
where the $\chi_i$ represent a Majorana creation/annihilation operator for particle index $i$, and the $J_{ijkl}$ are some random coefficients.
First we must import the things we need:
End of explanation
from itertools import combinations
def get_all_indices(n):
'''
Get all combinations of indices i,j,k,l for a system of n Majoranas.
'''
return combinations(range(n), 4)
# does it do what we expect?
for n,idxs in enumerate(get_all_indices(6)):
print(idxs)
if n > 5:
break
print('...')
Explanation: We need to generate all combinations of indices for i,j,k,l, without repeats. Sounds like a task for Python's itertools:
End of explanation
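Note how quickly the number of four-index combinations grows with the number of Majoranas -- this is what will make building the Hamiltonian slow for larger systems (a small sketch using the function above):
# the SYK Hamiltonian has n-choose-4 terms
for n in (8, 12, 16, 20):
    print(n, 'Majoranas ->', len(list(get_all_indices(n))), 'terms')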
import numpy as np
from numpy.random import seed, normal
# abbreviate
maj = majorana
def syk_hamiltonian(n, random_seed=0):
'''
Build the SYK Hamiltonian for a system of n Majoranas.
'''
# so the norm scales correctly
factor = np.sqrt(6/(n**3))/4
# it's very important to have the same seed on each process if we run in parallel!
# if we don't set the seed, each process will have a different operator!!
seed(random_seed)
return op_sum(factor*normal(-1,1)*maj(i)*maj(j)*maj(k)*maj(l) for i,j,k,l in get_all_indices(n))
Explanation: Looks good! Now let's use that to build the Hamiltonian:
End of explanation
syk_hamiltonian(5)
Explanation: Let's try it for a (very) small system!
End of explanation
H = syk_hamiltonian(16)
Explanation: Neat, looks good! Why don't we build it for a bigger system, say 16 Majoranas? (which lives on 8 spins)
End of explanation
def syk_hamiltonian_fast(n, random_seed=0):
'''
Build the SYK Hamiltonian for a system of n Majoranas.
'''
factor = np.sqrt(6/(n**3))/4
seed(random_seed)
majs = [maj(i) for i in range(n)]
return op_sum(op_product(majs[i] for i in idxs).scale(factor*normal(-1,1)) for idxs in get_all_indices(n))
# make sure they agree
assert(syk_hamiltonian(10) == syk_hamiltonian_fast(10))
# check which one is faster!
from timeit import timeit
orig = timeit('syk_hamiltonian(16)', number=1, globals=globals())
fast = timeit('syk_hamiltonian_fast(16)', number=1, globals=globals())
print('syk_hamiltonian: ', orig, 's')
print('syk_hamiltonian_fast:', fast, 's')
Explanation: Improving operator build performance
Yikes, that was awfully slow for such a small system size. The problem is that the individual Majorana operators are being rebuilt for every term of the sum, and there are a lot of terms. Maybe we can do better by precomputing the Majorana operators. We also use op_product and operator.scale to avoid making unnecessary copies.
End of explanation
m8 = majorana(8)
print('spin chain length:', m8.get_length())
Explanation: That's a huge speedup!
One last thing to note. It may seem odd that we've never actually specified a spin chain length during this whole process. Don't we need to tell dynamite how many spins we need, and thus how big to make our matrices? If the spin chain length is not specified, dynamite just assumes it to extend to the position of the last non-trivial Pauli operator:
End of explanation
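Since Majorana $i$ acts on spin $\lfloor i/2 \rfloor$, the inferred chain length is easy to predict by hand -- a tiny pure-Python sketch (no dynamite calls involved):
# n Majoranas (indices 0 .. n-1) occupy spins 0 .. (n-1)//2
def spins_needed(n_majoranas):
    return (n_majoranas - 1) // 2 + 1

print(spins_needed(16))  # 16 Majoranas -> 8 spins, as noted earlier
print(spins_needed(9))   # majorana(8) is the 9th operator -> 5 spins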
print(m8.table())
Explanation: We can use operator.table() to take a look at it:
End of explanation |
4,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Labels
We want to use the Radio Galaxy Zoo click data as training data. To do this, we first need to convert the raw click data to a useful label — most likely the $(x, y)$ coordinate of the host galaxy for a given set of radio emissions.
Let's start by analysing what we're looking at in the raw data.
Step1: Reading raw classification data
We'll grab a random subject (complete, from the ATLAS survey) and look at its classifications.
Step2: This is a kinda nice subject to look at because there are multiple radio sources. This is often true for Radio Galaxy Zoo subjects. Let's look at a classification.
Step5: We only really care about the annotations, so I've pulled those out. Some of them aren't useful — the langauge, user agent, and start/end times here, though there may potentially be more in other classifications. It looks like the ones that are useful contain both 'ir' and 'radio' keys. I'll use "annotations" to refer to these exclusively from now on.
Each annotation contains an IR component and a radio component. The IR component is obviously where people clicked, though the format is a little unusual — it seems to have the option for multiple clicks, though I can't come up with a situation in which this would arise. I think we can safely ignore all but the first IR coordinate. The radio component describes which contours are included in this annotation. Volunteers may click on multiple contours that they think have been emitted from the same radio source, and these are grouped here. The combination of radio contours here is important as it affects what the volunteers believe they are classifying (see Banfield et al. 2015 for more details).
We need to pull out something describing which combination of radio contours were selected, as well as the actual IR coordinate associated with each one.
Step6: Now let's try running that over a few classifications, to see if anything breaks.
Step7: This seems to work pretty well! The next step is to freeze the whole classification database in this simplified form. I'll do this in a script in crowdastro.__main__.
This section is summarised in crowdastro.labels.parse_classification.
Now we can move on to using these to find the consensus for each subject.
Finding consensus radio/location mapping
Given these simplified data points, it should be a lot easier to figure out what the consensus is for a given subject
Step8: Pulling out the coordinates for each radio source was reasonably straightforward. Now, we want to use these to figure out which location these volunteers agree on. Let's just focus on \xe3\x0c}? for now. Plotting it to get an idea of what we have
Step10: A Gaussian assumption is not reasonable — maybe the volunteers think there are two galaxies that the radio source could belong to? I think a Gaussian mixture assumption is reasonable, though, so I will use PG-means to try and find this.
Step11: Let's try that.
Step12: That seems alright. Let's try with the other radio source.
Step13: Note that we get different results for significance = 0.01 and significance = 0.02. This is unfortunately a hyperparameter that will need to be tweaked. Let's bundle this up into a function. | Python Code:
import collections
import operator
from pprint import pprint
import sqlite3
import sys
import warnings
import matplotlib.pyplot
import numpy
import scipy.stats
import sklearn.mixture
%matplotlib inline
sys.path.insert(1, '..')
import crowdastro.data
import crowdastro.show
warnings.simplefilter('ignore', UserWarning) # astropy always raises warnings on Windows.
Explanation: Labels
We want to use the Radio Galaxy Zoo click data as training data. To do this, we first need to convert the raw click data to a useful label — most likely the $(x, y)$ coordinate of the host galaxy for a given set of radio emissions.
Let's start by analysing what we're looking at in the raw data.
End of explanation
subject = crowdastro.data.db.radio_subjects.find_one({'metadata.survey': 'atlas', 'state': 'complete',
'zooniverse_id': 'ARG0003r18'})
crowdastro.show.subject(subject)
matplotlib.pyplot.show()
Explanation: Reading raw classification data
We'll grab a random subject (complete, from the ATLAS survey) and look at its classifications.
End of explanation
classification = crowdastro.data.db.radio_classifications.find_one({'subject_ids': subject['_id']})
pprint(classification)
Explanation: This is a kinda nice subject to look at because there are multiple radio sources. This is often true for Radio Galaxy Zoo subjects. Let's look at a classification.
End of explanation
def make_radio_combination_signature(radio_annotation):
"""Generates a unique signature for a radio annotation.
radio_annotation: 'radio' dictionary from a classification.
-> Something immutable
"""
# My choice of immutable object will be a tuple of the xmax values,
# sorted to ensure determinism, and rounded to nix floating point errors.
# Note that the x scale is not the same as the IR scale, but the scale factor is
# included in the annotation, so I have multiplied this out here for consistency.
# Sometimes, there's no scale, so I've included a default scale.
xmaxes = [round(float(c['xmax']) * float(c.get('scale_width', '2.1144278606965172')), 14)
for c in radio_annotation.values()]
return tuple(sorted(xmaxes))
def read_classification(classification):
"""Converts a raw RGZ classification into radio combinations and IR locations.
classification: RGZ classification dictionary.
-> dict mapping radio combination signatures to IR locations.
"""
result = {}
for annotation in classification['annotations']:
if 'radio' not in annotation:
# This is a metadata annotation and we can ignore it.
continue
radio_signature = make_radio_combination_signature(annotation['radio'])
if annotation['ir'] == 'No Sources':
ir_location = None
else:
ir_location = (float(annotation['ir']['0']['x']), float(annotation['ir']['0']['y']))
result[radio_signature] = ir_location
return result
read_classification(classification)
Explanation: We only really care about the annotations, so I've pulled those out. Some of them aren't useful — the language, user agent, and start/end times here, though there may potentially be more in other classifications. It looks like the ones that are useful contain both 'ir' and 'radio' keys. I'll use "annotations" to refer to these exclusively from now on.
Each annotation contains an IR component and a radio component. The IR component is obviously where people clicked, though the format is a little unusual — it seems to have the option for multiple clicks, though I can't come up with a situation in which this would arise. I think we can safely ignore all but the first IR coordinate. The radio component describes which contours are included in this annotation. Volunteers may click on multiple contours that they think have been emitted from the same radio source, and these are grouped here. The combination of radio contours here is important as it affects what the volunteers believe they are classifying (see Banfield et al. 2015 for more details).
We need to pull out something describing which combination of radio contours were selected, as well as the actual IR coordinate associated with each one.
End of explanation
for classification in crowdastro.data.db.radio_classifications.find().limit(100):
print(read_classification(classification))
Explanation: Now let's try running that over a few classifications, to see if anything breaks.
End of explanation
conn = sqlite3.connect('../crowdastro-data/processed.db')
conn.row_factory = sqlite3.Row
cur = conn.cursor()
classifications = list(cur.execute('SELECT full_radio_signature, part_radio_signature, source_x, source_y '
'FROM classifications WHERE subject_id=?', [str(subject['_id'])]))
frs_counter = collections.Counter([c['full_radio_signature'] for c in classifications])
most_common_frs = frs_counter.most_common(1)[0][0]
radio_consensus_classifications = collections.defaultdict(list)
for classification in classifications:
if classification['full_radio_signature'] == most_common_frs:
radio_consensus_classifications[classification['part_radio_signature']].append((classification['source_x'],
classification['source_y']))
conn.close()
radio_consensus_classifications
Explanation: This seems to work pretty well! The next step is to freeze the whole classification database in this simplified form. I'll do this in a script in crowdastro.__main__.
This section is summarised in crowdastro.labels.parse_classification.
Now we can move on to using these to find the consensus for each subject.
Finding consensus radio/location mapping
Given these simplified data points, it should be a lot easier to figure out what the consensus is for a given subject: Pull up all the classifications for a given subject, find the most common full radio signature, and get the clicks for each resulting radio signature.
I've frozen the previous results into a neat little database, so we should be good to go.
End of explanation
crowdastro.show.ir(subject)
xs = [a[0] * crowdastro.config.get('click_to_fits_x') for a in radio_consensus_classifications[b'\xe3\x0c}?']]
ys = [crowdastro.config.get('fits_image_height') - a[1] * crowdastro.config.get('click_to_fits_y')
for a in radio_consensus_classifications[b'\xe3\x0c}?']]
matplotlib.pyplot.xlim((170, 200))
matplotlib.pyplot.ylim((170, 185))
matplotlib.pyplot.scatter(xs, ys, marker='+')
matplotlib.pyplot.show()
Explanation: Pulling out the coordinates for each radio source was reasonably straightforward. Now, we want to use these to figure out which location these volunteers agree on. Let's just focus on \xe3\x0c}? for now. Plotting it to get an idea of what we have:
End of explanation
def pg_means(points, significance=0.01, projections=24):
"""Cluster points with the PG-means algorithm."""
k = 1
while True:
# Fit a Gaussian mixture model with k components.
gmm = sklearn.mixture.GMM(n_components=k)
try:
gmm.fit(points)
except ValueError:
return None
for _ in range(projections):
# Project the data to one dimension.
projection_vector = numpy.random.random(size=(2,))
projected_points = points @ projection_vector
# Project the model to one dimension.
# We need the CDF in one dimension, so we'll sample some data points and project them.
n_samples = 1000
samples = gmm.sample(n_samples) @ projection_vector
samples.sort()
def cdf(x):
for sample, y in zip(samples, numpy.arange(n_samples) / n_samples):
if sample >= x:
break
return y
_, p_value = scipy.stats.kstest(projected_points, numpy.vectorize(cdf))
if p_value < significance:
# Reject the null hypothesis.
break
else:
# Null hypothesis was not broken.
return gmm
k += 1
Explanation: A Gaussian assumption is not reasonable — maybe the volunteers think there are two galaxies that the radio source could belong to? I think a Gaussian mixture assumption is reasonable, though, so I will use PG-means to try and find this.
End of explanation
xs = [a[0] * crowdastro.config.get('click_to_fits_x') for a in radio_consensus_classifications[b'\xe3\x0c}?']]
ys = [crowdastro.config.get('fits_image_height') - a[1] * crowdastro.config.get('click_to_fits_y')
for a in radio_consensus_classifications[b'\xe3\x0c}?']]
points = numpy.vstack([numpy.array(xs), numpy.array(ys)])
crowdastro.show.ir(subject)
matplotlib.pyplot.xlim((170, 200))
matplotlib.pyplot.ylim((170, 185))
matplotlib.pyplot.scatter(xs, ys, marker='+')
matplotlib.pyplot.scatter(*pg_means(points.T, significance=0.02).means_.T)
matplotlib.pyplot.show()
Explanation: Let's try that.
End of explanation
xs = [a[0] * crowdastro.config.get('click_to_fits_x')
for a in radio_consensus_classifications[b'\xb4\xd7\x1c?']
if a[0] is not None]
ys = [crowdastro.config.get('fits_image_height') - a[1] * crowdastro.config.get('click_to_fits_y')
for a in radio_consensus_classifications[b'\xb4\xd7\x1c?']
if a[1] is not None]
points = numpy.vstack([numpy.array(xs), numpy.array(ys)])
crowdastro.show.ir(subject)
matplotlib.pyplot.xlim((80, 120))
matplotlib.pyplot.ylim((80, 150))
matplotlib.pyplot.scatter(xs, ys, marker='+')
matplotlib.pyplot.scatter(*pg_means(points.T, significance=0.02, projections=24).means_.T)
matplotlib.pyplot.show()
Explanation: That seems alright. Let's try with the other radio source.
End of explanation
def get_subject_consensus(subject, conn=None, significance=0.02):
conn.row_factory = sqlite3.Row
cur = conn.cursor()
classifications = list(cur.execute('SELECT full_radio_signature, part_radio_signature, source_x, source_y '
'FROM classifications WHERE subject_id=?', [str(subject['_id'])]))
frs_counter = collections.Counter([c['full_radio_signature'] for c in classifications])
most_common_frs = frs_counter.most_common(1)[0][0]
radio_consensus_classifications = collections.defaultdict(list)
for classification in classifications:
if classification['full_radio_signature'] == most_common_frs:
radio_consensus_classifications[classification['part_radio_signature']].append((classification['source_x'],
classification['source_y']))
consensus = {} # Maps radio signatures to (x, y) NumPy arrays.
for radio_signature in radio_consensus_classifications:
xs = [a[0] * crowdastro.config.get('click_to_fits_x')
for a in radio_consensus_classifications[radio_signature]
if a[0] is not None]
ys = [crowdastro.config.get('fits_image_height') - a[1] * crowdastro.config.get('click_to_fits_y')
for a in radio_consensus_classifications[radio_signature]
if a[1] is not None]
points = numpy.vstack([xs, ys])
gmm = pg_means(points.T, significance=significance, projections=24)
consensus[radio_signature] = gmm.means_[gmm.weights_.argmax()]
return consensus
conn = sqlite3.connect('../crowdastro-data/processed.db')
consensus = get_subject_consensus(subject, conn=conn)
conn.close()
matplotlib.pyplot.figure(figsize=(10, 10))
matplotlib.pyplot.axis('off')
crowdastro.show.subject(subject)
matplotlib.pyplot.scatter(*numpy.array(list(consensus.values())).T)
matplotlib.pyplot.show()
Explanation: Note that we get different results for significance = 0.01 and significance = 0.02. This is unfortunately a hyperparameter that will need to be tweaked. Let's bundle this up into a function.
End of explanation |
4,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grouping all encounter nbrs under respective person nbr
Step1: Now grouping other measurements and properties under encounter_nbrs
Step2: Aggregating encounter entities under respective person entity
Step3: Dropping duplicated columns and then full na rows across tables | Python Code:
encounter_key = 'Enc_Nbr'
person_key = 'Person_Nbr'
encounters_by_person = {}
for df in dfs:
if df is not None:
df_columns =set(df.columns.values)
if encounter_key in df_columns and person_key in df_columns:
for row_index, dfrow in df.iterrows():
rowdict = dict(dfrow)
person_nbr = rowdict[person_key]
encounter_nbr = rowdict[encounter_key]
encounters_by_person.setdefault(person_nbr, set()).add(encounter_nbr)
for person_nbr in encounters_by_person:
if len(encounters_by_person[person_nbr])>5:
pprint(encounters_by_person[person_nbr])
break
Explanation: Grouping all encounter nbrs under respective person nbr
End of explanation
encounter_key = 'Enc_Nbr'
# columns_to_ignore = [u'Person_ID', u'Person_Nbr', u'Enc_ID', u'Enc_Nbr', u'Enc_Date']
data_by_encounters = {}
data_by_encounters_type = {}
for df_index, df in enumerate(dfs):
df_name = table_names[df_index]
print df_name
data_by_encounters[df_name] = {}
if df is not None:
df_columns =set(df.columns.values)
if encounter_key in df_columns:
# check if encounter is primary key in the table
if len(df) == len(df[encounter_key].unique()):
data_by_encounters_type[df_name] = 'single'
for row_index, dfrow in df.iterrows():
rowdict = dict(dfrow)
for k, v in rowdict.iteritems():
if isinstance(v, pd.tslib.Timestamp):
rowdict[k] = v.toordinal()
encounter_nbr = rowdict[encounter_key]
data_by_encounters[df_name][encounter_nbr] = rowdict
else:
data_by_encounters_type[df_name] = 'list'
for row_index, dfrow in df.iterrows():
rowdict = dict(dfrow)
for k, v in rowdict.iteritems():
if isinstance(v, pd.tslib.Timestamp):
rowdict[k] = v.toordinal()
encounter_nbr = rowdict[encounter_key]
data_by_encounters[df_name].setdefault(encounter_nbr, []).append(rowdict)
Explanation: Now grouping other measurements and properties under encounter_nbrs
End of explanation
all_persons = []
for person_nbr in encounters_by_person:
person_object = {person_key:person_nbr, 'encounter_objects':[]}
for enc_nbr in encounters_by_person[person_nbr]:
encounter_object = {encounter_key: enc_nbr}
for df_name in data_by_encounters_type:
if enc_nbr in data_by_encounters[df_name]:
encounter_object[df_name] = data_by_encounters[df_name][enc_nbr]
if data_by_encounters_type[df_name] !="single":
encounter_object[df_name+"_count"] = len(data_by_encounters[df_name][enc_nbr])
person_object['encounter_objects'].append(encounter_object)
all_persons.append(person_object)
# checking for aggregation consistency
n = 0
for person in all_persons:
person_nbr=person[person_key]
for enc_obj in person['encounter_objects']:
enc_nbr=enc_obj[encounter_key]
for df_name in data_by_encounters_type:
if data_by_encounters_type[df_name] == "single":
if df_name in enc_obj:
if person_key in enc_obj[df_name]:
if person_nbr != enc_obj[df_name][person_key]:
print "Person nbr does not match", person_nbr, enc_nbr, df_name
if encounter_key in enc_obj[df_name]:
if enc_nbr != enc_obj[df_name][encounter_key]:
print "Encounter nbr does not match", person_nbr, enc_nbr, df_name
else:
if df_name in enc_obj:
for rp_index, repeated_property in enumerate(enc_obj[df_name]):
if person_key in repeated_property:
if person_nbr != repeated_property[person_key]:
print "Person nbr does not match", person_nbr, enc_nbr, df_name, rp_index
if encounter_key in repeated_property:
if enc_nbr != repeated_property[encounter_key]:
print "Encounter nbr does not match", person_nbr, enc_nbr, df_name, rp_index
# n+=1
# if n>2:break
Explanation: Aggregating encounter entities under respective person entity
End of explanation
with open('20170224_encounter_objects_before_duplicate_fields_drop.json', 'w') as fh:
json.dump(all_persons, fh)
# drop repeated columns in nested fields except from table "encounters"
columns_to_drop = ['Enc_ID', 'Enc_Nbr', 'Enc_Date', 'Person_ID', 'Person_Nbr','Date_Created', 'Enc_Timestamp']
for person_index in range(len(all_persons)):
for enc_obj_index in range(len(all_persons[person_index]['encounter_objects'])):
enc_obj = all_persons[person_index]['encounter_objects'][enc_obj_index]
for df_name in data_by_encounters_type:
if data_by_encounters_type[df_name] == "single":
if df_name in enc_obj and df_name!='encounters':
for column_to_drop in columns_to_drop:
try:
del enc_obj[df_name][column_to_drop]
except:
pass
else:
if df_name in enc_obj and df_name!='encounters':
for rp_index in range(len(enc_obj[df_name])):
for column_to_drop in columns_to_drop:
try:
del enc_obj[df_name][rp_index][column_to_drop]
except:
pass
all_persons[person_index]['encounter_objects'][enc_obj_index] = enc_obj
# drop full na object rows
# !does not seem to be working!!
for person_index in range(len(all_persons)):
for enc_obj_index in range(len(all_persons[person_index]['encounter_objects'])):
enc_obj = all_persons[person_index]['encounter_objects'][enc_obj_index]
for df_name in data_by_encounters_type:
if data_by_encounters_type[df_name] == "single":
if df_name in enc_obj:
if all(pd.isnull(enc_obj[df_name].values())):
enc_obj[df_name] = float('nan')
else:
if df_name in enc_obj:
for rp_index in reversed(range(len(enc_obj[df_name]))):
if all(pd.isnull(enc_obj[df_name][rp_index].values())):
del enc_obj[df_name][rp_index]
all_persons[person_index]['encounter_objects'][enc_obj_index] = enc_obj
with open('20170224_encounter_objects.json', 'w') as fh:
json.dump(all_persons, fh)
# creating a dataframe from aggregated data
combined_ecounters_df = pd.DataFrame.from_dict({(person_obj[person_key],enc_obj[encounter_key]): enc_obj
for person_obj in all_persons
for enc_obj in person_obj['encounter_objects']},
orient='index')
combined_ecounters_df.head(10)
combined_ecounters_df.loc[89,'family_hist_for_Enc']
Explanation: Dropping duplicated columns and then full na rows across tables
End of explanation |
4,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A count of the total number of mitochondria within the bounds (694×1794, 1750×2460, 1004×1379).
Step1: We can count annotated mitochondria by referencing the mitochondria channel
Step2: We can now use the built-in connected-components to count mitochondria. | Python Code:
import ndio.remote.OCP as OCP
oo = OCP()
token = "kasthuri2015_ramon_v1"
Explanation: A count of the total number of mitochondria within the bounds (694×1794, 1750×2460, 1004×1379).
End of explanation
mito_cutout = oo.get_cutout(token, 'mitochondria', 694, 1794, 1750, 2460, 1004, 1379, resolution=3)
Explanation: We can count annotated mitochondria by referencing the mitochondria channel:
End of explanation
import ndio.utils.stats as ndstats
c, f = ndstats.connected_components(mito_cutout)
print "There are {} mitochondria total in the annotated volume.".format(f)
Explanation: We can now use the built-in connected-components to count mitochondria.
End of explanation |
4,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Retrieve HiC dataset from NCBI
We will use data from <a name="ref-1"/>(Stadhouders R, Vidal E, Serra F, Di Stefano B et al. 2018), which comes from mouse cells in which Hi-C experiments were conducted in different states during highly-efficient somatic cell reprogramming.
The data can be downloaded from
Step1: Files are renamed for convenience.
Note
Step2: After compression we reduce the total size to 27 Gb (20% of the original size, and dsrc ensures fast reading of the compressed data)
Note | Python Code:
%%bash
mkdir -p FASTQs
fastq-dump SRR5344921 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs/
mv FASTQs/SRR5344921_1.fastq FASTQs/mouse_B_rep1_1.fastq
mv FASTQs/SRR5344921_2.fastq FASTQs/mouse_B_rep1_2.fastq
fastq-dump SRR5344925 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs/
mv FASTQs/SRR5344925_1.fastq FASTQs/mouse_B_rep2_1.fastq
mv FASTQs/SRR5344925_2.fastq FASTQs/mouse_B_rep2_2.fastq
fastq-dump SRR5344969 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs
mv FASTQs/SRR5344969_1.fastq FASTQs/mouse_PSC_rep1_1.fastq
mv FASTQs/SRR5344969_2.fastq FASTQs/mouse_PSC_rep1_2.fastq
fastq-dump SRR5344973 --defline-seq '@$ac.$si' -X 100000000 --split-files --outdir FASTQs/
mv FASTQs/SRR5344973_1.fastq FASTQs/mouse_PSC_rep2_1.fastq
mv FASTQs/SRR5344973_2.fastq FASTQs/mouse_PSC_rep2_2.fastq
Explanation: Retrieve HiC dataset from NCBI
We will use data from <a name="ref-1"/>(Stadhouders R, Vidal E, Serra F, Di Stefano B et al. 2018), which comes from mouse cells where Hi-C experiments were conducted in different states during highly-efficient somatic cell reprogramming.
The data can be downloaded from:
https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE53463
Once downloaded the files can be converted to the FASTQ format in order for TADbit to read them.
The easiest way to download the data might be through the fastq-dump program from the SRA Toolkit (http://www.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?cmd=show&f=software&m=software&s=software).
We download 100M reads for each of 4 replicates (2 replicates from B cells and 2 from Pluripotent Stem Cells), and organize each in two files, one per read-end (this step is long and can take up to 6 hours):
End of explanation
%%bash
dsrc c -t8 FASTQs/mouse_B_rep1_1.fastq FASTQs/mouse_B_rep1_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_B_rep1_2.fastq FASTQs/mouse_B_rep1_2.fastq.dsrc
dsrc c -t8 FASTQs/mouse_B_rep2_1.fastq FASTQs/mouse_B_rep2_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_B_rep2_2.fastq FASTQs/mouse_B_rep2_2.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep1_1.fastq FASTQs/mouse_PSC_rep1_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep1_2.fastq FASTQs/mouse_PSC_rep1_2.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep2_1.fastq FASTQs/mouse_PSC_rep2_1.fastq.dsrc
dsrc c -t8 FASTQs/mouse_PSC_rep2_2.fastq FASTQs/mouse_PSC_rep2_2.fastq.dsrc
Explanation: Files are renamed for convenience.
Note: the parameter used here for fastq-dump are for generating simple FASTQ files, --defline-seq ‘@$ac.$si’ reduces the information in the headers to the accession number and the read id, --split-files is to separate both read-ends in different files, finally -X 100000000 is to download only the first 100 Million reads of each replicate
Note: alternatively you can also directly download the FASTQ from http://www.ebi.ac.uk/
Compression
Each of these 8 files contains 100M reads of 75 nucleotides each and occupies ~17 Gb (~130 Gb in total).
Internally we use DSRC <a name="ref-4"/>(Roguski and Deorowicz, 2014), which allows a better compression ratio and, more importantly, faster decompression:
End of explanation
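Before the uncompressed FASTQs are deleted in the next cell, a quick sanity check one might run (a sketch, assuming the first file downloaded completely): each FASTQ record spans 4 lines, so the number of reads is the line count divided by 4.
# hypothetical check on one replicate; adjust the path for the others
with open('FASTQs/mouse_B_rep1_1.fastq') as fh:
    n_lines = sum(1 for _ in fh)
print('reads in mouse_B_rep1_1:', n_lines // 4)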
%%bash
rm -f FASTQs/mouse_B_rep1_1.fastq
rm -f FASTQs/mouse_B_rep1_2.fastq
rm -f FASTQs/mouse_B_rep2_1.fastq
rm -f FASTQs/mouse_B_rep2_2.fastq
rm -f FASTQs/mouse_PSC_rep1_1.fastq
rm -f FASTQs/mouse_PSC_rep1_2.fastq
rm -f FASTQs/mouse_PSC_rep2_1.fastq
rm -f FASTQs/mouse_PSC_rep2_2.fastq
Explanation: After compression we reduce the total size to 27 Gb (20% of the original size, and dsrc ensures fast reading of the compressed data)
Note:
- using gzip instead reduces size to ~38 Gb (occupies ~40% more than dsrc compressed files)
- using bzip2 instead reduces size to ~31 Gb (occupies ~15% more than dsrc compressed files)
Both are much slower to generate and read
Cleanup
End of explanation |
4,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and back, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
## Your code here
import random
threshold = 1e-5
wordsInt = sorted(int_words)
print(wordsInt[:30])
pass
bins = np.bincount(wordsInt)
print(bins[:30])
frequencies = np.zeros(len(bins), dtype=float)  # one slot per vocabulary id, not per token
for index, singlebin in enumerate(bins):
    frequencies[index] = singlebin / len(int_words)
print(frequencies[:30])
probs = np.zeros(len(bins), dtype=float)
for index, singlefrequency in enumerate(frequencies):
    probs[index] = 1 - np.sqrt(threshold/singlefrequency)
print(probs[:30])
# Discard some word considering single word discarding probability
train_words = []
for int_word in int_words:
discardRandom = random.random()
    if probs[int_word] > discardRandom:
        pass  # word discarded; printing every skipped occurrence would flood the notebook output
    else:
        train_words.append(int_word)
print(train_words[:30])
print(len(train_words))
#Solution (faster and cleaner)
from collections import Counter
import random
threshold_2 = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold_2/freqs[word]) for word in word_counts}
train_words_2 = [word for word in int_words if p_drop[word] < random.random()]
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
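As a quick numeric check of the formula (made-up frequencies, t = 1e-5): very frequent words get a discard probability close to 1, while rare words get a probability at or below 0 and are therefore always kept.
import numpy as np
t = 1e-5
for f in (0.05, 1e-5, 1e-6):
    print(f, 1 - np.sqrt(t / f))  # negative values simply mean "never discarded"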
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# My wrong implementation
#C = random.uniform(1,window_size,1)
#return words[idx-C:idx-1] + words[idx+1:idx+C]
#Solution
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
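A toy run of the generator above (integers standing in for word ids), just to see the shape of what it yields:
toy_words = list(range(20))
x, y = next(get_batches(toy_words, batch_size=10, window_size=3))
print(x)
print(y)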
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, shape=[None], name="inputs")
    labels = tf.placeholder(tf.int32, shape=[None, None], name="labels")
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform(shape=(n_vocab, n_embedding), minval=-1.0, maxval=1.0))
embed = tf.nn.embedding_lookup(embedding, inputs)
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
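A small numpy illustration (toy sizes) of why the lookup works: multiplying a one-hot vector by the weight matrix just picks out one row, which is exactly what tf.nn.embedding_lookup returns directly.
import numpy as np
W = np.random.rand(5, 3)          # toy "embedding matrix": 5 words, 3 features
one_hot = np.zeros(5)
one_hot[2] = 1
print(np.allclose(one_hot @ W, W[2]))  # True: the matmul is just row selection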
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal(shape=(n_embedding, n_vocab), mean=0.0, stddev=0.01))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
    loss = tf.nn.sampled_softmax_loss(weights=tf.transpose(softmax_w), biases=softmax_b,
                                      labels=labels, inputs=embed,
                                      num_sampled=n_sampled, num_classes=n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
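A rough numpy picture of the sampling idea (the TF op defaults to a log-uniform sampler over class ids; plain uniform sampling is used here only to illustrate, and the "true" word id is made up):
import numpy as np
true_class = 958                                   # hypothetical correct word id
negatives = np.random.choice(n_vocab, size=n_sampled, replace=False)
negatives = negatives[negatives != true_class]
print(len(negatives), 'negative classes scored instead of all', n_vocab)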
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
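What the similarity op above computes, written out in plain numpy on a toy matrix: rows are L2-normalized, so a dot product gives cosine similarity, and sorting it gives nearest neighbours.
import numpy as np
toy_embed = np.random.rand(10, 4)
unit = toy_embed / np.linalg.norm(toy_embed, axis=1, keepdims=True)
sims = unit[3] @ unit.T                # similarity of word 3 to every word
print(np.argsort(-sims)[1:4])          # its 3 nearest neighbours (the top hit is the word itself, so it is skipped)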
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
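One small practical note: embed_mat is a plain numpy array, so it can be saved and reloaded later without re-training (a sketch; the file name is arbitrary).
np.save('embed_mat.npy', embed_mat)
embed_mat_loaded = np.load('embed_mat.npy')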
4,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algebra Lineal con Python
Esta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Mi blog sobre Python. El contenido esta bajo la licencia BSD.
<img alt="Algebra lineal" title="Algebra lineal" src="https
Step3: Representación gráfica
Tradicionalmente, los vectores son representados visualmente como flechas que parten desde el origen hacia un punto.
Por ejemplo, si quisiéramos representar graficamente a los vectores $v1=[2, 4]$, $v2=[-3, 3]$ y $v3=[-4, -3.5]$, podríamos hacerlo de la siguiente manera.
Step4: Operaciones con vectores
Las operaciones más comunes que utilizamos cuando trabajamos con vectores son la suma, la resta y la multiplicación por <a href="https
Step5: Producto escalar o interior
El producto escalar de dos vectores se define como la suma de los productos de sus elementos, suele representarse matemáticamente como < x, y > o x'y, donde x e y son dos vectores.
$$< x, y >
Step6: Matrices
Las <a href="https
Step7: Multiplicacion o Producto de matrices
La regla para la multiplicación de matrices generaliza la idea del producto interior que vimos con los vectores; y esta diseñada para facilitar las operaciones lineales básicas.
Cuando multiplicamos matrices, el número de columnas de la primera <a href="https
Step8: Este ultimo ejemplo vemos que la propiedad conmutativa no se cumple, es más, Python nos arroja un error, ya que el número de columnas de B no coincide con el número de filas de A, por lo que ni siquiera se puede realizar la multiplicación de B x A.
Para una explicación más detallada del proceso de multiplicación de matrices, pueden consultar el siguiente tutorial.
La matriz identidad, la matriz inversa, la matriz transpuesta y el determinante
La matriz identidad es el elemento neutro en la multiplicación de matrices, es el equivalente al número 1. Cualquier matriz multiplicada por la matriz identidad nos da como resultado la misma matriz. La matriz identidad es una matriz cuadrada (tiene siempre el mismo número de filas que de columnas); y su diagonal principal se compone de todos elementos 1 y el resto de los elementos se completan con 0. Suele representase con la letra I
Por ejemplo la matriz identidad de 3x3 sería la siguiente
Step9: Sistemas de ecuaciones lineales
Una de las principales aplicaciones del Álgebra lineal consiste en resolver problemas de sistemas de ecuaciones lineales.
Una ecuación lineal es una ecuación que solo involucra sumas y restas de una variable o mas variables a la primera potencia. Es la ecuación de la línea recta.Cuando nuestro problema esta representado por más de una ecuación lineal, hablamos de un sistema de ecuaciones lineales. Por ejemplo, podríamos tener un sistema de dos ecuaciones con dos incógnitas como el siguiente
Step10: Luego de haber graficado las funciones, podemos ver que ambas rectas se cruzan en el punto (3, 1), es decir que la solución de nuestro sistema sería $x=3$ e $y=1$. En este caso, al tratarse de un sistema simple y con solo dos incógnitas, la solución gráfica puede ser de utilidad, pero para sistemas más complicados se necesita una solución numérica, es aquí donde entran a jugar las <a href="https
Step11: Para resolver en forma numérica los sistema de ecuaciones, existen varios métodos
Step12: Programación lineal
La programación lineal estudia las situaciones en las que se exige maximizar o minimizar funciones que se encuentran sujetas a determinadas restricciones.
Consiste en optimizar (minimizar o maximizar) una función lineal, denominada función objetivo, de tal forma que las variables de dicha función estén sujetas a una serie de restricciones que expresamos mediante un sistema de inecuaciones lineales.
Para resolver un problema de programación lineal, debemos seguir los siguientes pasos | Python Code:
# Vector como lista de Python
v1 = [2, 4, 6]
v1
# Vectores con numpy
import numpy as np
v2 = np.ones(3) # vector de solo unos.
v2
v3 = np.array([1, 3, 5]) # pasando una lista a las arrays de numpy
v3
v4 = np.arange(1, 8) # utilizando la funcion arange de numpy
v4
Explanation: Algebra Lineal con Python
Esta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Mi blog sobre Python. El contenido esta bajo la licencia BSD.
<img alt="Algebra lineal" title="Algebra lineal" src="https://relopezbriega.github.io/images/lin-alg.jpg">
Introducción
Una de las herramientas matemáticas más utilizadas en machine learning y data mining es el Álgebra lineal; por tanto, si queremos incursionar en el fascinante mundo del aprendizaje automático y el análisis de datos es importante reforzar los conceptos que forman parte de sus cimientos.
El Álgebra lineal es una rama de las matemáticas que es sumamente utilizada en el estudio de una gran variedad de ciencias, como ser, ingeniería, finanzas, investigación operativa, entre otras. Es una extensión del álgebra que aprendemos en la escuela secundaria, hacia un mayor número de dimensiones; en lugar de trabajar con incógnitas a nivel de <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a> comenzamos a trabajar con <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> y vectores.
El estudio del Álgebra lineal implica trabajar con varios objetos matemáticos, como ser:
Los <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">Escalares</a>: Un escalar es un solo número, en contraste con la mayoría de los otros objetos estudiados en Álgebra lineal, que son generalmente una colección de múltiples números.
Los Vectores:Un vector es una serie de números. Los números tienen una orden preestablecido, y podemos identificar cada número individual por su índice en ese orden. Podemos pensar en los vectores como la identificación de puntos en el espacio, con cada elemento que da la coordenada a lo largo de un eje diferente. Existen dos tipos de vectores, los vectores de fila y los vectores de columna. Podemos representarlos de la siguiente manera, dónde f es un vector de fila y c es un vector de columna:
$$f=\begin{bmatrix}0&1&-1\end{bmatrix} ; c=\begin{bmatrix}0\1\-1\end{bmatrix}$$
Las <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">Matrices</a>: Una matriz es un arreglo bidimensional de números (llamados entradas de la matriz) ordenados en filas (o renglones) y columnas, donde una fila es cada una de las líneas horizontales de la matriz y una columna es cada una de las líneas verticales. En una matriz cada elemento puede ser identificado utilizando dos índices, uno para la fila y otro para la columna en que se encuentra. Las podemos representar de la siguiente manera, A es una matriz de 3x2.
$$A=\begin{bmatrix}0 & 1& \-1 & 2 \ -2 & 3\end{bmatrix}$$
Los Tensores:En algunos casos necesitaremos una matriz con más de dos ejes. En general, una serie de números dispuestos en una cuadrícula regular con un número variable de ejes es conocido como un tensor.
Sobre estos objetos podemos realizar las operaciones matemáticas básicas, como ser adición, multiplicación, sustracción y <a href="https://es.wikipedia.org/wiki/Divisi%C3%B3n_(matem%C3%A1tica)" >división</a>, es decir que vamos a poder sumar vectores con <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>, multiplicar <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a> a vectores y demás.
Librerías de Python para álgebra lineal
Los principales módulos que Python nos ofrece para realizar operaciones de Álgebra lineal son los siguientes:
Numpy: El popular paquete matemático de Python, nos va a permitir crear vectores, <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> y tensores con suma facilidad.
numpy.linalg: Este es un submodulo dentro de Numpy con un gran número de funciones para resolver ecuaciones de Álgebra lineal.
scipy.linalg: Este submodulo del paquete científico Scipy es muy similar al anterior, pero con algunas más funciones y optimaciones.
Sympy: Esta librería nos permite trabajar con matemática simbólica, convierte a Python en un sistema algebraico computacional. Nos va a permitir trabajar con ecuaciones y fórmulas simbólicamente, en lugar de numéricamente.
CVXOPT: Este módulo nos permite resolver problemas de optimizaciones de programación lineal.
PuLP: Esta librería nos permite crear modelos de programación lineal en forma muy sencilla con Python.
Operaciones básicas
Vectores
Un vector de largo n es una secuencia (o array, o tupla) de n números. La solemos escribir como $x=(x1,...,xn)$ o $x=[x1,...,xn]$
En Python, un vector puede ser representado con una simple lista, o con un array de Numpy; siendo preferible utilizar esta última opción.
End of explanation
import matplotlib.pyplot as plt
from warnings import filterwarnings
%matplotlib inline
filterwarnings('ignore') # Ignorar warnings
def move_spines():
    """Crea la figura de pyplot y los ejes. Mueve las lineas de la izquierda y de abajo
    para que se intersecten con el origen. Elimina las lineas de la derecha y la de arriba.
    Devuelve los ejes."""
fix, ax = plt.subplots()
for spine in ["left", "bottom"]:
ax.spines[spine].set_position("zero")
for spine in ["right", "top"]:
ax.spines[spine].set_color("none")
return ax
def vect_fig():
    """Genera el grafico de los vectores en el plano"""
ax = move_spines()
ax.set_xlim(-5, 5)
ax.set_ylim(-5, 5)
ax.grid()
vecs = [[2, 4], [-3, 3], [-4, -3.5]] # lista de vectores
for v in vecs:
ax.annotate(" ", xy=v, xytext=[0, 0],
arrowprops=dict(facecolor="blue",
shrink=0,
alpha=0.7,
width=0.5))
ax.text(1.1 * v[0], 1.1 * v[1], v)
vect_fig() # crea el gráfico
Explanation: Representación gráfica
Tradicionalmente, los vectores son representados visualmente como flechas que parten desde el origen hacia un punto.
Por ejemplo, si quisiéramos representar graficamente a los vectores $v1=[2, 4]$, $v2=[-3, 3]$ y $v3=[-4, -3.5]$, podríamos hacerlo de la siguiente manera.
End of explanation
# Ejemplo en Python
x = np.arange(1, 5)
y = np.array([2, 4, 6, 8])
x, y
# sumando dos vectores numpy
x + y
# restando dos vectores
x - y
# multiplicando por un escalar
x * 2
y * 3
Explanation: Operaciones con vectores
Las operaciones más comunes que utilizamos cuando trabajamos con vectores son la suma, la resta y la multiplicación por <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a>.
Cuando sumamos dos vectores, vamos sumando elemento por elemento de cada
vector.
$$ \begin{split}x + y = \left[
\begin{array}{c}
x_1 \
x_2 \
\vdots \
x_n
\end{array}
\right] + \left[
\begin{array}{c}
y_1 \
y_2 \
\vdots \
y_n
\end{array}
\right] := \left[
\begin{array}{c}
x_1 + y_1 \
x_2 + y_2 \
\vdots \
x_n + y_n
\end{array}
\right]\end{split}$$
De forma similar funciona la operación de resta.
$$ \begin{split}x - y = \left[
\begin{array}{c}
x_1 \
x_2 \
\vdots \
x_n
\end{array}
\right] - \left[
\begin{array}{c}
y_1 \
y_2 \
\vdots \
y_n
\end{array}
\right] := \left[
\begin{array}{c}
x_1 - y_1 \
x_2 - y_2 \
\vdots \
x_n - y_n
\end{array}
\right]\end{split}$$
La multiplicación por <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a> es una operación que toma a un número $\gamma$, y a un vector $x$ y produce un nuevo vector donde cada elemento del vector $x$ es multiplicado por el número $\gamma$.
$$\begin{split}\gamma x := \left[
\begin{array}{c}
\gamma x_1 \
\gamma x_2 \
\vdots \
\gamma x_n
\end{array}
\right]\end{split}$$
En Python podríamos realizar estas operaciones en forma muy sencilla:
End of explanation
# Calculando el producto escalar de los vectores x e y
x @ y
# o lo que es lo mismo, que:
sum(x * y), np.dot(x, y)
# Calculando la norma del vector X
np.linalg.norm(x)
# otra forma de calcular la norma de x
np.sqrt(x @ x)
# vectores ortogonales
v1 = np.array([3, 4])
v2 = np.array([4, -3])
v1 @ v2
Explanation: Producto escalar o interior
El producto escalar de dos vectores se define como la suma de los productos de sus elementos, suele representarse matemáticamente como < x, y > o x'y, donde x e y son dos vectores.
$$< x, y > := \sum_{i=1}^n x_i y_i$$
Dos vectores son <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">ortogonales</a> o perpendiculares cuando forman ángulo recto entre sí. Si el producto escalar de dos vectores es cero, ambos vectores son <a href="https://es.wikipedia.org/wiki/Ortogonalidad_(matem%C3%A1ticas)">ortogonales</a>.
Adicionalmente, todo producto escalar induce una norma sobre el espacio en el que está definido, de la siguiente manera:
$$\| x \| := \sqrt{< x, x>} := \left( \sum_{i=1}^n x_i^2 \right)^{1/2}$$
En Python lo podemos calcular de la siguiente forma:
End of explanation
# Ejemplo en Python
A = np.array([[1, 3, 2],
[1, 0, 0],
[1, 2, 2]])
B = np.array([[1, 0, 5],
[7, 5, 0],
[2, 1, 1]])
# suma de las matrices A y B
A + B
# resta de matrices
A - B
# multiplicando matrices por escalares
A * 2
B * 3
# ver la dimension de una matriz
A.shape
# ver cantidad de elementos de una matriz
A.size
Explanation: Matrices
Las <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> son una forma clara y sencilla de organizar los datos para su uso en operaciones lineales.
Una <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> n × k es una agrupación rectangular de números con n filas y k columnas; se representa de la siguiente forma:
$$\begin{split}A = \left[
\begin{array}{cccc}
a_{11} & a_{12} & \cdots & a_{1k} \
a_{21} & a_{22} & \cdots & a_{2k} \
\vdots & \vdots & & \vdots \
a_{n1} & a_{n2} & \cdots & a_{nk}
\end{array}
\right]\end{split}$$
En la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> A, el símbolo $a_{nk}$ representa el elemento n-ésimo de la fila en la k-ésima columna. La <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> A también puede ser llamada un vector si cualquiera de n o k son iguales a 1. En el caso de n=1, A se llama un vector fila, mientras que en el caso de k=1 se denomina un vector columna.
Las <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> se utilizan para múltiples aplicaciones y sirven, en particular, para representar los coeficientes de los sistemas de ecuaciones lineales o para representar transformaciones lineales dada una base. Pueden sumarse, multiplicarse y descomponerse de varias formas.
Operaciones con matrices
Al igual que con los vectores, que no son más que un caso particular, las <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> se pueden sumar, restar y la multiplicar por <a href="https://es.wikipedia.org/wiki/Escalar_(matem%C3%A1tica)">escalares</a>.
Multiplicacion por escalares:
$$\begin{split}\gamma A
\left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \
\vdots & \vdots & \vdots \
a_{n1} & \cdots & a_{nk} \
\end{array}
\right] := \left[
\begin{array}{ccc}
\gamma a_{11} & \cdots & \gamma a_{1k} \
\vdots & \vdots & \vdots \
\gamma a_{n1} & \cdots & \gamma a_{nk} \
\end{array}
\right]\end{split}$$
Suma de matrices: $$\begin{split}A + B = \left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \
\vdots & \vdots & \vdots \
a_{n1} & \cdots & a_{nk} \
\end{array}
\right]
+
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \
\vdots & \vdots & \vdots \
b_{n1} & \cdots & b_{nk} \
\end{array}
\right]
:=
\left[
\begin{array}{ccc}
a_{11} + b_{11} & \cdots & a_{1k} + b_{1k} \
\vdots & \vdots & \vdots \
a_{n1} + b_{n1} & \cdots & a_{nk} + b_{nk} \
\end{array}
\right]\end{split}$$
Resta de matrices: $$\begin{split}A - B = \left[
\begin{array}{ccc}
a_{11} & \cdots & a_{1k} \
\vdots & \vdots & \vdots \
a_{n1} & \cdots & a_{nk} \
\end{array}
\right]-
\left[
\begin{array}{ccc}
b_{11} & \cdots & b_{1k} \
\vdots & \vdots & \vdots \
b_{n1} & \cdots & b_{nk} \
\end{array}
\right] := \left[
\begin{array}{ccc}
a_{11} - b_{11} & \cdots & a_{1k} - b_{1k} \
\vdots & \vdots & \vdots \
a_{n1} - b_{n1} & \cdots & a_{nk} - b_{nk} \
\end{array}
\right]\end{split}$$
Para los casos de suma y resta, hay que tener en cuenta que solo se pueden sumar o restar <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a> que tengan las mismas dimensiones, es decir que si tengo una <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> A de dimensión 3x2 (3 filas y 2 columnas) solo voy a poder sumar o restar la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> B si esta también tiene 3 filas y 2 columnas.
End of explanation
# Ejemplo multiplicación de matrices
A = np.arange(1, 13).reshape(3, 4) #matriz de dimension 3x4
A
B = np.arange(8).reshape(4,2) #matriz de dimension 4x2
B
# Multiplicando A x B
A @ B #resulta en una matriz de dimension 3x2
# Multiplicando B x A
B @ A
Explanation: Multiplicacion o Producto de matrices
La regla para la multiplicación de matrices generaliza la idea del producto interior que vimos con los vectores; y esta diseñada para facilitar las operaciones lineales básicas.
Cuando multiplicamos matrices, el número de columnas de la primera <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> debe ser igual al número de filas de la segunda <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a>; y el resultado de esta multiplicación va a tener el mismo número de filas que la primer <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> y el número de la columnas de la segunda <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a>. Es decir, que si yo tengo una <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> A de dimensión 3x4 y la multiplico por una <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> B de dimensión 4x2, el resultado va a ser una <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> C de dimensión 3x2.
Algo a tener en cuenta a la hora de multiplicar matrices es que la propiedad conmutativa no se cumple. AxB no es lo mismo que BxA.
Veamos los ejemplos en Python.
End of explanation
# Creando una matriz identidad de 2x2
I = np.eye(2)
I
# Multiplicar una matriz por la identidad nos da la misma matriz
A = np.array([[4, 7],
[2, 6]])
A
A @ I # AxI = A
# Calculando el determinante de la matriz A
np.linalg.det(A)
# Calculando la inversa de A.
A_inv = np.linalg.inv(A)
A_inv
# A x A_inv nos da como resultado I.
A @ A_inv
# Trasponiendo una matriz
A = np.arange(6).reshape(3, 2)
A
np.transpose(A)
Explanation: Este ultimo ejemplo vemos que la propiedad conmutativa no se cumple, es más, Python nos arroja un error, ya que el número de columnas de B no coincide con el número de filas de A, por lo que ni siquiera se puede realizar la multiplicación de B x A.
Para una explicación más detallada del proceso de multiplicación de matrices, pueden consultar el siguiente tutorial.
La matriz identidad, la matriz inversa, la matriz transpuesta y el determinante
La matriz identidad es el elemento neutro en la multiplicación de matrices, es el equivalente al número 1. Cualquier matriz multiplicada por la matriz identidad nos da como resultado la misma matriz. La matriz identidad es una matriz cuadrada (tiene siempre el mismo número de filas que de columnas); y su diagonal principal se compone de todos elementos 1 y el resto de los elementos se completan con 0. Suele representase con la letra I
Por ejemplo la matriz identidad de 3x3 sería la siguiente:
$$I=\begin{bmatrix}1 & 0 & 0 & \0 & 1 & 0\ 0 & 0 & 1\end{bmatrix}$$
Ahora que conocemos el concepto de la matriz identidad, podemos llegar al concepto de la matriz inversa. Si tenemos una matriz A, la matriz inversa de A, que se representa como $A^{-1}$ es aquella matriz cuadrada que hace que la multiplicación $A$x$A^{-1}$ sea igual a la matriz identidad I. Es decir que es la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> recíproca de A.
$$A × A^{-1} = A^{-1} × A = I$$
Tener en cuenta que esta matriz inversa en muchos casos puede no existir.En este caso se dice que la matriz es singular o degenerada. Una matriz es singular si y solo si su <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinante</a> es nulo.
El <a href="https://es.wikipedia.org/wiki/Determinante_(matem%C3%A1tica)">determinante</a> es un número especial que puede calcularse sobre las matrices cuadradas. Se calcula como la suma de los productos de las diagonales de la matriz en una dirección menos la suma de los productos de las diagonales en la otra dirección. Se represente con el símbolo |A|.
$$A=\begin{bmatrix}a_{11} & a_{12} & a_{13} & \a_{21} & a_{22} & a_{23} & \ a_{31} & a_{32} & a_{33} & \end{bmatrix}$$
$$|A| = (a_{11} a_{22} a_{33} + a_{12} a_{23} a_{31} + a_{13} a_{21} a_{32} ) - (a_{31} a_{22} a_{13} + a_{32} a_{23} a_{11} + a_{33} a_{21} a_{12})
$$
Por último, la matriz transpuesta es aquella en que las filas se transforman en columnas y las columnas en filas. Se representa con el símbolo $A^\intercal$
$$\begin{bmatrix}a & b & \c & d & \ e & f & \end{bmatrix}^T:=\begin{bmatrix}a & c & e &\b & d & f & \end{bmatrix}$$
Ejemplos en Python:
End of explanation
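Como verificación rápida (un esbozo mínimo, no parte del texto original), podemos calcular a mano el determinante de la matriz 2x2 usada arriba con la fórmula ad - bc y compararlo con np.linalg.det:
# Para una matriz 2x2, |M| = a*d - b*c
M = np.array([[4., 7.],
              [2., 6.]])
det_manual = M[0, 0] * M[1, 1] - M[0, 1] * M[1, 0]
print(det_manual, np.linalg.det(M))  # ambos deberían dar 10 (salvo error de redondeo)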
# graficando el sistema de ecuaciones.
x_vals = np.linspace(0, 5, 50) # crea 50 valores entre 0 y 5
plt.plot(x_vals, (1 - x_vals)/-2) # grafica x - 2y = 1
plt.plot(x_vals, (11 - (3*x_vals))/2) # grafica 3x + 2y = 11
plt.axis(ymin = 0)
Explanation: Sistemas de ecuaciones lineales
Una de las principales aplicaciones del Álgebra lineal consiste en resolver problemas de sistemas de ecuaciones lineales.
Una ecuación lineal es una ecuación que solo involucra sumas y restas de una variable o mas variables a la primera potencia. Es la ecuación de la línea recta.Cuando nuestro problema esta representado por más de una ecuación lineal, hablamos de un sistema de ecuaciones lineales. Por ejemplo, podríamos tener un sistema de dos ecuaciones con dos incógnitas como el siguiente:
$$ x - 2y = 1$$
$$3x + 2y = 11$$
La idea es encontrar el valor de $x$ e $y$ que resuelva ambas ecuaciones. Una forma en que podemos hacer esto, puede ser representando graficamente ambas rectas y buscar los puntos en que las rectas se cruzan.
En Python esto se puede hacer en forma muy sencilla con la ayuda de matplotlib.
End of explanation
# Comprobando la solucion con la multiplicación de matrices.
A = np.array([[1., -2.],
[3., 2.]])
x = np.array([[3.],[1.]])
A @ x
Explanation: Luego de haber graficado las funciones, podemos ver que ambas rectas se cruzan en el punto (3, 1), es decir que la solución de nuestro sistema sería $x=3$ e $y=1$. En este caso, al tratarse de un sistema simple y con solo dos incógnitas, la solución gráfica puede ser de utilidad, pero para sistemas más complicados se necesita una solución numérica, es aquí donde entran a jugar las <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matrices</a>.
Ese mismo sistema se podría representar como una ecuación matricial de la siguiente forma:
$$\begin{bmatrix}1 & -2 & \3 & 2 & \end{bmatrix} \begin{bmatrix}x & \y & \end{bmatrix} = \begin{bmatrix}1 & \11 & \end{bmatrix}$$
Lo que es lo mismo que decir que la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> A por la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> $x$ nos da como resultado el vector b.
$$ Ax = b$$
En este caso, ya sabemos el resultado de $x$, por lo que podemos comprobar que nuestra solución es correcta realizando la multiplicación de matrices.
End of explanation
# Creando matriz de coeficientes
A = np.array([[1, 2, 3],
[2, 5, 2],
[6, -3, 1]])
A
# Creando matriz de resultados
b = np.array([6, 4, 2])
b
# Resolviendo sistema de ecuaciones
x = np.linalg.solve(A, b)
x
# Comprobando la solucion
A @ x == b
Explanation: Para resolver en forma numérica los sistema de ecuaciones, existen varios métodos:
El método de sustitución: El cual consiste en despejar en una de las ecuaciones cualquier incógnita, preferiblemente la que tenga menor coeficiente y a continuación sustituirla en otra ecuación por su valor.
El método de igualacion: El cual se puede entender como un caso particular del método de sustitución en el que se despeja la misma incógnita en dos ecuaciones y a continuación se igualan entre sí la parte derecha de ambas ecuaciones.
El método de reduccion: El procedimiento de este método consiste en transformar una de las ecuaciones (generalmente, mediante productos), de manera que obtengamos dos ecuaciones en la que una misma incógnita aparezca con el mismo coeficiente y distinto signo. A continuación, se suman ambas ecuaciones produciéndose así la reducción o cancelación de dicha incógnita, obteniendo una ecuación con una sola incógnita, donde el método de resolución es simple.
El método gráfico: Que consiste en construir el gráfica de cada una de las ecuaciones del sistema. Este método (manualmente aplicado) solo resulta eficiente en el plano cartesiano (solo dos incógnitas).
El método de Gauss: El método de eliminación de Gauss o simplemente método de Gauss consiste en convertir un sistema lineal de n ecuaciones con n incógnitas, en uno escalonado, en el que la primera ecuación tiene n incógnitas, la segunda ecuación tiene n - 1 incógnitas, ..., hasta la última ecuación, que tiene 1 incógnita. De esta forma, será fácil partir de la última ecuación e ir subiendo para calcular el valor de las demás incógnitas.
El método de Eliminación de Gauss-Jordan: El cual es una variante del método anterior, y consistente en triangular la matriz aumentada del sistema mediante transformaciones elementales, hasta obtener ecuaciones de una sola incógnita.
El método de Cramer: El cual consiste en aplicar la regla de Cramer para resolver el sistema. Este método solo se puede aplicar cuando la matriz de coeficientes del sistema es cuadrada y de determinante no nulo.
La idea no es explicar cada uno de estos métodos, sino saber que existen y que Python nos hacer la vida mucho más fácil, ya que para resolver un sistema de ecuaciones simplemente debemos llamar a la función solve().
Por ejemplo, para resolver este sistema de 3 ecuaciones y 3 incógnitas.
$$ x + 2y + 3z = 6$$
$$ 2x + 5y + 2z = 4$$
$$ 6x - 3y + z = 2$$
Primero armamos la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> A de coeficientes y la <a href="https://es.wikipedia.org/wiki/Matriz_(matem%C3%A1ticas)">matriz</a> b de resultados y luego utilizamos solve() para resolverla.
End of explanation
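A modo de ilustración del método de Cramer mencionado en la lista anterior (un esbozo mínimo que asume el mismo sistema A x = b del ejemplo), cada incógnita se obtiene como el cociente de dos determinantes:
# x_i = det(A_i) / det(A), donde A_i es A con la columna i reemplazada por b
det_A = np.linalg.det(A)
x_cramer = np.zeros(3)
for i in range(3):
    A_i = A.astype(float)      # copia en punto flotante de A
    A_i[:, i] = b
    x_cramer[i] = np.linalg.det(A_i) / det_A
print(x_cramer)  # debería coincidir con np.linalg.solve(A, b)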
# Resolviendo la optimizacion con pulp
from pulp import *
# declarando las variables
x1 = LpVariable("x1", 0, 800)  # 0 <= x1 <= 800
x2 = LpVariable("x2", 0, 1000) # 0<= x2 <= 1000
# definiendo el problema
prob = LpProblem("problem", LpMaximize)
# definiendo las restricciones
prob += x1+1.5*x2 <= 750
prob += 2*x1+x2 <= 1000
prob += x1>=0
prob += x2>=0
# definiendo la funcion objetivo a maximizar
prob += 50*x1+40*x2
# resolviendo el problema
status = prob.solve(GLPK(msg=0))
LpStatus[status]
# imprimiendo los resultados
(value(x1), value(x2))
# Resolviendo el problema con cvxopt
from cvxopt import matrix, solvers
A = matrix([[-1., -2., 1., 0.], # columna de x1
[-1.5, -1., 0., 1.]]) # columna de x2
b = matrix([750., 1000., 0., 0.]) # resultados
c = matrix([50., 40.]) # funcion objetivo
# resolviendo el problema
sol=solvers.lp(c,A,b)
# imprimiendo la solucion.
print('{0:.2f}, {1:.2f}'.format(sol['x'][0]*-1, sol['x'][1]*-1))
# Resolviendo la optimizacion graficamente.
x_vals = np.linspace(0, 800, 10) # 10 valores entre 0 y 800
plt.plot(x_vals, ((750 - x_vals)/1.5)) # grafica x1 + 1.5x2 = 750
plt.plot(x_vals, (1000 - 2*x_vals)) # grafica 2x1 + x2 = 1000
plt.axis(ymin = 0)
plt.show()
Explanation: Linear programming
Linear programming studies situations in which we need to maximize or minimize functions that are subject to certain constraints.
It consists of optimizing (minimizing or maximizing) a linear function, called the objective function, whose variables are subject to a set of constraints expressed as a system of linear inequalities.
To solve a linear programming problem we follow these steps:
Choose the unknowns.
Write the objective function in terms of the data of the problem.
Write the constraints as a system of inequalities.
Find the set of feasible solutions by representing the constraints graphically.
Compute the coordinates of the vertices of the feasible region (if there are few of them).
Evaluate the objective function at each vertex to see which one gives the maximum or minimum value required by the problem (keeping in mind that a solution may not exist).
Let's look at an example and see how Python helps us solve it easily.
Suppose we have the following objective function:
$$f(x_{1},x_{2})= 50x_{1} + 40x_{2}$$
and the following constraints:
$$x_{1} + 1.5x_{2} \leq 750$$
$$2x_{1} + x_{2} \leq 1000$$
$$x_{1} \geq 0$$
$$x_{2} \geq 0$$
We can solve it with PuLP, CVXOPT or graphically (with matplotlib) as follows.
End of explanation |
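As a hedged alternative that is not used in the notebook above, the same problem can also be solved with scipy.optimize.linprog (which minimizes, so the objective is negated):
```python
from scipy.optimize import linprog

c = [-50., -40.]                       # negate to maximize 50*x1 + 40*x2
A_ub = [[1., 1.5], [2., 1.]]
b_ub = [750., 1000.]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                 # optimal point and maximized objective value
```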
4,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob)
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Step13: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
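To make the windowing concrete, here is a tiny walk-through on a toy array (an illustrative sketch only; the real notebook uses the encoded novel):
```python
import numpy as np

toy = np.arange(20)
n_seqs, n_steps = 2, 5
keep = len(toy) // (n_seqs * n_steps) * (n_seqs * n_steps)
toy = toy[:keep].reshape((n_seqs, -1))      # one row per sequence
x = toy[:, 0:n_steps]                        # first window of n_steps characters
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]      # targets shifted over by one
print(x)
print(y)
```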
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
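A quick shape check of that reshape, using plain NumPy as a stand-in for the LSTM output (illustrative only; the sizes below are example values):
```python
import numpy as np

N, M, L = 10, 50, 128               # batch size, sequence steps, lstm_size (assumed example values)
lstm_output = np.zeros((N, M, L))   # stand-in for the RNN output tensor
flat = lstm_output.reshape(-1, L)   # one row per step of every sequence
print(flat.shape)                   # (500, 128), i.e. (N*M, L)
```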
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing-gradient problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
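For reference, a minimal save/restore round-trip with tf.train.Saver, along the lines of the TensorFlow guide linked above (a sketch with a made-up checkpoint path, not part of the original notebook):
```python
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, "checkpoints/example.ckpt")      # write a checkpoint
with tf.Session() as sess:
    saver.restore(sess, "checkpoints/example.ckpt")   # load the variables back
```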
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
4,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Plotting with Matplotlib
Matplotlib is a standard plotting package for python.
Autor
Step1: Import several modules which will be useful for doing plots.
Step2: Scatter Plots
Here we will play with the basics of plotting data and creating simple figures.
In matplotlib there are sometimes several ways to make simple figures, but we'll start with an easy way.
Single-panel plots
Step3: Now, do some processing of the data, to compute the line ratio and extract only valid ratios.
Step4: Now, let's plot the data, HCN/HCO+ agains PAH EQW.
Step5: That's okay, but what do the error bars look like?
Step6: But we should plot our limits as well.
Step7: Let's colorize the points by something. How about the PAH EQW?
There are lots of colormaps, but we'll use viridis, since it's perceptual and colorblind-friendly.
Here we'll plot the points separately from their limits.
I'll leave adding the upper limits as an exercise for the reader
Step8: Histograms and Boxplots
Let's start with a standard histogram. We'll let matplotlib pick the bins, but you can provide your own as an array via the bins argument.
Step9: Let's pick our own bins now.
Step10: Let's revisit the plot of the HCN/HCO+ ratio against PAH EQW, and look at the distribution of points, via a boxplot.
Step11: A slightly more information-dense version of this is the violin plot, which adds a kernel estimated density distribution.
Step12: Multi-Panel Figures
Let's say we want to show the boxplot and the data next to each other. How do we do that? | Python Code:
%matplotlib inline
Explanation: Introduction to Plotting with Matplotlib
Matplotlib is a standard plotting package for python.
Author: George Privon
Preliminaries
Show plots in the notebook.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
Explanation: Import several modules which will be useful for doing plots.
End of explanation
data = np.genfromtxt('hcn_hco+.dat',
usecols=(1, 2, 3, 4, 5, 6, 7, 8, 9),
dtype=[('LprimeHCN', float),
('LprimeerrHCN', float),
('HCNlim', int),
('LprimeHCO', float),
('LprimeerrHCO', float),
('HCOlim', int),
('PAH62', float),
('PAH62err', float),
('PAH62lim', int)])
Explanation: Scatter Plots
Here we will play with the basics of plotting data and creating simple figures.
In matplotlib there are sometimes several ways to make simple figures, but we'll start with an easy way.
Single-panel plots
End of explanation
ratio = data['LprimeHCN'] / data['LprimeHCO']
ratioerr = ratio * np.sqrt((data['LprimeerrHCN']/data['LprimeHCN'])**2 +
(data['LprimeerrHCO']/data['LprimeHCO'])**2)
# work out which ratios are valid (i.e., not both upper and lower limits)
# and which ratios are which types of limits
valid = np.invert(np.logical_and(data['HCNlim'], data['HCOlim']))
nolim = np.invert(np.logical_or(data['HCNlim'], data['HCOlim']))
uplim = (valid * data['HCNlim']) > 0
lolim = (valid * data['HCOlim']) > 0
Explanation: Now, do some processing of the data, to compute the line ratio and extract only valid ratios.
End of explanation
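For reference (a restatement added here, not text from the original notebook), the ratio uncertainty computed above is standard uncorrelated error propagation in quadrature:
$$\sigma_{R} = R \sqrt{\left(\frac{\sigma_{L'_{\mathrm{HCN}}}}{L'_{\mathrm{HCN}}}\right)^{2} + \left(\frac{\sigma_{L'_{\mathrm{HCO^{+}}}}}{L'_{\mathrm{HCO^{+}}}}\right)^{2}}, \qquad R = \frac{L'_{\mathrm{HCN}}}{L'_{\mathrm{HCO^{+}}}}$$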
# create a "figure"
f = plt.figure()
plt.plot(data['PAH62'][nolim],
ratio[nolim],
marker='o',
color='black',
linestyle='',
label='IRAM 30m and Spitzer data!')
# let's label our axes
plt.xlabel(r'PAH EQW ($\mu m$)', fontsize='large')
plt.ylabel(r'HCN/HCO$^+$', fontsize='large')
# i like showing minor tickmarks
plt.minorticks_on()
# let's show a legend
plt.legend(loc='best', frameon=True)
Explanation: Now, let's plot the data, HCN/HCO+ against PAH EQW.
End of explanation
plt.figure()
plt.errorbar(data['PAH62'][nolim],
ratio[nolim],
marker='o',
linestyle='',
xerr=data['PAH62err'][nolim],
yerr=ratioerr[nolim],
label='Data')
plt.xlabel(r'PAH EQW ($\mu m$)', fontsize='large')
plt.ylabel(r'HCN/HCO$^+$', fontsize='large')
plt.minorticks_on()
# there's no simple theoretical model for this, so let's just plot a couple lines
# on top of the data
plt.plot([0,0.7],
[2.0, 0.6],
color='red',
linestyle=':',
label='Random line 1')
plt.plot([0,0.7],
[1.5, 2.6],
color='green',
linestyle='--',
label='Random line 2')
plt.legend(loc='best', frameon=False)
Explanation: That's okay, but what do the error bars look like?
End of explanation
plt.figure()
plt.minorticks_on()
plt.errorbar(data['PAH62'][nolim],
ratio[nolim],
marker='o',
color='black',
linestyle='',
ecolor='gray',
xerr=data['PAH62err'][nolim],
yerr=ratioerr[nolim],
label='Good ratios')
plt.xlabel(r'PAH EQW ($\mu m$)', fontsize='large')
plt.ylabel(r'HCN/HCO$^+$', fontsize='large')
# we can issue multiple plot commands to put things on the same figure axes
# (dividing and multiplying by 3 to make them 3-sigma limits)
nlim = len(data['PAH62'][uplim])
arrowlen = 0.2 * np.ones(nlim)
plt.errorbar(data['PAH62'][uplim],
3*ratio[uplim],
marker='o',
color='green',
linestyle='',
xerr=data['PAH62err'][uplim],
yerr=arrowlen,
ecolor='gray',
uplims=True,
label=r'3$\sigma$ upper limits')
nlim = len(data['PAH62'][lolim])
arrowlen = 0.2 * np.ones(nlim)
plt.errorbar(data['PAH62'][lolim],
ratio[lolim]/3.,
marker='o',
color='blue',
linestyle='',
xerr=data['PAH62err'][lolim],
yerr=arrowlen,
ecolor='gray',
lolims=True,
label=r'3$\sigma$ lower limits')
plt.legend(loc='best', frameon=False)
Explanation: But we should plot our limits as well.
End of explanation
plt.figure()
plt.minorticks_on()
# first plot the error bars
plt.errorbar(data['PAH62'][nolim],
ratio[nolim],
marker='',
linestyle='',
ecolor='gray',
xerr=data['PAH62err'][nolim],
yerr=ratioerr[nolim])
# now, overplot the points colored by PAH EQW
plt.scatter(data['PAH62'][nolim],
ratio[nolim],
s=60,
c=data['PAH62'][nolim],
cmap=plt.get_cmap('viridis'))
plt.xlabel(r'PAH EQW ($\mu m$)', fontsize='large')
plt.ylabel(r'HCN/HCO$^+$', fontsize='large')
# show and label the color bar
cbar = plt.colorbar()
cbar.set_label(r'PAH EQW ($\mu m$)')
# manually set the scaling, because otherwise it goes past 0 on the x-axis
plt.xlim([0,0.75])
# this is the same as the autoscaling for the y-axis, but just to show you can
# adjust it too
plt.ylim([0, 3.])
Explanation: Let's colorize the points by something. How about the PAH EQW?
There are lots of colormaps, but we'll use viridis, since it's perceptual and colorblind-friendly.
Here we'll plot the points separately from their limits.
I'll leave adding the upper limits as an exercise for the reader :)
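A possible version of that exercise — a hedged sketch that simply reuses the 3-sigma upper-limit overlay from the earlier cell on top of the colored scatter:
```python
# overlay 3-sigma upper limits on the colored scatter (illustrative sketch)
nlim = len(data['PAH62'][uplim])
plt.errorbar(data['PAH62'][uplim],
             3 * ratio[uplim],
             marker='o',
             color='green',
             linestyle='',
             xerr=data['PAH62err'][uplim],
             yerr=0.2 * np.ones(nlim),
             ecolor='gray',
             uplims=True)
```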
End of explanation
n, bins, patches = plt.hist(ratio[nolim])
plt.xlabel(r'HCN/HCO$^+$', fontsize='large')
plt.ylabel('N', fontsize='large')
Explanation: Histograms and Boxplots
Let's start with a standard histogram. We'll let matplotlib pick the bins, but you can provide your own as an array via the bins argument.
End of explanation
mybins = np.arange(0, 2.5, 0.4)
n, bins, patches = plt.hist(ratio[nolim],
bins=mybins,
color='gray',
normed=False)
plt.ylabel('N', fontsize='large')
Explanation: Let's pick our own bins now.
End of explanation
plt.boxplot([ratio[nolim][data[nolim]['PAH62']<0.2],
ratio[nolim][data[nolim]['PAH62']>=0.2]],
showcaps=True,
showmeans=True,
labels=['AGN', 'Starbursts and\nComposites'])
plt.ylabel(r'HCN/HCO$^+$')
Explanation: Let's revisit the plot of the HCN/HCO+ ratio against PAH EQW, and look at the distribution of points, via a boxplot.
End of explanation
fig = plt.figure()
res = plt.violinplot([ratio[nolim][data[nolim]['PAH62']<0.2],
ratio[nolim][data[nolim]['PAH62']>=0.2]],
showmedians=True,
showmeans=False,
showextrema=True)
# it takes a bit more work to add labels to this plot
plt.xticks(np.arange(1,3,1), ['AGN', 'Composites and\nStarbursts'])
# let's change the default colors
for elem in res['bodies']:
elem.set_facecolor('blue')
elem.set_edgecolor('purple')
Explanation: A slightly more information-dense version of this is the violin plot, which adds a kernel estimated density distribution.
End of explanation
fig, ax = plt.subplots(1, 3,
sharey=True,
squeeze=False,
figsize=(15,4))
# now, ax is a 1x3 array
print(ax.shape)
# we can do the same commands as above. But now instead of issuing plot commands
# via "plt.", we assign them directly to the axes.
ax[0][0].errorbar(data['PAH62'][nolim],
ratio[nolim],
marker='',
linestyle='',
ecolor='gray',
xerr=data['PAH62err'][nolim],
yerr=ratioerr[nolim])
# now, overplot the points colored by PAH EQW
im = ax[0][0].scatter(data['PAH62'][nolim],
ratio[nolim],
s=60,
c=data['PAH62'][nolim],
cmap=plt.get_cmap('viridis'))
# setting labels using the axis is slightly different
ax[0][0].set_xlabel(r'PAH EQW ($\mu m$)', fontsize='large')
ax[0][0].set_ylabel(r'HCN/HCO$^+$', fontsize='large')
# show and label the color bar
#cbar = ax[0][0].colorbar()
#cbar.set_label(r'PAH EQW ($\mu m$)')
# manually set the scaling, because otherwise it goes past 0 on the x-axis
ax[0][0].set_xlim([0,0.75])
# this is the same as the autoscaling for the y-axis, but just to show you can
# adjust it too
ax[0][0].set_ylim([0, 3.])
ax[0][1].boxplot([ratio[nolim][data[nolim]['PAH62']<0.2],
ratio[nolim][data[nolim]['PAH62']>=0.2]],
labels=['AGN', 'Starbursts and\nComposites'])
res = ax[0][2].violinplot([ratio[nolim][data[nolim]['PAH62']<0.2],
ratio[nolim][data[nolim]['PAH62']>=0.2]],
showmedians=True,
showmeans=False,
showextrema=True)
# it takes a bit more work to add labels to this plot
ax[0][2].set_xticks(np.arange(1,3,1))
ax[0][2].set_xticklabels(['AGN', 'Composites and\nStarbursts'])
# let's change the default colors
for elem in res['bodies']:
elem.set_facecolor('blue')
elem.set_edgecolor('purple')
# now let's make the two plots without any space between them
fig.subplots_adjust(hspace=0, wspace=0)
# adding a colorbar is slightly more complicated when doing subplots.
# here's one way...
# note we added "im =" to the scatter plot for this.
fig.subplots_adjust(right=0.8)
cbar_ax = fig.add_axes([0.85, 0.10, 0.03, 0.8])
cbar = fig.colorbar(im, cax=cbar_ax)
cbar.set_label(r'PAH EQW ($\mu m$)', fontsize='12')
Explanation: Multi-Panel Figures
Let's say we want to show the boxplot and the data next to each other. How do we do that?
End of explanation |
4,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2>Analisi comparativa dei metodi di dosaggio degli anticorpi anti recettore del TSH</h2>
<h3>Metodo Routine
Step1: <h4>Importazione del file con i dati </h4>
Step2: Varibili d'ambiete in comune
Step3: <h3>Aggiungo due colonne con pos neg in base al cut-off</h3>
Step4: <h4>Calcolo la tabella delle frequenze</h4>
<font color='red'> modulo utilizzato scipy.stat </font> http
Step5: <h4> Test chi quadrato</h4>
Step6: <h4> Test esatto di Fisher</h4>
Step7: <h3>test corretto per questo caso è il test di McNemar
Step8: -
<h2> Analisi della regressione</h2>
Step9: eseguiamo ora lo studio di regressione con tre modelli diversi
<font color='red'>Moduli statmodels e scipy </font>
Step10: Ortogonal Distance Regression (Deming Regression)
Step11: <h4>Bias</h4>
Step12: Normalize data | Python Code:
%matplotlib inline
#importo le librerie
import pandas as pd
import os
from __future__ import print_function,division
import numpy as np
import seaborn as sns
os.environ["NLS_LANG"] = "ITALIAN_ITALY.UTF8"
Explanation: <h2>Analisi comparativa dei metodi di dosaggio degli anticorpi anti recettore del TSH</h2>
<h3>Metodo Routine:<h3>
<ul>
<li><small>Brahms Trak Human con metodica LIA</small> </li>
<li>Metodo Siemens XPi TSI Assay chemiluminescenza Immulite 2000</li>
</ul>
<h3>Metodo di comparazione Thermophisher: anti TSH-R Elia su Immunocap 250</h3>
Analisi dei dati effettuata con la suite CONTINUUM ANALITICS https://www.continuum.io/
basata sui seguenti moduli python:
<ul>
<li>Pandas: per la gestione dei dati e le analisi di base </li>
<li> Matplotlib: per i grafici di base</li>
<li>Seaborn per grafici avanzati</li>
<li>Statmodels e scipy per le analisi avanzate</li>
</ul>
** tutti i software utilizzati sono open source**
End of explanation
#importo il file con i dati
path=r"D:\d\05 Lavscien\autoimmunita\corr_thibya\compar_thibya_immulite.csv"
database=pd.read_csv(path,sep=';',usecols=[1, 2, 3,4,5])#colonne da utilizzare
database['valore_cap']=database['valore_cap'].apply(lambda x: round(x,2))
database.drop(['codificato','accettazione'],axis=1,inplace=True)
database.tail(6)
database.describe()
Explanation: <h4>Importazione del file con i dati </h4>
End of explanation
#variabili d'ambiente comuni
cutoff_cap=2.9 #tre 2.9 r 3.3 dubbi
#cutoff_cap=3.3
#cutoff_rout=1 #brahms 1-1.5 dubbi
cutoff_rout=0.55 #Siemens
METODO_ROUTINE="Siemens Immulite 2000 Chemil."
CAP="Thermo Fisher ELIA anti-TSH-R Cap250 "
Explanation: Varibili d'ambiete in comune
End of explanation
database['cap_PN']=(database['valore_cap']>=cutoff_cap)
database['rut_PN']=(database['valore_rut']>=cutoff_rout)
database.head(5)
database['cap_PN'].replace([True,False],['Pos','Neg'],inplace=True)
database['rut_PN'].replace([True,False],['Pos','Neg'],inplace=True)
database.describe()
Explanation: <h3>Aggiungo due colonne con pos neg in base al cut-off</h3>
End of explanation
#sci.py moduli
from scipy.stats import chi2_contingency, fisher_exact
pd.crosstab(database.cap_PN,database.rut_PN)
ax=pd.crosstab(database.cap_PN,database.rut_PN).plot(kind='bar',stacked=True, )
ax.legend(['Neg','Pos'])
ax.set_xlabel(CAP)
Explanation: <h4>Calcolo la tabella delle frequenze</h4>
<font color='red'> modulo utilizzato scipy.stat </font> http://docs.scipy.org/doc/scipy/reference/stats.html
End of explanation
# test chi square
chi2, pvalue, dof, ex = chi2_contingency(pd.crosstab(database.cap_PN,database.rut_PN))
print ('valore di p:{}'.format(pvalue))
Explanation: <h4> Test chi quadrato</h4>
End of explanation
# test esatto di Fisher
oddsratio, pvalue =fisher_exact(pd.crosstab(database.cap_PN,database.rut_PN))
print ('valore di p:{}'.format(pvalue))
Explanation: <h4> Test esatto di Fisher</h4>
End of explanation
from statsmodels.sandbox.stats.runs import mcnemar
stat,p=mcnemar(pd.crosstab(database.cap_PN,database.rut_PN))
print("valore di p:{}".format(p))
Explanation: <h3>test corretto per questo caso è il test di McNemar:</h3>
test non parametrico dati appaiati risposte nominali binarie
<h4> Test esatto McNemar (per la dipendenza delle variabili)</h4>
<font color='red'> modulo utilizzato statsmodels </font> http://statsmodels.sourceforge.net/stable/index.html
End of explanation
# grafico di dispersione
import matplotlib.pyplot as plt
fig = plt.figure()
fig.suptitle('Scatterplot', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
ax.set_xlabel(METODO_ROUTINE)
ax.set_ylabel(CAP)
ax.plot(database.valore_rut,database.valore_cap,'o')
plt.show()
Explanation: -
<h2> Analisi della regressione</h2>
End of explanation
# con statmodel : regressione minimi quadrati
##res_ols = sm.OLS(y, statsmodels.tools.add_constant(X)).fit() per vecchia versione
import statsmodels.api as sm
#sm.OLS(Y,X)
X = sm.add_constant(database.valore_rut )
modello_minquad=sm.OLS(database.valore_cap,X)
regressione_minquad=modello_minquad.fit()
regressione_minquad.summary()
# con statmodel : regressione robusta (Robust Linear Model)
X = sm.add_constant(database.valore_rut)
modello=sm.RLM(database.valore_cap,X)
regressione_robusta=modello.fit()
regressione_robusta.summary()
#importo la librearia seborn per una migliore visualizzazione grafica
sns.set(color_codes=True)
ax = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="g",robust=True)
ax = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="b")
ax.set_title('Regressione lineare OLS + RLM ')
ax.set_xlabel(METODO_ROUTINE)
ax.set_ylabel(CAP)
ax.set(ylim=(0, None))
ax.set(xlim=(0, None))
sns.set(color_codes=True)
ax2 = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="g",robust=True)
ax2 = sns.regplot(x=database.valore_rut,y=database.valore_cap, color="b")
ax2.set_title('Regressione lineare OLS + RLM ')
ax2.set_xlabel(METODO_ROUTINE)
ax2.set_ylabel(CAP)
ax2.set(ylim=(0, 20))
ax2.set(xlim=(0, 8))
ax=sns.jointplot(x=database.valore_rut,y=database.valore_cap, kind="reg");
ax.set_axis_labels(METODO_ROUTINE,CAP)
Explanation: eseguiamo ora lo studio di regressione con tre modelli diversi
<font color='red'>Moduli statmodels e scipy </font>
End of explanation
# regressione ODR (ortogonal distance regression Deming)
import scipy.odr as odr
#modello di fitting
def funzione(B,x):
return B[0]*x+B[1]
linear= odr.Model(funzione)
variabili=odr.Data(database.valore_rut,database.valore_cap)
regressione_ortogonale=odr.ODR(variabili,linear,beta0=[1., 2.])
output=regressione_ortogonale.run()
output.pprint()
output
Explanation: Ortogonal Distance Regression (Deming Regression)
End of explanation
database_b=database
database_b['bias']=database['valore_rut']-database['valore_cap']
database_b.head()
sns.distplot(database_b.bias)
database.describe()
Explanation: <h4>Bias</h4>
End of explanation
'''from sklearn import preprocessing
min_max_scaler = preprocessing.MinMaxScaler()
x_scaled = min_max_scaler.fit_transform(x)
normalized = pd.DataFrame(database.valore_cap)
normalized'''
Explanation: Normalize data
End of explanation |
4,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Logistic Regression (scikit-learn) Experiment Versioning & Registry
<a href="https
Step1: This example features
Step2: Phase 1
Step3: Prepare data
Step4: Prepare hyperparameters
Step5: Train models
Step6: Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
Retrieve best run
Step7: Train on full dataset
Step8: Calculate accuracy on full training set
Step9: Phase 2 | Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
Explanation: Logistic Regression (scikit-learn) Experiment Versioning & Registry
<a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/census-experiment-versioning-registry.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Basic Verta Setup
End of explanation
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification"
EXPERIMENT_NAME = "Logistic Regression"
WORKSPACE = "XXXXX"
import os
os.environ['VERTA_EMAIL'] = 'XXXXXXXXXX'
os.environ['VERTA_DEV_KEY'] = 'XXXXXXXXXXXXXXXXXXXX'
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
try:
import wget
except ImportError:
!pip install wget # you may need pip3
import wget
Explanation: This example features:
- scikit-learn's LinearRegression model
- verta model versioning and experiment tracking
- verta model staging and registry
End of explanation
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME, workspace=WORKSPACE, public_within_org=True)
expt = client.set_experiment(EXPERIMENT_NAME)
Explanation: Phase 1: Model Development
This section demonstrates logging model metadata and training artifacts to ModelDB.
Instantiate client
End of explanation
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.detect_filename(train_data_url)
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.detect_filename(test_data_url)
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
from verta.dataset import Path
dataset = client.set_dataset(name="Census Income Local")
dataset_version = dataset.create_version(Path(train_data_filename))
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
Explanation: Prepare data
End of explanation
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
Explanation: Prepare hyperparameters
End of explanation
def run_experiment(hyperparams):
    # create object to track experiment run
    run = client.set_experiment_run()

    # log attributes
    run.log_attributes({
        'library': "scikit-learn",
        'model_type': "logistic regression",
    })

    # create validation split
    (X_val_train, X_val_test,
     y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
                                                                 test_size=0.2,
                                                                 shuffle=True)

    # log hyperparameters
    run.log_hyperparameters(hyperparams)
    print(hyperparams, end=' ')

    # create and train model
    model = linear_model.LogisticRegression(**hyperparams)
    model.fit(X_train, y_train)

    # calculate and log validation accuracy
    val_acc = model.score(X_val_test, y_val_test)
    run.log_metric("val_acc", val_acc)
    print("Validation accuracy: {:.4f}".format(val_acc))

    # create deployment artifacts
    model_api = ModelAPI(X_train, model.predict(X_train))
    requirements = ["scikit-learn"]

    # save and log model
    run.log_model(model, model_api=model_api, custom_modules=[])
    run.log_requirements(requirements)

    # log training data
    run.log_dataset_version("census_data", dataset_version)  # log dataset metadata

    # log git information
    run.log_code(
        repo_url="[email protected]:VertaAI/modeldb.git",
        commit_hash="d412a0d9",
        autocapture=False,
    )

# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in hyperparam_sets:
    run_experiment(hyperparams)
Explanation: Train models
End of explanation
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
Explanation: Revisit Workflow
This section demonstrates querying and retrieving runs via the Client.
Retrieve best run
End of explanation
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
Explanation: Train on full dataset
End of explanation
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
Explanation: Calculate accuracy on full training set
End of explanation
registered_model = client.get_or_create_registered_model(name="census", workspace=WORKSPACE, public_within_org=True)
registered_model.create_version_from_run(best_run.id, name="v0")
Explanation: Phase 2: Staging
Register the best model
The best-performing model can be staged as a registered model, for use downstream.
End of explanation |
4,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2, part 1 (40 points)
This warm-up problem set is provided to help you get used to PyTorch.
Please, only fill parts marked with "Your code here".
Step1: To learn best practices $-$ for example,
how to choose between .sqrt() and .sqrt_(),
when to use .view() and how is it different from .reshape(),
which dtype to use,
$-$ you are expected to google a lot, read tutorials on the Web and study documentation.
Quick documentation on functions and modules is available with ? and help(), like so
Step2: Task 1 (3 points)
Use tensors only
Step4: Task 2 (7 points)
Use tensors only
Step5: More fun with Game of Life
Step7: The cell below has an example layout for encapsulating your neural network. Feel free to modify the interface if you need to (add arguments, add return values, add methods etc.). For example, you may want to add a method do_gradient_step() that executes one optimization algorithm (SGD / Adadelta / Adam / ...) step.
Step9: Define subroutines for one-hot encoding, accuracy calculating and batch generating
Step10: Prepare dataset
Step11: Define model and train
Step12: Plot loss
Step13: Final evaluation | Python Code:
import numpy as np
import math
import matplotlib.pyplot as plt
%matplotlib inline
import torch
assert torch.__version__ >= '1.0.0'
import tqdm
Explanation: Homework 2, part 1 (40 points)
This warm-up problem set is provided to help you get used to PyTorch.
Please, only fill parts marked with "Your code here".
End of explanation
help(torch.sqrt)
# to close the Jupyter help bar, press `Esc` or `q`
?torch.cat
Explanation: To learn best practices $-$ for example,
how to choose between .sqrt() and .sqrt_(),
when to use .view() and how is it different from .reshape(),
which dtype to use,
$-$ you are expected to google a lot, read tutorials on the Web and study documentation.
Quick documentation on functions and modules is available with ? and help(), like so:
End of explanation
theta = torch.linspace(-math.pi, math.pi, 1000)
assert theta.shape == (1000,)
rho = (1 + 0.9 * torch.cos(8 * theta)) * (1 + 0.1 * torch.cos(24 * theta)) * (0.9 + 0.05 * torch.cos(200 * theta)) * (1 + torch.sin(theta))
assert torch.is_same_size(rho, theta)
x = rho * torch.cos(theta)
y = rho * torch.sin(theta)
# Run this cell and make sure the plot is correct
plt.figure(figsize=[6,6])
plt.fill(x.numpy(), y.numpy(), color='green')
plt.grid()
Explanation: Task 1 (3 points)
Use tensors only: no lists, loops, numpy arrays etc.
Clarification update:
you mustn't emulate PyTorch tensors with lists or tuples. Using a list for scaffolding utilities not provided by PyTorch core (e.g. to store model's layers or to group function arguments) is OK;
no loops;
you mustn't use numpy or other tensor libraries except PyTorch.
$\rho(\theta)$ is defined in polar coordinate system:
$$\rho(\theta) = (1 + 0.9 \cdot \cos{8\theta} ) \cdot (1 + 0.1 \cdot \cos{24\theta}) \cdot (0.9 + 0.05 \cdot \cos {200\theta}) \cdot (1 + \sin{\theta})$$
Create a regular grid of 1000 values of $\theta$ between $-\pi$ and $\pi$.
Compute $\rho(\theta)$ at these values.
Convert it into Cartesian coordinates (howto).
End of explanation
from scipy.signal import correlate2d as conv2d
def numpy_update(alive_map):
    # Count neighbours with convolution
    conv_kernel = np.array([[1,1,1],
                            [1,0,1],
                            [1,1,1]])
    num_alive_neighbors = conv2d(alive_map, conv_kernel, mode='same')

    # Apply game rules
    born = np.logical_and(num_alive_neighbors == 3, alive_map == 0)
    survived = np.logical_and(np.isin(num_alive_neighbors, [2,3]), alive_map == 1)
    np.copyto(alive_map, np.logical_or(born, survived))

def torch_update(alive_map):
    """
    Game of Life update function that does to `alive_map` exactly the same as `numpy_update`.
    :param alive_map: `torch.tensor` of shape `(height, width)` and dtype `torch.float32`
        containing 0s (dead) and 1s (alive)
    """
    conv_kernel = torch.Tensor([[[[1, 1, 1], [1, 0, 1], [1, 1, 1]]]])
    neighbors_map = torch.conv2d(alive_map.unsqueeze(0).unsqueeze(0),
                                 conv_kernel, padding=1).squeeze()
    born = (neighbors_map == 3) & (alive_map == 0)
    survived = ((neighbors_map == 2) | (neighbors_map == 3)) & (alive_map == 1)
    alive_map.copy_(born | survived)
# Generate a random initial map
alive_map_numpy = np.random.choice([0, 1], p=(0.5, 0.5), size=(100, 100))
alive_map_torch = torch.tensor(alive_map_numpy).float().clone()
numpy_update(alive_map_numpy)
torch_update(alive_map_torch)
# results should be identical
assert np.allclose(alive_map_torch.numpy(), alive_map_numpy), \
"Your PyTorch implementation doesn't match numpy_update."
print("Well done!")
%matplotlib notebook
plt.ion()
# initialize game field
alive_map = np.random.choice([0, 1], size=(100, 100))
alive_map = torch.tensor(alive_map).float()
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for _ in range(100):
    torch_update(alive_map)

    # re-draw image
    ax.clear()
    ax.imshow(alive_map.numpy(), cmap='gray')
    fig.canvas.draw()
# A fun setup for your amusement
alive_map = np.arange(100) % 2 + np.zeros([100, 100])
alive_map[48:52, 50] = 1
alive_map = torch.tensor(alive_map).float()
fig = plt.figure()
ax = fig.add_subplot(111)
fig.show()
for _ in range(150):
    torch_update(alive_map)
    ax.clear()
    ax.imshow(alive_map.numpy(), cmap='gray')
    fig.canvas.draw()
Explanation: Task 2 (7 points)
Use tensors only: no lists, loops, numpy arrays etc.
Clarification update: see task 1.
We will implement Conway's Game of Life in PyTorch.
If you skipped the URL above, here are the rules:
* You have a 2D grid of cells, where each cell is "alive"(1) or "dead"(0)
* At one step in time, the generation update happens:
* Any living cell that has 2 or 3 neighbors survives, otherwise (0,1 or 4+ neighbors) it dies
* Any cell with exactly 3 neighbors becomes alive if it was dead
You are given a reference numpy implementation of the update step. Your task is to convert it to PyTorch.
End of explanation
np.random.seed(666)
torch.manual_seed(666)
from notmnist import load_notmnist
letters = 'ABCDEFGHIJ'
X_train, y_train, X_test, y_test = map(torch.tensor, load_notmnist(letters=letters))
X_train.squeeze_()
X_test.squeeze_();
%matplotlib inline
fig, axarr = plt.subplots(2, 10, figsize=(15,3))
for idx, ax in enumerate(axarr.ravel()):
    ax.imshow(X_train[idx].numpy(), cmap='gray')
    ax.axis('off')
    ax.set_title(letters[y_train[idx]])
Explanation: More fun with Game of Life: video
Task 3 (30 points)
You have to solve yet another character recognition problem: 10 letters, ~14 000 train samples.
For this, we ask you to build a multilayer perceptron (i.e. a neural network of linear layers) from scratch using low-level PyTorch interface.
Requirements:
1. at least 82% accuracy
2. at least 2 linear layers
3. use softmax followed by categorical cross-entropy
You are NOT allowed to use
* numpy arrays
* torch.nn, torch.optim, torch.utils.data.DataLoader
* convolutions
Clarification update:
you mustn't emulate PyTorch tensors with lists or tuples. Using a list for scaffolding utilities not provided by PyTorch core (e.g. to store model's layers or to group function arguments) is OK;
you mustn't use numpy or other tensor libraries except PyTorch;
the purpose of part 1 is to make you google and read the documentation a LOT so that you learn which intrinsics PyTorch provides and what are their interfaces. This is why if there is some tensor functionality that is directly native to PyTorch, you mustn't emulate it with loops. Example:
```
x = torch.rand(1_000_000)

# Wrong: slow and unreadable
for idx in range(x.numel()):
    x[idx] = math.sqrt(x[idx])

# Correct
x.sqrt_()
```
Loops are prohibited except for iterating over
parameters (and their companion tensors used by optimizer, e.g. running averages),
layers,
epochs (or "global" gradient steps if you don't use epoch logic),
batches in the dataset (using loops for collecting samples into a batch is not allowed).
Tips:
Pick random batches (either shuffle data before each epoch or sample each batch randomly).
Do not initialize weights with zeros (learn why). Gaussian noise with small variance will do.
50 hidden neurons and a sigmoid nonlinearity will do for a start. Many ways to improve.
To improve accuracy, consider changing layers' sizes, nonlinearities, optimization methods, weights initialization.
Don't use GPU yet.
Reproducibility requirement: you have to format your code cells so that Cell -> Run All on a fresh notebook reliably trains your model to the desired accuracy in a couple of minutes and reports the accuracy reached.
Happy googling!
End of explanation
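One remark on requirement 3: computing log-softmax and then the negative log-likelihood, as the class below does, is mathematically the same as softmax followed by categorical cross-entropy. A tiny self-contained check (illustrative only; the random logits and one-hot targets are made up for the comparison):
```
import torch

logits = torch.randn(4, 10)
target = torch.eye(10)[torch.randint(0, 10, (4,))]             # one-hot targets

log_p = logits - torch.logsumexp(logits, dim=1, keepdim=True)  # log-softmax
nll = -(target * log_p).sum() / logits.shape[0]                # NLL on log-probabilities

p = torch.softmax(logits, dim=1)                               # softmax ...
ce = -(target * torch.log(p)).sum() / logits.shape[0]          # ... followed by cross-entropy

assert torch.allclose(nll, ce)
```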
class NeuralNet:
    def __init__(self, lr):
        # Your code here
        self.lr = lr
        self.EPS = 1e-15

        # First linear layer
        self.linear1w = torch.randn(784, 300, dtype=torch.float32, requires_grad=True)
        self.linear1b = torch.randn(1, 300, dtype=torch.float32, requires_grad=True)

        # Second linear layer
        self.linear2w = torch.randn(300, 10, dtype=torch.float32, requires_grad=True)
        self.linear2b = torch.randn(1, 10, dtype=torch.float32, requires_grad=True)

    def predict(self, images):
        """
        images: `torch.tensor` of shape `batch_size x height x width`
            and dtype `torch.float32`.
        returns: `output`, a `torch.tensor` of shape `batch_size x 10`,
            where `output[i][j]` is the probability of `i`-th
            batch sample to belong to `j`-th class.
        """
        def log_softmax(input):
            input = input - torch.max(input, dim=1, keepdim=True)[0]
            return input - torch.log(torch.sum(torch.exp(input), dim=1, keepdim=True))

        linear1_out = torch.add(images @ self.linear1w, self.linear1b).clamp(min=0)
        linear2_out = torch.add(linear1_out @ self.linear2w, self.linear2b)
        return log_softmax(linear2_out)

    def get_loss(self, input, target):
        def nll(input, target):
            return -torch.sum(target * input) / input.shape[0]
        return nll(input, target)

    def zero_grad(self):
        with torch.no_grad():
            self.linear1w.grad.zero_()
            self.linear1b.grad.zero_()
            self.linear2w.grad.zero_()
            self.linear2b.grad.zero_()

    def update_weights(self, loss):
        loss.backward()
        with torch.no_grad():
            self.linear1w -= self.lr * self.linear1w.grad
            self.linear1b -= self.lr * self.linear1b.grad
            self.linear2w -= self.lr * self.linear2w.grad
            self.linear2b -= self.lr * self.linear2b.grad
        self.zero_grad()
Explanation: The cell below has an example layout for encapsulating your neural network. Feel free to modify the interface if you need to (add arguments, add return values, add methods etc.). For example, you may want to add a method do_gradient_step() that executes one optimization algorithm (SGD / Adadelta / Adam / ...) step.
End of explanation
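For reference, one possible shape of the optional do_gradient_step() method suggested above is sketched here — plain SGD with momentum, kept within the assignment's rules (looping over parameters is explicitly allowed). This is an illustration only; the velocity buffers and the momentum hyperparameter are assumptions, not part of the assignment or of the solution above.
```
# Hypothetical NeuralNet method (not used by the solution above):
# an explicit optimizer step, here SGD with momentum.
def do_gradient_step(self, loss, momentum=0.9):
    loss.backward()
    with torch.no_grad():
        params = (self.linear1w, self.linear1b, self.linear2w, self.linear2b)
        if not hasattr(self, "_velocity"):
            # one running-average ("velocity") buffer per parameter tensor
            self._velocity = [torch.zeros_like(p) for p in params]
        for p, v in zip(params, self._velocity):   # loop over parameters is allowed
            v.mul_(momentum).add_(p.grad)          # v = momentum * v + grad
            p -= self.lr * v                       # parameter update
            p.grad.zero_()
```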
def one_hot_encode(input, classes=10):
    return torch.eye(classes)[input]

def accuracy(model, images, labels):
    """
    model: `NeuralNet`
    images: `torch.tensor` of shape `N x height x width`
        and dtype `torch.float32`
    labels: `torch.tensor` of shape `N` and dtype `torch.int64`. Contains
        class index for each sample
    returns:
        fraction of samples from `images` correctly classified by `model`
    """
    with torch.no_grad():
        labels_pred = model.predict(images)
        numbers = labels_pred.argmax(dim=-1)
        numbers_target = labels.argmax(dim=-1)
        return (numbers == numbers_target).float().mean()

class batch_generator:
    def __init__(self, images, batch_size):
        dataset_size = images[0].size()[0]
        permutation = torch.randperm(dataset_size)
        self.images = images[0][permutation]
        self.targets = images[1][permutation]
        self.images = self.images.split(batch_size, dim=0)
        self.targets = self.targets.split(batch_size, dim=0)
        self.current = 0
        self.high = len(self.targets)

    def __iter__(self):
        return self

    def __next__(self):
        if self.current >= self.high:
            raise StopIteration
        else:
            self.current += 1
            return self.images[self.current - 1], self.targets[self.current - 1]
Explanation: Define subroutines for one-hot encoding, accuracy calculating and batch generating:
End of explanation
train_size, _, _ = X_train.shape
test_size, _, _ = X_test.shape
X_train = X_train.reshape(train_size, -1)
X_test = X_test.reshape(test_size, -1)
y_train_oh = one_hot_encode(y_train)
y_test_oh = one_hot_encode(y_test)
print("Train size: ", X_train.shape)
print("Test size: ", X_test.shape)
Explanation: Prepare dataset: reshape and one-hot encode:
End of explanation
model = NeuralNet(1e-2)
batch_size = 128
epochs = 50
loss_history = torch.Tensor(epochs)
for epoch in tqdm.trange(epochs):
    # Update weights
    for X_batch, y_batch in batch_generator((X_train, y_train_oh), batch_size):
        predicted = model.predict(X_batch)
        loss = model.get_loss(predicted, y_batch)
        model.update_weights(loss)

    # Calculate loss
    test_predicted = model.predict(X_test)
    loss = model.get_loss(test_predicted, y_test_oh)
    loss_history[epoch] = loss
    model.zero_grad()
Explanation: Define model and train
End of explanation
plt.figure(figsize=(14, 7))
plt.title("Loss")
plt.xlabel("#epoch")
plt.ylabel("Loss")
plt.plot(loss_history.detach().numpy(), label="Validation loss")
plt.legend(loc='best')
plt.grid()
plt.show()
Explanation: Plot loss:
End of explanation
train_acc = accuracy(model, X_train, y_train_oh) * 100
test_acc = accuracy(model, X_test, y_test_oh) * 100
print("Train accuracy: %.2f, test accuracy: %.2f" % (train_acc, test_acc))
assert test_acc >= 82.0, "You have to do better"
Explanation: Final evaluation:
End of explanation |
4,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VarData Speed Calculation Comparison
Notebook for calculating a VarData test case to get a unit number for comparison
For more speed calculations see VarData Speed Calculations
Step1: Test Example
Step2: Your Test Here | Python Code:
import math
import datetime
def varData_speedCalc_comparison(test_case, test_title, test_description, test_time=None, cups_speed=250.0, substrate_size_mm=None, substrate_size_px=None, dpi_image=[360.0,360.0]):
    inch2mm = 25.4 # mm/inch
    bpp = 4.0 # bit/px

    if test_time == None:
        test_time = datetime.datetime.now()

    # cup speed
    cup_speed_sec = cups_speed / 60 # cup/sec
    cup_cycle = 1/cup_speed_sec # sec

    # pixel pitch
    pixel_pitch = [inch2mm / dpi_image[0], inch2mm / dpi_image[1]] # mm per pixel

    # Substrate size (image size)
    if not(substrate_size_px == None):
        # px -> mm: multiply by the pixel pitch (mm per pixel)
        substrate_size_mm = [substrate_size_px[0]*pixel_pitch[0], substrate_size_px[1]*pixel_pitch[1]]
    elif not(substrate_size_mm == None):
        # mm -> px: divide by the pixel pitch and round up
        substrate_size_px = [int(math.ceil(substrate_size_mm[0]/pixel_pitch[0])), int(math.ceil(substrate_size_mm[1]/pixel_pitch[1]))]

    # print speed
    print_speed = (substrate_size_mm[0]/cup_cycle) # mm/s
    print_speed_area = (substrate_size_mm[0]*substrate_size_mm[1])/cup_cycle # mm^2/s

    # data size per image
    image_size_bytes = (substrate_size_px[0]*substrate_size_px[1])*bpp/8 # bytes

    # Print all results
    print("------------------- {} -------------------".format(test_title))
    print("-- Test Case: #{}".format(str(test_case)))
    print("-- Test Time: {}".format(str(test_time)))
    print("----------------------------------------------------------------")
    print(test_description)
    print("----------------------------------------------------------------")
    print("Image info: {:.3f} mm x {:.3f} mm".format(substrate_size_mm[0], substrate_size_mm[1]))
    print("            {:.3f} px x {:.3f} px".format(substrate_size_px[0], substrate_size_px[1]))
    if image_size_bytes < 1024:
        print("            {:.3f} Bytes".format(image_size_bytes))
    elif image_size_bytes/1024 < 1024:
        print("            {:.3f} kB".format(image_size_bytes/1024))
    elif image_size_bytes/1024/1024 < 1024:
        print("            {:.3f} MB".format(image_size_bytes/1024/1024))
    else:
        print("            {:.3f} GB".format(image_size_bytes/1024/1024/1024))
    print("")
    print("Cup info: {:.3f} cups/sec".format(cup_speed_sec))
    print("          {:.3f} ms/cup".format(cup_cycle*1000))
    print("")
    print("Print Speed Linear {:.3f} mm/s => {:.3f} m/min".format(print_speed, print_speed/1000*60))
    print("Print Speed Area   {:.3f} mm^2/s => {:.3f} m^2/s".format(print_speed_area, print_speed_area*10e-7))
    print("----------------------------------------------------------------")
Explanation: VarData Speed Calculation Comparison
Notebook for calculating a VarData test case to get a unit number for comparison
For more speed calculations see VarData Speed Calculations
End of explanation
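A quick arithmetic check of the unit conversion used above (illustrative only, using the 360 dpi and 3971 px x 2014 px figures from the tests below):
```
# 360 dpi -> 25.4 / 360 ≈ 0.0706 mm per pixel,
# so 3971 px x 2014 px corresponds to roughly 280 mm x 142 mm.
pitch = 25.4 / 360.0
print(3971 * pitch, 2014 * pitch)   # ≈ 280.2, 142.1
```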
test_case = 1 # Unique test number (UTN)
test_title = "Digiround Max Speed Test"
test_description = "Digiround Spectra PC \n CPU: i5-4570 \n RAM: 8GB 12800MHz \n SSD: 120GB \n 1 Gbps Ethernet Port for Calmar data"
#test_time = "2017-11-29 14:02:00" # if not existing uses current time
test_time = None
cups_speed = 250.0 # cups/min
substrate_size_mm = [280.0, 140.0] # [mm]
substrate_size_px = None # [px]
dpi_image = [360.0, 360.0] # [dpi] dpi_x and dpy_y
varData_speedCalc_comparison(test_case, test_title, test_description, test_time, cups_speed, substrate_size_mm, substrate_size_px, dpi_image)
Explanation: Test Example
End of explanation
test_case = 1 # Unique test number (UTN)
test_title = "Digiround Max Speed Test"
test_description = "Digiround Spectra PC \n CPU: i5-4570 \n RAM: 8GB 12800MHz \n SSD: 120GB \n 1 Gbps Ethernet Port for Calmar data"
#test_time = "2017-11-29 14:02:00" # if not existing uses current time
test_time = None
cups_speed = 120.0 # cups/min
substrate_size_mm = [280.0, 140.0] # [mm]
substrate_size_px = None # [px]
dpi_image = [360.0, 360.0] # [dpi] dpi_x and dpy_y
varData_speedCalc_comparison(test_case, test_title, test_description, test_time, cups_speed, substrate_size_mm, substrate_size_px, dpi_image)
test_case = 1 # Unique test number (UTN)
test_title = "WK308 120 cups/min + Ergosoft running"
test_description = "Spectra PC \n CPU: i5-4570 \n RAM: 24GB 12800MHz \n SSD: 223GB \n 1 Gbps Ethernet Port for Calmar data"
#test_time = "2017-11-29 14:02:00" # if not existing uses current time
test_time = None
cups_speed = 240.0 # cups/min
substrate_size_mm = None # [mm]
substrate_size_px = [3971, 2014] # [px]
dpi_image = [360.0, 360.0] # [dpi] dpi_x and dpy_y
varData_speedCalc_comparison(test_case, test_title, test_description, test_time, cups_speed, substrate_size_mm, substrate_size_px, dpi_image)
Explanation: Your Test Here
End of explanation |
4,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notes
Permutation matrices and graphs
$P$ is obtained by permuting the rows of an identity matrix; $N!$ such permutations of an identity matrix are possible. $PA$ permutes the $i^{th}$ row of $A$ to the $\pi(i)^{th}$ row of $PA$. $AP$ moves the $i^{th}$ column of $A$ to the $\pi(i)^{th}$ column of $AP$.
$PP^T = P^TP = I$ so $P^T = P^{-1}$
$Me_j$ selects the $j^{th}$ column of $M$. $e^T_iM$ selects the $i^{th}$ row of $M$. $e^T_iMe_j$ selects the $i^{th}$ row of $j^{th}$ column, which is equal to $M_{ij}$
let $A_1$ and $A_2$ be the adjacency matrices of two isomorphic graphs with permutation $\pi_A$. Edge $(i,j)$ in $A_1$ corresponds to $(\pi_A(i),\pi_A(j))$ in $A_2$, so
$$(A_2)_{\pi_A(i),\pi_A(j)} = e^T_i A_1 e_j$$
$$(Pe_i)^T A_2 (Pe_j) = e^T_i A_1 e_j$$
more generally, $A_2 = PA_1P^T$, which is equivalent to $A_2P=PA_1$
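A quick numerical sanity check of the identities above (an illustrative sketch only; the 4-node path graph and the permutation below are arbitrary choices, not part of these notes):
```
import numpy as np

A1 = np.array([[0, 1, 0, 0],      # adjacency matrix of a small path graph
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]])
pi = np.array([2, 0, 3, 1])       # relabelling: node i of A1 becomes node pi[i] of A2

P = np.zeros((4, 4), dtype=int)
P[pi, np.arange(4)] = 1           # column i of P is e_{pi(i)}, i.e. P e_i = e_{pi(i)}

A2 = P @ A1 @ P.T
assert np.array_equal(P @ P.T, np.eye(4, dtype=int))   # P^T = P^{-1}
assert np.array_equal(A2 @ P, P @ A1)                  # A2 = P A1 P^T  <=>  A2 P = P A1
assert all(A2[pi[i], pi[j]] == A1[i, j]
           for i in range(4) for j in range(4))        # (A2)_{pi(i),pi(j)} = (A1)_{ij}
```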
Projection onto Bistochastic Matrices
Step1: Matrix Concentration Inequalities
Step2: Visualizing Erdos Renyi Kronecker Products | Python Code:
from IPython.display import IFrame
IFrame("./projection_onto_bistochastic_matrices.pdf", width=800, height=500)
Explanation: Notes
Permutation matrices and graphs
$P$ is obtained by permuting the rows of an identity matrix; $N!$ such permutations of an identity matrix are possible. $PA$ permutes the $i^{th}$ row of $A$ to the $\pi(i)^{th}$ row of $PA$. $AP$ moves the $i^{th}$ column of $A$ to the $\pi(i)^{th}$ column of $AP$.
$PP^T = P^TP = I$ so $P^T = P^{-1}$
$Me_j$ selects the $j^{th}$ column of $M$. $e^T_iM$ selects the $i^{th}$ row of $M$. $e^T_iMe_j$ selects the $i^{th}$ row of $j^{th}$ column, which is equal to $M_{ij}$
let $A_1$ and $A_2$ be the adjacency matrices of two isomorphic graphs with permutation $\pi_A$. Edge $(i,j)$ in $A_1$ corresponds to $(\pi_A(i),\pi_A(j))$ in $A_2$, so
$$(A_2)_{\pi_A(i),\pi_A(j)} = e^T_i A_1 e_j$$
$$(Pe_i)^T A_2 (Pe_j) = e^T_i A_1 e_j$$
more generally, $A_2 = PA_1P^T$, which is equivalent to $A_2P=PA_1$
Projection onto Bistochastic Matrices
End of explanation
IFrame("./wip/bounding_erdos_renyi/main.pdf", width=800, height=500)
Explanation: Matrix Concentration Inequalities
End of explanation
import numpy as np
import igraph as ig
import matplotlib.pyplot as plt
%matplotlib inline
def get_graph(n, m):
    num_edges = int(round(n*m))
    g = ig.Graph.Erdos_Renyi(n, m=num_edges)
    p = ig.RainbowPalette(num_edges)
    g.es['color'] = [p.get(idx) for idx in range(num_edges)]  # range works on both Python 2 and 3
    return g

def get_ones_graph(n):
    J = np.ones((n,n))
    return ig.Graph.Adjacency(J.tolist(), mode=ig.ADJ_UNDIRECTED)

def adj_mat(g):
    return np.matrix(g.get_adjacency().data)

def get_kronecker_graph(g1, g2, graph_first=True):
    # setup
    graph = g1 if graph_first else g2
    ones = g2 if graph_first else g1
    p = len(g2.vs)

    # map colors to idx
    eid_color_map = {}
    for e, col in zip(graph.es, graph.es['color']):
        eid_color_map[(e.source, e.target)] = eid_color_map[(e.target, e.source)] = col

    # kron
    ak = np.kron(adj_mat(g1), adj_mat(g2))
    gk = ig.Graph.Adjacency(ak.tolist(), mode=ig.ADJ_UNDIRECTED)

    # map kron edge to color
    for edge in gk.es:
        i, j = edge.source, edge.target
        if graph_first: gi, gj = (i)//p, (j)//p
        else: gi, gj = i % p, j % p
        edge['color'] = eid_color_map[(gi, gj)]
    return gk

def plot_graph(graph, **kw2):
    kw = dict(bbox=(150,150), vertex_size=7, vertex_color='gray', edge_width=1)
    if 'color' in graph.es.attributes(): kw['edge_color'] = graph.es['color']
    kw.update(kw2)
    return ig.plot(graph, **kw)
G2 = get_graph(2,.5)
G3 = get_graph(3,1)
J2 = get_ones_graph(2)
J3 = get_ones_graph(3)
J = lambda p: get_ones_graph(p)
plot_graph(G2)
# G2 kron J2
gk = get_kronecker_graph(G2, J2)
print (adj_mat(gk))
plot_graph(gk)
# G2 kron J3
gk = get_kronecker_graph(G2, J(3))
print (adj_mat(gk))
plot_graph(gk)
# J2 kron G2
plot_graph(get_kronecker_graph(J2, G2, graph_first=False))
plot_graph(G3)
# G3 kron J2
G3J2 = get_kronecker_graph(G3, J(2))
plot_graph(G3J2)
# J2 kron G3
J2G3 = get_kronecker_graph(J2, G3, graph_first=False)
plot_graph(J2G3)
Explanation: Visualizing Erdos Renyi Kronecker Products
End of explanation |
4,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sparse Linear Inverse Demo with AMP
In this demo, we illustrate how to use the vampyre package for a simple sparse linear inverse problem. The problem is to estimate a sparse vector z0 from linear measurements of the form y=A.dot(z0)+w where w is Gaussian noise and A is a known linear transform -- a basic problem in compressed sensing. By sparse, we mean that the vector z0 has few non-zero values. Knowing that the vector is sparse can be used for improved reconstruction if an appropriate sparse reconstruction algorithm is used.
There are a large number of algorithms for sparse linear inverse problems. This demo uses the Generalized Approximate Message Passing (GAMP) method, one of several methods that will be included in the vampyre package. In going through this demo, you will learn to
Step1: We will also load the other packages we will use in this demo. This could be done before the above import.
Step2: Generating Synthetic Data
We begin by generating synthetic data. The model is
Step3: To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input z0, we will use Bernouli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components z0[j] are i.i.d. where each component can be on or off.
With probability prob_on, z0[i] is on with z0[i] ~ N(z0_mean_on,z0_var_on)
With probability 1-prob_on, z0[i] is off with z0[i]=0.
Thus, on average, on prob_on*nz0 are *on$. We set the parameters for the model as well as the SNR for the measurements.
Step4: Using these parameters, we can generate random sparse z0 following this distribution with the following simple code.
Step5: To illustrate the sparsity, we plot the vector z0. We can see from this plot that the majority of the components of z0 are zero.
Step6: Now, we create a random transform A and output z1 = A.dot(z0)
Step7: Finally, we add noise at the desired SNR
Step8: Setting up the AMP / GAMP Solver
Now that we have created the sparse data, we will use the vampyre package to recover z0 and z1 from y. In vampyre the methods to perform this estimation are called solvers. The basic Approximate Message Passing (AMP) algorithm was developed in
Step9: We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$. We give this a name Input since it corresponds to the input. But, any naming is fine. Or, you can let vampyre give it a generic name.
Step10: We next define the operator A. In this case the operator is defined by a matrix so we use the MatrixLT class.
Step11: Finally, we describe the likelihood function, p(y|z1). Since y=z1+w, we can describe this as a Gaussian estimator.
Step12: Running the GAMP Solver
Having described the input and output estimators and the variance handler, we can now construct a GAMP solver. The construtor takes the input and output estimators, the variance handler and other parameters. The paramter nit is the number of iterations. This is fixed for now. Later, we will add auto-termination. The other parameter, hist_list is optional, and will be described momentarily.
Step13: We can print a summary of the model which indicates the dimensions and the estimators.
Step14: We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous.
Step15: The VAMP solver estimate is the field zhat. We plot one column of this (icol=0) and compare it to the corresponding column of the true matrix z. You should see a very good match.
Step16: We can measure the normalized mean squared error as follows. The GAMP solver also produces an estimate of the MSE in the variable zvar0. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from VAMP.
Step18: Finally, we can plot the actual and predicted MSE as a function of the iteration number. When solver was contructed, we passed an argument hist_list=['z0', 'zvar0']. This indicated to store the value of the estimate z0 and predicted error variance zvar0 with each iteration. We can recover these values from solver.hist_dict, the history dictionary. Using the values we can compute and plot the normalized MSE on each iteartion. Since we are going to plot several times in this demo, we wrap the plotting routine in a function, plot_z0est().
When we run plot_z0_est() we see that GAMP gets a low MSE in very few iterations, about 10.
Step19: Damping and Stability
A significant problem with GAMP is its stability. GAMP and AMP are designed for Gaussian i.i.d. matrices. For other matrices, the algorithms can diverge. This divergence issue is one of the main difficulties in using GAMP and AMP in practivce.
Recent research has shown that the convergence appears to be related to condition number of the matrix. Matrices A with higher condition numbers tend to cause GAMP / AMP to diverge. See, for example
Step20: Now, we create a synthetic data based on the matrix and re-run GAMP.
Step21: We plot the results and we can see that the algorithm diverges.
Step22: To fix the problem, one can apply damping. In damping, the GAMP algorithm is adjusted to take a partial step as controlled by a parameter step between 0 and 1. In general, the theory is that step <= 1/sqrt(cond_num). In practice, you can try different step sizes until you get reasonable results. A warning though | Python Code:
import os
import sys
vp_path = os.path.abspath('../../')
if not vp_path in sys.path:
    sys.path.append(vp_path)
import vampyre as vp
Explanation: Sparse Linear Inverse Demo with AMP
In this demo, we illustrate how to use the vampyre package for a simple sparse linear inverse problem. The problem is to estimate a sparse vector z0 from linear measurements of the form y=A.dot(z0)+w where w is Gaussian noise and A is a known linear transform -- a basic problem in compressed sensing. By sparse, we mean that the vector z0 has few non-zero values. Knowing that the vector is sparse can be used for improved reconstruction if an appropriate sparse reconstruction algorithm is used.
There are a large number of algorithms for sparse linear inverse problems. This demo uses the Generalized Approximate Message Passing (GAMP) method, one of several methods that will be included in the vampyre package. In going through this demo, you will learn to:
* Load the vampyre package
* Create synthetic data for a sparse linear inverse problem
* Set up the GAMP method in the vampyre package to perform the estimation for the linear inverse problem
* Measure the mean squared error (MSE) and compare the value to the predicted value from the VAMP method.
* Using the hist_list feature to track variables per iteration of the algorithm.
* Adjust the damping factor for ill-conditioned matrices.
An almost identical demo is available for the Vector AMP (VAMP) method. The VAMP method is more robust and similar to use. You can start on that demo instead.
Importing the Package
First we need to import the vampyre package. Since python does not have relative imports, you need to add the path location for the vampyre package to the system path. In this case, we have specified the path use a relative path location, but you can change this depending on where vampyre is located.
End of explanation
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: We will also load the other packages we will use in this demo. This could be done before the above import.
End of explanation
# Parameters
nz0 = 1000 # number of components of z0
nz1 = 500 # number of measurements z1
# Compute the shapes
zshape0 = (nz0,) # Shape of z0 matrix
zshape1 = (nz1,) # Shape of z1 matrix = shape of y matrix
Ashape = (nz1,nz0) # Shape of A matrix
Explanation: Generating Synthetic Data
We begin by generating synthetic data. The model is:
y = z1 + w, z1 = A.dot(z0)
where z0 and z1 are the unknown vectors, A is transform and w is noise. First, we set the dimensions and the shapes of the vectors we wil use.
End of explanation
prob_on = 0.1 # fraction of components that are *on*
z0_mean_on = 0 # mean for the on components
z0_var_on = 1 # variance for the on components
snr = 30 # SNR in dB
Explanation: To generate the synthetic data for this demo, we use the following simple probabilistic model. For the input z0, we will use Bernouli-Gaussian (BG) distribution, a simple model in sparse signal processing. In the BG model, the components z0[j] are i.i.d. where each component can be on or off.
With probability prob_on, z0[i] is on with z0[i] ~ N(z0_mean_on,z0_var_on)
With probability 1-prob_on, z0[i] is off with z0[i]=0.
Thus, on average, on prob_on*nz0 are *on$. We set the parameters for the model as well as the SNR for the measurements.
End of explanation
# Generate the random input
z0_on = np.random.normal(z0_mean_on, np.sqrt(z0_var_on), zshape0)
u = np.random.uniform(0, 1, zshape0) < prob_on
z0 = z0_on*u
Explanation: Using these parameters, we can generate random sparse z0 following this distribution with the following simple code.
End of explanation
ind = np.arange(nz0)
plt.plot(ind,z0);
Explanation: To illustrate the sparsity, we plot the vector z0. We can see from this plot that the majority of the components of z0 are zero.
End of explanation
A = np.random.normal(0, 1/np.sqrt(nz0), Ashape)
z1 = A.dot(z0)
Explanation: Now, we create a random transform A and output z1 = A.dot(z0)
End of explanation
zvar1 = np.mean(np.abs(z1)**2)
wvar = zvar1*np.power(10, -0.1*snr)
w = np.random.normal(0,np.sqrt(wvar), zshape1)
y = z1 + w
Explanation: Finally, we add noise at the desired SNR
End of explanation
est0_off = vp.estim.DiscreteEst(0,1,zshape0)
est0_on = vp.estim.GaussEst(z0_mean_on, z0_var_on,zshape0)
Explanation: Setting up the AMP / GAMP Solver
Now that we have created the sparse data, we will use the vampyre package to recover z0 and z1 from y. In vampyre the methods to perform this estimation are called solvers. The basic Approximate Message Passing (AMP) algorithm was developed in:
Donoho, David L., Arian Maleki, and Andrea Montanari. "Message-passing algorithms for compressed sensing." Proceedings of the National Academy of Sciences 106.45 (2009): 18914-18919.
The vampyre package currently implements a slightly more general solver, called Generalized AMP described in:
Rangan, Sundeep. "Generalized approximate message passing for estimation with random linear mixing." Proc. IEEE Internation Symposium on Information Theory (ISIT), 2011.
GAMP can handle nonlinear output channels. In this demo, we will restrict our attention to the linear Gaussian channel, so the GAMP solver essentially implements the AMP algorithm.
Similar to most of the solvers in the vampyre package, the GAMP solver needs precise specifications of the probability distributions of z0, z1 and y. For the linear inverse problem, we will specify three components:
* The prior p(z0);
* The transform A such that z1 = A.dot(z_0)
* The likelihood p(y|z1).
Both the prior and likelihood are described by estimators. The transform is described by an operator.
We first describe the estimator for the prior p(z0). The vampyre package will eventually have a large number of estimators to describe various densities. In this simple demo, p(z0) is what is called a mixture distribution since z0 is one distribution with probability 1-prob_on and a second distribution with probability prob_on. To describe this mixture distribution in the vampyre package, we need to first create estimator classes for each component distribution. To this end, the following code creates two estimators:
* est0_off: The estimator corresponding to the z0[j]=0. This is simply a discrete distribution with a point mass at zero.
* est0_on: The estimator corresponding to the case when z0[j] = N(z0_mean_on, z0_var_on). This is a Gaussian distribution
End of explanation
est_list = [est0_off, est0_on]
pz0 = np.array([1-prob_on, prob_on])
est0 = vp.estim.MixEst(est_list, w=pz0, name='Input')
Explanation: We next use the vampyre class, MixEst, to describe a mixture of the two distributions. This is done by creating a list, est_list, of the estimators and an array pz with the probability of each component. The resulting estimator, est_in, is the estimator for the prior $z$, which is also the input to the transform $A$. We give this a name Input since it corresponds to the input. But, any naming is fine. Or, you can let vampyre give it a generic name.
End of explanation
Aop = vp.trans.MatrixLT(A,zshape0)
Explanation: We next define the operator A. In this case the operator is defined by a matrix so we use the MatrixLT class.
End of explanation
est1 = vp.estim.GaussEst(y,wvar,zshape1,name='Output')
Explanation: Finally, we describe the likelihood function, p(y|z1). Since y=z1+w, we can describe this as a Gaussian estimator.
End of explanation
nit = 20 # number of iterations
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit)
Explanation: Running the GAMP Solver
Having described the input and output estimators and the variance handler, we can now construct a GAMP solver. The construtor takes the input and output estimators, the variance handler and other parameters. The paramter nit is the number of iterations. This is fixed for now. Later, we will add auto-termination. The other parameter, hist_list is optional, and will be described momentarily.
End of explanation
solver.summary()
Explanation: We can print a summary of the model which indicates the dimensions and the estimators.
End of explanation
solver.solve()
Explanation: We now run the solver by calling the solve() method. For a small problem like this, this should be close to instantaneous.
End of explanation
zhat0 = solver.z0
ind = np.array(range(nz0))
plt.plot(ind,z0)
plt.plot(ind,zhat0)
plt.legend(['True', 'Estimate']);
Explanation: The VAMP solver estimate is the field zhat. We plot one column of this (icol=0) and compare it to the corresponding column of the true matrix z. You should see a very good match.
End of explanation
zerr0_act = np.mean(np.abs(zhat0-z0)**2)
zerr0_pred = solver.zvar0
zpow0 = np.mean(np.abs(z0)**2)
mse_act = 10*np.log10(zerr0_act/zpow0)
mse_pred = 10*np.log10(zerr0_pred/zpow0)
print("Normalized MSE (dB): actual {0:f} pred {1:f}".format(mse_act, mse_pred))
Explanation: We can measure the normalized mean squared error as follows. The GAMP solver also produces an estimate of the MSE in the variable zvar0. We can extract this variable to compute the predicted MSE. We see that the normalized MSE is indeed low and closely matches the predicted value from VAMP.
End of explanation
def plot_z0_est(solver,z0):
    """
    Plots the true and predicted MSE for the estimates of z0
    """
    # Compute the MSE as a function of the iteration
    zhat0_hist = solver.hist_dict['z0']
    zvar0_hist = solver.hist_dict['zvar0']
    nit = len(zhat0_hist)
    mse_act = np.zeros(nit)
    mse_pred = np.zeros(nit)
    for it in range(nit):
        zerr0_act = np.mean(np.abs(zhat0_hist[it]-z0)**2)
        zerr0_pred = zvar0_hist[it]
        mse_act[it] = 10*np.log10(zerr0_act/zpow0)
        mse_pred[it] = 10*np.log10(zerr0_pred/zpow0)

    plt.plot(range(nit), mse_act, 'o-', linewidth=2)
    plt.plot(range(nit), mse_pred, 's', linewidth=1)
    plt.xlabel('Iteration')
    plt.ylabel('Normalized MSE (dB)')
    plt.legend(['Actual', 'Predicted'])
    plt.grid()

plot_z0_est(solver,z0)
Explanation: Finally, we can plot the actual and predicted MSE as a function of the iteration number. When solver was contructed, we passed an argument hist_list=['z0', 'zvar0']. This indicated to store the value of the estimate z0 and predicted error variance zvar0 with each iteration. We can recover these values from solver.hist_dict, the history dictionary. Using the values we can compute and plot the normalized MSE on each iteartion. Since we are going to plot several times in this demo, we wrap the plotting routine in a function, plot_z0est().
When we run plot_z0_est() we see that GAMP gets a low MSE in very few iterations, about 10.
End of explanation
# Generate a random transform
A = vp.trans.rand_rot_invariant_mat(nz1,nz0,cond_num=10)
Aop = vp.trans.MatrixLT(A,zshape0)
z1 = A.dot(z0)
Explanation: Damping and Stability
A significant problem with GAMP is its stability. GAMP and AMP are designed for Gaussian i.i.d. matrices. For other matrices, the algorithms can diverge. This divergence issue is one of the main difficulties in using GAMP and AMP in practivce.
Recent research has shown that the convergence appears to be related to condition number of the matrix. Matrices A with higher condition numbers tend to cause GAMP / AMP to diverge. See, for example:
* Rangan, Sundeep, Philip Schniter, and Alyson Fletcher. "On the convergence of approximate message passing with arbitrary matrices." Proc. IEEE International Symposium on Information Theory (ISIT), 2014.
To illustrate we create a random matrix with a specified condition number. This can be done with the rand_rot_invariant command. Specifically, it creates a matrix A=USV.T where U and V are random orthogonal matrices and S has a specified condition number.
End of explanation
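For intuition, here is a rough sketch of how a matrix with a prescribed condition number could be built with plain numpy. This is an assumption for illustration only, not vampyre's actual rand_rot_invariant_mat implementation (the geometric spacing of the singular values, in particular, is a guess):
```
# Sketch: A = U S V^T with random orthogonal U, V and singular values
# chosen so that s_max / s_min equals the requested condition number.
def rand_cond_matrix(nrow, ncol, cond_num):
    U, _ = np.linalg.qr(np.random.randn(nrow, nrow))   # random orthogonal U
    V, _ = np.linalg.qr(np.random.randn(ncol, ncol))   # random orthogonal V
    r = min(nrow, ncol)
    s = cond_num ** (-np.arange(r) / (r - 1))          # s[0] / s[-1] == cond_num
    S = np.zeros((nrow, ncol))
    S[:r, :r] = np.diag(s)
    return U @ S @ V.T

A_sketch = rand_cond_matrix(nz1, nz0, cond_num=10)
print(np.linalg.cond(A_sketch))                        # approximately 10
```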
# Add noise
zvar1 = np.mean(np.abs(z1)**2)
wvar = zvar1*np.power(10, -0.1*snr)
w = np.random.normal(0,np.sqrt(wvar), zshape1)
y = z1 + w
# Create the estimator
est1 = vp.estim.GaussEst(y,wvar,zshape1,name='Output')
# Run GAMP
nit = 20
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit)
solver.solve()
Explanation: Now, we create a synthetic data based on the matrix and re-run GAMP.
End of explanation
plot_z0_est(solver,z0)
Explanation: We plot the results and we can see that the algorithm diverges.
End of explanation
# Run GAMP with damping
nit = 200
solver = vp.solver.Gamp(est0,est1,Aop,hist_list=['z0', 'zvar0'],nit=nit,step=0.3)
solver.solve()
# Plot the results
plot_z0_est(solver,z0)
Explanation: To fix the problem, one can apply damping. In damping, the GAMP algorithm is adjusted to take a partial step as controlled by a parameter step between 0 and 1. In general, the theory is that step <= 1/sqrt(cond_num). In practice, you can try different step sizes until you get reasonable results. A warning though: Sometimes you never get great results.
In this case, we take step=0.3. We also need to run the algorithm for many more iterations. We see we get better results although we have to run for more iterations.
End of explanation |