Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k)
---|---|---|
4,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EECS 445
Step1: Let's try several polynomial fits to the data
Step2: Let's plot the data with the estimators! | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from numpy.matlib import repmat
from sklearn.preprocessing import PolynomialFeatures
degrees = [1,2,3,4,5]
#define data
n = 20
sub = 1000
mean = 0
std = 0.25
#define test set
Xtest = np.random.random((n,1))*2*np.pi
ytest = np.sin(Xtest) + np.random.normal(mean,std,(n,1))
#pre-allocate variables
preds = np.zeros((n,sub))
bias = np.zeros(len(degrees))
variance = np.zeros(len(degrees))
mse = np.zeros(len(degrees))
values = np.expand_dims(np.linspace(0,2*np.pi,100),1)
Explanation: EECS 445: Machine Learning
Hands On 10: Bias Variance Tradeoff
Consider a sequence of IID random variables:
$$
X_i =
\begin{cases}
100 & \text{ with prob. } 0.02 \\
0 & \text{ with prob. } 0.97 \\
-100 & \text{ with prob. } 0.01
\end{cases}
$$
The true mean of $X_i$ is
$$
0.02 \times 100 + 0.97 \times 0 + 0.01 \times -100 = 1
$$
We want to estimate the true mean of this distribution. We will consider two different estimators of the true mean.
Let's say you take three samples $X_1, X_2, X_3$, and you compute the empirical mean $Z=\frac{X_1 + X_2 + X_3}{3}$ and empirical median $Y$ of these three samples (recall that the median is obtained by sorting $X_1, X_2, X_3$ and then choosing the middle (2nd) entry).
What is the bias-variance tradeoff of the $Y$ and $Z$ for estimating the true mean of the above distribution?
They are both unbiased estimators of the true mean, and have the same variance.
The median has higher bias and higher variance.
The mean has higher bias and higher variance.
They both have no bias, but the mean has lower variance.
The mean has no bias but some variance, and the median has non-zero bias but less variance
Solution
The last answer is correct.
The empirical mean of a sample of $n$ IID random variables is always an unbiased estimate of the true mean. However, the empirical mean estimator can have high variance. Here it is $ \text{Var}(Z) = \frac{\text{Var}(X_i)}{3} = \frac{(100-1)^2 \times 0.02 + (-100 - 1)^2 \times 0.01 + (0-1)^2 \times 0.97}{3} = 99 \frac 2 3.$
The median, on the other hand, is a biased estimator. It is a little bit hard to calculate exactly, but here goes:
$$
median = \begin{cases} 100 & \text{w.p. } 0.02^3 + \binom{3}{1} 0.02^2 \times 0.98 \\
-100 & \text{w.p. } 0.01^3 + \binom{3}{1} 0.01^2 \times 0.99 \\
0 & \text{otherwise}
\end{cases}
$$
If you work this out, you see that the median on average is $0.089$. This means that the $\text{bias}^2 \approx (1-0.089)^2$ which is no more than 1. Using a similar argument, you can check that the variance of the median is no more than 20. This can be checked experimentally!
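This can indeed be checked with a quick simulation (an added sketch, not part of the original hands-on; it only assumes numpy imported as np, as at the top of this notebook):
trials = np.random.choice([100, 0, -100], size=(100000, 3), p=[0.02, 0.97, 0.01])
Z = trials.mean(axis=1) # empirical mean of each triple of samples
Y = np.median(trials, axis=1) # empirical median of each triple
print('mean: bias ~', Z.mean() - 1, ' variance ~', Z.var()) # bias near 0, variance near 99.7
print('median: bias ~', Y.mean() - 1, ' variance ~', Y.var()) # bias near -0.9, variance well under 20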
Derivation of the Bias-Variance Tradeoff equation
Assume that we have noisy data, modeled by $f = y + \epsilon$, where $\epsilon \sim \mathcal{N}(0,\sigma)$, so that $\mathbb{E}[f] = y$ and $Var(f) = \sigma$. Given an estimator $\hat{f}$, the squared error can be derived as follows:
$$
\begin{align}
\mathbb{E}\left[\left(\hat{f} - f\right)^2\right] &= \mathbb{E}\left[\hat{f}^2 - 2f\hat{f} + f^2\right]\\
&= \mathbb{E}\left[\hat{f}^2\right] + \mathbb{E}\left[f^2\right] - 2\mathbb{E}\left[f\hat{f}\right] \text{ By linearity of expectation} \\
\end{align}
$$
Now, by definition, $Var(x) = \mathbb{E}\left[x^2\right] - \left(\mathbb{E}\left[x\right]\right)^2$. Substituting this definition into the equation above, and using that $\hat{f}$ and the noise are independent (so $\mathbb{E}[f\hat{f}] = \mathbb{E}[f]\,\mathbb{E}[\hat{f}]$), we get:
$$
\begin{align}
\mathbb{E}\left[\hat{f}^2\right] + \mathbb{E}\left[f^2\right] - 2\mathbb{E}\left[f\hat{f}\right] &= Var(\hat{f}) + \left(\mathbb{E}[\hat{f}]\right)^2 + Var(f) + \left(\mathbb{E}[f]\right)^2 - 2\,\mathbb{E}[f]\,\mathbb{E}[\hat{f}] \\
&= Var(\hat{f}) + Var(f) + \left(\mathbb{E}[\hat{f}] - \mathbb{E}[f]\right)^2\\
&= \boxed{\sigma + Var(\hat{f}) + \left(\mathbb{E}[\hat{f}] - y\right)^2}
\end{align}
$$
The first term $\sigma$ is the irreducible error due to the noise in the data (from the distribution of $\epsilon$). The second term is the variance of the estimator $\hat{f}$ and the final term is the squared bias of the estimator. There is an inherent tradeoff between the bias and variance of an estimator. Generally, more complex estimators (think of high-degree polynomials as an example) will have a low bias since they will fit the sampled data really well. However, this accuracy will not be maintained if we continue to resample the data, which implies that the variance of this estimator is high.
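As a quick numerical sanity check of the boxed identity (an added sketch, not part of the original hands-on; it assumes numpy as np and a toy estimator that is just a biased, noisy guess of the true value $y$):
y_true = 2.0
noise_var = 0.25 # this plays the role of sigma above
f_noisy = y_true + np.random.normal(0, np.sqrt(noise_var), 200000) # f = y + eps
f_hat = 1.8 + np.random.normal(0, 0.3, 200000) # toy estimator: bias -0.2, variance 0.09
print(np.mean((f_hat - f_noisy)**2)) # roughly 0.38
print(noise_var + f_hat.var() + (f_hat.mean() - y_true)**2) # also roughly 0.38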
Activity 1: Bias Variance Tradeoff
We will now try to see the inherent tradeoff between bias and variance of estimators through linear regression. Consider the following dataset.
End of explanation
for j,degree in enumerate(degrees):
for i in range(sub):
#create data - sample from sine wave
x = np.random.random((n,1))*2*np.pi
y = np.sin(x) + np.random.normal(mean,std,(n,1))
#TODO
#create features corresponding to degree - ex: 1, x, x^2, x^3...
A =
#TODO:
#fit model using least squares solution (linear regression)
#later include ridge regression/normalization
coeffs =
#store predictions for each sampling
preds[:,i] = poly.fit_transform(Xtest).dot(coeffs)[:,0]
#plot 9 images
if i < 9:
plt.subplot(3,3,i+1)
plt.plot(values,poly.fit_transform(values).dot(coeffs),x,y,'.b')
plt.axis([0,2*np.pi,-2,2])
plt.suptitle('PolyFit = %i' % (degree))
plt.show()
#TODO
#Calculate mean bias, variance, and MSE (UNCOMMENT CODE BELOW!)
#bias[j] =
#variance[j] =
#mse[j] =
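# One possible way to fill in the TODOs above (a sketch, not the official solution; it assumes
# PolynomialFeatures from sklearn.preprocessing, matching the poly.fit_transform calls used here):
# poly = PolynomialFeatures(degree)
# A = poly.fit_transform(x)
# coeffs = np.linalg.lstsq(A, y, rcond=None)[0]
# and, for the summary statistics, one reasonable choice is:
# bias[j] = np.mean((preds.mean(axis=1) - np.sin(Xtest)[:,0])**2)
# variance[j] = np.mean(preds.var(axis=1))
# mse[j] = np.mean((preds - ytest)**2)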
Explanation: Let's try several polynomial fits to the data:
End of explanation
plt.subplot(3,1,1)
plt.plot(degrees,bias)
plt.title('bias')
plt.subplot(3,1,2)
plt.plot(degrees,variance)
plt.title('variance')
plt.subplot(3,1,3)
plt.plot(degrees,mse)
plt.title('MSE')
plt.show()
Explanation: Let's plot the data with the estimators!
End of explanation |
4,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create an RNN model for text generation
RNN model at character level
Input
Step1: Download data and generate sequences
Download quijote from the Gutenberg project
wget http
Step2: Train the model
Step3: Evaluate model | Python Code:
# Header
from __future__ import print_function
import numpy as np
import tensorflow as tf
print('Tensorflow version: ', tf.__version__)
import time
#Show images
import matplotlib.pyplot as plt
%matplotlib inline
# plt configuration
plt.rcParams['figure.figsize'] = (10, 10) # size of images
plt.rcParams['image.interpolation'] = 'nearest' # show exact image
plt.rcParams['image.cmap'] = 'gray' # use grayscale
# GPU devices visible by python
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"]="0"
path = '/home/ubuntu/data/training/text/quijote/'
Explanation: Create an RNN model for text generation
RNN model at character level
Input: n previous characters
Output: next character
Model LSTM
Use 'El Quijote' to train the generator
End of explanation
#Read book
text = open(path + "pg2000.txt").read().lower()
print('corpus length:', len(text))
# Simplify text to improve the semantic capacities of the model.
delete_chars = [ '"', '#', '$', '%', "'", '(', ')', '*', '-', '/', '0', '1', '2', '3', '4', '5', '6',
'7', '8', '9', '@', '[', ']', '«', '»', 'à', 'ï', 'ù', '\ufeff']
for ch in delete_chars:
text=text.replace(ch,"")
print('corpus length deleted:', len(text))
chars = sorted(list(set(text)))
print('Chars list: ', chars)
print('total chars:', len(chars))
#Dictionaries to convert char to num & num to char
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
# cut the text in semi-redundant sequences of maxlen characters
# One sentence of length 20 for each 3 characters
maxlen = 20
step = 3
sentences = []
next_chars = []
for i in range(3000, len(text) - maxlen, step): #Start in character 3000 to exclude Gutenberg header.
sentences.append(text[i: i + maxlen])
next_chars.append(text[i + maxlen])
print('nb sequences:', len(sentences))
print(sentences[4996], '-', next_chars[4996])
Explanation: Download data and generate sequences
Download quijote from the Gutenberg project
wget http://www.gutenberg.org/cache/epub/2000/pg2000.txt
End of explanation
'''
X: One row by sentence
in each row a matrix of bool 0/1 of dim length_sentence x num_chars coding the sentence. Dummy variables
y: One row by sentence
in each row a vector of bool of length num_chars with 1 in the next char position
'''
print('Vectorization...')
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
#X = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.float16)
#y = np.zeros((len(sentences), len(chars)), dtype=np.float16)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
print('X shape: ',X.shape)
print('y shape: ',y.shape)
# build the model: 2 stacked LSTM
from tensorflow.contrib.keras import models, layers, optimizers
print('Build model 1')
seq_prev_input = layers.Input(shape=(maxlen, len(chars)), name='prev')
# apply forwards LSTM
forwards1 = layers.LSTM(1024, return_sequences=True, dropout=0.3, recurrent_dropout=0.3)(seq_prev_input)
forwards2 = layers.LSTM(1024, return_sequences=True, dropout=0.3, recurrent_dropout=0.3)(forwards1)
forwards3 = layers.LSTM(1024, return_sequences=False, dropout=0.3, recurrent_dropout=0.3)(forwards2)
output = layers.Dense(len(chars), activation='softmax')(forwards3)
model = models.Model(inputs=seq_prev_input, outputs=output)
model.summary()
# try using different optimizers and different optimizer configs
nadam = optimizers.Nadam(lr=0.0002, schedule_decay=0.000025)
model.compile(loss='categorical_crossentropy', optimizer=nadam, metrics=['accuracy'])
#Plot the model graph
from tensorflow.contrib.keras import utils
# Create model image
utils.plot_model(model, '/tmp/model.png')
# Show image
plt.imshow(plt.imread('/tmp/model.png'))
#Fit model
history = model.fit(X[:600000], y[:600000], batch_size=256, epochs=12,
validation_data=(X[600000:], y[600000:]))
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 8)
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.show()
# Save model
models.save_model(model, path + 'models/text_generation_model1024.h5')
Explanation: Train the model
End of explanation
# Load model
model1 = models.load_model(path + 'models/text_generation_model1024.h5')
maxlen = 20
def sample(a, diversity=1.0):
'''
helper function to sample an index from a probability array
- Diversity controls the level of randomness
'''
a = np.log(a) / diversity
a = np.exp(a) / np.sum(np.exp(a), axis=0)
a /= np.sum(a+0.0000001) #Precision error
return np.argmax(np.random.multinomial(1, a, 1))
def generate_text(sentence, diversity, current_model, num_char=400):
sentence_init = sentence
generated = ''
for i in range(num_char):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = current_model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
print()
print('DIVERSITY: ',diversity)
print(sentence_init + generated)
sentence = 'mire vuestra merced '
generate_text(sentence, 0.2, model1)
sentence = 'mire vuestra merced '
generate_text(sentence, 0.2, model1)
generate_text(sentence, 0.5, model1)
generate_text(sentence, 1, model1)
generate_text(sentence, 1.2, model1)
sentence = 'a mi señora dulcinea'
generate_text(sentence, 0.2, model1)
generate_text(sentence, 0.5, model1)
generate_text(sentence, 1, model1)
generate_text(sentence, 1.2, model1)
sentence = 'el caballero andante'
generate_text(sentence, 0.2, model1)
generate_text(sentence, 0.5, model1)
generate_text(sentence, 1, model1)
generate_text(sentence, 1.2, model1)
Explanation: Evaluate model
End of explanation |
4,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualisation of the electrostatic field around a charged circular plate
This notebook shows how to numerically calculate and visualise the fields around a circular homogeneously charged insulating plate.
(C) Jo Verbeeck, EMAT, University of Antwerp, sept 2019
load the required libraries for calculation and plotting
Step1: Create a polar grid to describe infinitesimal charged areas that make up the plate
Step2: Define a grid for the plane in which we want to get the field information (the field is 3D, but this is hard to visualise, so we choose a single 2D plane perpendicular to the disc and cutting through the center of the disc)
Step3: Now perform the integration over the disc, summing up all the field stemming from the infinitesimal charged areas using the superposition principle. For clarity I will use the naive way using for loops. This is ridiculously slow in an interpreted language like Python, but it is easier to understand what is happening.
Step4: And now it's showtime! | Python Code:
import numpy as np
import matplotlib.pyplot as plt
Explanation: Visualisation of the electrostatic field around a charged circular plate
This notebook shows how to numerically calculate and visualise the fields around a circular homogeneously charged insulating plate.
(C) Jo Verbeeck, EMAT, University of Antwerp, sept 2019
load the required libraries for calculation and plotting
End of explanation
rpoints=100 #nr of radial points on the plate
thetapoints=100 #number of azimuthal steps
rmax=1 #extension of grid [m]
pref=9e9 # 1/(4pi eps0)
sigma=1 #surface charge density [C/m^2]
r=np.linspace(rmax/rpoints,rmax,rpoints)
dr=r[2]-r[1]
theta=np.linspace(2*np.pi/thetapoints,2*np.pi,thetapoints) #careful, avoid double counting of theta=0 and theta=2pi
dtheta=theta[2]-theta[1]
[r2d,theta2d]=np.meshgrid(r,theta,indexing='ij') #2D matrices holding x or y coordinate for each point on the grid
#cartesian coordinates for each element in the plate
x2dp=np.multiply(r2d,np.cos(theta2d))
y2dp=np.multiply(r2d,np.sin(theta2d))
Explanation: Create a polar grid to describe infinitesimal charged areas that make up the plate
End of explanation
xpoints=100
zpoints=100
xmax=2*rmax
zmax=xmax
x=np.linspace(-xmax,xmax,xpoints)
z=np.linspace(-zmax,zmax,zpoints)
x2d,z2d=np.meshgrid(x,z)
Explanation: Define a grid for the plane in which we want to get the field information (the field is 3D, but this is hard to visualise, so we choose a single 2D plane perpendicular to the disc and cutting through the center of the disc)
End of explanation
ex=np.zeros(x2d.shape)
ez=np.zeros(x2d.shape)
for rid in range(rpoints):
for thetaid in range(thetapoints):
dq=sigma*r[rid]*dr*dtheta #infinitesimal charge in this segment
rx=x2d-x2dp[rid,thetaid] #vector from this segment to each point on the xz grid (field point minus source, so the field points away from a positive charge)
ry=0-y2dp[rid,thetaid]
rz=z2d-0
dist=np.sqrt(np.square(rx)+np.square(ry)+np.square(rz)) #distance of this segment to any point in the xz grid
ex=ex+pref*dq*np.multiply(np.power(dist,-3),rx)
ez=ez+pref*dq*np.multiply(np.power(dist,-3),rz)
#ey #zero because of symmetry (the plane cuts the disc in half)
e=np.sqrt(np.square(ex)+np.square(ez))
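# A vectorized alternative to the double loop above (an added sketch, not in the original notebook):
# keep only the loop over the radial index and broadcast over all azimuthal segments at once.
# ex_v = np.zeros(x2d.shape)
# ez_v = np.zeros(x2d.shape)
# for rid in range(rpoints):
#     dq = sigma*r[rid]*dr*dtheta
#     rx = x2d[..., None] - x2dp[rid]   # field point minus source, shape (zpoints, xpoints, thetapoints)
#     ry = 0 - y2dp[rid]
#     rz = z2d[..., None] - 0
#     dist3 = (rx**2 + ry**2 + rz**2)**1.5
#     ex_v += pref*dq*np.sum(rx/dist3, axis=-1)
#     ez_v += pref*dq*np.sum(rz/dist3, axis=-1)
# With the same sign convention as above, np.allclose(ex, ex_v) and np.allclose(ez, ez_v) should hold.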
Explanation: Now perform the integration over the disc, summing up all the field stemming from the infinitesimal charged areas using the superposition principle. For clarity I will use the naive way using for loops. This is ridiculously slow in an interpreted language like Python, but it is easier to understand what is happening.
End of explanation
plt.imshow(e,extent=[-xmax, xmax, -xmax, xmax])
plt.title('electric field and fieldlines')
plt.xlabel('x');
plt.ylabel('z');
plt.streamplot(x2d,z2d,ex,ez)
plt.axis('square')
plt.colorbar()
plt.show()
Explanation: And now it's showtime!
End of explanation |
4,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 17
Step1: In addition to the simulation parameters, we start with an initial seed of concentration data. Unlike our other analytical strategies there are no coefficients to compute, no functions to fit. The data could be generated from a function or from measurements. Here we choose a sin function as our initial condition.
Step2: Set up other simulation parameters
We now set
Step3: Our choice of timestep is restricted
Step4: You can think of the 2D array as having one space axis (the first index, we will use i for this one) and one time axis (the second index, we will use j for this one).
We will set our initial conditions by assigning the initialCondition array to the first row of the arrayWithAllTheData. Note the slicing in the first index so that we can copy the contents of the initialCondition into the whole first row with a single assignment statement.
Step5: With our initial conditions in place we need to develop the computational steps to advance our solution in time. The PDE we are solving (with a constant diffusion coefficient) is
Step7: DIY
Step8: DIY
Step9: DIY | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Lecture 17: Numerical Solutions to the Diffusion Equation (Explicit Methods)
Reading and Reference
Numerical Recipes, W. Press, Cambridge University Press, 1986
Numerical Methods, R. Hornbeck, Quantum Publishers, 1975
B. Gustafsson, Fundamentals of Scientific Computing, Springer, 2011
S. Farlow, Partial Differential Equations for Scientists and Engineers, Dover, 1993
What to Learn?
How to set up the finite difference equations
How to set up a data structure to organize the solution
How to apply the initial and boundary conditions to permit computation of the solution
What to do?
Write the solver code for three example problems
Visualize the results
Introduction
In each of the next three lectures a different numerical technique will be studied and implemented to solve the diffusion equation. Each technique is built from mathematics that we've studied in previous lectures. These or similar methods are used in numerical software and an in-depth understanding of the basic presented will give the student a foundation for more advanced methods that may be encountered in commercial codes.
The first technique we will study is the Explicit Finite Difference method. This is one of three common finite difference methods that are easy to program and built from the definition of Taylor's polynomial.
Taylor Series and Derivatives
Taylor's approximations of the first and second derivatives are defined as central differences and are second-order accurate:
$$
- 2 h \left. \frac{d}{d \xi_{1}} f{\left (\xi_{1} \right )} \right|_{\substack{ \xi_{1}=c }} - f{\left (c - h \right )} + f{\left (c + h \right )} = 0
$$
$$
h^{2} \left. \frac{d^{2}}{d \xi_{1}^{2}} f{\left (\xi_{1} \right )} \right|_{\substack{ \xi_{1}=c }} + 2 f{\left (c \right )} - f{\left (c - h \right )} - f{\left (c + h \right )} = 0
$$
Dividing Space and Time
The diffusion equation is a partial differential equation in two independent variables, space and time. In order that we may use our Taylor's series approximations for the time and space derivatives we need to discretize the domain of the problem. One easy way to visualize time and space discretization is to use a grid construction. The figure below shows one way that the time and space variables can be represented.
Each cell in the grid holds the value of the dependent parameter at a particular value of time and space. We indicate this using the following symbols:
$$
u_{i,j}
$$
The $i$ and $j$ are the indices of the grid and reference a particular location where the value of the dependent parameter is stored. How much memory is required to store these values depends on the type of data and the size of the grid. In addition, the grid spacing must be specified for each of the independent variables, in this case we need both a $\delta x$ and a $\delta t$. For example, a difference in time might be represented as:
$$
u(i,j) - u(i,j+1) = c(x,t) - c(x,t+\delta t)
$$
Typically, the grid is a uniform grid in each independent variable - meaning that the distance in the independent variable between grid points is the same in any one variable. There are cases where non-uniform grids may be desirable.
Finite Difference Form for the Diffusion Equation
The finite difference representation of the diffusion equation with $u_{i,j}$ as the stored value that represents $c(x,t)$ is:
$$
\frac{u_{i,\, j+1} - u_{i,\, j}}{\Delta t} = D \frac{u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j}}{\Delta x^2}
$$
Note:
* A forward difference is used on the LHS
* A central difference is used on the RHS
The general steps required to solve a finite difference equation are as follows:
Identify the geometry of the system
Define the initial conditions and boundary conditions
Write the difference equation in terms of the unknowns
Compute the unknowns subject to any stability requirements and boundary conditions
Store and update the results in the grid
Write the results to disk
Visualize the results
The following parameters are needed to solve the finite difference equation above:
numberOfPoints - the number of grid points within our computational domain
numberOfIterations - the number of timesteps within our computational domain
lengthOfDomain - the physical length of our computational domain
dx - the distance between successive grid points in our domain
xPoints - a linear space divided into numberOfPoints points
initialCondition - our starting distribution of solute (i.e. $c(x,0)$)
End of explanation
numberOfPoints = 100
numberOfIterations = 1000
lengthOfDomain = 1.0
dx = lengthOfDomain/numberOfPoints
xPoints = np.linspace(0.0, lengthOfDomain, numberOfPoints)
initialCondition = np.sin(xPoints*np.pi/lengthOfDomain)
def plotIC():
fig, ax = plt.subplots(figsize=(10,5))
ax.plot(xPoints, initialCondition, 'ro')
ax.set_xlabel(r'Distance $x$')
ax.set_ylabel(r'Concentration $c(x,t)$')
ax.set_title(r'Initial Conditions')
plt.show()
return
plotIC()
Explanation: In addition to the simulation parameters, we start with an initial seed of concentration data. Unlike our other analytical strategies there are no coefficients to compute, no functions to fit. The data could be generated from a function or from measurements. Here we choose a sin function as our initial condition.
End of explanation
diffusionCoefficient = 10.0
dt = dx**2/(4*diffusionCoefficient)
Explanation: Set up other simulation parameters
We now set:
diffusionCoefficient - the diffusion coefficient
dt - the discrete time step (the formulaic choice of dt is needed to satisfy stability constraints)
End of explanation
arrayWithAllTheData = np.zeros((numberOfPoints,numberOfIterations), dtype='float32')
Explanation: Our choice of timestep is restricted:
$$
dt \leq \frac{\Delta x^2}{2 \, D}
$$
Set up the data structure
There are potentially three strategies for storing the results of numerical computations. One is to store ALL the data, another is to store SOME of the data, and the last is to store NONE of the data except the very last computation. Each strategy has advantages and disadvantages although all strategies may seem equally difficult to implement. In this lecture we will design design a data structure that stores all the data.
Let us create a numpy array that has one dimension equal to the number of points in our grid and another dimension that is equal to the number of iterations we wish to compute:
End of explanation
arrayWithAllTheData[:,0] = initialCondition
Explanation: You can think of the 2D array as having one space axis (the first index, we will use i for this one) and one time axis (the second index, we will use j for this one).
We will set our initial conditions by assigning the initialCondition array to the first row of the arrayWithAllTheData. Note the slicing in the first index so that we can copy the contents of the initialCondition into the whole first row with a single assignment statement.
End of explanation
# Note the counting for j in this loop. You may wish to print out
# the values of i,j to help build this operation.
for j in range(1,numberOfIterations):
for i in range(1,numberOfPoints-1):
arrayWithAllTheData[i,j] = 0 # What should you put here?
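# A possible completion of the update (a sketch, not necessarily the instructor's solution),
# following the explicit finite-difference rule described in the accompanying text:
# arrayWithAllTheData[i,j] = arrayWithAllTheData[i,j-1] + \
#     (diffusionCoefficient*dt/dx**2)*(arrayWithAllTheData[i-1,j-1]
#     - 2.0*arrayWithAllTheData[i,j-1] + arrayWithAllTheData[i+1,j-1])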
Explanation: With our initial conditions in place we need to develop the computational steps to advance our solution in time. The PDE we are solving (with a constant diffusion coefficient) is:
$$
\frac{\partial c(x,t)}{\partial t} = D \frac{\partial^2 c(x,t)}{\partial x^2}
$$
we transform this into:
$$
\frac{u_{i,\, j+1} - u_{i,\, j}}{\Delta t} = D \frac{u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j}}{\Delta x^2}
$$
so that we can algebraically solve for $u_{i+1,\, j}$:
$$
u_{i,\, j+1} = \frac{D \Delta t}{\Delta x^2} \left( u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j} \right) + u_{i,\, j}
$$
From the expression above you can see that all the terms on the RHS of the expression are at the index $j$ (the last iteration) and all the terms on the LHS are for the $j+1$ index (the next iteration). This scheme defines a simple method (with a restrictive timestep) for integrating a PDE. Re-examine the figure below in comparison to the finite difference scheme:
Write the Solver and Compute the Results
Add your code below. As a reminder, the algebraic equation we want to solve is:
$$
u_{i,\, j+1} = \frac{D \Delta t}{\Delta x^2} \left( u_{i - 1,\, j} - 2 u_{i,\, j} + u_{i + 1,\, j} \right) + u_{i,\, j}
$$
To make all of this work we proceed as follows:
Compute the prefactor $D \Delta t/ \Delta x^2$.
Apply the boundary conditions in the $j$th row of the array.
Using the $j$ row of the array (plus the boundary conditions), fill in row $j+1$ of the array with values corresponding to the new time $t + \Delta t$ according to the equation above.
Advance the index and repeat until all rows are filled.
Visualize the results.
End of explanation
%matplotlib inline
import numpy as np
from ipywidgets import interact, fixed
import matplotlib.pyplot as plt
def plotArray(xPoints, dataArray, rowID=0):
'''
This function in conjunction with interact() permits
inspection of the contents of an array, row by row. This
is useful for some small tasks such as examining the results
of a PDE solution.
'''
x = xPoints
y = dataArray[:,rowID]
fig = plt.figure(figsize=(7,4))
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.set_ylim(0,1)
axes.plot(x, y, 'ro', label=r"$c(x,t)$")
axes.legend()
axes.grid(False)
plt.show()
return
interact(plotArray,
xPoints=fixed(xPoints),
dataArray=fixed(arrayWithAllTheData),
rowID=(0,numberOfIterations-1,1), );
Explanation: DIY: Sketch the algorithm up to this point and for the cell below.
Doing this will help you visualize the operations and it will increase your ability to make modifications in the future and devise new more compact ways to integrate this PDE.
If you've sketched the algorithm as advised above then you see that in our development of this solution we implicitly set the boundary conditions. We initialize arrayWithAllTheData with np.zeros and then compute on all the interior rows/columns. This creates a condition where all the boundary cells are set to zero and their values remain untouched throughout the computation.
Plot the results
End of explanation
# Your solver code goes here.
Explanation: DIY: Compute a solution where you change the boundary conditions on the LHS to be $c(x=L,t) = 1.0$.
End of explanation
# Your solver code goes here.
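# One possible vectorized version of the explicit update (a sketch): replace the loop over i with slicing.
# prefactor = diffusionCoefficient*dt/dx**2
# arrayWithAllTheData[:,0] = initialCondition
# for j in range(1, numberOfIterations):
#     arrayWithAllTheData[1:-1, j] = arrayWithAllTheData[1:-1, j-1] + prefactor*(
#         arrayWithAllTheData[:-2, j-1] - 2.0*arrayWithAllTheData[1:-1, j-1] + arrayWithAllTheData[2:, j-1])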
Explanation: DIY: Vectorize the above solution method.
End of explanation |
4,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing McLennan-Tourky in Python
Daisuke Oyama
Faculty of Economics, University of Tokyo
Step1: Univariate example
Step2: Let us try the logistic function which is well known to generate chaotic behavior.
Step3: Comare compute_fixed_point from quantecon
Step4: Example 4.6 | Python Code:
%matplotlib inline
import time
import numpy as np
import matplotlib.pyplot as plt
import quantecon as qe
from quantecon.compute_fp import _print_after_skip
from quantecon.game_theory import Player, NormalFormGame, lemke_howson
def compute_fixed_point_ig(f, x_init, error_tol=1e-3, max_iter=50, verbose=1,
*args, **kwargs):
_skip = 1
if verbose:
start_time = time.time()
_print_after_skip(_skip, it=None)
x_new = x_init
iterate = 0
y_new = f(x_new, *args, **kwargs)
error = np.max(np.abs(y_new - x_new))
if error <= error_tol or iterate >= max_iter:
if verbose:
etime = time.time() - start_time
_print_after_skip(_skip, iterate, error, etime)
return x_new
X = np.array([x_new])
Y = np.array([y_new])
x_new = Y[0]
iterate += 1
while True:
y_new = f(x_new, *args, **kwargs)
error = np.max(np.abs(y_new - x_new))
if error <= error_tol or iterate >= max_iter:
break
X = np.append(X, np.expand_dims(x_new, axis=0), axis=0)
Y = np.append(Y, np.expand_dims(y_new, axis=0), axis=0)
m = len(X)
D = np.expand_dims(X, axis=1) - Y
D *= D
A = np.add.reduce(np.atleast_3d(D), axis=-1) * (-1)
B = np.identity(m)
g = NormalFormGame((Player(A), Player(B)))
_, rho = lemke_howson(g, init_pivot=m-1)
x_new = rho.dot(Y)
iterate += 1
if verbose:
etime = time.time() - start_time
_print_after_skip(_skip, iterate, error, etime)
return x_new
Explanation: Implementing McLennan-Tourky in Python
Daisuke Oyama
Faculty of Economics, University of Tokyo
End of explanation
# Just a warmup
compute_fixed_point_ig(lambda x: 0.5*x, 1)
Explanation: Univariate example
End of explanation
def logistic(x, r):
return r * x * (1 - x)
x = np.linspace(0, 1, 100)
y = logistic(x, r=4)
fig, ax = plt.subplots()
ax.plot(x, y)
ax.plot([0, 1], [0, 1], ':', color='k')
ax.set_aspect(1)
plt.show()
tol = 1e-5
x_init = 0.99
compute_fixed_point_ig(logistic, x_init, error_tol=tol, r=4)
Explanation: Let us try the logistic function which is well known to generate chaotic behavior.
End of explanation
qe.compute_fixed_point(logistic, x_init, error_tol=tol, r=4)
Explanation: Compare compute_fixed_point from quantecon:
End of explanation
def f(x, M, c):
return -np.arctan(np.dot(M, (x - c)**3)) + c
x_min, x_max = -np.pi/2, np.pi/2
x = np.linspace(x_min, x_max, 100)
M = np.abs(np.random.randn())
c = 0
y = f(x, M, c)
fig, ax = plt.subplots()
ax.plot(x, y)
ax.set_xlim(x_min, x_max)
ax.set_ylim(x_min, x_max)
ax.plot([x_min, x_max], [x_min, x_max], ':', color='k')
ax.set_aspect(1)
plt.show()
n = 500
tol = 1e-5
max_iter = 200
num_trials = 3
for i in range(num_trials):
print("===== Experiment {} =====\n".format(i))
c = np.random.standard_normal(n)
M = np.abs(np.random.standard_normal(size=(n, n)))
x_init = (np.random.rand(n)-1/2)*np.pi + c
print("***Imitation game***")
x0 = compute_fixed_point_ig(f, x_init, tol, max_iter, M=M, c=c)
print("")
print("***Function iteration***")
x1 = qe.compute_fixed_point(f, x_init, tol, max_iter, verbose=1, print_skip=200, M=M, c=c)
print("")
num_trials = 3
for i in range(num_trials):
print("===== Experiment {} =====\n".format(i))
c = np.random.standard_normal(n)
M = np.random.normal(0, 1/13, size=(n, n))
x_init = (np.random.rand(n)-1/2)*np.pi + c
print("***Imitation game***")
x0 = compute_fixed_point_ig(f, x_init, tol, max_iter, M=M, c=c)
print("")
print("***Function iteration***")
x1 = qe.compute_fixed_point(f, x_init, tol, max_iter, verbose=1, print_skip=200, M=M, c=c)
print("")
Explanation: Example 4.6: 500-variable example
End of explanation |
4,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of Interactive Plotting
The IPython notebook excels at interactive science. By its very nature you can easily create a bit of code, run it, look at the output, and adjust the code. This allows you to do very rapid development and work you way through a problem while documenting you thought process. As an example, I am going simulate some data and do some interactive plotting.
Step1: Simulate data
I am going to simulate data using the various density functions available in scipy. During QC, we typically are trying to identify either samples or values (e.g. genes, exons, compounds) that do not behave as expected. We use various plots to help identify outliers and remove them from the dataset.
For this example I am going to simulate a value $\theta$. Here I expect $\theta$ to be normally distributed around 0.5, however I include some bad values that are normally distributed around 0.2. I am relating these bad values to another variable coverage. When coverage is low then we will not accurately capture $\theta$ causing a shift in the distribution of values.
Step2: Now lets look at the distribution of our coverage counts
Step3: Combine everything into a single dataset.
Step5: QC Time
Now that we have our simulated data, lets do some QC. Lets see what happens if we filter low coverage reads.
First we will create a plotting function that takes a cutoff value.
Step6: Interactive Plotting
Ipython offers a simple way to create interactive plots. You import a function called interact, and use that to call your plotting function.
Step7: If you have a lot of data, then interact can be slow because at each step along the slider it tries to calculate the filter. There is a noter interactive widget interact_manual that only runs calculations when you hit the run button.
Step8: Other types of interactivity
While there are a number of IPython widgets that may be useful, there are other packages that offer interactivity. One I have been playing with is a module that translates matplotlib plots into D3.js plots. I will demonstrate that here.
Step9: Now lets mess with a point and see if it changes. | Python Code:
# Import Module
import numpy as np
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
Explanation: Example of Interactive Plotting
The IPython notebook excels at interactive science. By its very nature you can easily create a bit of code, run it, look at the output, and adjust the code. This allows you to do very rapid development and work you way through a problem while documenting you thought process. As an example, I am going simulate some data and do some interactive plotting.
End of explanation
# Simulate $\theta$
sp.random.seed(42)
theta1 = sp.random.normal(loc=0.5, scale=0.1, size=1000)
theta2 = sp.random.normal(loc=0.2, scale=0.1, size=360)
# Simulate coverage
cvg1 = sp.random.poisson(20, size=1000)
cvg2 = sp.random.poisson(4, size=360)
## I can't have a coverage of 0, so replace 0's with 1
cvg1[cvg1 == 0] = 1
cvg2[cvg2 == 0] = 1
## Create joint of theta1 and theat2
theta = np.concatenate((theta1, theta2))
## Create joint of cvg1 and cvg2
cvg = np.concatenate((cvg1, cvg2))
# Density of Plot $\theta$ 1 and 2
## Get x coordinates from 0 to 1
xs = np.linspace(0, 1, num=100)
## Get Density functions
density1 = stats.gaussian_kde(theta1)
density2 = stats.gaussian_kde(theta2)
density = stats.gaussian_kde(theta)
## Plot
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.plot(xs, density1(xs), label=r'$\theta$1')
ax1.plot(xs, density2(xs), label=r'$\theta$2')
ax1.set_title(r'Distribution of $\theta$1 and $\theta$2', fontsize=12)
ax1.legend()
ax2.plot(xs, density(xs), color='k', label=r'$\theta$1 + $\theta$2')
ax2.set_title(r'Joint Distribution of $\theta$1 and $\theta2$2', fontsize=12)
ax2.legend()
Explanation: Simulate data
I am going to simulate data using the various density functions available in scipy. During QC, we typically are trying to identify either samples or values (e.g. genes, exons, compounds) that do not behave as expected. We use various plots to help identify outliers and remove them from the dataset.
For this example I am going to simulate a value $\theta$. Here I expect $\theta$ to be normally distributed around 0.5, however I include some bad values that are normally distributed around 0.2. I am relating these bad values to another variable coverage. When coverage is low then we will not accurately capture $\theta$ causing a shift in the distribution of values.
End of explanation
# Plot Distribution of Coverage
## Figure out the x limits
xs = np.linspace(0, cvg.max(), num=100)
## Get Density functions
density1 = stats.gaussian_kde(cvg1)
density2 = stats.gaussian_kde(cvg2)
## Plot
plt.plot(xs, density1(xs), label='High Coverage')
plt.plot(xs, density2(xs), label='Low Coverage')
plt.title('Distribution of Coverage')
plt.legend()
Explanation: Now lets look at the distribution of our coverage counts
End of explanation
# Create Data Frame
dat = pd.DataFrame({'theta': theta, 'cvg': cvg})
dat.head(3)
# Plotting Desnsities is a lot easier with data frames
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
dat['theta'].plot(kind='kde', ax=ax1, title=r'Distribution of $\theta$')
dat['cvg'].plot(kind='kde', ax=ax2, title='Distribution of Coverage')
Explanation: Combine everything into a single dataset.
End of explanation
def pltLow(dat, cutoff):
'''
Function to plot density after filtering
'''
clean = dat[dat['cvg'] >= cutoff]
clean['theta'].plot(kind='kde', title=r'Distribution of $\theta${}Coverage Count Cutoff $\geq$ {}'.format('\n',cutoff), xlim=(-0.2, 1.2))
# Test plot function
pltLow(dat, 1)
Explanation: QC Time
Now that we have our simulated data, lets do some QC. Lets see what happens if we filter low coverage reads.
First we will create a plotting function that takes a cutoff value.
End of explanation
from IPython.html.widgets import interact, interact_manual, IntSlider, fixed
interact(pltLow, dat=fixed(dat), cutoff=IntSlider(min=0, max=20))
Explanation: Interactive Plotting
Ipython offers a simple way to create interactive plots. You import a function called interact, and use that to call your plotting function.
End of explanation
interact_manual(pltLow, dat=fixed(dat), cutoff=IntSlider(min=0, max=20))
Explanation: If you have a lot of data, then interact can be slow because at each step along the slider it tries to calculate the filter. There is another interactive widget, interact_manual, that only runs calculations when you hit the run button.
End of explanation
# Import the mpld3 library
import mpld3
# Plain Scatter plot showing relationship between coverage and theta
dat.plot(kind='scatter', x='cvg', y='theta', figsize=(10, 10))
# Plot figure with mpld3
fig, ax = plt.subplots(figsize=(10, 10))
scatter = ax.scatter(dat['cvg'], dat['theta'])
labels = ['row {}'.format(i) for i in dat.index.tolist()]
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)
mpld3.plugins.connect(fig, tooltip)
mpld3.display()
Explanation: Other types of interactivity
While there are a number of IPython widgets that may be useful, there are other packages that offer interactivity. One I have been playing with is a module that translates matplotlib plots into D3.js plots. I will demonstrate that here.
End of explanation
dat.ix[262, 'theta'] = -0.1
# Plot figure with mpld3
fig, ax = plt.subplots(figsize=(10, 10))
scatter = ax.scatter(dat['cvg'], dat['theta'])
labels = ['row {}'.format(i) for i in dat.index.tolist()]
tooltip = mpld3.plugins.PointLabelTooltip(scatter, labels=labels)
mpld3.plugins.connect(fig, tooltip)
mpld3.display()
Explanation: Now lets mess with a point and see if it changes.
End of explanation |
4,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Yellowbrick Examples
Ths notebook is a sample of the examples that yellowbrick provides.
Step1: Load Medical Appointment Data
The data used in this example is hosted by Kaggle at following link
Step2: Feature Analysis
Feature analysis visualizers are designed to visualize instances in data space in order to detect features or targets that might impact downstream fitting. Because ML operates on high-dimensional data sets (usually at least 35), the visualizers focus on aggregation, optimization, and other techniques to give overviews of the data. It is our intent that the steering process will allow the data scientist to zoom and filter and explore the relationships between their instances and between dimensions.
At the moment we have three feature analysis visualizers implemented
Step3: Rank2D
Rank1D and Rank2D evaluate single features or pairs of features using a variety of metrics that score the features on the scale [-1, 1] or [0, 1] allowing them to be ranked. A similar concept to SPLOMs, the scores are visualized on a lower-left triangle heatmap so that patterns between pairs of features can be easily discerned for downstream analysis.
Step4: Diagnostic Interpretation from Rank2D(Covariance)
Step5: Diagnostic Interpretation from Rank2D(Pearson)
Step6: For regression, the RadViz visualizer should use a color sequence to display the target information, as opposed to discrete colors.
Diagnostic Interpretation from RadViz
Step7: Classifier Evaluation
Classification models attempt to predict a target in a discrete space, that is assign an instance of dependent variables one or more categories. Classification score visualizers display the differences between classes as well as a number of classifier-specific visual evaluations. We currently have implemented three classifier evaluations
Step8: Classification Report
The classification report visualizer displays the precision, recall, and F1 scores for the model. Integrates numerical scores as well color-coded heatmap in order for easy interpretation and detection.
Step9: ROCAUC
Plot the ROC to visualize the tradeoff between the classifier's sensitivity and specificity.
Step10: ClassBalance
Class balance chart that shows the support for each class in the fitted classification model. | Python Code:
import os
import sys
# Modify the path
sys.path.append("..")
import pandas as pd
import yellowbrick as yb
import matplotlib.pyplot as plt
Explanation: Yellowbrick Examples
This notebook is a sample of the examples that yellowbrick provides.
End of explanation
data = pd.read_csv("data/No-show-Issue-Comma-300k.csv")
data.head()
data.columns = ['Age','Gender','Appointment Registration','Appointment Date',
'Day Of Week','Status','Diabetes','Alcoholism','Hypertension','Handicap',
'Smoker','Scholarship','Tuberculosis','SMS Reminder','Awaiting Time']
data.describe()
features = ['Age','Gender','Appointment Registration','Appointment Date',
'Day Of Week','Diabetes','Alcoholism','Hypertension','Handicap',
'Smoker','Scholarship','Tuberculosis','SMS Reminder','Awaiting Time']
numerical_features = data.describe().columns.values
Explanation: Load Medical Appointment Data
The data used in this example is hosted by Kaggle at following link: https://www.kaggle.com/joniarroba/noshowappointments/downloads/medical-appointment-no-shows.zip
The data is part of a kaggle challenge to discover if it is possible to predict if a patient will show up for an appointment.
The data is downloaded, unzipped and stored locally within a directory named data.
End of explanation
# Feature Analysis Imports
# NOTE that all these are available for import from the `yellowbrick.features` module
from yellowbrick.features.rankd import Rank2D
from yellowbrick.features.radviz import RadViz
from yellowbrick.features.pcoords import ParallelCoordinates
Explanation: Feature Analysis
Feature analysis visualizers are designed to visualize instances in data space in order to detect features or targets that might impact downstream fitting. Because ML operates on high-dimensional data sets (usually at least 35), the visualizers focus on aggregation, optimization, and other techniques to give overviews of the data. It is our intent that the steering process will allow the data scientist to zoom and filter and explore the relationships between their instances and between dimensions.
At the moment we have three feature analysis visualizers implemented:
Rank2D: rank pairs of features to detect covariance
RadViz: plot data points along axes ordered around a circle to detect separability
Parallel Coordinates: plot instances as lines along vertical axes to detect clusters
Feature analysis visualizers implement the Transformer API from Scikit-Learn, meaning they can be used as intermediate transform steps in a Pipeline (particularly a VisualPipeline). They are instantiated in the same way, and then fit and transform are called on them, which draws the instances correctly. Finally poof or show is called which displays the image.
End of explanation
# To help interpret the column features being described in the visualization
pd.DataFrame(numerical_features)
# For this visualizer numerical features are required
X = data[numerical_features].as_matrix()
y = data.Status.as_matrix()
# Instantiate the visualizer with the Covariance ranking algorithm
visualizer = Rank2D(features=numerical_features, algorithm='covariance')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
Explanation: Rank2D
Rank1D and Rank2D evaluate single features or pairs of features using a variety of metrics that score the features on the scale [-1, 1] or [0, 1] allowing them to be ranked. A similar concept to SPLOMs, the scores are visualized on a lower-left triangle heatmap so that patterns between pairs of features can be easily discerned for downstream analysis.
End of explanation
# Instantiate the visualizer with the Pearson ranking algorithm
visualizer = Rank2D(features=numerical_features, algorithm='pearson')
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
Explanation: Diagnostic Interpretation from Rank2D(Covariance):
Some features share covariance with age but most of the features do not share any measureable covariance.
End of explanation
#Need to specify the classes of interest
classes = data.Status.unique().tolist()
# For this visualizer numerical features are required
X = data[numerical_features].as_matrix()
# Additional step here of converting categorical data 0's and 1's
y = data.Status.replace(classes,[0,1]).as_matrix()
# Instantiate the visualizer
visualizer = RadViz(classes=classes, features=numerical_features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
Explanation: Diagnostic Interpretation from Rank2D(Pearson):
Some features share a positive linear relation mostly with age and a little bit with diabetes but most of the features do not demonstrate a relationship.
RadViz
RadViz is a multivariate data visualization algorithm that plots each feature dimension uniformely around the circumference of a circle then plots points on the interior of the circle such that the point normalizes its values on the axes from the center to each arc. This meachanism allows as many dimensions as will easily fit on a circle, greatly expanding the dimensionality of the visualization.
Data scientists use this method to dect separability between classes. E.g. is there an opportunity to learn from the feature set or is there just too much noise?
End of explanation
# Instantiate the visualizer
visualizer = ParallelCoordinates(classes=classes, features=numerical_features)
visualizer.fit(X, y) # Fit the data to the visualizer
visualizer.transform(X) # Transform the data
visualizer.poof() # Draw/show/poof the data
Explanation: For regression, the RadViz visualizer should use a color sequence to display the target information, as opposed to discrete colors.
Diagnostic Interpretation from RadViz:
It doesn't appear from this visual for there to be much differentiation between the classes. The dimensionality still interestingly shows that the other features have any interesting relations hip with Age.
Parallel Coordinates
Parallel coordinates displays each feature as a vertical axis spaced evenly along the horizontal, and each instance as a line drawn between each individual axis. This allows many dimensions; in fact given infinite horizontal space (e.g. a scrollbar) an infinite number of dimensions can be displayed!
Data scientists use this method to detect clusters of instances that have similar classes, and to note features that have high varaince or different distributions.
End of explanation
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport, ROCAUC, ClassBalance
Explanation: Classifier Evaluation
Classification models attempt to predict a target in a discrete space, that is assign an instance of dependent variables one or more categories. Classification score visualizers display the differences between classes as well as a number of classifier-specific visual evaluations. We currently have implemented three classifier evaluations:
ClassificationReport: Presents the confusion matrix of the classifier as a heatmap
ROCAUC: Presents the graph of receiver operating characteristics along with area under the curve
ClassBalance: Displays the difference between the class balances and support
Estimator score visualizers wrap Scikit-Learn estimators and expose the Estimator API such that they have fit(), predict(), and score() methods that call the appropriate estimator methods under the hood. Score visualizers can wrap an estimator and be passed in as the final step in a Pipeline or VisualPipeline.
End of explanation
# Create the train and test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
# Instantiate the classification model and visualizer
bayes = GaussianNB()
visualizer = ClassificationReport(bayes, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
Explanation: Classification Report
The classification report visualizer displays the precision, recall, and F1 scores for the model. Integrates numerical scores as well color-coded heatmap in order for easy interpretation and detection.
End of explanation
# Instantiate the classification model and visualizer
logistic = LogisticRegression()
visualizer = ROCAUC(logistic)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
Explanation: ROCAUC
Plot the ROC to visualize the tradeoff between the classifier's sensitivity and specificity.
End of explanation
# Instantiate the classification model and visualizer
forest = RandomForestClassifier()
visualizer = ClassBalance(forest, classes=classes)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
g = visualizer.poof() # Draw/show/poof the data
Explanation: ClassBalance
Class balance chart that shows the support for each class in the fitted classification model.
End of explanation |
4,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 1
Imports
Step2: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral
Step3: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
Step4: Results are closer to the actual values using the scipy.integrate.quad function rather than the trapz function | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy import integrate
Explanation: Integration Exercise 1
Imports
End of explanation
def trapz(f, a, b, N):
'''
Integrate the function f(x) over the range [a,b] with N points.
'''
h = (b-a)/N
k = np.arange(1,N)
I = h * (0.5*f(a)+0.5*f(b)+np.sum(f(a+k*h)))
return I
f = lambda x: x**2
g = lambda x: np.sin(x)
I = trapz(f, 0, 1, 1000)
assert np.allclose(I, 0.33333349999999995)
J = trapz(g, 0, np.pi, 1000)
assert np.allclose(J, 1.9999983550656628)
Explanation: Trapezoidal rule
The trapezoidal rule generates a numerical approximation to the 1d integral:
$$ I(a,b) = \int_a^b f(x) dx $$
by dividing the interval $[a,b]$ into $N$ subdivisions of length $h$:
$$ h = (b-a)/N $$
Note that this means the function will be evaluated at $N+1$ points on $[a,b]$. The main idea of the trapezoidal rule is that the function is approximated by a straight line between each of these points.
Write a function trapz(f, a, b, N) that performs trapezoidal rule on the function f over the interval $[a,b]$ with N subdivisions (N+1 points).
End of explanation
If, errf = integrate.quad(f,0,1)
print("Integral:",If)
print("Error:",errf)
Ig, errg = integrate.quad(g,0,np.pi)
print("Integral:",Ig)
print("Error:",errg)
Explanation: Now use scipy.integrate.quad to integrate the f and g functions and see how the result compares with your trapz function. Print the results and errors.
End of explanation
assert True # leave this cell to grade the previous one
Explanation: Results are closer to the actual values using the scipy.integrate.quad function rather than the trapz function
End of explanation |
4,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-9d-l78', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MIROC
Source ID: NICAM16-9D-L78
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
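Purely to illustrate the call signature documented above, a filled-in cell could look like the commented line below; the name and email are placeholders, not the actual document authors.
# Hypothetical example values only - replace with the real document authors:
# DOC.set_author("Jane Doe", "jane.doe@example.org")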
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, e.g. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosol model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
4,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch 02
Step1: Create a boolean vector called spikes of the same dimensions as before
Step2: Restore the variable data from disk, serve warm, and enjoy
Step3: Show's over, goodnight | Python Code:
import tensorflow as tf
sess = tf.InteractiveSession()
Explanation: Ch 02: Concept 07
Loading variables
Concept 06 was about saving variables. This one's about loading what you saved. Start by creating an interactive session:
End of explanation
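For reference, the checkpoint restored further below would have been produced by the saving side covered in Concept 06. A minimal sketch of that step, assuming the same TF 1.x Saver API (the actual Concept 06 code is not shown here):
# Sketch of the saving side (Concept 06), for reference only:
# spikes = tf.Variable([False] * 8, name='spikes')
# spikes.initializer.run()
# saver = tf.train.Saver()
# saver.save(sess, 'spikes.ckpt')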
spikes = tf.Variable([False]*8, name='spikes')
Explanation: Create a boolean vector called spikes of the same dimensions as before:
End of explanation
saver = tf.train.Saver()
try:
saver.restore(sess, 'spikes.ckpt')
print(spikes.eval())
except:
print('file not found')
Explanation: Restore the variable data from disk, serve warm, and enjoy:
End of explanation
sess.close()
Explanation: Show's over, goodnight:
End of explanation |
4,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Forest with Grid Search (XGBoost)
<a href="https
Step1: This example features
Step2: Imports
Step3: Log Workflow
Prepare Data
Step4: Prepare Hyperparameters
Step5: Instantiate Client
Step6: Run Validation
Step7: Revisit Workflow
Retrieve Best Run
Step8: Train on Full Dataset
Step9: Calculate and Log Accuracy on Full Training Set
Step10: Log Model for Deployment
Step11: Make Live Predictions
Deploy Model Through Web App
Step12: Load Deployed Model
Step13: Query Deployed Model | Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
Explanation: Random Forest with Grid Search (XGBoost)
<a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/examples/xgboost.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
HOST = "app.verta.ai"
PROJECT_NAME = "Wine Multiclassification"
EXPERIMENT_NAME = "Boosted Trees"
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
Explanation: This example features:
- XGBoost's native API for cross-validating and training gradient-boosted trees
- scikit-learn's ParameterGrid utility for iterating over a hyperparameter grid
- verta's Python client integrated into the grid search loop
- verta's Python client retrieving the best run from the grid search to calculate and log full training accuracy
End of explanation
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import datasets
from sklearn import model_selection
import xgboost as xgb
Explanation: Imports
End of explanation
data = datasets.load_wine()
X = data['data']
y = data['target']
dtrain = xgb.DMatrix(X, label=y)
df = pd.DataFrame(np.hstack((X, y.reshape(-1, 1))),
columns=data['feature_names'] + ['species'])
df.head()
Explanation: Log Workflow
Prepare Data
End of explanation
grid = model_selection.ParameterGrid({
'eta': [0.5, 0.7],
'max_depth': [1, 2, 3],
'num_class': [10],
})
Explanation: Prepare Hyperparameters
End of explanation
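As a quick sanity check (not part of the original notebook), ParameterGrid expands to the full Cartesian product, so the grid above yields 2 x 3 x 1 = 6 hyperparameter combinations:
# Sanity check: ParameterGrid yields every combination as a dict
all_combos = list(grid)
print(len(all_combos))   # expected: 6
print(all_combos[0])     # e.g. {'eta': 0.5, 'max_depth': 1, 'num_class': 10}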
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
Explanation: Instantiate Client
End of explanation
def run_experiment(hyperparams):
run = client.set_experiment_run()
# log hyperparameters
run.log_hyperparameters(hyperparams)
# run cross validation on hyperparameters
cv_history = xgb.cv(hyperparams, dtrain,
nfold=5,
metrics=("merror", "mlogloss"))
# log observations from each iteration
for _, iteration in cv_history.iterrows():
for obs, val in iteration.iteritems():
run.log_observation(obs, val)
# log error from final iteration
final_val_error = iteration['test-merror-mean']
run.log_metric("val_error", final_val_error)
print("{} Mean error: {:.4f}".format(hyperparams, final_val_error))
# NOTE: run_experiment() could also be defined in a module, and executed in parallel
for hyperparams in grid:
run_experiment(hyperparams)
Explanation: Run Validation
End of explanation
best_run = expt.expt_runs.sort("metrics.val_error", descending=False)[0]
print("Validation Error: {:.4f}".format(best_run.get_metric("val_error")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
Explanation: Revisit Workflow
Retrieve Best Run
End of explanation
model = xgb.XGBClassifier(**best_hyperparams)
model.fit(X, y)
Explanation: Train on Full Dataset
End of explanation
train_acc = model.score(X, y)
best_run.log_metric("train_acc", train_acc)
print("Training accuracy: {:.4f}".format(train_acc))
Explanation: Calculate and Log Accuracy on Full Training Set
End of explanation
# create deployment artifacts
model_api = ModelAPI(X, model.predict(X))
requirements = ["scikit-learn", "xgboost"]
best_run.log_model(model, model_api=model_api)
best_run.log_requirements(requirements)
Explanation: Log Model for Deployment
End of explanation
best_run
Explanation: Make Live Predictions
Deploy Model Through Web App
End of explanation
from verta._demo_utils import DeployedModel
deployed_model = DeployedModel(HOST, best_run.id)
Explanation: Load Deployed Model
End of explanation
for x in itertools.cycle(np.random.permutation(X).tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
Explanation: Query Deployed Model
End of explanation |
4,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
$$
\def\CC{\bf C}
\def\QQ{\bf Q}
\def\RR{\bf R}
\def\ZZ{\bf Z}
\def\NN{\bf N}
$$
Lean Tutorial
This tutorial runs through all of the steps for doing a project with
Marvin from start-to-finish with no extra fat. We recommend the use of
ipython or jupyter notebook when using Marvin. You can start either
from a terminal with ipython or jupyter notebook.
Project Description
Calculate the [NII]/H$\alpha$ ratio for star-forming spaxels in
galaxies with stellar mass between $10^{10}$ and $10^{11}$ .
Sample Selection
Marvin uses a simplified query syntax (in both
Web and local queries) that
understands the MaNGA database schema, so you don't have to write
complicated SQL queries.
Goal
Step1: Tip
Step2: Convert into ../tools/maps objects
Step3: Get the Maps
Alternatively, maybe we already knew our galaxy IDs, which we can use to
create ~marvin.tools.maps.Maps objects
Step4: Get the H$\alpha$ maps
Step5: Plot H$\alpha$ map of the second galaxy
Step6: Get Spectrum and Model Fit
Let's take a look at the model fits a spaxel. The easiest way is to
navigate to the Galaxy page for
7992-6101 and click on
the red "Map/SpecView Off" button.
However, we can also plot the spectrum and model fits in Python. First,
we can find the coordinates of a spaxel by moving our cursor around the
interactive matplotlib plotting window. When the cursor is over the
spaxel of interest, the coordinates will appear in the lower right.
Then we can create a ~marvin.tools.spaxel.Spaxel object by accessing
the parent ~marvin.tools.maps.Maps object from the
~marvin.tools.quantities.Map object (haflux_map.maps) and retrieve
the model fit.
Step7: Now let's plot the spectrum and model fit
Step8: Plot BPT Diagram
The ~marvin.tools.maps.Maps.get_bpt returns masks for spaxels of
different ionization types and the Figure object.
Step9: For a detailed description see BPT Diagrams.
Select Star-forming Spaxels
Select the star-forming spaxels that are in the star-forming region of
each diagnostic diagram (hence the "global" keyword)
Step10: Return the complement of the BPT global star-forming mask (True means
star-forming) using ~ and mark those spaxels as DONOTUSE since they
are non-star-forming spaxels.
Step11: Do a bitwise OR between the DAP mask and the non-star-forming mask
Step12: Plot with our new mask
Step13: Plot [NII]/H$\alpha$ Flux Ratio for Star-forming Spaxels
Calculate [NII]6585/H$\alpha$ flux ratio
Step14: Plot the [NII]/H$\alpha$ flux ratio for the star-forming spaxels | Python Code:
from marvin.tools.query import doQuery
q, r = doQuery(search_filter='nsa.sersic_logmass >= 10 and nsa.sersic_logmass <= 11', limit=3)
Explanation: $$
\def\CC{\bf C}
\def\QQ{\bf Q}
\def\RR{\bf R}
\def\ZZ{\bf Z}
\def\NN{\bf N}
$$
Lean Tutorial
This tutorial runs through all of the steps for doing a project with
Marvin from start-to-finish with no extra fat. We recommend the use of
ipython or jupyter notebook when using Marvin. You can start either
from a terminal with ipython or jupyter notebook.
Project Description
Calculate the [NII]/H$\alpha$ ratio for star-forming spaxels in
galaxies with stellar mass between $10^{10}$ and $10^{11}$ .
Sample Selection
Marvin uses a simplified query syntax (in both
Web and local queries) that
understands the MaNGA database schema, so you don't have to write
complicated SQL queries.
Goal: Find galaxies with stellar mass between $10^{10}$ and
$10^{11}$.
Create the query with ~marvin.tools.query.query.doQuery and run it
(limit to only 3 results for demo purposes):
End of explanation
df = r.toDF()
df
Explanation: Tip: see Marvin Query to learn the basics of
querying. See Example Queries and Marvin
Query Syntax Tutorial for help with designing
search filters.
View the ~marvin.tools.query.results.Results. You may see a different
set of results. That is OK as long as you see some set of results:
End of explanation
r.convertToTool('maps')
r.objects
galaxies = r.objects
Explanation: Convert into ../tools/maps objects:
End of explanation
from marvin.tools.maps import Maps
mangaids = ['1-245458', '1-22301', '1-605884']
galaxies = [Maps(mangaid=mangaid) for mangaid in mangaids]
Explanation: Get the Maps
Alternatively, maybe we already knew our galaxy IDs, which we can use to
create ~marvin.tools.maps.Maps objects:
End of explanation
haflux_maps = [galaxy['emline_gflux_ha_6564'] for galaxy in galaxies]
Explanation: Get the H$\alpha$ maps:
End of explanation
haflux_map = haflux_maps[1]
fig, ax = haflux_map.plot()
Explanation: Plot H$\alpha$ map of the second galaxy:
End of explanation
spax = galaxies[1].getSpaxel(x=28, y=24, xyorig='lower', cube=True, modelcube=True)
Explanation: Get Spectrum and Model Fit
Let's take a look at the model fits a spaxel. The easiest way is to
navigate to the Galaxy page for
7992-6101 and click on
the red "Map/SpecView Off" button.
However, we can also plot the spectrum and model fits in Python. First,
we can find the coordinates of a spaxel by moving our cursor around the
interactive matplotlib plotting window. When the cursor is over the
spaxel of interest, the coordinates will appear in the lower right.
Then we can create a ~marvin.tools.spaxel.Spaxel object by accessing
the parent ~marvin.tools.maps.Maps object from the
~marvin.tools.quantities.Map object (haflux_map.maps) and retrieve
the model fit.
End of explanation
import matplotlib.pyplot as plt
# Set matplotlib style sheet. Undo with matplotlib.rcdefaults().
plt.style.use('seaborn-darkgrid')
ax = spax.flux.plot()
ax.plot(spax.full_fit.wavelength, spax.full_fit.value)
ax.legend(list(ax.get_lines()), ['observed', 'model'])
ax.axis([7100, 7500, 0.3, 0.65])
Explanation: Now let's plot the spectrum and model fit:
End of explanation
masks, fig, axes = galaxies[1].get_bpt()
Explanation: Plot BPT Diagram
The ~marvin.tools.maps.Maps.get_bpt returns masks for spaxels of
different ionization types and the Figure object.
End of explanation
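Before using the masks it can help to inspect what came back; the exact key names can vary between Marvin versions, but the star-forming entry with its 'global' boolean mask is what the next step relies on:
# Inspect the returned BPT masks (key names may differ slightly by Marvin version)
print(masks.keys())        # ionization classes, e.g. 'sf', 'comp', 'agn', ...
print(masks['sf'].keys())  # per-diagram masks plus the combined 'global' mask
print(masks['sf']['global'].shape, masks['sf']['global'].dtype)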
sf = masks['sf']['global']
Explanation: For a detailed description see BPT Diagrams.
Select Star-forming Spaxels
Select the star-forming spaxels that are in the star-forming region of
each diagnostic diagram (hence the "global" keyword):
End of explanation
mask_non_sf = ~sf * haflux_map.pixmask.labels_to_value('DONOTUSE')
Explanation: Return the complement of the BPT global star-forming mask (True means
star-forming) using ~ and mark those spaxels as DONOTUSE since they
are non-star-forming spaxels.
End of explanation
mask = haflux_map.mask | mask_non_sf
Explanation: Do a bitwise OR between the DAP mask and the non-star-forming mask:
End of explanation
haflux_map.plot(mask=mask)
Explanation: Plot with our new mask:
End of explanation
maps_7992_6101 = galaxies[1]
nii = maps_7992_6101['emline_gflux_nii_6585']
ha = maps_7992_6101['emline_gflux_ha_6564']
nii_ha = nii / ha
Explanation: Plot [NII]/H$\alpha$ Flux Ratio for Star-forming Spaxels
Calculate [NII]6585/H$\alpha$ flux ratio:
End of explanation
nii_ha.plot(mask=mask, cblabel='[NII]6585 / Halpha flux ratio')
Explanation: Plot the [NII]/H$\alpha$ flux ratio for the star-forming spaxels:
End of explanation |
4,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Suppose we did not know how many different species are present in the Iris dataset. How could we approximately discover this information from the data alone?
One possible solution would be to plot the data in a scatterplot and try to identify distinct groups visually. The Iris dataset, however, has four data dimensions, so it cannot be visualized in its entirety (only one pair of features at a time).
To visualize the complete dataset as a 2D scatterplot, dimensionality-reduction techniques can be used to compress the dataset down to two dimensions while losing little structural information.
Reading the data
Step1: Dimensionality reduction
We will use scikit-learn's PCA algorithm to reduce the number of dimensions in the dataset to two.
Step2: How many distinct groups can you identify?
Discovering clusters with K-Means
The problem described above can be framed as a clustering problem. Clustering finds groups of examples that are similar to other examples in the same group but different from examples belonging to other groups.
In this example, we will use scikit-learn's KMeans algorithm to find clusters in the dataset.
One limitation of KMeans is that it needs to receive the expected number of clusters as an argument, so you either need some domain knowledge to guess a reasonable number of groups, or you can try different numbers of clusters and see which one gives the best result. | Python Code:
import pandas as pd
iris = # Load the file 'datasets/iris_without_classes.csv'
# Show the first five rows using the head() method to check that the "Class" column is no longer there
Explanation: Suppose we did not know how many different species are present in the Iris dataset. How could we approximately discover this information from the data alone?
One possible solution would be to plot the data in a scatterplot and try to identify distinct groups visually. The Iris dataset, however, has four data dimensions, so it cannot be visualized in its entirety (only one pair of features at a time).
To visualize the complete dataset as a 2D scatterplot, dimensionality-reduction techniques can be used to compress the dataset down to two dimensions while losing little structural information.
Reading the data
End of explanation
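One possible way to fill in the exercise cell above (assuming the CSV really sits at 'datasets/iris_without_classes.csv' relative to the notebook):
# Possible completion of the cell above; the file path is taken from the comment
iris = pd.read_csv('datasets/iris_without_classes.csv')
iris.head()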
from sklearn.decomposition import PCA
RANDOM_STATE=1234
pca_model = # Create a PCA object with two components
iris_2d = # Use the fit_transform() method to reduce the dataset to two dimensions
import matplotlib.pyplot as plt
%matplotlib inline
# Create a scatterplot of the reduced dataset
# Show the plot
Explanation: Dimensionality reduction
We will use scikit-learn's PCA algorithm to reduce the number of dimensions in the dataset to two.
End of explanation
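A possible completion of the PCA cell above, shown only as a sketch rather than the official solution:
# Possible completion: project the data onto two principal components and plot them
pca_model = PCA(n_components=2, random_state=RANDOM_STATE)
iris_2d = pca_model.fit_transform(iris)
plt.scatter(iris_2d[:, 0], iris_2d[:, 1])
plt.show()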
# Create two KMeans models: one with two clusters and another with three clusters
# Store the cluster ids predicted by the models using two and three clusters
from sklearn.cluster import KMeans
model2 = # Create a KMeans object that expects two clusters
labels2 = # Infer the cluster id of each example in the dataset using predict()
model3 = # Create a KMeans object that expects three clusters
labels3 = # Infer the cluster id of each example in the dataset using predict()
# Create a scatterplot of the reduced dataset, coloring each point according to the cluster
# it belongs to according to the two-cluster KMeans
# Show the scatterplot
# Create a scatterplot of the reduced dataset, coloring each point according to the cluster
# it belongs to according to the three-cluster KMeans
# Show the scatterplot
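Again only as a sketch of one possible completion of the skeleton above:
# Possible completion: fit K-Means with 2 and 3 clusters and color the 2D projection
model2 = KMeans(n_clusters=2, random_state=RANDOM_STATE).fit(iris)
labels2 = model2.predict(iris)
model3 = KMeans(n_clusters=3, random_state=RANDOM_STATE).fit(iris)
labels3 = model3.predict(iris)
plt.scatter(iris_2d[:, 0], iris_2d[:, 1], c=labels2)
plt.show()
plt.scatter(iris_2d[:, 0], iris_2d[:, 1], c=labels3)
plt.show()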
Explanation: How many distinct groups can you identify?
Discovering clusters with K-Means
The problem described above can be framed as a clustering problem. Clustering finds groups of examples that are similar to other examples in the same group but different from examples belonging to other groups.
In this example, we will use scikit-learn's KMeans algorithm to find clusters in the dataset.
One limitation of KMeans is that it needs to receive the expected number of clusters as an argument, so you either need some domain knowledge to guess a reasonable number of groups, or you can try different numbers of clusters and see which one gives the best result.
End of explanation |
4,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
IHE Python course, 2017
Reading, writing and working with shape files
T.N.Olsthoorn
2017-03-15
When working with GIS you import and export shape files. A shape file consists of 3 files with the same name but different extensions. The one with the .shp extension contains the actual shapes, coordinates and spatial topology, while the one with the .dbf extension contains a table with the user data that is associated with each shape in the shapefile. In it, each shape has a record of data values that belong to textual headers that indicate what the values mean. The contents of the .dbf file is, in fact, a table with headers, in which each line represents the data pertaining to one shape in the file. The third file, with the .shx extension, is merely a list of pointers that allows fast searching of the shapefiles in GIS. It's not important for us.
In fact, shape files are the most widely used means to exchange spatial data.
Shape files can be very useful to spatially attribute values to a model, like the conductivities, land use etc. Knowing how they work and how to deal with them can provide a lot of functionality in dealing with spatial data, with or without a GIS.
If the command
import shapefile as shp
does not work, you should install pyshp
fire up your cmd window and type
pip install pyshp
The documentation is in the wiki of the pyshp project on github
Step1: We need a function that will tell us whether a coordinate pair is inside a shape or not. This function exists in matplotlib.path and is called Path. Path creates a polygon which can then check if points are inside or outside it. Because I like the word Polygon better than Path for this goal, I import the Path class and call it Polygon here.
Step3: I define a function inpoly that checks whether the coordinates x,y are inside the shape given by pgcoords. The function returns a logical array (boolean array) of the same shape as the inputs x and y such that points inside the polygon are True in the array and False otherwise.
Notice that inpoly uses the imported Polygon constructor on line 8.
The function does some input checking and guarantees that the shape of the boolean output array is the same as the input arrays x and y.
The checking is done in line 27 using the method pgon.contains_points(xy)
This function is very practical, so you may want to use it in other projects as well.
Step4: CD to the exercises/Mar21 dictory, where we have a shape file stored in the subdirectory Sectie (Section)
Step5: Define the director where the shape file is and get the file name
Step6: Open a reader to read the shapefile
Step7: So we can read the bounding box of all the shapes in the file, its table, its elevaton (if specified), its fiels names etc.
Let's get some useful information about the shapefile. Here, the names of the data fields of each shape (again a comprehension is used to get them in a list)
Step8: Each field not only gives the name of the data but also whether it string )'C' or a number 'N' and its length (first number). The second number is the number of didgets, when the number is floating point.
To get only the field name in a list, we use an other comprehension
Step9: you may use
rdr.iterShapes() to iterate over the shapes in the shapefile
rdr.iterRecords() to iterate over the records in the shapefile
rdr.iterShapeRecords() to iterate over both simultaneously
Print the records in the dbf file, one for each shape
Step10: Read the shapeRecords from the shapefile into a list
Step11: Each of these ShapeRecods contain both the shape and its record (data).
Show that we can itereate over the ShapeRecords, get the shape and the record from it and do somthing with it
Step12: What attributes/methods has a shape?
Simply use a comprehension to show their names
Step13: Let's now get serious and define a model grid containing all shapes. Simply use the overall bounding box to get the grid extension.
Here is the box
Step14: Get the indices from the bounding box so that we can plot is in a single line
Step15: With these indices in sequence we can get the x and y coordinates of the bbox so arranged that the bbox is plotted (in red with linewidth 3, so that it shows up at the boundary of the plot.
Step16: Now see how we can access the point os a shape, here the first shape in our shape-records list, so shprecs[0].shape.
Step17: With all this in place we can plot the bounding box of all shapes and then each shape itself using its points as an array.
Step18: The next step is to fill the grid points that fall within each shape with the data value that belongs to the shape. We'll use the horizontal conductivity KH that is contained in each shape's record as can be seen from the field names above.
First generate a grid. A grid is an array with the x coordinate of each cell and one with the y coordinate of each cell.
We'll choose x, y as the coordinates of the cell boundaries (the grid lines) and xm, ym as the coordinates of the cell centers.
Choosing 81 x-grid lines between the horizontal extent 0 and 80 m gives cells of 1 m width.
Choosing 41 y-grid lines between the vertical extent -10 and 0 m gives cells of 0.25 m height.
Step19: Then fill in the data value for each grid and plot the result, once color per shape.
ikh is the index for the field 'KH' (see fields above)
Step20: Et voilà !
Now the array can be used as input for a groundwater model for example.
Some ways to show the array
Step21: Show the KH array the Matlab way, which is much easier to see. In spyder you may inspect it in the variable window as a kind of spreadsheet.
Because the array is far too wide to print on one line, we chop it into chunks of 8 columns
and print one chunk after the other.
First generate the indices to cut the array in pieces then use np.split( )
Put this together with the indices in a zip, so that in each for loop we get
the next index and the next array.
For each chunk plot which columns it has.
Then cut the chunk in lines and print each line separately.
Notice that we generate the format dynamically in the print statement using the
length L, i.e. the number of values in the line.
Step22: Another way to show an array is to plot it using plt.matshow. Each cell will be colored according to its value. Clearly if the values differ too little, the show up as the same value. This is actually the case here.
Notice that matshow does not use the coordinates; it uses the index of the rows and columns. Because of this, it is plotted upside down, just as was the case with plt.spy above.
Step23: You could also contour the shapes, but that has the same problems as with matshow | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import shapefile as shp
from pprint import pprint
Explanation: <figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
IHE Python course, 2017
Reading, writing and working with shape files
T.N.Olsthoorn
2017-03-15
When working with GIS you import and export shape files. A shape file consists of 3 files with the same name but different extensions. The one with the .shp extension contains the actual shapes, coordinates and spatial topology, while the one with the .dbf extension contains a table with the user data that is associated with each shape in the shapefile. In it, each shape has a record of data values that belong to textual headers that indicate what the values mean. The contents of the .dbf file is, in fact, a table with headers, in which each line represents the data pertaining to one shape in the file. The third file, with the .shx extension, is merely a list of pointers that allows fast searching of the shapefiles in GIS. It's not important for us.
In fact, shape files are the most widely used means to exchange spatial data.
Shape files can be very useful to spatially attribute values to a model, like the conductivities, land use etc. Knowing how they work and how to deal with them can provide a lot of functionality in dealing with spatial data, with or without a GIS.
If the command
import shapefile as shp
does not work, you should install pyshp
fire up your cmd window and type
pip install pyshp
The documentation is in the wiki of the pyshp project on github:
https://github.com/GeospatialPython/pyshp/wiki
The more general functionalities are imported next:
End of explanation
from matplotlib.path import Path as Polygon
Explanation: We need a function that will tell us whether a coordinate pair is inside a shape or not. This function exists in matplotlib.path and is called Path. Path creates a polygon which can then check if points are inside or outside it. Because I like the word Polygon better than Path for this goal, I import the Path class and call it Polygon here.
End of explanation
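Just to see what this class gives us, a tiny toy check with made-up coordinates (independent of the exercise data): contains_points reports which of a set of points lie inside the path.

square = Polygon([(0., 0.), (1., 0.), (1., 1.), (0., 1.)])  # unit square
print(square.contains_points([(0.5, 0.5), (2.0, 2.0)]))     # expected: [ True False]
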
def inpoly(x, y, pgcoords):
    """Returns bool array [Ny, Nx] telling which grid points are inside polygon."""
try:
isinstance(pgcoords,(list, tuple, np.ndarray))
len(pgcoords[0])==2
pgon = Polygon(pgcoords)
except:
print('pgcoords must be like [(0, 0), (1, 0), ..] or\n'
+'an np.ndarray of shape [Np, 2]')
raise TypeError("Can't create polygon, pgcoords error")
try:
x = np.array(x, dtype=float)
y = np.array(y, dtype=float)
x.shape == y.shape
except:
        raise ValueError("x and y not np.ndarrays with same shape.")
if len(x.shape)==1:
        X, Y = np.meshgrid(x, y)
else:
X = x
Y = y
xy = np.vstack((X.ravel(), Y.ravel())).T
return pgon.contains_points(xy).reshape(X.shape)
Explanation: I define a function inpoly that checks whether the coordinates x, y are inside the shape given by pgcoords. The function returns a logical array (boolean array) of the same shape as the inputs x and y, such that points inside the polygon are True in the array and False otherwise.
Notice that inpoly uses the imported Polygon constructor.
The function does some input checking and guarantees that the shape of the boolean output array is the same as that of the input arrays x and y.
The actual inside/outside test is done at the end with the method pgon.contains_points(xy).
This function is very practical, so you may want to use it in other projects as well.
End of explanation
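Before using it on the shapefile, a tiny self-test of inpoly on a toy triangle (the coordinates are made up just for this check):

# which points of a small 5 x 5 grid fall inside the triangle?
tri = [(0., 0.), (4., 0.), (0., 4.), (0., 0.)]   # closed triangle
xt = np.linspace(0., 4., 5)
yt = np.linspace(0., 4., 5)
print(inpoly(xt, yt, tri))   # boolean array, True for grid points inside the triangle
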
os.listdir('.')
Explanation: CD to the exercises/Mar21 directory, where we have a shape file stored in the subdirectory Sectie (Section)
End of explanation
# directory (this will be different on your computer)
shpdir = os.path.join('.', 'Sectie')
fname = os.listdir(shpdir)[0]
print("The shapefile to work with: ",fname)
Explanation: Define the directory where the shape file is and get the file name
End of explanation
rdr = shp.Reader(os.path.join(shpdir, fname))
print("\nAttributes and methods accessible throuhgh this reader:\n")
# learn to read and create comprehensions, this is one
pprint([p for p in dir(rdr) if not p.startswith('_')])
Explanation: Open a reader to read the shapefile:
Then show its attributes, so that we can see what we can do with it.
End of explanation
rdr.fields
Explanation: So we can read the bounding box of all the shapes in the file, its table, its elevation (if specified), its field names etc.
Let's get some useful information about the shapefile. Here, the names of the data fields of each shape (again a comprehension is used to get them in a list)
End of explanation
fldNames = [p[0] for p in rdr.fields]
print(fldNames)
Explanation: Each field not only gives the name of the data but also whether it is a string ('C') or a number ('N') and its length (the first number). The second number is the number of digits when the field is a floating point number.
To get only the field names in a list, we use another comprehension:
End of explanation
print(fldNames)
for r in rdr.iterRecords():
print(r)
Explanation: you may use
rdr.iterShapes() to iterate over the shapes in the shapefile
rdr.iterRecords() to iterate over the records in the shapefile
rdr.iterShapeRecords() to iterate over both simultaneously
Print the records in the dbf file, one for each shape:
End of explanation
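For completeness, a short sketch of the combined iterator mentioned above; it yields each shape together with its record:

for shprec in rdr.iterShapeRecords():
    print(shprec.shape.shapeType, shprec.record)
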
shprecs = rdr.shapeRecords()
shprecs
Explanation: Read the shapeRecords from the shapefile into a list:
End of explanation
for i, sr in enumerate(shprecs):
print("Shape number ", i+1)
print("Bounding box = ",sr.shape.bbox)
print("Record = ",sr.record)
print()
Explanation: Each of these ShapeRecords contains both the shape and its record (data).
Show that we can iterate over the ShapeRecords, get the shape and the record from each one and do something with it:
End of explanation
[att for att in dir(sr.shape) if not att.startswith('_')]
Explanation: What attributes/methods has a shape?
Simply use a comprehension to show their names:
End of explanation
rdr.bbox
Explanation: Let's now get serious and define a model grid containing all shapes. Simply use the overall bounding box to get the grid extension.
Here is the box:
End of explanation
ix = [0, 2, 2, 0, 0] # x-indices into rdr.bbox
iy = [1, 1, 3, 3, 1] # y-indices
Explanation: Get the indices into the bounding box so that we can plot it in a single line:
End of explanation
plt.plot(np.array([rdr.bbox[i] for i in ix]),
np.array([rdr.bbox[i] for i in iy]), 'r', lw=3)
plt.show()
Explanation: With these indices in sequence we can get the x and y coordinates of the bbox so arranged that the bbox is plotted (in red with linewidth 3, so that it shows up at the boundary of the plot).
End of explanation
np.array(shprecs[0].shape.points)
Explanation: Now see how we can access the points of a shape, here the first shape in our shape-records list, shprecs[0].shape.
End of explanation
plt.plot(np.array([rdr.bbox[i] for i in ix]),
np.array([rdr.bbox[i] for i in iy]), 'r', lw=3)
for sr in shprecs:
pnts = np.array(sr.shape.points)
plt.plot(pnts[:,0], pnts[:,1])
plt.show()
Explanation: With all this in place we can plot the bounding box of all shapes and then each shape itself using its points as an array.
End of explanation
# grid line coordinates
x = np.linspace(rdr.bbox[0], rdr.bbox[2], 81)
y = np.linspace(rdr.bbox[1], rdr.bbox[3], 41)
# cell center coordinates
xm = 0.5 * (x[:-1] + x[1:])
ym = 0.5 * (y[:-1] + y[1:])
# generate a full 2D array for both the x and y coordinates
XM, YM = np.meshgrid(xm, ym)
Explanation: The next step is to fill the grid points that fall within each shape with the data value that belongs to the shape. We'll use the horizontal conductivity KH that is contained in each shape's record as can be seen from the field names above.
First generate a grid. A grid is an array with the x coordinate of each cell and one with the y coordinate of each cell.
We'll choose x, y as the coordinates of the cell boundaries (the grid lines) and xm, ym as the coordinates of the cell centers.
Choosing 81 x-grid lines between the horizontal extent 0 and 80 m gives cells of 1 m width.
Choosing 41 y-grid lines between the vertical extent -10 and 0 m gives cells of 0.25 m height.
End of explanation
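A quick sanity check of the cell sizes that follow from the actual bounding box (verification only, using the arrays just created):

print("dx =", np.diff(x)[0], "m, dy =", np.diff(y)[0], "m")
print("{} columns x {} rows of cell centers".format(len(xm), len(ym)))
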
fig, ax = plt.subplots()
ikh = fldNames.index('KH')
KH = np.zeros_like(XM) # 2D array to fill with conductivities from shapes
# iterate over the list of shape records. In each pass of the loop the next index,
# the next color for the plot and the next shape record are given.
for i, clr, sr in zip(range(len(shprecs)), "brgmcy", shprecs):
pnts = sr.shape.points # the shape coordinates
inarray = inpoly(XM, YM, pnts) # boolean array to match the shape
KH[inarray] = sr.record[ikh] # fill in the value
ax.plot(XM[inarray], YM[inarray], clr+'o', label="shape {}".format(i))
ax.legend(loc='best', fontsize='x-small')
plt.show()
Explanation: Then fill in the data value for each grid cell and plot the result, one color per shape.
ikh is the index for the field 'KH' (see fields above)
End of explanation
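Cells that fall outside every shape keep their initial value of zero; a quick check of how many that is:

n_unassigned = np.sum(KH == 0)
print("{} of {} cells were not covered by any shape".format(n_unassigned, KH.size))
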
# plt.spy shows where an array is non-zero
plt.spy(KH)
plt.show()
Explanation: Et voilà !
Now the array can be used as input for a groundwater model for example.
Some ways to show the array
End of explanation
indices = range(0, KH.shape[1], 8)
for i, kh in zip(indices, np.split(KH, indices[1:], axis=1)):
print("Columns {}:{}".format(i,i+kh.shape[1]))
for L in kh:
print(("{:8.4g}" * len(L)).format(*L))
print()
Explanation: Show the KH array the Matlab way, which is much easier to see. In spyder you may inspect it in the variable window as a kind of spreadsheet.
Because the array is far too wide to print on one line, we chop it into chunks of 8 columns
and print one chunk after the other.
First generate the indices to cut the array into pieces, then use np.split().
Put this together with the indices in a zip, so that in each pass of the for loop we get
the next index and the next chunk.
For each chunk, print which columns it contains.
Then cut the chunk into lines and print each line separately.
Notice that we generate the format dynamically in the print statement using the
length of L, i.e. the number of values in the line.
End of explanation
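An alternative is to let numpy do the line wrapping itself (a sketch; the line width is an arbitrary choice):

np.set_printoptions(precision=4, linewidth=120, suppress=True)  # affects all later printing
print(KH)
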
print("The kh values of the different shapes: ", [sr.record[ikh] for sr in shprecs])
plt.matshow(KH)
plt.show()
Explanation: Another way to show an array is to plot it using plt.matshow. Each cell will be colored according to its value. Clearly, if the values differ too little, they show up as the same color. This is actually the case here.
Notice that matshow does not use the coordinates, it uses the index of the rows and columns. Because of this, it's plotted upside down, just as was the case with plt.spy above.
End of explanation
plt.contourf(XM, YM, KH, 50)
plt.show()
Explanation: You could also contour the shapes, but that has the same problem as matshow, because of the small differences between the values of the different shapes. However, the contourf function does use the coordinates of the grid, so it's plotted correctly to scale and right side up.
End of explanation |
4,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Step1: 2. Use mechanize to get reviews for all of the top attractions
Step2: Get reviews for top attractions in multiple cities
Step3: Finding forms with mechanize | Python Code:
import requests
from bs4 import BeautifulSoup

# url = 'https://www.tripadvisor.com/Attraction_Review-g60878-d3184389-Reviews-Chihuly_Garden_and_Glass-Seattle_Washington.html#REVIEWS'
def get_reviews(response):
# response = requests.get(url)
soup = BeautifulSoup(response, 'html.parser')
entries = soup.findAll('div', {'class': 'entry'})
reviews = [entry.text.replace('\n', '') for entry in entries]
return reviews
Explanation: Goal: get tripadvisor reviews for top Seattle attractions
1. Scrape data from page using requests and BeautifulSoup
End of explanation
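One possible way to call get_reviews directly on a single page (a sketch only: it reuses the example URL from the comment above, assumes requests and bs4 are installed, and the site may of course block requests or change its markup):

import requests
page = requests.get('https://www.tripadvisor.com/Attraction_Review-g60878-d3184389-Reviews-Chihuly_Garden_and_Glass-Seattle_Washington.html')
print(len(get_reviews(page.text)))   # number of review entries found on that page
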
from mechanize import Browser  # Browser() is used below

def mechanize_reviews(url):
br = Browser() # Initialize browser object
br.set_handle_robots(False) # try this if you get a 'disallowed by robots.txt' error
# br.addheaders = [('User-agent', 'Firefox')] # sometimes you need this line
br.open(url) # Retrieve the requested page
br.select_form(nr=0)
reviews = []
for link in br.links():
if 'Attraction_Review' in str(link):
data = br.follow_link(link)
reviews = get_reviews(data)
if len(reviews) > 10:
return reviews
return reviews
Explanation: 2. Use mechanize to get reviews for all of the top attractions
End of explanation
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

url = 'https://www.tripadvisor.com'
places = ['Portland, OR', 'San Francisco, CA', 'Seattle, WA']
chromedriver = '/Users/sydneydecoto/bin/chromedriver'
for place in places:
# Initialize a chrome driver and go to url
driver = webdriver.Chrome(chromedriver)
driver.get(url)
# wait for page to load, time out after 10 seconds
searchbox = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'GEO_SCOPED_SEARCH_INPUT')))
searchbox.send_keys(place)
mainsearch = WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, 'mainSearch')))
mainsearch.send_keys('Things to Do')
driver.find_elements_by_class_name('inner')[0].click()
driver.switch_to_alert() # ignore the popup
reviews = mechanize_reviews(driver.current_url)
# print reviews
driver.quit()
Explanation: Get reviews for top attractions in multiple cities
End of explanation
br = Browser() # Initialize browser object
br.set_handle_robots(False) # try this if you get a 'disallowed by robots.txt' error
br.addheaders = [('User-agent', 'Firefox')] # sometimes you need this line
url = 'https://seattle.craigslist.org/'
br.open(url)
for form in br.forms():
print form
Explanation: Finding forms with mechanize
End of explanation |
4,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mapping fractions between gradient communities in order to perform procrustes
Step1: Calculating centroid of binned fraction samples
centroid of all 20 replicates for fraction samples that fall into the BD-range bin
trying ordiellipse() function from vegan package | Python Code:
%%R
otu.tbl.file1 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/0/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
otu.tbl.file2 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/100/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
physeq1 = readRDS(otu.tbl.file1)
physeq2 = readRDS(otu.tbl.file2)
%%R
ord1 = ordinate(physeq1, method='NMDS', distance='bray')
ord2 = ordinate(physeq2, method='NMDS', distance='bray')
ord1 %>% scores %>% head %>% print
ord2 %>% scores %>% head %>% print
%%R
get.fracs = function(ord){
fracs = gsub('.+__', '', rownames(ord %>% scores)) %>% as.data.frame()
colnames(fracs) = c('fractions')
fracs = fracs %>%
separate(fractions, c('start','end'), sep='-', convert=T) %>%
mutate(start = start * 1000,
end = end * 1000)
return(fracs)
}
ord1.f = get.fracs(ord1)
ord2.f = get.fracs(ord2)
%%R
library(IRanges)
%%R
ord1.r = IRanges(start=ord1.f$start, end=ord1.f$end)
ord2.r = IRanges(start=ord2.f$start, end=ord2.f$end)
%%R
ov = findOverlaps(ord1.r, ord2.r, select='first')
ov
%%R
ov = findOverlaps(ord1.r, ord2.r)
ov
Explanation: Mapping fractions between gradient communities in order to perform procrustes
End of explanation
%%R
otu.tbl.file1 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/0/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
otu.tbl.file2 = '/home/nick/notebook/SIPSim/dev/bac_genome1210/atomIncorp_taxaIncorp/100/10/1/OTU_n2_abs1e9_sub-norm_filt.physeq'
physeq1 = readRDS(otu.tbl.file1)
physeq2 = readRDS(otu.tbl.file2)
%%R
ord1 = ordinate(physeq1, method='NMDS', distance='bray')
ord2 = ordinate(physeq2, method='NMDS', distance='bray')
%%R
grps = as.character(rep(seq(1,nrow(ord1$points) / 2), 2))
grps = append(grps, '2')
plot(ord1, type = "p", display='sites')
elps = ordiellipse(ord1, grps, kind="se", conf=0.95, lwd=2, col="blue")
elps = elps %>% summary %>% t %>% as.data.frame
elps
%%R
ggplot(elps, aes(NMDS1, NMDS2)) +
geom_point()
%%R
get.ellipse = function(ord){
grps = as.character(rep(seq(1,nrow(ord$points) / 2), 2))
grps = append(grps, '2')
plot(ord, type = "p", display='sites')
elps = ordiellipse(ord, grps, kind="se", conf=0.95, lwd=2, col="blue")
elps = elps %>% summary %>% t %>% as.data.frame
return(elps)
}
get.ellipse(ord1)
%%R
mid = function(x, y){ (x + y)/2 }
get.BD.range = function(tbl){
tbl = as.data.frame(tbl)
tbl$lib = gsub('__.+', '', rownames(tbl)) %>% as.character
tbl$BD.start = gsub('.+__([0-9.]+)-.+', '\\1', rownames(tbl)) %>% as.numeric
tbl$BD.end = gsub('.+-', '', rownames(tbl)) %>% as.numeric
tbl$BD.mid = mapply(mid, tbl$BD.start, tbl$BD.end)
return(tbl)
}
ord.BD = get.BD.range(ord1 %>% scores)
ord.BD %>% head
%%R
# making fixed BD-range & binning by BD.mid
BD.range = seq(1.6, 1.9, 0.004)
BD.range
Explanation: Calculating centroid of binned fraction samples
centroid of all 20 replicates for fraction samples that fall into the BD-range bin
trying ordiellipse() function from vegan package
End of explanation |
4,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple SEA model of two beams
Step1: Creating a SEA model
To create a SEA model we begin by creating an instance of the system class, seapy.system.System.
Step2: We are only interested in a limited frequency range.
An important parameter of a component is the material of which it is made. Let's create a steel material for the beams.
Step3: While we gave certain quantities as arguments, we can also assign them later on.
Step4: We now create two almost similar beams.
Step5: When we now check the steel material, we can see it is used by these two beams.
Step6: Let's have a look at the subsystems we have now.
Step7: Indeed, we have six subsystems. Each structural component has by default subsystems representing longitudinal, bending and shear waves.
Step8: We can now plot attributes of a subsystem, e.g. the group speed.
Step9: Let's check the modal density of longitudinal waves in the first beam.
Step10: The modal density increases with frequency. The modal density of bending waves, however, remains constant on a logarithmic scale.
Step11: Now, let's connect the two beams with their tips.
Step12: Indeed, the connection we just made connects two components.
Step13: Solving the system
Now that the entire system is set up we can solve for the modal powers.
Step14: An indicator is present to show whether the system was solved. Note however that changes made in the model since solving the system do NOT change the value of this indicator.
Step17: Plotting results
We can now plot the velocity levels in the components
Step18: and/or subsystems. | Python Code:
import sys
import seapy
import numpy as np
%matplotlib inline
Explanation: A simple SEA model of two beams
End of explanation
from acoustics.signal import OctaveBand
frequency = OctaveBand(fstart=500.0, fstop=8000.0, fraction=1)
system1 = seapy.system.System(frequency)
Explanation: Creating a SEA model
To create a SEA model we begin by creating an instance of the system class, seapy.system.System.
End of explanation
steel = system1.add_material('steel',
'MaterialSolid',
young=1.0e7,
poisson=0.30,
density=8.0e3,
loss_factor=np.ones(len(system1.frequency.center))*0.2
)
Explanation: We are only interested in a limited frequency range.
An important parameter of a component is the material of which it is made. Let's create a steel material for the beams.
End of explanation
steel.density = 2000.0
Explanation: While we gave certain quantities as arguments, we can also assign them later on.
End of explanation
beam1 = system1.add_component(
'beam1',
'Component1DBeam',
material='steel',
length=2.0,
width=0.5,
height=0.6
)
beam2 = system1.add_component(
'beam2',
'Component1DBeam',
material='steel',
length=2.0,
width=0.5,
height=0.6,
)
Explanation: We now create two almost similar beams.
End of explanation
for component in steel.linked_components:
print(component)
Explanation: When we now check the steel material, we can see it is used by these two beams.
End of explanation
for subsystem in system1.subsystems:
subsystem.name
Explanation: Let's have a look at the subsystems we have now.
End of explanation
for i in beam1.linked_subsystems:
print(i)
Explanation: Indeed, we have six subsystems. Each structural component has by default subsystems representing longitudinal, bending and shear waves.
End of explanation
fig = beam1.subsystem_long.plot('soundspeed_group')
Explanation: We can now plot attributes of a subsystem, e.g. the group speed.
End of explanation
fig = beam1.subsystem_long.plot('modal_density')
Explanation: Let's check the modal density of longitudinal waves in the first beam.
End of explanation
fig = beam1.subsystem_bend.plot('modal_density')
Explanation: The modal density increases with frequency. The modal density of bending waves, however, remains constant on a logarithmic scale.
End of explanation
connection1 = system1.addConnection('connection1', 'Connection', shape='Point')
connection1.addComponent(beam1, 'corner')
connection1.addComponent(beam2, 'corner')
for i in system1.connections:
print(i)
Explanation: Now, let's connect the two beams with their tips.
End of explanation
print((len(connection1.components)))
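# Note on the next few lines: a WeakValueDictionary keeps an entry only as long as the
# object it points to is still referenced somewhere else, so removing 'beam1' from the
# system is expected to shrink d by one entry.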
from weakref import WeakValueDictionary
d = WeakValueDictionary()
d['beam1'] = system1._objects[1]
d['material'] = system1._objects[0]
print(len(d))
system1.removeObject('beam1')
print(len(d))
Explanation: Indeed, the connection we just made connects two components.
End of explanation
system1.solveSystem()
Explanation: Solving the system
Now that the entire system is set up we can solve for the modal powers.
End of explanation
print(system1.solved)
Explanation: An indicator is present to show whether the system was solved. Note however that changes made in the model since solving the system do NOT change the value of this indicator.
End of explanation
fig = beam1.plot('velocity_level')
###subsystem2.plot_velocity_level('velocity_level_subsystem2.png')
###Or the velocity levels in the components, which is given by a summation over its related subsystems.
###beam1.plot_velocity_level('velocity_level_beam1.png')
###beam2.plot_velocity_level('velocity_level_beam2.png')
###Since we included only one subsystem per component, the results are the same.
Explanation: Plotting results
We can now plot the velocity levels in the components
End of explanation
fig = beam1.subsystem_bend.plot('velocity_level')
Explanation: and/or subsystems.
End of explanation |
4,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2 assignment
This assignment will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.
We will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.
Step1: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
Step2: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
Step3: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's .play() function.
Step4: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
Step5: Now we are ready to start the game. First we create an empty list to store the collection of players in the game.
Step6: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
Step7: Once the players are created, we will create a loop to run the game a certain number of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
Step8: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players. | Python Code:
import random
import numpy
Explanation: Lab 2 assignment
This assignment will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.
We will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.
End of explanation
gameStake = 50
cards = range(10)
Explanation: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
End of explanation
class Player:
# create here two local variables to store a unique ID for each player and the player's current 'pot' of money
PN=0
Pot=0# [FILL IN YOUR VARIABLES HERE]
# in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
def __init__(self, inputID, startingPot):
self.PN=inputID
self.Pot=startingPot# [CREATE YOUR INITIALIZATIONS HERE]
# create a function for playing the game. This function starts by taking an input for the dealer's card
# and picking a random number from the 'cards' list for the player's card
def play(self, dealerCard):
# we use the random.choice() function to select a random item from a list
playerCard = random.choice(cards)
# here we should have a conditional that tests the player's card value against the dealer card
# and returns a statement saying whether the player won or lost the hand
# before returning the statement, make sure to either add or subtract the stake from the player's pot so that
# the 'pot' variable tracks the player's money
if playerCard < dealerCard:
self.Pot=self.Pot-gameStake
print 'player'+str(self.PN)+' Lose,'+str(playerCard)+' vs '+str(dealerCard)# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
else:
self.Pot=self.Pot+gameStake
print 'player'+str(self.PN)+' Win,'+str(playerCard)+' vs '+str(dealerCard)# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
# create an accessor function to return the current value of the player's pot
def returnPot(self):
return self.Pot# [FILL IN THE RETURN STATEMENT]
# create an accessor function to return the player's ID
def returnID(self):
return self.PN# [FILL IN THE RETURN STATEMENT]
Explanation: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
End of explanation
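Because ties count as a win for the player in the conditional above, the game is slightly favorable to the player. A quick back-of-the-envelope check, plain arithmetic with the same 0-9 cards and the gameStake defined earlier:

# probability that the player's card is strictly lower than the dealer's card
p_lose = sum([1 for p in range(10) for d in range(10) if p < d]) / 100.0
p_win = 1.0 - p_lose   # ties count as a win in play() above
print 'P(win) =', p_win, ' expected gain per hand =', gameStake * (p_win - p_lose)
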
def playHand(players):
for player in players:
dealerCard = random.choice(cards)
player.play(dealerCard)#[EXECUTE THE PLAY() FUNCTION FOR EACH PLAYER USING THE DEALER CARD, AND PRINT OUT THE RESULTS]
Explanation: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's .play() function.
End of explanation
def checkBalances(players):
for player in players:
print 'player '+str(player.returnID())+ ' has $ '+str(player.returnPot())+ ' left'#[PRINT OUT EACH PLAYER'S BALANCE BY USING EACH PLAYER'S ACCESSOR FUNCTIONS]
Explanation: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
End of explanation
players = []
Explanation: Now we are ready to start the game. First we create an empty list to store the collection of players in the game.
End of explanation
for i in range(5):
players.append(Player(i, 500))
Explanation: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
End of explanation
for i in range(10):
print ''
print 'start game ' + str(i)
playHand(players)
Explanation: Once the players are created, we will create a loop to run the game a certain number of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
End of explanation
print ''
print 'game results:'
checkBalances(players)
Explanation: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.
End of explanation |
4,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing and querying AI Platform Prediction request-response logs in BigQuery
This tutorial shows you how to create a view to parse raw request instances and response predictions logged from AI Platform Prediction to BigQuery.
The tutorial covers the following tasks
Step1: Configure Google Cloud environment settings
Step2: Authenticate your Google Cloud account
This step is required if you run the notebook in Colab.
Step3: Import libraries
Step4: 1. Define dataset metadata
Step5: 2. Generate the CREATE VIEW script
Step6: Optionally, print the generated script
Step7: 3. Execute the CREATE VIEW script
Step8: 4. Query the view | Python Code:
!pip install -U -q google-api-python-client
!pip install -U -q pandas
Explanation: Parsing and querying AI Platform Prediction request-response logs in BigQuery
This tutorial shows you how to create a view to parse raw request instances and response predictions logged from AI Platform Prediction to BigQuery.
The tutorial covers the following tasks:
Define dataset metadata.
Generate the CREATE VIEW script that parses the raw data.
Execute the CREATE VIEW script.
Query the view to retrieve the parsed data.
Setup
Install packages and dependencies
End of explanation
PROJECT_ID = "sa-data-validation"
MODEL_NAME = 'covertype_classifier'
VERSION_NAME = 'v1'
BQ_DATASET_NAME = 'prediction_logs'
BQ_TABLE_NAME = 'covertype_classifier_logs'
!gcloud config set project $PROJECT_ID
Explanation: Configure Google Cloud environment settings
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
Explanation: Authenticate your Google Cloud account
This step is required if you run the notebook in Colab.
End of explanation
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import pandas as pd
from google.cloud import bigquery
Explanation: Import libraries
End of explanation
HEADER = ['Elevation', 'Aspect', 'Slope','Horizontal_Distance_To_Hydrology',
'Vertical_Distance_To_Hydrology', 'Horizontal_Distance_To_Roadways',
'Hillshade_9am', 'Hillshade_Noon', 'Hillshade_3pm',
'Horizontal_Distance_To_Fire_Points', 'Wilderness_Area', 'Soil_Type',
'Cover_Type']
TARGET_FEATURE_NAME = 'Cover_Type'
FEATURE_LABELS = ['0', '1', '2', '3', '4', '5', '6']
NUMERIC_FEATURE_NAMES = ['Aspect', 'Elevation', 'Hillshade_3pm',
'Hillshade_9am', 'Hillshade_Noon',
'Horizontal_Distance_To_Fire_Points',
'Horizontal_Distance_To_Hydrology',
'Horizontal_Distance_To_Roadways','Slope',
'Vertical_Distance_To_Hydrology']
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
'Soil_Type': ['2702', '2703', '2704', '2705', '2706', '2717', '3501', '3502',
'4201', '4703', '4704', '4744', '4758', '5101', '6101', '6102',
'6731', '7101', '7102', '7103', '7201', '7202', '7700', '7701',
'7702', '7709', '7710', '7745', '7746', '7755', '7756', '7757',
'7790', '8703', '8707', '8708', '8771', '8772', '8776'],
'Wilderness_Area': ['Cache', 'Commanche', 'Neota', 'Rawah']
}
FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys()) + NUMERIC_FEATURE_NAMES
HEADER_DEFAULTS = [[0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ['NA']
for feature_name in HEADER]
NUM_CLASSES = len(FEATURE_LABELS)
Explanation: 1. Define dataset metadata
End of explanation
LABEL_KEY = 'predicted_label'
SCORE_KEY = 'confidence'
SIGNATURE_NAME = 'serving_default'
def _extract_json(column, feature_name):
return "JSON_EXTRACT({}, '$.{}')".format(column, feature_name)
def _replace_brackets(field):
return "REPLACE(REPLACE({}, ']', ''), '[','')".format(field)
def _replace_quotes(field):
return 'REPLACE({}, "\\"","")'.format(field)
def _cast_to_numeric(field):
return "CAST({} AS NUMERIC)".format(field)
def _add_alias(field, feature_name):
return "{} AS {}".format(field, feature_name)
view_name = "vw_"+BQ_TABLE_NAME+"_"+VERSION_NAME
colum_names = FEATURE_NAMES
input_features = ', \r\n '.join(colum_names)
json_features_extraction = []
for feature_name in colum_names:
field = _extract_json('instance', feature_name)
field = _replace_brackets(field)
if feature_name in NUMERIC_FEATURE_NAMES:
field = _cast_to_numeric(field)
else:
field = _replace_quotes(field)
field = _add_alias(field, feature_name)
json_features_extraction.append(field)
json_features_extraction = ', \r\n '.join(json_features_extraction)
json_prediction_extraction = []
for feature_name in [LABEL_KEY, SCORE_KEY]:
field = _extract_json('prediction', feature_name)
field = _replace_brackets(field)
if feature_name == SCORE_KEY:
field = _cast_to_numeric(field)
else:
field = _replace_quotes(field)
field = _add_alias(field, feature_name)
json_prediction_extraction.append(field)
json_prediction_extraction = ', \r\n '.join(json_prediction_extraction)
sql_script = '''
CREATE OR REPLACE VIEW @dataset_name.@view_name
AS
WITH step1
AS
(
SELECT
model,
model_version,
time,
SPLIT(JSON_EXTRACT(raw_data, '$.instances'), '}],[{') instance_list,
SPLIT(JSON_EXTRACT(raw_prediction, '$.predictions'), '}],[{') as prediction_list
FROM
`@project.@dataset_name.@table_name`
WHERE
model = '@model_name' AND
model_version = '@version'
),
step2
AS
(
SELECT
model,
model_version,
time,
REPLACE(REPLACE(instance, '[{', '{'),'}]', '}') AS instance,
REPLACE(REPLACE(prediction, '[{', '{'),'}]', '}') AS prediction,
FROM step1
JOIN UNNEST(step1.instance_list) AS instance
WITH OFFSET AS f1
JOIN UNNEST(step1.prediction_list) AS prediction
WITH OFFSET AS f2
ON f1=f2
),
step3 AS
(
SELECT
model,
model_version,
time,
@json_features_extraction,
@json_prediction_extraction
FROM step2
)
SELECT *
FROM step3
'''
sql_script = sql_script.replace("@project", PROJECT_ID)
sql_script = sql_script.replace("@dataset_name", BQ_DATASET_NAME)
sql_script = sql_script.replace("@table_name", BQ_TABLE_NAME)
sql_script = sql_script.replace("@view_name", view_name)
sql_script = sql_script.replace("@model_name", MODEL_NAME)
sql_script = sql_script.replace("@version", VERSION_NAME)
sql_script = sql_script.replace("@input_features", input_features)
sql_script = sql_script.replace("@json_features_extraction", json_features_extraction)
sql_script = sql_script.replace("@json_prediction_extraction", json_prediction_extraction)
Explanation: 2. Generate the CREATE VIEW script
End of explanation
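To make the string building above concrete, this is what the helper chain produces for one numeric feature (an illustrative check only):

example = _add_alias(
    _cast_to_numeric(_replace_brackets(_extract_json('instance', 'Elevation'))),
    'Elevation')
print(example)
# CAST(REPLACE(REPLACE(JSON_EXTRACT(instance, '$.Elevation'), ']', ''), '[','') AS NUMERIC) AS Elevation
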
print(sql_script)
Explanation: Optionally, print the generated script:
End of explanation
client = bigquery.Client(PROJECT_ID)
query_job = client.query(query=sql_script)
query_job.result()  # block until the CREATE OR REPLACE VIEW statement has finished
print("View was created or replaced.")
Explanation: 3. Execute the CREATE VIEW script
End of explanation
query = '''
SELECT * FROM
`{}.{}`
LIMIT {}
'''.format(BQ_DATASET_NAME, view_name, 3)
pd.io.gbq.read_gbq(
query, project_id=PROJECT_ID).T
Explanation: 4. Query the view
End of explanation |
4,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Small scale example
Step1: Example with more savings, but slower to optimize
Step2: Look at the recommendations | Python Code:
def func(a, b, c):
res = tf.einsum('ijk,ja,kb->iab', a, b, c) + 1
res = tf.einsum('iab,kb->iak', res, c)
return res
a = tf.random_normal((10, 11, 12))
b = tf.random_normal((11, 13))
c = tf.random_normal((12, 14))
# res = func(a, b, c)
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c)
res1 = func(a, b, c)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c)
%timeit sess.run(res2)
# Check that the results of optimized and the original function are the same.
np.testing.assert_allclose(*sess.run([res1, res2]), rtol=1e-5, atol=1e-5)
Explanation: Small scale example
End of explanation
def func(a, b, c, d):
res = tf.einsum('si,sj,sk,ij->s', a, b, d, c)
res += tf.einsum('s,si->s', res, a)
return res
a = tf.random_normal((100, 101))
b = tf.random_normal((100, 102))
c = tf.random_normal((101, 102))
d = tf.random_normal((100, 30))
orders, optimized_func = tf_einsum_opt.optimizer(func, sess, a, b, c, d)
res1 = func(a, b, c, d)
%timeit sess.run(res1)
res2 = optimized_func(a, b, c, d)
%timeit sess.run(res2)
Explanation: Example with more savings, but slower to optimize
End of explanation
orders
Explanation: Look at the recommendations:
End of explanation |
4,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Energy and RMSE
The energy (Wikipedia of a signal corresponds to the total magntiude of the signal. For audio signals, that roughly corresponds to how loud the signal is. The energy in a signal is defined as
$$ \sum_n \left| x(n) \right|^2 $$
The root-mean-square energy (RMSE) in a signal is defined as
$$ \sqrt{ \frac{1}{N} \sum_n \left| x(n) \right|^2 } $$
Let's load a signal
Step1: Listen to the signal
Step2: Plot the signal
Step3: Compute the short-time energy using a list comprehension
Step4: Compute the RMSE using librosa.feature.rmse
Step5: Plot both the energy and RMSE along with the waveform
Step6: Questions
Write a function, strip, that removes leading silence from a signal. Make sure it works for a variety of signals recorded in different environments and with different signal-to-noise ratios (SNR).
Step7: Let's see if it works. | Python Code:
x, sr = librosa.load('audio/simple_loop.wav')
sr
x.shape
librosa.get_duration(x, sr)
Explanation: ← Back to Index
Energy and RMSE
The energy (Wikipedia of a signal corresponds to the total magntiude of the signal. For audio signals, that roughly corresponds to how loud the signal is. The energy in a signal is defined as
$$ \sum_n \left| x(n) \right|^2 $$
The root-mean-square energy (RMSE) in a signal is defined as
$$ \sqrt{ \frac{1}{N} \sum_n \left| x(n) \right|^2 } $$
Let's load a signal:
End of explanation
ipd.Audio(x, rate=sr)
Explanation: Listen to the signal:
End of explanation
librosa.display.waveplot(x, sr=sr)
Explanation: Plot the signal:
End of explanation
hop_length = 256
frame_length = 512
energy = numpy.array([
sum(abs(x[i:i+frame_length]**2))
for i in range(0, len(x), hop_length)
])
energy.shape
Explanation: Compute the short-time energy using a list comprehension:
End of explanation
rmse = librosa.feature.rmse(x, frame_length=frame_length, hop_length=hop_length, center=True)
rmse.shape
rmse = rmse[0]
Explanation: Compute the RMSE using librosa.feature.rmse:
End of explanation
frames = range(len(energy))
t = librosa.frames_to_time(frames, sr=sr, hop_length=hop_length)
librosa.display.waveplot(x, sr=sr, alpha=0.4)
plt.plot(t, energy/energy.max(), 'r--') # normalized for visualization
plt.plot(t[:len(rmse)], rmse/rmse.max(), color='g') # normalized for visualization
plt.legend(('Energy', 'RMSE'))
Explanation: Plot both the energy and RMSE along with the waveform:
End of explanation
def strip(x, frame_length, hop_length):
# Compute RMSE.
rmse = librosa.feature.rmse(x, frame_length=frame_length, hop_length=hop_length, center=True)
# Identify the first frame index where RMSE exceeds a threshold.
thresh = 0.01
frame_index = 0
while rmse[0][frame_index] < thresh:
frame_index += 1
# Convert units of frames to samples.
start_sample_index = librosa.frames_to_samples(frame_index, hop_length=hop_length)
# Return the trimmed signal.
return x[start_sample_index:]
Explanation: Questions
Write a function, strip, that removes leading silence from a signal. Make sure it works for a variety of signals recorded in different environments and with different signal-to-noise ratios (SNR).
End of explanation
y = strip(x, frame_length, hop_length)
ipd.Audio(y, rate=sr)
librosa.display.waveplot(y, sr=sr)
Explanation: Let's see if it works.
End of explanation |
4,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Meta-analysis of incubation time for Covid-19 with pediatric subset
A wide variety of estimates of the incubation time have been seen in the rapidly evolving Covid-19 literature. As of March 27th, 2020 the most common number reported in the popular press appears to be the 5.2 day estimate. While the variation in point estimates continues to be substantial, an average range of 5-6 days appears to be appropriate (for the mean). There has also been a limited discussion on the possibility of age-based differences in the incubation period. This kernel will go through the CORD-19 dataset and extract relevant sentences in regards to differents moments of the the incubation period (mean, median, lower-bound, and upper-bound). At the end of the notebook, a subset of relevant papers that mention pediatric populations will be explored. As of now, there does not appear to be strong evidence for age related-difference in incubation periods.
Several technical notes.
1. The df_txt.csv load in the script below was generated with a similar method to xhlulu's kernel.
2. A utility script is being used to help with the parsing. Please download the kernel for the details of these functions.
3. After the relevant sentences are found, a function record_vals is used to allow the user to <u>manually</u> select the sentences with "y(es)/n(o)"
4. I manually annotated the moments in the sentences
Step1: Section 1
Step2: Section 2
Step3: Section 4
Step4: Section 5
Step5: The figure above shows thats the point estimates, especially for the mean, are quite noisy and range from just below 3 days, to just above 8 days.
Section 6
Step6: Three articles show up of interest | Python Code:
import numpy as np
import pandas as pd
import os
import re
import seaborn as sns
from datetime import datetime as dt
from support_funs_incubation import stopifnot, uwords, idx_find, find_beside, ljoin, sentence_find, record_vals
!pip install ansicolors
# Takes a tuple (list(idx), sentence) and will print in red anything in the index
def color_printer(idx_sentence):
indices = idx_sentence[0]
sentence = idx_sentence[1]
mat = np.zeros([2 * len(indices) + 1, 2], dtype=int)
for ii, idx in enumerate(indices):
ri = 2 * ii + 1
mat[ri - 1, 1] = idx[0]
mat[ri, :] = idx
mat[ri + 1, 0] = idx[1]
if ii + 1 == len(indices):
mat[ri + 1, 1] = len(sentence)
output = ''
for ii in range(mat.shape[0]):
if ii % 2 == 0:
output = output + sentence[mat[ii, 0]:mat[ii, 1]]
else:
output = output + red(sentence[mat[ii, 0]:mat[ii, 1]])
output = output.replace('\n', '')
print(output)
from colors import red, black, white # ansicolors
dir_base = os.getcwd()
dir_data = os.path.join(dir_base,'..','input','incubation')
# load data
df = pd.read_csv(os.path.join(dir_data, 'df_txt.csv'))
df['date'] = pd.to_datetime(df.date)
print(df.shape)
# remove prefix from some abstracts: publically funded repositories.... etc
pref = 'COVID-19 resource centre remains active.'
for ii, aa in enumerate(df.abstract):
if isinstance(aa, float): # nan
continue
hit = re.search(pref, aa)
if hit:
df.abstract.iloc[ii] = aa[hit.span()[1] + 1:]
Explanation: Meta-analysis of incubation time for Covid-19 with pediatric subset
A wide variety of estimates of the incubation time have been seen in the rapidly evolving Covid-19 literature. As of March 27th, 2020 the most common number reported in the popular press appears to be the 5.2 day estimate. While the variation in point estimates continues to be substantial, an average range of 5-6 days appears to be appropriate (for the mean). There has also been a limited discussion on the possibility of age-based differences in the incubation period. This kernel will go through the CORD-19 dataset and extract relevant sentences in regards to differents moments of the the incubation period (mean, median, lower-bound, and upper-bound). At the end of the notebook, a subset of relevant papers that mention pediatric populations will be explored. As of now, there does not appear to be strong evidence for age related-difference in incubation periods.
Several technical notes.
1. The df_txt.csv load in the script below was generated with a similar method to xhlulu's kernel.
2. A utility script is being used to help with the parsing. Please download the kernel for the details of these functions.
3. After the relevant sentences are found, a function record_vals is used to allow the user to <u>manually</u> select the sentences with "y(es)/n(o)"
4. I manually annotated the moments in the sentences
End of explanation
# Find ways in which covid and ncov are referred to
regex_ncov = r'(20)?19(\-)?ncov|ncov(\-)?(20)?19'
regex_covid = r'covid(\-)?(20)?19'
# row indices
idx_covid_abs = np.where(idx_find(df.abstract, regex_covid))[0]
idx_ncov_abs = np.where(idx_find(df.abstract, regex_ncov))[0]
idx_union_abs = np.union1d(idx_covid_abs, idx_ncov_abs)
di_regex = {'covid': regex_covid, 'ncov': regex_ncov}
di_idx = {'covid': idx_covid_abs, 'ncov': idx_ncov_abs}
print('%i possible "covid" articles (using abstract)\n'
'%i possible nCoV articles (using abstract)\n'
'Union: %i, interection: %i' %
(len(idx_covid_abs), len(idx_ncov_abs), len(idx_union_abs),
len(np.intersect1d(idx_covid_abs, idx_ncov_abs))))
dfmt = '%B %d, %Y'
date_ncov_min = df.date.iloc[idx_ncov_abs].min().strftime(dfmt)
date_ncov_max = df.date.iloc[idx_ncov_abs].max().strftime(dfmt)
date_covid_min = df.date.iloc[idx_covid_abs].min().strftime(dfmt)
date_covid_max = df.date.iloc[idx_covid_abs].max().strftime(dfmt)
print('First and last nCoV article: %s & %s\n'
'First and last covid-19 article: %s & %s' %
(date_ncov_min, date_ncov_max, date_covid_min, date_covid_max))
holder = []
for term in di_regex:
regex = di_regex[term]
idx = di_idx[term]
dat_abstract = uwords(df.abstract.iloc[idx], regex).assign(doc='abstract')
dat_txt = uwords(df.txt.iloc[idx], regex).assign(doc='txt')
dat = pd.concat([dat_abstract, dat_txt])
dat = dat.groupby('term').n.sum().reset_index()
dat.insert(0, 'tt', term)
holder.append(dat)
df_term = pd.concat(holder).reset_index(drop=True)
# Term usage
print(df_term)
Explanation: Section 1: Summary statistics
The code block below will calculate the number of 'covid' and 'nCoV' mentions in the texts and abstracts of the corpus. The first journal articles referencing 'covid' and 'nCoV' start on January 1st, 2020. The last relevant articles are around a week old as of March 27, 2020. The majority of articles use either 'Covid-2019' or '2019-nCoV', although there are some exceptions.
End of explanation
pat_peds = r'infant|child|pediatric|age\<'
idx_incubation = []
idx_peds = []
for ii in idx_union_abs:
abs, txt = df.abstract[ii], df.txt[ii]
corpus = abs + '. ' + txt
if re.search(r'incubation', corpus, re.IGNORECASE) is not None:
idx_incubation.append(ii)
if re.search(pat_peds, corpus, re.IGNORECASE) is not None:
idx_peds.append(ii)
idx_incubation_peds = np.intersect1d(idx_incubation, idx_peds)
print('%i incubation articles, with %i pediatric articles, %i overlap' %
(len(idx_incubation), len(idx_peds), len(idx_incubation_peds)))
# What is the most common word to appear before/after incubation?
holder_l, holder_r = [], []
for ii in idx_incubation:
abs, txt = df.abstract[ii], df.txt[ii]
corpus = abs + '. ' + txt
rterm = find_beside(corpus, 'incubation', tt='right')
lterm = find_beside(corpus, 'incubation', tt='left')
holder_r.append(rterm)
holder_l.append(lterm)
dat_suffix = pd.Series(ljoin(holder_r)).str.lower().value_counts().reset_index().rename(
columns={0: 'n', 'index': 'suffix'})
dat_prefix = pd.Series(ljoin(holder_l)).str.lower().value_counts().reset_index().rename(
columns={0: 'n', 'index': 'suffix'})
print(dat_suffix.head(50))
print(dat_prefix.head(50))
suffix = ['period', 'time', 'distribution', 'duration', 'interval', 'rate', 'mean', 'median', 'estimation']
suffix = [z + r'(s)?' for z in suffix]
pat_incubation = [r'incubation\s'+z for z in suffix]
Explanation: Section 2: Incubation period
To see how 'incubation' is being used in the corpus, it is useful to see the preceeding and succeeding word. While 'incubation period' is the most common expression, others such as 'incubation time' or 'incubation period' are in use too.
End of explanation
do_run = False
if do_run:
keepers = []
for jj, ii in enumerate(idx_incubation):
abs, txt = df.abstract[ii], df.txt[ii]
corpus = abs + '. ' + txt
idx_sentences = sentence_find(corpus, pat_incubation)
if len(idx_sentences) > 0:
try:
dd = df.loc[ii,'date'].strftime('%B %d, %Y')
except:
dd = 'NaN'
print('---- Title: %s, date: %s, index: %i (%i of %i) ----' %
(df.loc[ii, 'title'], dd , ii,jj+1,len(idx_incubation)))
tmp = record_vals(idx_sentences)
dat = pd.DataFrame(tmp,columns=['pos','txt']).assign(idx = ii)
keepers.append(dat)
dat_sentences = pd.concat(keepers)
dat_sentences = dat_sentences[['idx','pos','txt']]
dat_sentences['txt'] = dat_sentences.txt.str.replace('\n','')
dat_sentences = df.iloc[idx_incubation][['source','title','doi','date']].rename_axis('idx').reset_index().merge(
dat_sentences,on='idx',how='right')
dat_sentences.to_csv(os.path.join(dir_output,'sentence_flag.csv'),index=False)
Explanation: Section 4: Manual curation
Now that a total of 194 articles have been found with relevant sentences, a manual curation will be performed to select which sentences are relevant and allow the user to annotate the data with the stated moments. Sentences were selected if they estimated an incubation period from actual data rather than used existing estimates.
End of explanation
df_moments = pd.read_csv(os.path.join(dir_data,'sentence_flag.csv'))
df_txt = df_moments[['title','pos','txt']].copy()
df_moments.drop(columns = ['pos','txt'],inplace=True)
df_moments['date'] = pd.to_datetime(df_moments.date)
moments = df_moments.moments.str.split('\;',expand=True).reset_index().melt('index')
moments = moments[moments.value.notnull()].reset_index(drop=True).drop(columns='variable')
tmp = moments.value.str.split('\=',expand=True)
moments = moments.drop(columns='value').assign(moment=tmp.iloc[:,0], val=tmp.iloc[:,1].astype(float))
df_moments = df_moments.drop(columns='moments').reset_index().merge(moments,on='index',how='right').drop(columns='index')
# Print off key sentences
print('A total of %i unique studies' % (df_moments.title.unique().shape[0]) )
print('\n\n')
for ii, rr in df_txt.iterrows():
print('----- Article: %s -----' % rr['title'] )
idx = [int(z) for z in re.findall(r'\d+', rr['pos'])]
idx = np.array(idx).reshape([int(len(idx) / 2), 2])
idx = [tuple(idx[i]) for i in range(idx.shape[0])]
sentence = rr['txt']
idx_sentence = (idx,sentence)
color_printer(idx_sentence)
print('\n')
di_moments = {'lb':'Lower-bound','ub':'Upper-bound','mu':'Mean','med':'Median',
'q2':'25th percentile','q3':'75th percentile'}
# Plot the moments over time
g = sns.FacetGrid(data=df_moments.assign(moment=lambda x: x.moment.map(di_moments)),
col='moment',col_wrap=3,sharex=True,sharey=False,height=4,aspect=1)
g.map(sns.lineplot,'date','val',ci=None)
g.map(sns.scatterplot,'date','val')
g.set_xlabels('');g.set_ylabels('Days')
g.fig.suptitle(t='Figure: Estimate of Incubation period moments over time',size=16,weight='bold')
g.fig.subplots_adjust(top=0.85)
for ax in g.axes.flat:
ax.set_title(ax.title._text.replace('moment = ', ''))
# dates = [dt.strftime(dt.strptime(z,'%Y-%m-%d'),'%b-%d, %y') for z in dates]
xticks = [737425., 737439., 737456., 737470., 737485., 737499.]
lbls = ['Jan-01, 20', 'Jan-15, 20', 'Feb-01, 20', 'Feb-15, 20', 'Mar-01, 20', 'Mar-15, 20']
g.set_xticklabels(rotation=45,labels=lbls)
g.set(xticks = xticks)
ave = df_moments.groupby('moment').val.mean().reset_index().rename(columns={'moment':'Moment','val':'Average'}).assign(Moment=lambda x: x.Moment.map(di_moments))
print(np.round(ave,1))
Explanation: Section 5: Analyze moments of incubation period
Load the manually annotated data with the added moments column.
End of explanation
# Get the index
df_match = df_txt.merge(df,on='title',how='left').rename(columns={'txt_x':'sentence','txt_y':'txt_full'})
for jj, rr in df_match.iterrows():
try:
dd = rr['date'].strftime('%B %d, %Y')
except:
dd = 'NaN'
corpus = rr['abstract'] + '. ' + rr['txt_full']
peds_sentences = sentence_find(corpus, pat_peds)
incubation_sentences = sentence_find(corpus, pat_incubation)
if len(peds_sentences) > 0 and len(incubation_sentences) > 0:
print('---- Title: %s, date: %s (%i of %i) ----' %
(rr['title'], dd, jj+1, df_match.shape[0]))
for ii_ss in peds_sentences + incubation_sentences:
color_printer(ii_ss)
print('\n')
Explanation: The figure above shows thats the point estimates, especially for the mean, are quite noisy and range from just below 3 days, to just above 8 days.
Section 6: Pediatric references
Using the 30 articles found above, we can now see which papers might shed any clues on the incubation period for pediatric populations.
End of explanation
from PIL import Image
from matplotlib import pyplot as plt
image = Image.open(os.path.join(dir_data,"age_incubation.png"))
fig, ax = plt.subplots(figsize=(18,9))
ax.imshow(image)
fig.suptitle("Figure 3: from (Han 2020) ", fontsize=18,weight='bold')
fig.subplots_adjust(top=1.1)
Explanation: Three articles show up of interest:
(Han 2020) Estimate the incubation period of coronavirus 2019 (COVID-19)
(Zhang et al 2020) Clinical Characteristics of 34 Children with Coronavirus Disease-2019 in the West of China: a Multiple-center Case Series
(Henry and Oliveira 2020) Preliminary epidemiological analysis on children and adolescents with novel coronavirus disease 2019 outside Hubei Province, China: an observational study utilizing crowdsourced data
The first paper by (Han 2020) suggests that the incubation period is shorter for patients under the age of 40. The distribution of data points from Figure 3 of the paper appears to show a relatively short incubation period for those under 25. However there are onyl 59 patients in total for this study.
End of explanation |
4,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5 次元削減でデータを圧縮する
特徴部分空間(feature extraction)を作成する
主成分分析(PCA
Step1: 5.1.2 特徴変換
Step2: 5.1.3 scikit-learn の主成分分析
Step3: 5.2 線形判別分析による教師ありデータ圧縮
d次元のデータセットを標準化する(dは特徴量の個数)
クラスごとにd次元の平均ベクトルを計算する
クラス間変動行列SBと、クラス内変動行列SWを生成する
行列 SW^-1 SB の固有ベクトルと対応する固有値を計算する
d x k次元の変換行列Wを生成するために、最も大きいk個の固有値に対応するk個の固有ベクトルを選択する
変換行列Wを使ってサンプルを新しい特徴部分空間へ射影する
5.2.1 変動行列を計算する
クラス内変動行列(within-class scatter matrix)
クラス間変動行列(between-class scatter matrix)
Step4: 5.2.2 新しい特徴部分空間の線形判別を選択する
Step5: 5.3.2 新しい特徴空間にサンプルを射影する
Step6: 5.2.4 scikit-learn による LDA
Step8: 5.3 カーネル主成分分析を使った非線形写像
カーネル化したPCA(kernel PCA)
5.3.1 カーネル関数とカーネルトリック
5.3.2 Python でカーネル主成分分析を実装する
Step9: 例1
Step10: 例2
Step11: 5.3.3 新しいデータ点を射影する
5.3.34 scikit-learnのカーネル主成分分析 | Python Code:
from IPython.core.display import display
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
import pandas as pd
# http://archive.ics.uci.edu/ml/datasets/Wine
df_wine = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/wine/wine.data', header=None)
# 1. d次元のデータセットを標準化する
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# 特徴量とクラスラベルを別々に抽出
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
# 全体の30%をテストデータにする
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train)
X_test_std = sc.transform(X_test)
import numpy as np
# 2. 共分散行列を作成
cov_mat = np.cov(X_train_std.T)
# 3. 固有値と固有ベクトルを計算
# linalg.eig関数は固有分解(eigendecomposition)を実行する関数
eigen_vals, eigen_vecs = np.linalg.eig(cov_mat)
eigen_vals
# 固有値を合計
tot = sum(eigen_vals)
# 分散説明率を計算
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
display("var_exp:", var_exp)
# 分散説明率の累積和を取得
cum_var_exp = np.cumsum(var_exp)
display("cum_var_exp:", cum_var_exp)
import matplotlib.pyplot as plt
# 分散説明率の棒グラフを作成
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center', label='individual explained variance')
# 分散説明率の累積和の階段グラフを作成
plt.step(range(1, 14), cum_var_exp, where='mid', label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.show()
Explanation: 5 Compressing Data via Dimensionality Reduction
Creating a feature subspace (feature extraction)
Principal component analysis (PCA: Principal Component Analysis)
Linear discriminant analysis (LDA: Linear Discriminant Analysis)
Kernel principal component analysis
5.1 Unsupervised dimensionality reduction via principal component analysis
Standardize the d-dimensional dataset
Construct the covariance matrix of the standardized dataset
Decompose the covariance matrix into its eigenvectors and eigenvalues
Select the k eigenvectors that correspond to the k largest eigenvalues
Construct the projection matrix W from the top k eigenvectors
Transform the d-dimensional input dataset X with the projection matrix W to obtain the new k-dimensional feature subspace
5.1.1 Obtaining the eigenpairs of the covariance matrix
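As a quick sanity check on the cell above (a minimal sketch that only reuses the cov_mat, eigen_vals, eigen_vecs and var_exp variables already defined in this notebook), each eigenpair should satisfy the eigenvalue equation and the explained-variance ratios should sum to 1:
# Verify the first eigenpair: C v = lambda v
v0, lam0 = eigen_vecs[:, 0], eigen_vals[0]
print(np.allclose(cov_mat.dot(v0), lam0 * v0))  # expected: True
# The explained-variance ratios are eigenvalues divided by their total, so they add up to 1
print(np.isclose(sum(var_exp), 1.0))            # expected: True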
End of explanation
# (固有値, 固有ベクトル)のタプルのリストを作成
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:,i]) for i in range(len(eigen_vals))]
# (固有値, 固有ベクトル)のタプルを大きいものから順に並び替え
eigen_pairs.sort(reverse=True)
# 4. 最も大きいk個の固有値に対するk個の固有ベクトルを選択する(ここでは k = 2 とする)
# 5. 上位k個の固有ベクトルから射影行列Wを作成する
w = np.hstack((eigen_pairs[0][1][:, np.newaxis], eigen_pairs[1][1][:, np.newaxis]))
display("Matrix W:", w)
# x' = xW
display(X_train_std[0].dot(w))
# 6. 射影行列Wを使ってd次元の入力データセットXを変換し、新しいk次元の特徴部分空間を取得する
# X' = XW
X_train_pca = X_train_std.dot(w)
# 2次元の散布図としてプロット
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
# クラスラベル、点の色、点の種類の組み合わせからなるリストを生成してプロット
for label, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train==label, 0], X_train_pca[y_train==label, 1], c=c, label=label, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.show()
Explanation: 5.1.2 Feature transformation
End of explanation
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# マーカーとカラーマップの用意
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# 決定領域のプロット
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
# グリッドポイントの生成
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
# 各特徴量を1次元配列に変換して予測を実行
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
# 予測結果を元のグリッドポイントのデータサイズに変換
Z = Z.reshape(xx1.shape)
# グリッドポイントの等高線のプロット
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
# 軸の範囲の設定
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# クラスごとにサンプルをプロット
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y==cl, 0], y=X[y==cl, 1], alpha=0.8, c=cmap(idx), marker=markers[idx], label=cl)
from sklearn.linear_model import LogisticRegression
from sklearn.decomposition import PCA
# 主成分数を指定して、PCAのインスタンスを生成
pca = PCA(n_components=2)
# ロジスティック回帰のインスタンスを生成
lr = LogisticRegression()
# トレーニングデータやテストデータをPCAに適合させる
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
# トレーニングデータをロジスティック回帰に適合させる
lr.fit(X_train_pca, y_train)
# 決定境界をプロット
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.show()
# 決定境界をプロット
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.show()
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
# 分散説明率を計算
pca.explained_variance_ratio_
Explanation: 5.1.3 Principal component analysis in scikit-learn
End of explanation
# 1. d次元のデータセットを標準化する(dは特徴量の個数)
# X_train_std, X_test_std は作成済
# 2. クラスごとにd次元の平均ベクトルを計算する
np.set_printoptions(precision=4)
mean_vecs = []
for label in range(1, 4):
mean_vecs.append(np.mean(X_train_std[y_train==label], axis=0))
print('MV {}:, {}\n'.format(label, mean_vecs[label - 1]))
# 3. クラス間変動行列SBと、クラス内変動行列SWを生成する
d = 13 # 特徴量の個数
# クラス内変動行列 SW
S_W = np.zeros((d, d)) # 13 x 13 で値がすべて0の行列を生成
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.zeros((d, d))
for row in X_train_std[y_train == label]:
row, mv = row.reshape(d, 1), mv.reshape(d, 1)
class_scatter += (row - mv).dot((row - mv).T)
S_W += class_scatter
print('Within-class scatter matrix: {}x{}'.format(S_W.shape[0], S_W.shape[1]))
# クラスラベルの一様に分散していない
print('Class label distribution: {}'.format(np.bincount(y_train)[1:]))
d = 13
# クラス内変動行列 SW
S_W = np.zeros((d, d))
for label, mv in zip(range(1, 4), mean_vecs):
class_scatter = np.cov(X_train_std[y_train == label].T)
S_W += class_scatter
print('Scaled within-class scatter matrix: {}x{}'.format(S_W.shape[0], S_W.shape[1]))
# クラス間変動行列SB
mean_overall = np.mean(X_train_std, axis=0)
d = 13
S_B = np.zeros((d, d))
for i, mean_vec in enumerate(mean_vecs):
n = X_train[y_train==i + 1, :].shape[0]
mean_vec = mean_vec.reshape(d, 1)
mean_overall = mean_overall.reshape(d, 1)
S_B += n * (mean_vec - mean_overall).dot((mean_vec - mean_overall).T)
print('Between-class scatter matrix: {}x{}'.format(S_B.shape[0], S_B.shape[1]))
Explanation: 5.2 Supervised data compression via linear discriminant analysis
Standardize the d-dimensional dataset (d is the number of features)
For each class, compute the d-dimensional mean vector
Construct the between-class scatter matrix SB and the within-class scatter matrix SW
Compute the eigenvectors and corresponding eigenvalues of the matrix SW^-1 SB
Select the k eigenvectors that correspond to the k largest eigenvalues to build the d x k-dimensional transformation matrix W
Project the samples onto the new feature subspace using the transformation matrix W
5.2.1 Computing the scatter matrices
Within-class scatter matrix
Between-class scatter matrix
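To make the two definitions concrete, here is a small hand-checkable sketch on hypothetical 2-D toy data (the toy arrays and the _toy variable names are illustration-only assumptions, not part of the Wine example):
import numpy as np
# Two classes with three 2-D samples each
X_a = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
X_b = np.array([[6.0, 5.0], [7.0, 7.0], [8.0, 6.0]])
m_a, m_b = X_a.mean(axis=0), X_b.mean(axis=0)
m_all = np.vstack([X_a, X_b]).mean(axis=0)
# Within-class scatter: sum over classes of sum_x (x - m_c)(x - m_c)^T
S_W_toy = sum((x - m_a)[:, None].dot((x - m_a)[None, :]) for x in X_a) + \
          sum((x - m_b)[:, None].dot((x - m_b)[None, :]) for x in X_b)
# Between-class scatter: sum over classes of n_c (m_c - m)(m_c - m)^T
S_B_toy = len(X_a) * (m_a - m_all)[:, None].dot((m_a - m_all)[None, :]) + \
          len(X_b) * (m_b - m_all)[:, None].dot((m_b - m_all)[None, :])
print(S_W_toy)
print(S_B_toy)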
End of explanation
X_train[y_train==2, :].shape[0]
# 4. 行列 SW^-1 SB の固有ベクトルと対応する固有値を計算する
# inv関数で逆行列、dot関数で行列積、eig関数で固有値を計算
eigen_vals, eigen_vecs = np.linalg.eig(np.linalg.inv(S_W).dot(S_B))
# (固有値, 固有ベクトル)のタプルのリストを作成
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i]) for i in range(len(eigen_vals))]
# (固有値, 固有ベクトル)のタプルを大きいものから順に並び替え
eigen_pairs = sorted(eigen_pairs, key=lambda k: k[0], reverse=True)
for eigen_val in eigen_pairs:
print(eigen_val[0])
# 固有値の実数部の総和を求める
tot = sum(eigen_vals.real)
# 分散説明率とその累積和を計算
discr = [(i / tot) for i in sorted(eigen_vals.real, reverse=True)]
display("discr:", discr)
cum_discr = np.cumsum(discr)
display("cum_discr:", cum_discr)
# 分散説明率の棒グラフを作成
plt.bar(range(1, 14), discr, alpha=0.5, align='center', label='individual "discriminability"')
# 分散説明率の累積和の階段グラフを作成
plt.step(range(1, 14), cum_discr, where='mid', label='cumulative "discriminability"')
plt.ylabel('"discriminability" ratio')
plt.xlabel('Linear Discriminants')
plt.ylim([-0.1, 1.1])
plt.legend(loc='best')
plt.show()
# 6. 変換行列Wを使ってサンプルを新しい特徴部分空間へ射影する
# 2つの固有ベクトルから変換行列を作成
w = np.hstack((eigen_pairs[0][1][:, np.newaxis].real, eigen_pairs[1][1][:, np.newaxis].real))
display("Matrix W:", w)
Explanation: 5.2.2 Selecting linear discriminants for the new feature subspace
End of explanation
# 標準化したトレーニングデータに変換行列をかける
X_train_lda = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
# クラスラベル、点の色、点の種類の組み合わせからなるリストを生成してプロット
for label, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_lda[y_train==label, 0] * -1, X_train_lda[y_train==label, 1] * -1, c=c, label=label, marker=m)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.show()
Explanation: 5.2.3 Projecting samples onto the new feature space
End of explanation
if Version(sklearn_version) < '0.18':
from sklearn.lda import LDA
else:
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
# 次元数を指定して、LDAのインスタンスを生成
lda = LDA(n_components=2)
X_train_lda = lda.fit_transform(X_train_std, y_train)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression()
lr.fit(X_train_lda, y_train)
# 決定境界をプロット
plot_decision_regions(X_train_lda, y_train, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.show()
X_test_lda = lda.transform(X_test_std)
plot_decision_regions(X_test_lda, y_test, classifier=lr)
plt.xlabel('LD 1')
plt.ylabel('LD 2')
plt.legend(loc='lower left')
plt.show()
Explanation: 5.2.4 LDA with scikit-learn
End of explanation
from scipy.spatial.distance import pdist, squareform
from scipy import exp
from scipy.linalg import eigh
import numpy as np
def rbf_kernel_pca(X, gamma, n_components):
RBF kernel PCA implementation
Parameters
----------
X: [NumPy ndarray], shape = [n_samples, n_features]
gamma: float
Tuning parameter of the RBF kernel
n_components: int
Number of principal components to return
Returns
------
X_pc: [NumPy ndarray], shape = [n_samples, n_features]
Projected dataset
# Compute pairwise squared Euclidean distances in the M x N dimensional dataset
sq_dists = pdist(X, 'sqeuclidean')
# Convert the pairwise distances into a square matrix
mat_sq_dists = squareform(sq_dists)
# Compute the symmetric kernel matrix
K = exp(-gamma * mat_sq_dists)
# Center the kernel matrix
N = K.shape[0]
one_n = np.ones((N, N)) / N
K = K - one_n.dot(K) - K.dot(one_n) + one_n.dot(K).dot(one_n)
# Obtain eigenpairs from the centered kernel matrix
# scipy.linalg.eigh returns them in ascending order
eigvals, eigvecs = eigh(K)
# Collect the top k eigenvectors (projected samples)
X_pc = np.column_stack([eigvecs[:, -i] for i in range(1, n_components + 1)])
return X_pc
Explanation: 5.3 Using kernel principal component analysis for nonlinear mappings
Kernelized PCA (kernel PCA)
5.3.1 Kernel functions and the kernel trick
5.3.2 Implementing a kernel principal component analysis in Python
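To connect the implementation above with the kernel-trick idea, here is a tiny hedged sketch (the two toy vectors are illustration-only assumptions; gamma mirrors the value used later): a single entry of the RBF kernel matrix is just exp(-gamma * squared Euclidean distance), computed without ever forming the high-dimensional feature map explicitly.
import numpy as np
x_vec = np.array([1.0, 2.0])
y_vec = np.array([2.0, 0.0])
gamma = 15
k_xy = np.exp(-gamma * np.sum((x_vec - y_vec) ** 2))
print(k_xy)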
End of explanation
from sklearn.datasets import make_moons
import matplotlib.pyplot as plt
# 2つの半月形データを作成
X, y = make_moons(n_samples=100, random_state=123)
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.show()
# 標準のPCAを使ってみる
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y==0, 0], X_spca[y==0, 1], color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y==1, 0], X_spca[y==1, 1], color='blue', marker='o', alpha=0.5)
# 2番目のグラフ領域に散布図をプロット
ax[1].scatter(X_spca[y==0, 0], np.zeros((50, 1)) + 0.02, color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y==1, 0], np.zeros((50, 1)) - 0.02, color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.show()
from matplotlib.ticker import FormatStrFormatter
# カーネルPCA関数を使う
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1], color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1], color='blue', marker='o', alpha=0.5)
# 2番目のグラフ領域に散布図をプロット
ax[1].scatter(X_kpca[y==0, 0], np.zeros((50, 1)) + 0.02, color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((50, 1)) - 0.02, color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
ax[0].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
ax[1].xaxis.set_major_formatter(FormatStrFormatter('%0.1f'))
plt.show()
Explanation: Example 1: Separating half-moon shapes
End of explanation
from sklearn.datasets import make_circles
import matplotlib.pyplot as plt
# 同心円用のデータを作成してプロット
X, y = make_circles(n_samples=1000, random_state=123, noise=0.1, factor=0.2)
plt.scatter(X[y==0, 0], X[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X[y==1, 0], X[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.show()
# 標準のPCAを使ってみる
from sklearn.decomposition import PCA
scikit_pca = PCA(n_components=2)
X_spca = scikit_pca.fit_transform(X)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_spca[y==0, 0], X_spca[y==0, 1], color='red', marker='^', alpha=0.5)
ax[0].scatter(X_spca[y==1, 0], X_spca[y==1, 1], color='blue', marker='o', alpha=0.5)
# 2番目のグラフ領域に散布図をプロット
ax[1].scatter(X_spca[y==0, 0], np.zeros((500, 1)) + 0.02, color='red', marker='^', alpha=0.5)
ax[1].scatter(X_spca[y==1, 0], np.zeros((500, 1)) - 0.02, color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.show()
# カーネルPCA関数を使う
X_kpca = rbf_kernel_pca(X, gamma=15, n_components=2)
fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(7, 3))
ax[0].scatter(X_kpca[y==0, 0], X_kpca[y==0, 1], color='red', marker='^', alpha=0.5)
ax[0].scatter(X_kpca[y==1, 0], X_kpca[y==1, 1], color='blue', marker='o', alpha=0.5)
# 2番目のグラフ領域に散布図をプロット
ax[1].scatter(X_kpca[y==0, 0], np.zeros((500, 1)) + 0.02, color='red', marker='^', alpha=0.5)
ax[1].scatter(X_kpca[y==1, 0], np.zeros((500, 1)) - 0.02, color='blue', marker='o', alpha=0.5)
ax[0].set_xlabel('PC1')
ax[0].set_ylabel('PC2')
ax[1].set_ylim([-1, 1])
ax[1].set_yticks([])
ax[1].set_xlabel('PC1')
plt.show()
Explanation: Example 2: Separating concentric circles
End of explanation
from sklearn.decomposition import KernelPCA
X, y = make_moons(n_samples=100, random_state=123)
scikit_kpca = KernelPCA(n_components=2, kernel='rbf', gamma=15)
X_skernpca = scikit_kpca.fit_transform(X)
plt.scatter(X_skernpca[y==0, 0], X_skernpca[y==0, 1], color='red', marker='^', alpha=0.5)
plt.scatter(X_skernpca[y==1, 0], X_skernpca[y==1, 1], color='blue', marker='o', alpha=0.5)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.show()
Explanation: 5.3.3 Projecting new data points
5.3.4 Kernel principal component analysis in scikit-learn
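Projecting new data points is also straightforward with the scikit-learn estimator above; a minimal sketch (assuming the fitted scikit_kpca object from the previous cell is still in scope, and reusing two training points purely as stand-ins for new data):
new_points = X[:2]
print(scikit_kpca.transform(new_points))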
End of explanation |
4,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ch4 Writing Structured Programs
Assignments
Step1: Note
Step2: Equality
Use == to compare whether two elements have the same value.
Use is to compare whether two elements refer to the same object.
Step3: Conditions
Putting a list in an if statement directly tests whether the list is empty, which is equivalent to if len(list) > 0
Step4: any() tests whether a list contains at least one True element, all() tests whether every element of a list is True, and in tests whether a value exists in a list.
Step5: Sequences
The most common operation on a sequence is visiting each of its elements with a for loop.
Step6: Tuples can be used to replace several elements at the same time.
Step7: zip can be used to combine several lists into tuples.
Step8: Ways to repeat elements
Step9: Function Inputs and Outputs
When designing a function, note that if it modifies its input argument it is best not to return a value as well; otherwise users get confused.
def sort1(a)
Step10: Variable Scope
Python follows the LGB rule: names are looked up local first, then global, then built-in.
A function can create a global variable with the global keyword, but the less this is used the better, because it hurts the function's reusability.
Check Variable Type
This is usually done with assert(cond) together with isinstance. When its argument is False, assert raises an AssertionError.
Step12: Documenting Function
Step13: Lambda Expression
lambda is a way to create a temporary (anonymous) function.
Step14: Named Arguments
Step15: Structure of a Python Module
Step16: Letter Trie
Step17: Matplotlib
a = list('hello') # a指向一個list物件
b = a # b指向a所指向的list物件
b[3] = 'x' # 改變物件第3個元素,因為實際件只有一個,所以a,b看到的物件會同時改變
a, b
a = ['maybe']
b = [a, a, a]
b
a[0] = 'will'
b
Explanation: Ch4 Writing Structured Programs
Assignments
End of explanation
a = ['play']
b = a[:]
a[0] = 'zero'
a, b
a = ['play']
b = [a, a]
a[0] = 'what'
a, b, id(a), id(b[0])
Explanation: Note: if you want to copy a list, you must copy it with [:]; otherwise you only copy the reference.
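A related caveat (this snippet is an added illustration, not from the original notebook): [:] makes only a shallow copy, so nested lists still share their inner objects, while copy.deepcopy copies every level.
import copy
outer = [['a'], ['b']]
shallow = outer[:]
deep = copy.deepcopy(outer)
outer[0][0] = 'changed'
print shallow[0]  # ['changed'] - the inner list is shared
print deep[0]     # ['a'] - fully independent copy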
End of explanation
a is b[0], a is b[1]
b = a[:]
# 因為用複製的,所以值相同但物件不同
a is b, a == b
Explanation: Equality
Use == to compare whether two elements have the same value.
Use is to compare whether two elements refer to the same object.
End of explanation
e = []
if e: print e, " is not empty"
e = []
if not e: print e, " is empty"
Explanation: Conditions
Putting a list in an if statement directly tests whether the list is empty, which is equivalent to if len(list) > 0:.
End of explanation
a = [0, 1, 2, 3, 4, 5]
any(a), all(a), 3 in a, 8 in a
Explanation: any() tests whether a list contains at least one True element, all() tests whether every element of a list is True, and in tests whether a value exists in a list.
End of explanation
a = [3, 3, 2, 4, 1]
[item for item in a] # 原始順序
[item for item in sorted(a)] # 排序
[item for item in set(a)] # 只考慮唯一的元素
[item for item in reversed(a)] # 倒序
[item for item in set(a).difference([3,4])] # 不要某些元素
import random
random.shuffle(a) # shuffle後,會直接影響a內部的值
[item for item in a]
''.join(['hello', 'world']) # join可以將字串連在一起
Explanation: Sequences
The most common operation on a sequence is visiting each of its elements with a for loop.
End of explanation
a = [1, 2, 3, 4, 5]
(a[2], a[3], a[4]) = (5, 6, 7)
a
Explanation: Tuples can be used to replace several elements at the same time.
End of explanation
a = range(5)
b = range(5, 10)
zip(a, b, a, b)
list(enumerate(b)) # enumerate 會傳回 (index, a[index])
a = [5, 3, 2, 4, 1]
a.sort() # .sort() 會直接修改原始list
a
a = [5, 3, 2, 4, 1]
sorted(a), a # 用sorted()不會影響原始list
Explanation: zip can be used to combine several lists into tuples.
End of explanation
'hello' * 3
['hello'] * 3
[['a'] * 3] * 2
Explanation: Ways to repeat elements
End of explanation
def func1(a):
a[0] = 'modified'
s = ['hello', 'world']
func1(s)
s
Explanation: Function Inputs and Outputs
When designing a function, note that if it modifies its input argument it is best not to return a value as well; otherwise users get confused.
def sort1(a): # OK, modifies the input but returns nothing
a.sort()
def sort2(a): # OK, does not modify the input and returns a value
return sorted(a)
def sort3(a): # BAD, both modifies the input and returns a value - someone will surely get it wrong
a.sort()
return a
All function arguments are passed by value, but note that when the argument is a list, the value passed in is the object reference (its id), so inside the function it becomes a mutable list.
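A short added sketch of that last point (illustrative only, not from the original notebook): rebinding the parameter inside a function does not affect the caller, but mutating the passed-in list does.
def rebind(a):
    a = ['new list']      # rebinds the local name only; the caller's list is untouched
def mutate(a):
    a.append('mutated')   # mutates the shared object; the caller sees the change
s = ['original']
rebind(s)
print s  # ['original']
mutate(s)
print s  # ['original', 'mutated']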
End of explanation
a = 'hello'
assert(isinstance(a, basestring)) # 沒問題
a = 3
assert(isinstance(a, basestring)) # 錯誤
Explanation: Variable Scope
Python follows the LGB rule: names are looked up local first, then global, then built-in.
A function can create a global variable with the global keyword, but the less this is used the better, because it hurts the function's reusability.
Check Variable Type
This is usually done with assert(cond) together with isinstance. When its argument is False, assert raises an AssertionError.
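A minimal added sketch of the LGB lookup order (the variable names here are illustration-only):
x = 'global x'
def show_scope():
    x = 'local x'   # shadows the global inside this function
    print x         # the local binding wins
show_scope()
print x             # the global is unchanged
print len('abc')    # len is resolved in the built-in scope last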
End of explanation
def hello(a):
This is a hello function.
The only function is print hello world.
@param a: a string to be printed
@type a: C{basestring}
@rtype: C{float}
print 'hello world', a
return(3.14)
hello('my dear')
print hello.__doc__
Explanation: Documenting Function
End of explanation
z = lambda w: w**2
z(5)
Explanation: Lambda Expression
lambda is a way to create a temporary (anonymous) function.
End of explanation
def generic(*a, **b):
print a # 集中所有 unnamed arguments
print b # 集中所有 names arguments
generic(1, 3.5, 'money', zzz='maybe', ggg='good')
def func(*a, z):
print a, z # 因為有指定 *a 收集所有 unnamed arguments,造成 z 出錯
func('hi', 'this')
Explanation: Named Arguments
End of explanation
nltk.corpus.__file__
help(nltk.bigrams)
Explanation: Structure of a Python Module
End of explanation
def insert(trie, key, value):
if key:
first, rest = key[0], key[1:]
if first not in trie:
trie[first] = {} # empty dict
insert(trie[first], rest, value) # key[1:] is new key
else:
trie['value'] = value
trie = nltk.defaultdict(dict)
insert(trie, 'chat', 100)
insert(trie, 'chair', 2000)
insert(trie, 'chien', 150)
trie
trie['c']['h']['a']['t']['value']
Explanation: Letter Trie
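To complement the insert function above, a lookup helper might look like the following sketch (it follows the same nested-dict convention; lookup itself is an added illustration, not part of the original notebook):
def lookup(trie, key):
    # walk down the nested dicts one character at a time
    for char in key:
        trie = trie[char]
    return trie['value']
print lookup(trie, 'chat')   # 100
print lookup(trie, 'chien')  # 150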
End of explanation
%matplotlib inline
import matplotlib
from matplotlib import pylab
import nltk
# nltk.ngrams requires a sequence and an n; for example, bigrams over a short token list
list(nltk.ngrams(['writing', 'structured', 'programs', 'in', 'python'], 2))
Explanation: Matplotlib
End of explanation |
4,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: Hyperparameters
Let's get a few bookkeeping items out of the way.
Step2: Auto-batching predictions
Let us first define our prediction function. Note that we're defining this for a single image example. We're going to use JAX's vmap function to automatically handle mini-batches, with no performance penalty.
Step3: Let's check that our prediction function only works on single images.
Step5: At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of predict, which we should be able to use in a loss function. We should be able to use grad to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use jit to speed up everything.
Utility and loss functions
Step6: Data Loading with tensorflow/datasets
JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll use the tensorflow/datasets data loader.
Step7: Training Loop | Python Code:
import jax.numpy as jnp
from jax import grad, jit, vmap
from jax import random
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Training a Simple Neural Network, with tensorflow/datasets Data Loading
Forked from neural_network_and_data_loading.ipynb
Let's combine everything we showed in the quickstart notebook to train a simple neural network. We will first specify and train a simple MLP on MNIST using JAX for the computation. We will use tensorflow/datasets data loading API to load images and labels (because it's pretty great, and the world doesn't need yet another data loading library :P).
Of course, you can use JAX with any API that is compatible with NumPy to make specifying the model a bit more plug-and-play. Here, just for explanatory purposes, we won't use any neural network libraries or special APIs for building our model.
End of explanation
# A helper function to randomly initialize weights and biases
# for a dense neural network layer
def random_layer_params(m, n, key, scale=1e-2):
w_key, b_key = random.split(key)
return scale * random.normal(w_key, (n, m)), scale * random.normal(b_key, (n,))
# Initialize all layers for a fully-connected neural network with sizes "sizes"
def init_network_params(sizes, key):
keys = random.split(key, len(sizes))
return [random_layer_params(m, n, k) for m, n, k in zip(sizes[:-1], sizes[1:], keys)]
layer_sizes = [784, 512, 512, 10]
step_size = 0.01
num_epochs = 10
batch_size = 128
n_targets = 10
params = init_network_params(layer_sizes, random.PRNGKey(0))
Explanation: Hyperparameters
Let's get a few bookkeeping items out of the way.
End of explanation
from jax.scipy.special import logsumexp
def relu(x):
return jnp.maximum(0, x)
def predict(params, image):
# per-example predictions
activations = image
for w, b in params[:-1]:
outputs = jnp.dot(w, activations) + b
activations = relu(outputs)
final_w, final_b = params[-1]
logits = jnp.dot(final_w, activations) + final_b
return logits - logsumexp(logits)
Explanation: Auto-batching predictions
Let us first define our prediction function. Note that we're defining this for a single image example. We're going to use JAX's vmap function to automatically handle mini-batches, with no performance penalty.
End of explanation
# This works on single examples
random_flattened_image = random.normal(random.PRNGKey(1), (28 * 28,))
preds = predict(params, random_flattened_image)
print(preds.shape)
# Doesn't work with a batch
random_flattened_images = random.normal(random.PRNGKey(1), (10, 28 * 28))
try:
preds = predict(params, random_flattened_images)
except TypeError:
print('Invalid shapes!')
# Let's upgrade it to handle batches using `vmap`
# Make a batched version of the `predict` function
batched_predict = vmap(predict, in_axes=(None, 0))
# `batched_predict` has the same call signature as `predict`
batched_preds = batched_predict(params, random_flattened_images)
print(batched_preds.shape)
Explanation: Let's check that our prediction function only works on single images.
End of explanation
def one_hot(x, k, dtype=jnp.float32):
Create a one-hot encoding of x of size k.
return jnp.array(x[:, None] == jnp.arange(k), dtype)
def accuracy(params, images, targets):
target_class = jnp.argmax(targets, axis=1)
predicted_class = jnp.argmax(batched_predict(params, images), axis=1)
return jnp.mean(predicted_class == target_class)
def loss(params, images, targets):
preds = batched_predict(params, images)
return -jnp.mean(preds * targets)
@jit
def update(params, x, y):
grads = grad(loss)(params, x, y)
return [(w - step_size * dw, b - step_size * db)
for (w, b), (dw, db) in zip(params, grads)]
Explanation: At this point, we have all the ingredients we need to define our neural network and train it. We've built an auto-batched version of predict, which we should be able to use in a loss function. We should be able to use grad to take the derivative of the loss with respect to the neural network parameters. Last, we should be able to use jit to speed up everything.
Utility and loss functions
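As a quick hedged sketch of that point (it only reuses objects defined earlier in this notebook, with arbitrary stand-in targets), grad(loss) returns gradients with the same pytree structure as params, which is why the update function can simply zip them together:
grads = grad(loss)(params, random_flattened_images, one_hot(jnp.arange(10), n_targets))
print([(dw.shape, db.shape) for dw, db in grads])  # mirrors the (weights, biases) shapes in params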
End of explanation
import tensorflow as tf
# Ensure TF does not see GPU and grab all GPU memory.
tf.config.set_visible_devices([], device_type='GPU')
import tensorflow_datasets as tfds
data_dir = '/tmp/tfds'
# Fetch full datasets for evaluation
# tfds.load returns tf.Tensors (or tf.data.Datasets if batch_size != -1)
# You can convert them to NumPy arrays (or iterables of NumPy arrays) with tfds.dataset_as_numpy
mnist_data, info = tfds.load(name="mnist", batch_size=-1, data_dir=data_dir, with_info=True)
mnist_data = tfds.as_numpy(mnist_data)
train_data, test_data = mnist_data['train'], mnist_data['test']
num_labels = info.features['label'].num_classes
h, w, c = info.features['image'].shape
num_pixels = h * w * c
# Full train set
train_images, train_labels = train_data['image'], train_data['label']
train_images = jnp.reshape(train_images, (len(train_images), num_pixels))
train_labels = one_hot(train_labels, num_labels)
# Full test set
test_images, test_labels = test_data['image'], test_data['label']
test_images = jnp.reshape(test_images, (len(test_images), num_pixels))
test_labels = one_hot(test_labels, num_labels)
print('Train:', train_images.shape, train_labels.shape)
print('Test:', test_images.shape, test_labels.shape)
Explanation: Data Loading with tensorflow/datasets
JAX is laser-focused on program transformations and accelerator-backed NumPy, so we don't include data loading or munging in the JAX library. There are already a lot of great data loaders out there, so let's just use them instead of reinventing anything. We'll use the tensorflow/datasets data loader.
End of explanation
import time
def get_train_batches():
# as_supervised=True gives us the (image, label) as a tuple instead of a dict
ds = tfds.load(name='mnist', split='train', as_supervised=True, data_dir=data_dir)
# You can build up an arbitrary tf.data input pipeline
ds = ds.batch(batch_size).prefetch(1)
# tfds.dataset_as_numpy converts the tf.data.Dataset into an iterable of NumPy arrays
return tfds.as_numpy(ds)
for epoch in range(num_epochs):
start_time = time.time()
for x, y in get_train_batches():
x = jnp.reshape(x, (len(x), num_pixels))
y = one_hot(y, num_labels)
params = update(params, x, y)
epoch_time = time.time() - start_time
train_acc = accuracy(params, train_images, train_labels)
test_acc = accuracy(params, test_images, test_labels)
print("Epoch {} in {:0.2f} sec".format(epoch, epoch_time))
print("Training set accuracy {}".format(train_acc))
print("Test set accuracy {}".format(test_acc))
Explanation: Training Loop
End of explanation |
4,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 2
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
Step3: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features
Step4: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows
Step5: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above
Step6: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
Step7: Test your function by computing the RSS on TEST data for the example model
Step8: Create some new features
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
Step9: Next create the following 4 new features as columns in both TEST and TRAIN data
Step10: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)
Quiz Question
Step11: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more
Step12: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients
Step13: Quiz Question
Step14: Quiz Question | Python Code:
import graphlab
Explanation: Regression Week 2: Multiple Regression (Interpretation)
The goal of this first notebook is to explore multiple regression and feature engineering with existing graphlab functions.
In this notebook you will use data on house sales in King County to predict prices using multiple regression. You will:
* Use SFrames to do some feature engineering
* Use built-in graphlab functions to compute the regression weights (coefficients/parameters)
* Given the regression weights, predictors and outcome write a function to compute the Residual Sum of Squares
* Look at coefficients and interpret their meanings
* Evaluate multiple models via RSS
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Split data into training and testing.
We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
End of explanation
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price', features = example_features,
validation_set = None)
Explanation: Learning a multiple regression model
Recall we can use the following code to learn a multiple regression model predicting 'price' based on the following features:
example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code:
(Aside: We set validation_set = None to ensure that the results are always the same)
End of explanation
example_weight_summary = example_model.get("coefficients")
print example_weight_summary
Explanation: Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
End of explanation
example_predictions = example_model.predict(train_data)
print example_predictions[0] # should be 271789.505878
Explanation: Making Predictions
In the gradient descent notebook we use numpy to do our regression. In this notebook we will use existing graphlab create functions to analyze multiple regressions.
Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example using the example model above:
End of explanation
def get_residual_sum_of_squares(model, data, outcome):
# First get the predictions
predictions = model.predict(data)
# Then compute the residuals/errors
residuals = predictions - outcome
# Then square and add them up
RSS = (residuals * residuals).sum()
return(RSS)
Explanation: Compute RSS
Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
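As a tiny worked check of the formula with toy numbers (independent of the housing data): if the predictions are [2, 3] and the true outcomes are [1, 5], the residuals are [1, -2] and the RSS is 1 + 4 = 5.
import numpy as np
toy_predictions = np.array([2.0, 3.0])
toy_outcome = np.array([1.0, 5.0])
toy_residuals = toy_predictions - toy_outcome
print (toy_residuals * toy_residuals).sum()  # 5.0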
End of explanation
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print rss_example_train # should be 2.7376153833e+14
Explanation: Test your function by computing the RSS on TEST data for the example model:
End of explanation
from math import log
Explanation: Create some new features
Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms.
You will use the logarithm function to create a new feature, so first you should import it from the math library.
End of explanation
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)
# create the remaining 3 features in both TEST and TRAIN data
train_data['bed_bath_rooms'] = train_data['bedrooms'] * train_data['bathrooms']
test_data['bed_bath_rooms'] = test_data['bedrooms'] * test_data['bathrooms']
train_data['log_sqft_living'] = train_data['sqft_living'].apply(lambda x: log(x))
test_data['log_sqft_living'] = test_data['sqft_living'].apply(lambda x: log(x))
train_data['lat_plus_long'] = train_data['lat'] + train_data['long']
test_data['lat_plus_long'] = test_data['lat'] + test_data['long']
Explanation: Next create the following 4 new features as columns in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long
As an example here's the first one:
End of explanation
test_data['bedrooms_squared'].mean()
test_data['bed_bath_rooms'].mean()
test_data['log_sqft_living'].mean()
test_data['lat_plus_long'].mean()
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms.
bedrooms times bathrooms gives what's called an "interaction" feature. It is large when both of them are large.
Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values.
Adding latitude to longitude is totally non-sensical but we will do it anyway (you'll see why)
Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)
End of explanation
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
Explanation: Learning Multiple Models
Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features the second model will add one more feature and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
End of explanation
# Learn the three models: (don't forget to set validation_set = None)
model_1 = graphlab.linear_regression.create(train_data, target='price', features=model_1_features, validation_set=None)
model_2 = graphlab.linear_regression.create(train_data, target='price', features=model_2_features, validation_set=None)
model_3 = graphlab.linear_regression.create(train_data, target='price', features=model_3_features, validation_set=None)
# Examine/extract each model's coefficients:
print model_1.get('coefficients')
print model_2.get('coefficients')
Explanation: Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
End of explanation
# Compute the RSS on TRAINING data for each of the three models and record the values:
print get_residual_sum_of_squares(model_1, train_data, train_data['price'])
print get_residual_sum_of_squares(model_2, train_data, train_data['price'])
print get_residual_sum_of_squares(model_3, train_data, train_data['price'])
Explanation: Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2?
Think about what this means.
Comparing multiple models
Now that you've learned three models and extracted the model weights we want to evaluate which model is best.
First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
End of explanation
# Compute the RSS on TESTING data for each of the three models and record the values:
print get_residual_sum_of_squares(model_1, test_data, test_data['price'])
print get_residual_sum_of_squares(model_2, test_data, test_data['price'])
print get_residual_sum_of_squares(model_3, test_data, test_data['price'])
Explanation: Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected?
Now compute the RSS on TEST data for each of the three models.
End of explanation |
4,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We generate some random variates from a non-normal distribution and make a
probability plot for it, to show it is non-normal in the tails
Step1: We now use boxcox to transform the data so it's closest to normal | Python Code:
from scipy import stats
import matplotlib.pyplot as plt
# Generate data
x = stats.loggamma.rvs(5, size=500) + 5
# Plot it
fig = plt.figure(figsize=(6,9))
ax1 = fig.add_subplot(211)
prob = stats.probplot(x, dist=stats.norm, plot=ax1)
ax1.set_title('Probplot against normal distribution')
# Plot an histogram
ax2 = fig.add_subplot(212)
ax2.hist(x)
ax2.set_title('Histogram')
Explanation: We generate some random variates from a non-normal distribution and make a
probability plot for it, to show it is non-normal in the tails:
End of explanation
xt, _ = stats.boxcox(x)
# Plot the results
fig = plt.figure(figsize=(6,9))
ax1 = fig.add_subplot(211)
prob = stats.probplot(xt, dist=stats.norm, plot=ax1)
ax1.set_title('Probplot after Box-Cox transformation')
# Plot an histogram
ax2 = fig.add_subplot(212)
ax2.hist(xt)
ax2.set_title('Histogram')
Explanation: We now use boxcox to transform the data so it's closest to normal:
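One more detail worth noting (a hedged sketch; it assumes a SciPy version that provides scipy.special.inv_boxcox): stats.boxcox also returns the fitted lambda, which was discarded as _ above, and inv_boxcox maps transformed values back to the original scale.
import numpy as np
from scipy.special import inv_boxcox
xt, lmbda = stats.boxcox(x)
print(lmbda)                                  # exponent that maximizes the log-likelihood
print(np.allclose(inv_boxcox(xt, lmbda), x))  # True: the transform is invertible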
End of explanation |
4,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http
Step3: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
Step5: <img src="image/Mean_Variance_Image.png" style="height
Step6: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
Step7: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height
Step8: <img src="image/Learn_Rate_Tune_Image.png" style="height
Step9: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%. | Python Code:
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
def download(url, file):
Download file from <url>
:param url: URL to file
:param file: Local file path
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
# TODO: Implement Min-Max scaling for grayscale image data
a = 0.1
b = 0.9
xmin = np.min(image_data)
xmax = np.max(image_data)
valRange = b-a
denominator = xmax-xmin
return [a + ((x-xmin)*valRange)/denominator for x in image_data]
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
Explanation: <img src="image/Mean_Variance_Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
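As a quick worked check of the formula (a small added sketch that reuses the function defined above and the stated 0-255 grayscale range): a raw value of 128 maps to 0.1 + 128 * 0.8 / 255 ≈ 0.502, and the endpoints 0 and 255 map to exactly 0.1 and 0.9.
print(normalize_grayscale(np.array([0, 128, 255])))  # expected: [0.1, ~0.502, 0.9]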
End of explanation
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the image digit so there are 10 output units, one for each label. Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 3
learning_rate = 0.1
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
Explanation: <img src="image/Learn_Rate_Tune_Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best acccuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation |
4,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons Callbacks: TimeStopping
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Importing and normalizing the data
Step3: Building a simple MNIST CNN model
Step4: Simple TimeStopping usage
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import tensorflow_addons as tfa
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Flatten
Explanation: TensorFlow Addons Callbacks: TimeStopping
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/addons/tutorials/time_stopping"><img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/time_stopping.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/addons/tutorials/time_stopping.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/addons/tutorials/time_stopping.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
This notebook will demonstrate how to use the TimeStopping callback in TensorFlow Addons.
Setup
End of explanation
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# normalize data
x_train, x_test = x_train / 255.0, x_test / 255.0
Explanation: Import and Normalize Data
End of explanation
# build the model using the Sequential API
model = Sequential()
model.add(Flatten(input_shape=(28, 28)))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics=['accuracy'])
Explanation: Build Simple MNIST CNN Model
End of explanation
# initialize TimeStopping callback
time_stopping_callback = tfa.callbacks.TimeStopping(seconds=5, verbose=1)
# train the model with the time_stopping_callback defined above;
# training halts automatically once the 5-second time budget is exceeded.
model.fit(x_train, y_train,
batch_size=64,
epochs=100,
callbacks=[time_stopping_callback],
validation_data=(x_test, y_test))
Explanation: Simple TimeStopping Usage
End of explanation |
4,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Preprocess
Step2: i2t
Step3: t2i | Python Code:
import pandas as pd
import tensorflow as tf
# i2t: image-to-text.
i2t_path = '/bigstore/mmt/raw_data/fashion_gen/fashion_gen_i2t_test_pairs.csv'
# t2i: text-to-image.
t2i_path = '/bigstore/mmt/raw_data/fashion_gen/fashion_gen_t2i_test_pairs.csv'
t2i_output_path = '/bigstore/mmt/fashion_gen/metadata/fashion_bert_t2i_test.csv'
i2t_output_path = '/bigstore/mmt/fashion_gen/metadata/fashion_bert_i2t_test.csv'
dtype = {
'image_prod_id': str,
'prod_img_id': str,
'text_prod_id': str,
}
with tf.io.gfile.GFile(i2t_path, 'r') as f:
i2t_df = pd.read_csv(f, dtype=dtype)
with tf.io.gfile.GFile(t2i_path, 'r') as f:
t2i_df = pd.read_csv(f, dtype=dtype)
i2t_df
t2i_df
def add_columns(df):
  """Adds new columns to the dataframe.
  New columns: image_id, image_index, text_index, and gt.
  A product will have multiple images (files). They are images from different
  angles of view of the product. Thus, `image_prod_id` is the main product id and
  `prod_img_id` is the id for different angles. One product has one text description.
  image_id: image file name.
  image_index: unique image index for the image file.
  text_index: unique text index for the product description.
  gt: whether the row in the dataframe is the ground-truth pair.
  Args:
    df: a pd.DataFrame. Each row of the dataframe is an image-text pair.
  Returns:
    a pd.DataFrame.
  """
# image_id is the id for an image of a product. A product can have multiple
# image_id's (image files).
df['image_id'] = df['image_prod_id'] + '_' + df['prod_img_id']
# Gives each text_prod_id a unique index.
df['text_index'] = df.assign(id=df['text_prod_id'])['id'].astype(
'category').cat.codes
# Gives each image_id (an image file) a unique index.
df['image_index'] = df.assign(id=df['image_id'])['id'].astype(
'category').cat.codes
# If image and text have the same product id, they are a ground-truth pair.
df['gt'] = (df['image_prod_id'] == df['text_prod_id']).astype(int)
return df
Explanation: Preprocess: Create FashionGen metadata for Mmt.
Defines paths and loads raw csv metadata files from Fashion-BERT and kaleido-BERT.
End of explanation
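# Added illustration (toy rows, not FashionGen data) of what add_columns produces:
# gt is 1 only where image_prod_id == text_prod_id.
toy = pd.DataFrame({
    'image_prod_id': ['A', 'A', 'B'],
    'prod_img_id': ['1', '2', '1'],
    'text_prod_id': ['A', 'B', 'B'],
})
print(add_columns(toy)[['image_id', 'image_index', 'text_index', 'gt']])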
i2t_df = add_columns(i2t_df)
# Gets all ground-truth pairs in gt_df.
gt_df = i2t_df[i2t_df['gt'] == 1][['text_index', 'image_index']].rename(
columns={'image_index': 'gt_image_index'})
gt_df
# We give each text_index their ground-truth image_index if exists.
# Since FashionGen does not share the same retrieval pool (text pool for i2t),
# some text_index will not have corresponding gt_image_index.
# Thus, we fill -1 for those non-existent gt_image_index.
i2t_df = i2t_df.merge(gt_df, how='left', on='text_index').fillna(-1)
# Converts gt_image_index column from float to int.
i2t_df['gt_image_index'] = i2t_df['gt_image_index'].astype(int)
with tf.io.gfile.GFile(i2t_output_path, 'w') as f:
i2t_df.to_csv(f, index=False)
i2t_df
Explanation: i2t: add columns
End of explanation
t2i_df = add_columns(t2i_df)
gt_df = t2i_df[t2i_df['gt'] == 1][['text_index', 'image_index']].rename(
columns={'image_index': 'gt_image_index'})
gt_df
# We give each text_index their ground-truth image_index.
# We don't have non-existed gt_image_index becuase it iss text-to-image.
t2i_df = t2i_df.merge(gt_df, how='left', on='text_index')
with tf.io.gfile.GFile(t2i_output_path, 'w') as f:
t2i_df.to_csv(f, index=False)
t2i_df
# 989 texts have 101 images; 11 texts have 100 images.
print('101 images: ', (t2i_df['text_index'].value_counts() == 101).sum())
print('100 images: ', (t2i_df['text_index'].value_counts() == 100).sum())
# ground-truth pairs
print('# ground-truth: ', t2i_df['gt'].sum())
# 989 images have 101 text; 11 images have 100 texts.
print('101 images: ', (i2t_df['image_index'].value_counts() == 101).sum())
print('100 images: ', (i2t_df['image_index'].value_counts() == 100).sum())
# ground-truth pairs
print('# ground-truth: ', i2t_df['gt'].sum())
Explanation: t2i: add columns
End of explanation |
4,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kets are column vectors, i.e. with shape (d, 1)
Step1: The normalized=True option can be used to ensure a normalized output.
Bras are row vectors, i.e. with shape (1, d)
Step2: And operators are square matrices, i.e. have shape (d, d)
Step3: Which can also be sparse
Step4: Here's an example for a much larger (20 qubit), sparse operator expectation,
which will be automatically parallelized | Python Code:
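# Assumed setup for the snippets below (not shown in this excerpt): quimb's
# functional interface plus a small example state vector. The exact original
# definition of `data` is an assumption here.
from quimb import *
data = [1, 2j, -3]  # example amplitudes; any length-d sequence works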
qu(data, qtype='ket')
Explanation: Kets are column vectors, i.e. with shape (d, 1):
End of explanation
qu(data, qtype='bra') # also conjugates the data
Explanation: The normalized=True option can be used to ensure a normalized output.
Bras are row vectors, i.e. with shape (1, d):
End of explanation
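# Not part of the original notebook: a quick check of the normalized=True option
# mentioned above, which rescales the output to unit norm.
qu(data, qtype='ket', normalized=True)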
qu(data, qtype='dop')
Explanation: And operators are square matrices, i.e. have shape (d, d):
End of explanation
qu(data, qtype='dop', sparse=True)
psi = 1.0j * bell_state('psi-')
psi
psi.H
psi = up()
psi
psi.H @ psi # inner product
X = pauli('X')
X @ psi # act as gate
psi.H @ X @ psi # operator expectation
expec(psi, psi)
expec(psi, X)
Explanation: Which can also be sparse:
End of explanation
psi = rand_ket(2**20)
A = rand_herm(2**20, sparse=True) + speye(2**20)
A
expec(A, psi) # should be ~ 1
%%timeit
expec(A, psi)
dims = [2] * 10 # overall space of 10 qubits
X = pauli('X')
IIIXXIIIII = ikron(X, dims, inds=[3, 4]) # act on 4th and 5th spin only
IIIXXIIIII.shape
dims = [2] * 3
XZ = pauli('X') & pauli('Z')
ZIX = pkron(XZ, dims, inds=[2, 0])
ZIX.real.astype(int)
dims = [2] * 10
D = prod(dims)
psi = rand_ket(D)
rho_ab = ptr(psi, dims, [0, 9])
rho_ab.round(3) # probably pretty close to identity
Explanation: Here's an example for a much larger (20 qubit), sparse operator expectation,
which will be automatically parallelized:
End of explanation |
4,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: CMCC
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
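# Filled-in example of the call above (placeholder author, not real document metadata):
# DOC.set_author("Jane Doe", "jane.doe@example.org")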
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
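# Hypothetical illustration for this 0.N property; the usual pattern in these
# templates is one DOC.set_value call per entry, e.g.:
# DOC.set_value("Fortran 90")
# DOC.set_value("C")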
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
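# Note that BOOLEAN properties take an unquoted value (placeholder example only):
# DOC.set_value(True)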
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
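# Hypothetical example: record one of the valid choices listed above; for a
# 1.N property, repeat DOC.set_value once per applicable choice.
# DOC.set_value("C")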
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation |
4,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a song recommender
Fire up GraphLab Create
Step1: Load music data
Step2: Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
Step3: Showing the most popular songs in the dataset
Step4: Count number of unique users in the dataset
Step5: Create a song recommender
Step6: Simple popularity-based recommender
Step7: Use the popularity model to make some predictions
A popularity model makes the same prediction for all users, so it provides no personalization.
Step8: Build a song recommender with personalization
We now create a model that allows us to make personalized recommendations to each user.
Step9: Applying the personalized model to make song recommendations
As you can see, different users get different recommendations now.
Step10: We can also apply the model to find similar songs to any song in the dataset
Step11: Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves. | Python Code:
import graphlab
Explanation: Building a song recommender
Fire up GraphLab Create
End of explanation
song_data = graphlab.SFrame('song_data.gl/')
Explanation: Load music data
End of explanation
song_data.head()
Explanation: Explore data
Music data shows how many times a user listened to a song, as well as the details of the song.
End of explanation
graphlab.canvas.set_target('ipynb')
song_data['song'].show()
len(song_data)
Explanation: Showing the most popular songs in the dataset
End of explanation
users = song_data['user_id'].unique()
len(users)
Explanation: Count number of unique users in the dataset
End of explanation
train_data,test_data = song_data.random_split(.8,seed=0)
Explanation: Create a song recommender
End of explanation
popularity_model = graphlab.popularity_recommender.create(train_data,
user_id='user_id',
item_id='song')
Explanation: Simple popularity-based recommender
End of explanation
popularity_model.recommend(users=[users[0]])
popularity_model.recommend(users=[users[1]])
Explanation: Use the popularity model to make some predictions
A popularity model makes the same prediction for all users, so it provides no personalization.
End of explanation
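As a quick, hypothetical sanity check of that claim, the top-k lists returned for two different users should be identical — this assumes GraphLab's recommend() output exposes a 'song' column, as in the calls above:
# Sketch: the popularity recommender should rank songs identically for any two users.
rec_a = popularity_model.recommend(users=[users[0]], k=10)
rec_b = popularity_model.recommend(users=[users[1]], k=10)
print(list(rec_a['song']) == list(rec_b['song']))  # expected: True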
personalized_model = graphlab.item_similarity_recommender.create(train_data,
user_id='user_id',
item_id='song')
Explanation: Build a song recommender with personalization
We now create a model that allows us to make personalized recommendations to each user.
End of explanation
personalized_model.recommend(users=[users[0]])
personalized_model.recommend(users=[users[1]])
Explanation: Applying the personalized model to make song recommendations
As you can see, different users get different recommendations now.
End of explanation
personalized_model.get_similar_items(['With Or Without You - U2'])
personalized_model.get_similar_items(['Chan Chan (Live) - Buena Vista Social Club'])
Explanation: We can also apply the model to find similar songs to any song in the dataset
End of explanation
if graphlab.version[:3] >= "1.6":
model_performance = graphlab.compare(test_data, [popularity_model, personalized_model], user_sample=0.05)
graphlab.show_comparison(model_performance,[popularity_model, personalized_model])
else:
%matplotlib inline
model_performance = graphlab.recommender.util.compare_models(test_data, [popularity_model, personalized_model], user_sample=.05)
Explanation: Quantitative comparison between the models
We now formally compare the popularity and the personalized models using precision-recall curves.
End of explanation |
4,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.e - Exam statement of 22 October 2019 (2)
Solution to the second exam paper of 22 October 2019. The problem asks for a way to arrange square tables in a square room.
Step1: We know from the last questions that everything will have to be repeated several times, so we take care to write each question as a function. The setting is a wedding in a square room. We want to place the tables so that they are as far as possible from one another and from the walls of the room. The tables are all square and all the same size.
Q1 - distance_table
Write a function that computes the distance between two square tables whose centers are known. Since the tables are square, the distance between two tables is taken to be the largest of the absolute differences of their coordinates.
Step2: Q2 - distance_bord
Write a function that computes the distance between a table (its center) and the wall of a room of side 2C.
Step3: Q3 - table_alea
Write a function that draws one table at random inside the square of side 2C.
Step4: Q4 - n_table_alea
Write a function that draws N tables at random inside the square of side 2C.
Step5: Q5 - table_proches
Write a function that returns the table closest to a given table, or the wall. The function must return the index of the closest table, or -1 if the wall is closest, followed by the corresponding distance. A skip_i parameter is added to skip one table.
Step6: Q6 - distance_n_tables_alea
Write a function that draws N tables at random and returns the minimum distance between two tables or between the wall and the tables.
Step7: Q7 - meilleur_table_alea
Write a function that repeats the previous random draw k times and returns the arrangement with the largest minimum distance.
Step8: Q8 - Numerical result
Write a function that returns the result for 11 tables and a room of half-side 1.
Step9: Q9 - plot_tables
Write a function that displays the solution with matplotlib, starting from the example given.
Step10: Q10 ...
It is hard to land on a good table layout by pure chance, and the more tables there are, the more draws are needed. It is simpler to start from one draw and then move the two closest tables apart. By how much is another question. That is the first option. The second is to place the tables on a rectangular grid, filling it as a spiral, and to search for the best spacing.
For the first option, one can either push a pair of tables apart, or push one table away from its nearest neighbours, those neighbours being found with a Voronoi diagram. The first variant does not work very well.
Step11: Q10 - Voronoï
Step12: We add the border of the room.
Step13: The Voronoi diagram builds a neighbourhood for each point; one can try to bring these neighbourhoods as close as possible to a set of equilateral triangles, or push a table as close as possible to its farthest boundary. After a few attempts, I cannot say this was the best idea.
Q10 - KMeans
Another idea is to cover the room with points and then run a KMeans to artificially create zones.
Step14: The cluster centers are the locations of the tables we are looking for.
Step15: We try a mixture of Gaussian distributions. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.e - Exam statement of 22 October 2019 (2)
Solution to the second exam paper of 22 October 2019. The problem asks for a way to arrange square tables in a square room.
End of explanation
def distance_table(x1, y1, x2, y2):
return max(abs(x1 - x2), abs(y1 - y2))
distance_table(0, 0, 2, 1)
Explanation: We know from the last questions that everything will have to be repeated several times, so we take care to write each question as a function. The setting is a wedding in a square room. We want to place the tables so that they are as far as possible from one another and from the walls of the room. The tables are all square and all the same size.
Q1 - distance_table
Write a function that computes the distance between two square tables whose centers are known. Since the tables are square, the distance between two tables is taken to be the largest of the absolute differences of their coordinates.
End of explanation
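In other words, the distance implemented above is the Chebyshev (chessboard) distance between the two centers:
$$d\big((x_1, y_1), (x_2, y_2)\big) = \max\big(|x_1 - x_2|,\; |y_1 - y_2|\big)$$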
def distance_bord(x1, y1, C):
dist = distance_table(x1, y1, 0, 0)
return C - dist
distance_bord(1, 1, 5)
distance_bord(10, 1, 5)
Explanation: Q2 - distance_bord
Write a function that computes the distance between a table (its center) and the wall of a room of side 2C.
End of explanation
import random
def table_alea(C):
x = random.uniform(-C, C)
y = random.uniform(-C, C)
return x, y
table_alea(5)
Explanation: Q3 - table_alea
Write a function that draws one table at random inside the square of side 2C.
End of explanation
def n_table_alea(N, C):
return [table_alea(C) for n in range(N)]
n_table_alea(3, 5)
Explanation: Q4 - n_table_alea
Write a function that draws N tables at random inside the square of side 2C.
End of explanation
def table_proches(x1, y1, list_tables, C, skip_i):
proche = -1
best = distance_bord(x1, y1, C)
for i, table in enumerate(list_tables):
if i == skip_i:
continue
dist = distance_table(x1, y1, table[0], table[1])
if dist < best:
best = dist
proche = i
return proche, best
C = 5
list_tables = n_table_alea(3, C)
table_proches(1, 1, list_tables, C, None)
table_proches(C * 0.9, 0, list_tables, C, None)
Explanation: Q5 - table_proches
Write a function that returns the table closest to a given table, or the wall. The function must return the index of the closest table, or -1 if the wall is closest, followed by the corresponding distance. A skip_i parameter is added to skip one table.
End of explanation
def distance_n_tables_alea(N, C):
distrib = n_table_alea(N, C)
best = C ** 2
for i, table in enumerate(distrib):
proche, dist = table_proches(table[0], table[1], distrib, C, skip_i=i)
if dist < best:
best = dist
return best, distrib
distance_n_tables_alea(3, C)
Explanation: Q6 - distance_n_tables_alea
Write a function that draws N tables at random and returns the minimum distance between two tables or between the wall and the tables.
End of explanation
def meilleur_table_alea(k, N, C):
dist = 0
best = None
for i in range(k):
d, distrib = distance_n_tables_alea(N, C)
if d > dist:
best = distrib
dist = d
return best, dist
meilleur_table_alea(10, 3, C)
Explanation: Q7 - meilleur_table_alea
Write a function that repeats the previous random draw k times and returns the arrangement with the largest minimum distance.
End of explanation
best, dist = meilleur_table_alea(10, 11, 1)
best, dist
Explanation: Q8 - Numerical result
Write a function that returns the result for 11 tables and a room of half-side 1.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
C = 1
ax.set_xlim([-C, C])
ax.set_ylim([-C, C])
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax.add_artist(c)
ax.plot([b[0] for b in best], [b[1] for b in best], 'o');
Explanation: Q9 - plot_tables
Write a function that displays the solution with matplotlib, starting from the example given.
End of explanation
import numpy
def improve_distrib(iter, tables, C, alpha=0.2):
for it in range(iter):
        # Find the closest pair (table-table, or table-wall).
best = C ** 2
pair = None
for i, table in enumerate(tables):
proche, dist = table_proches(table[0], table[1], tables, C, skip_i=i)
if dist < best:
best = dist
pair = i, proche
if it % 50 == 0:
print(it, "paire", pair, "distance", best)
        # Choose one table of the pair to move.
if pair[0] == -1:
i = 1
elif pair[1] == -1:
i = 0
else:
            i = numpy.random.randint(0, 2)  # pick one of the two tables of the pair at random
pi = pair[i]
if pair[1-i] == -1:
pjp = (0, 0)
sign = 1
else:
pjp = tables[pair[1-i]]
sign = -1
        # Compute the vector between the two tables.
dx, dy = (pjp[0] - tables[pi][0],
pjp[1] - tables[pi][1])
        # A bit of randomness.
h = numpy.random.uniform(0, alpha)
dx *= h * sign
dy *= h * sign
        # Move the table.
table = tables[pi]
tables[pi] = (table[0] + dx, table[1] + dy)
if distance_bord(tables[pi][0], tables[pi][1], C) < 0:
            # Table moved outside the room: revert the move.
tables[pi] = (table[0] - dx, table[1] - dy)
C = 1
best_sol, dist = meilleur_table_alea(10, 11, C)
improve_distrib(200, best_sol, C, alpha=0.5)
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
C = 1
ax.set_xlim([-C, C])
ax.set_ylim([-C, C])
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax.add_artist(c)
ax.plot([b[0] for b in best_sol], [b[1] for b in best_sol], 'o');
Explanation: Q10 ...
It is hard to land on a good table layout by pure chance, and the more tables there are, the more draws are needed. It is simpler to start from one draw and then move the two closest tables apart. By how much is another question. That is the first option. The second is to place the tables on a rectangular grid, filling it as a spiral, and to search for the best spacing.
For the first option, one can either push a pair of tables apart, or push one table away from its nearest neighbours, those neighbours being found with a Voronoi diagram. The first variant does not work very well.
End of explanation
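The second option mentioned above (a regular grid with the best spacing) is not implemented in this notebook; a minimal sketch of that baseline, reusing the table_proches scoring defined earlier and the same half-side convention, could look like this:
import numpy

def grid_layout(N, C):
    # Place N tables on the smallest k x k grid that holds them,
    # evenly spaced inside the square of half-side C (simple baseline,
    # without the spiral ordering described above).
    k = int(numpy.ceil(numpy.sqrt(N)))
    coords = numpy.linspace(-C + C / k, C - C / k, k)
    tables = [(x, y) for x in coords for y in coords]
    return tables[:N]

grid_tables = grid_layout(11, 1)
# Minimum table-table / table-wall distance of this layout, for comparison with the random search.
min(table_proches(x, y, grid_tables, 1, skip_i=i)[1] for i, (x, y) in enumerate(grid_tables))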
from scipy.spatial import Voronoi, voronoi_plot_2d
points = numpy.array(best_sol)
vor = Voronoi(points)
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
C = 1
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax.add_artist(c)
ax.plot([b[0] for b in best_sol], [b[1] for b in best_sol], 'o')
voronoi_plot_2d(vor, ax=ax)
ax.set_xlim([-C, C])
ax.set_ylim([-C, C]);
Explanation: Q10 - Voronoï
End of explanation
bords = []
for i in range(-5, 6):
bords.append((C, C * i / 5))
bords.append((-C, C * i / 5))
bords.append((C * i / 5, -C))
bords.append((C * i / 5, C))
points2 = numpy.vstack([points, bords])
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
C = 1
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax.add_artist(c)
ax.plot([b[0] for b in best_sol], [b[1] for b in best_sol], 'o')
vor2 = Voronoi(points2)
voronoi_plot_2d(vor2, ax=ax)
ax.set_xlim([-C, C])
ax.set_ylim([-C, C]);
Explanation: We add the border of the room.
End of explanation
def points_in_rectangle(N, R):
points = numpy.empty(((N+1)**2, 2))
pos = 0
for i in range(0, N + 1):
for j in range(0, N + 1):
points[pos, 0] = 1.0 * i / N * R * 2 - R
points[pos, 1] = 1.0 * j / N * R * 2 - R
pos += 1
return points
R = 1
points = points_in_rectangle(25, R)
fig, ax = plt.subplots(1, 1, figsize=(4, 4))
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax.add_artist(c)
ax.plot(points[:, 0], points[:, 1], '.');
from sklearn.cluster import KMeans
km = KMeans(n_clusters=11)
km.fit(points)
pred = km.predict(points)
Explanation: The Voronoi diagram builds a neighbourhood for each point; one can try to bring these neighbourhoods as close as possible to a set of equilateral triangles, or push a table as close as possible to its farthest boundary. After a few attempts, I cannot say this was the best idea.
Q10 - KMeans
Another idea is to cover the room with points and then run a KMeans to artificially create zones.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax[0].add_artist(c)
ax[0].set_xlim([-R, R])
ax[0].set_ylim([-R, R])
ax[0].scatter(points[:, 0], points[:, 1], c=pred)
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax[1].add_artist(c)
ax[1].set_xlim([-R, R])
ax[1].set_ylim([-R, R])
ax[1].plot(km.cluster_centers_[:, 0], km.cluster_centers_[:, 1], 'o')
vor2 = Voronoi(km.cluster_centers_)
voronoi_plot_2d(vor2, ax=ax[1])
ax[1].set_title("Centres des clusters - KMeans")
ax[1].set_xlim([-R, R])
ax[1].set_ylim([-R, R]);
def distance_n_tables(distrib, R):
best = R ** 2
for i, table in enumerate(distrib):
proche, dist = table_proches(table[0], table[1], distrib, R, skip_i=i)
if dist < best:
best = dist
return best
distance_n_tables(km.cluster_centers_, 1), distance_n_tables(best_sol, 1)
Explanation: The cluster centers are the locations of the tables we are looking for.
End of explanation
from sklearn.mixture import GaussianMixture
gau = GaussianMixture(11)
gau.fit(points)
pred = gau.predict(points)
fig, ax = plt.subplots(1, 2, figsize=(10, 4))
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax[0].add_artist(c)
ax[0].set_xlim([-R, R])
ax[0].set_ylim([-R, R])
ax[0].scatter(points[:, 0], points[:, 1], c=pred)
c = Rectangle((-1, -1), 2, 2, alpha=0.2, fill=True, facecolor='blue')
ax[1].add_artist(c)
ax[1].set_xlim([-R, R])
ax[1].set_ylim([-R, R])
ax[1].plot(gau.means_[:, 0], gau.means_[:, 1], 'o')
vor2 = Voronoi(gau.means_)
voronoi_plot_2d(vor2, ax=ax[1])
ax[1].set_title("Centres des clusters - gaussian mixture")
ax[1].set_xlim([-R, R])
ax[1].set_ylim([-R, R]);
distance_n_tables(km.cluster_centers_, 1), distance_n_tables(gau.means_, 1)
Explanation: We try a mixture of Gaussian distributions.
End of explanation |
4,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Raw Data Thresholding Exploration
A slice at z=0 for reference
Step1: Our Previous Method
Step2: This result initially seemed reasonable. But, as a sanity check, we naively thresholded these clusters by volume (throwing out all clusters with volume above 200) to understand how clustering was occurring.
Step3: What we found was that, after thresholding by volume naively, there were very few clusters in the most concentrated areas. Further investigation showed that, before naive thresholding, there were 33210 clusters. After naive thresholding, there were 33177. This means that using Otsu's + Connected Components yielded 33 clusters that were massive. Most notably, the majority of the most concentrated strip (where most synapses are likely to be found) was grouped into a couple big clusters. Thus, we needed to find a more appropriate method for thresholding the raw data so we could also evaluate the clusters along that concentrated strip.
Adaptive Thresholding
We first attempted adaptive thresholding, as it allows localized thresholding - this is good because average intensity greatly varies across the z axis. Adaptive thresholding works by calculating the mean of a blockSize x blockSize x blockSize neighborhood, subtracting a C-value from that mean, and thresholding all voxels in the neighborhood below that value. This seemed like it would work. However, we found that the results weren't as promising as we'd hoped. Such results are shown below.
Step4: We found that a blocksize of 81 optimized the number of clusters below 200-volume. Thus, we also tried varying the subtracted value (called "C") from the voxels in each window.
Step5: Thus, we found that the best combination of hyperparameters was blockSize = 81, C=0. But even with these hyperparameters, the number of clusters below 200-volume was too low (2300 vs expected ~tens of thousands). Thus, we decided to explore binary thresholding.
Binary Thresholding
The initial concern with using binary thresholding is that the average intensity of the base slices (around z=0) is 4x that of the top slices (around z=280). An implementation of binary thresholding that uses a single value for the entire volume would either throw out almost the entire top half of the 3D image (if we used a very restrictive, high value for the hyperparameter) or wouldn't threshold enough of the entire bottom half of the 3D image (if we used a low value for the hyperparameter) and would result in most of the bottom half being grouped together into one cluster.
To fix this issue, we decided to implement our own 3-dimensional binary thresholding method that locally thresholds within each slice based on percentile. Such an implementation is shown below
Step6: We decided to try out many different hyperparameters for the percentile value to find which one gave the largest number of clusters and the average volume closest to ~54. The results are shown below
Step7: Analysis of Binary Thresholding Results
Our implementation of binary thresholding at the 90th percentile yielded the most desirable results. It produced the most clusters below 200-volume, contained a significant number of clusters along the concentrated strip, and yielded clusters with relatively high volume.
Further investigation with percentiles neighboring 90 showed that the 90th percentile yielded the best results. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import pickle
import sys
sys.path.insert(0,'../code/functions/')
import connectLib as cLib
import plosLib as pLib
import mouseVis as mv
import tiffIO as tIO
data0 = tIO.unzipChannels(tIO.loadTiff('../data/SEP-GluA1-KI_tp1.tif'))[0][5:10]
plt.imshow(data0[0], cmap='gray')
plt.show()
Explanation: Raw Data Thresholding Exploration
A slice at z=0 for reference
End of explanation
data0OtsuThresh = cLib.otsuVox(data0)
plt.imshow(data0OtsuThresh[0])
plt.title("Visualization of Slice 0 After Otsu's Binarization")
plt.show()
clusters = cLib.connectedComponents(data0OtsuThresh)
volumeList = np.zeros((len(clusters)))
print 'Analysis of Otsu-Thresholded Clusters'
for cluster in range(len(clusters)):
volumeList[cluster] = clusters[cluster].getVolume()
print '\tnumber of clusters: ' + str(len(volumeList))
print '\taverage volume: ' + str(np.average(volumeList))
Explanation: Our Previous Method: Otsu's Binarization
With our previous pipeline, we found that we were only yielding 2 clusters in total across the entire volume. After some investigation, we found that the error was occurring in our thresholding method for the raw data. We were using Otsu's Binarization, and it was grouping a large portion of our data together into a couple massive clusters. This was an issue.
This is what resulted from Otsu's Binarization.
End of explanation
naiveThreshClusters = cLib.thresholdByVolumeNaive(clusters, limit=200)
displayIm = np.zeros_like(data0)
clusterMembersList =[]
for cluster in range(len(naiveThreshClusters)):
clusterMembersList.extend(naiveThreshClusters[cluster].members)
for index in range(len(clusterMembersList)):
x, y, z = clusterMembersList[index]
displayIm[x][y][z] = 100
plt.imshow(displayIm[0], cmap = 'gray')
plt.show()
volumeList = np.zeros((len(naiveThreshClusters)))
print "Analysis of Naively Thresholded Clusters Using Otsu's Binarization"
for cluster in range(len(naiveThreshClusters)):
volumeList[cluster] = naiveThreshClusters[cluster].getVolume()
print '\tnumber of clusters below 200-volume: ' + str(len(volumeList))
print '\taverage volume of clusters below 200-volume: ' + str(np.average(volumeList))
Explanation: This result initially seemed reasonable. But, as a sanity check, we naively thresholded these clusters by volume (throwing out all clusters with volume above 200) to understand how clustering was occurring.
End of explanation
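cLib.thresholdByVolumeNaive is not shown in this notebook; given the description above (drop every cluster whose volume exceeds the limit), a hypothetical stand-in with the same behaviour would be:
def threshold_by_volume_naive(clusters, limit=200):
    # Keep only clusters whose volume is at or below the limit (sketch, not the actual cLib code).
    return [c for c in clusters if c.getVolume() <= limit]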
from connectLib import adaptiveThreshold
for i in range(9):
print 'blocksize: ' + str(10*(i + 1) + 1)
data0AdaptiveThresh = adaptiveThreshold(data0, 10*(i + 1) + 1, 5)
clusters = cLib.connectedComponents(data0AdaptiveThresh)
naiveThreshClusters = cLib.thresholdByVolumeNaive(clusters, limit=200)
displayIm = np.zeros_like(data0)
clusterMembersList =[]
for cluster in range(len(naiveThreshClusters)):
clusterMembersList.extend(naiveThreshClusters[cluster].members)
for index in range(len(clusterMembersList)):
x, y, z = clusterMembersList[index]
displayIm[x][y][z] = 100
plt.imshow(displayIm[0], cmap = 'gray')
plt.show()
volumeList = np.zeros((len(naiveThreshClusters)))
for cluster in range(len(naiveThreshClusters)):
volumeList[cluster] = naiveThreshClusters[cluster].getVolume()
print '\tnumber of clusters below 200-volume: ' + str(len(volumeList))
print '\taverage volume of clusters below 200-volume: ' + str(np.average(volumeList))
Explanation: What we found was that, after thresholding by volume naively, there were very few clusters in the most concentrated areas. Further investigation showed that, before naive thresholding, there were 33210 clusters. After naive thresholding, there were 33177. This means that using Otsu's + Connected Components yielded 33 clusters that were massive. Most notably, the majority of the most concentrated strip (where most synapses are likely to be found) was grouped into a couple big clusters. Thus, we needed to find a more appropriate method for thresholding the raw data so we could also evaluate the clusters along that concentrated strip.
Adaptive Thresholding
We first attempted adaptive thresholding, as it allows localized thresholding - this is good because average intensity greatly varies across the z axis. Adaptive thresholding works by calculating the mean of a blockSize x blockSize x blockSize neighborhood, subtracting a C-value from that mean, and thresholding all voxels in the neighborhood below that value. This seemed like it would work. However, we found that the results weren't as promising as we'd hoped. Such results are shown below.
End of explanation
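connectLib's adaptiveThreshold is imported above but not shown; purely as an illustration of the mechanism just described (local mean minus C), a per-slice version built on OpenCV might look like the following — an assumption about the implementation, not the library's actual code:
import cv2
import numpy as np

def adaptive_threshold_sketch(vol, block_size, c):
    # Apply 2D adaptive mean thresholding slice by slice (illustrative only).
    out = np.zeros(vol.shape, dtype=np.uint8)
    for z in range(vol.shape[0]):
        slice8 = cv2.normalize(vol[z], None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        out[z] = cv2.adaptiveThreshold(slice8, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                       cv2.THRESH_BINARY, block_size, c)
    return out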
for i in range(4):
print 'C-value: ' + str(i)
data0AdaptiveThresh = adaptiveThreshold(data0, 81, i)
clusters = cLib.connectedComponents(data0AdaptiveThresh)
naiveThreshClusters = cLib.thresholdByVolumeNaive(clusters, limit=200)
displayIm = np.zeros_like(data0)
clusterMembersList =[]
for cluster in range(len(naiveThreshClusters)):
clusterMembersList.extend(naiveThreshClusters[cluster].members)
for index in range(len(clusterMembersList)):
x, y, z = clusterMembersList[index]
displayIm[x][y][z] = 100
plt.imshow(displayIm[0], cmap = 'gray')
plt.show()
volumeList = np.zeros((len(naiveThreshClusters)))
for cluster in range(len(naiveThreshClusters)):
volumeList[cluster] = naiveThreshClusters[cluster].getVolume()
print '\tnumber of clusters below 200-volume: ' + str(len(volumeList))
print '\taverage volume of clusters below 200-volume: ' + str(np.average(volumeList))
Explanation: We found that a blocksize of 81 optimized the number of clusters below 200-volume. Thus, we also tried varying the subtracted value (called "C") from the voxels in each window.
End of explanation
import cv2
def binaryThreshold(img, perc):
img = (img/256).astype('uint8')
threshImg = np.zeros_like(img)
percentile = np.percentile(img, perc)
for i in range(len(img)):
threshImg[i] = cv2.threshold(img[i], percentile, 255, cv2.THRESH_BINARY)[1]
return threshImg
Explanation: Thus, we found that the best combination of hyperparameters was blockSize = 81, C=0. But even with these hyperparameters, the number of clusters below 200-volume was too low (2300 vs expected ~tens of thousands). Thus, we decided to explore binary thresholding.
Binary Thresholding
The initial concern with using binary thresholding is that the average intensity of the base slices (around z=0) is 4x that of the top slices (around z=280). An implementation of binary thresholding that uses a single value for the entire volume would either throw out almost the entire top half of the 3D image (if we used a very restrictive, high value for the hyperparameter) or wouldn't threshold enough of the entire bottom half of the 3D image (if we used a low value for the hyperparameter) and would result in most of the bottom half being grouped together into one cluster.
To fix this issue, we decided to implement our own 3-dimensional binary thresholding method that locally thresholds within each slice based on percentile. Such an implementation is shown below:
End of explanation
for i in range(5):
print 'percentile: ' + str(75 + 5*i)
data0AdaptiveThresh = binaryThreshold(data0, 75 + 5*i)
clusters = cLib.connectedComponents(data0AdaptiveThresh)
naiveThreshClusters = cLib.thresholdByVolumeNaive(clusters, limit=200)
displayIm = np.zeros_like(data0)
clusterMembersList =[]
for cluster in range(len(naiveThreshClusters)):
clusterMembersList.extend(naiveThreshClusters[cluster].members)
for index in range(len(clusterMembersList)):
x, y, z = clusterMembersList[index]
displayIm[x][y][z] = 100
plt.imshow(displayIm[0], cmap = 'gray')
plt.show()
volumeList = np.zeros((len(naiveThreshClusters)))
for cluster in range(len(naiveThreshClusters)):
volumeList[cluster] = naiveThreshClusters[cluster].getVolume()
print '\tnumber of clusters below 200-volume: ' + str(len(volumeList))
print '\taverage volume of clusters below 200-volume: ' + str(np.average(volumeList))
Explanation: We decided to try out many different hyperparameters for the percentile value to find which one gave the largest number of clusters and the average volume closest to ~54. The results are shown below:
End of explanation
for i in range(3):
percentile = 89 + i
print 'percentile: ' + str(percentile)
data0AdaptiveThresh = binaryThreshold(data0, percentile)
clusters = cLib.connectedComponents(data0AdaptiveThresh)
naiveThreshClusters = cLib.thresholdByVolumeNaive(clusters, limit=200)
displayIm = np.zeros_like(data0)
clusterMembersList =[]
for cluster in range(len(naiveThreshClusters)):
clusterMembersList.extend(naiveThreshClusters[cluster].members)
for index in range(len(clusterMembersList)):
x, y, z = clusterMembersList[index]
displayIm[x][y][z] = 100
plt.imshow(displayIm[0], cmap = 'gray')
plt.show()
volumeList = np.zeros((len(naiveThreshClusters)))
for cluster in range(len(naiveThreshClusters)):
volumeList[cluster] = naiveThreshClusters[cluster].getVolume()
print '\tnumber of clusters below 200-volume: ' + str(len(volumeList))
print '\taverage volume of clusters below 200-volume: ' + str(np.average(volumeList))
Explanation: Analysis of Binary Thresholding Results
Our implementation of binary thresholding at the 90th percentile yielded the most desirable results. It produced the most clusters below 200-volume, contained a significant number of clusters along the concentrated strip, and yielded clusters with relatively high volume.
Further investigation with percentiles neighboring 90 showed that the 90th percentile yielded the best results.
End of explanation |
4,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Verifying Central Limit Theorem in regression
Step1: Synthesize the dataset
Create 1000 random integers between 0, 100 for X and create y such that
$$
y = \beta_{0} + \beta_{1}X + \epsilon
$$
where
$$
\beta_{0} = 30 \ and \ \beta_{1} = 1.8 \ and \ \epsilon \ = \ standard \ normal \ error
$$
Step2: Make a scatter plot of X and y variables.
Step3: X and y follow uniform distribution, but the error $\epsilon$ is generated from standard normal distribution with a boosting factor. Let us plot its histogram to verify the distribution
Step4: Predict using population
Let us predict the coefficients and intercept when using the whole dataset. We will compare this approach with CLT approach of breaking into multiple subsets and averaging the coefficients and intercepts
Using whole population
Step5: Prediction with 66% of data
Step6: Perform predictions and plot the charts
Step7: Fitted vs Actual scatter
Step8: Predict using multiple samples
Step9: Select 50 samples of size 200 and perform regression
Step10: Plot the distribution of sample slopes and intercepts
Step11: Conclusion
Here we compare the coefficients and intercepts obtained by different methods to see how CLT adds up. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: Verifying Central Limit Theorem in regression
End of explanation
rand_1kx = np.random.randint(0,100,1000)
x_mean = np.mean(rand_1kx)
x_sd = np.std(rand_1kx)
x_mean
pop_intercept = 30
pop_slope = 1.8
error_boost = 10
pop_error = np.random.standard_normal(size = rand_1kx.size) * error_boost
# I added an error booster since without it, the correlation was too high.
y = pop_intercept + pop_slope*rand_1kx + pop_error
y_mean = np.mean(y)
y_sd = np.std(y)
y_mean
Explanation: Synthesize the dataset
Create 1000 random integers between 0, 100 for X and create y such that
$$
y = \beta_{0} + \beta_{1}X + \epsilon
$$
where
$$
\beta_{0} = 30, \quad \beta_{1} = 1.8, \quad \epsilon = \text{standard normal error}
$$
End of explanation
sns.jointplot(rand_1kx, y)
Explanation: Make a scatter plot of X and y variables.
End of explanation
sns.distplot(pop_error)
Explanation: X and y follow uniform distribution, but the error $\epsilon$ is generated from standard normal distribution with a boosting factor. Let us plot its histogram to verify the distribution
End of explanation
from sklearn.linear_model import LinearRegression
X_train_full = rand_1kx.reshape(-1,1)
y_train_full = y.reshape(-1,1)
y_train_full.shape
lm = LinearRegression()
lm.fit(X_train_full, y_train_full)
#print the linear model built
predicted_pop_slope = lm.coef_[0][0]
predicted_pop_intercept = lm.intercept_[0]
print("y = " + str(predicted_pop_slope) + "*X" + " + " + str(predicted_pop_intercept))
Explanation: Predict using population
Let us predict the coefficients and intercept when using the whole dataset. We will compare this approach with the CLT approach of breaking the data into multiple subsets and averaging the coefficients and intercepts.
Using whole population
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(rand_1kx, y, test_size=0.33)
print(X_train.size)
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
X_train = X_train.reshape(-1,1)
X_test = X_test.reshape(-1,1)
y_train = y_train.reshape(-1,1)
y_test = y_test.reshape(-1,1)
y_train.shape
lm.fit(X_train, y_train)
#print the linear model built
predicted_subset_slope = lm.coef_[0][0]
predicted_subset_intercept = lm.intercept_[0]
print("y = " + str(predicted_subset_slope) + "*X"
+ " + " + str(predicted_subset_intercept))
Explanation: Prediction with 66% of data
End of explanation
y_predicted = lm.predict(X_test)
residuals = y_test - y_predicted
Explanation: Perform predictions and plot the charts
End of explanation
jax = sns.jointplot(y_test, y_predicted)
jax.set_axis_labels(xlabel='Y', ylabel='Predicted Y')
dax = sns.distplot(residuals)
dax.set_title('Distribution of residuals')
jax = sns.jointplot(y_predicted, residuals)
jax.set_axis_labels(xlabel='Predicted Y', ylabel='Residuals')
jax = sns.jointplot(y_test, residuals)
jax.set_axis_labels(xlabel='Y', ylabel='Residuals')
Explanation: Fitted vs Actual scatter
End of explanation
pop_df = pd.DataFrame(data={'x':rand_1kx, 'y':y})
pop_df.head()
pop_df.shape
Explanation: Predict using multiple samples
End of explanation
sample_slopes = []
sample_intercepts = []
for i in range(0,50):
# perform a choice on dataframe index
sample_index = np.random.choice(pop_df.index, size=50)
# select the subset using that index
sample_df = pop_df.iloc[sample_index]
# convert to numpy and reshape the matrix for lm.fit
sample_x = np.array(sample_df['x']).reshape(-1,1)
sample_y = np.array(sample_df['y']).reshape(-1,1)
lm.fit(X=sample_x, y=sample_y)
sample_slopes.append(lm.coef_[0][0])
sample_intercepts.append(lm.intercept_[0])
Explanation: Select 50 samples of size 50 and perform regression
End of explanation
mean_sample_slope = np.mean(sample_slopes)
mean_sample_intercept = np.mean(sample_intercepts)
fig, ax = plt.subplots(1,2, figsize=(15,6))
# plot sample slopes
sns.distplot(sample_slopes, ax=ax[0])
ax[0].set_title('Distribution of sample slopes. Mean: '
+ str(round(mean_sample_slope, 2)))
ax[0].axvline(mean_sample_slope, color='black')
# plot sample slopes
sns.distplot(sample_intercepts, ax=ax[1])
ax[1].set_title('Distribution of sample intercepts. Mean: '
+ str(round(mean_sample_intercept,2)))
ax[1].axvline(mean_sample_intercept, color='black')
Explanation: Plot the distribution of sample slopes and intercepts
End of explanation
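As a rough sanity check (assuming the quantities defined above: error_boost as the noise standard deviation, x_sd for X, and 50 points per sample), the spread of the sample slopes should be close to the theoretical standard error of the OLS slope:
# Theoretical SE of the slope for one sample of n points: sigma_eps / (sigma_x * sqrt(n))
n_per_sample = 50
theoretical_se = error_boost / (x_sd * np.sqrt(n_per_sample))
empirical_se = np.std(sample_slopes)
print("theoretical SE of slope: %.4f, empirical std of sample slopes: %.4f"
      % (theoretical_se, empirical_se))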
print("Predicting using population")
print("----------------------------")
print("Error in intercept: {}".format(pop_intercept - predicted_pop_intercept))
print("Error in slope: {}".format(pop_slope - predicted_pop_slope))
print("\n\nPredicting using subset")
print("----------------------------")
print("Error in intercept: {}".format(pop_intercept - predicted_subset_intercept))
print("Error in slope: {}".format(pop_slope - predicted_subset_slope))
print("\n\nPredicting using a number of smaller samples")
print("------------------------------------------------")
print("Error in intercept: {}".format(pop_intercept - mean_sample_intercept))
print("Error in slope: {}".format(pop_slope - mean_sample_slope))
Explanation: Conclusion
Here we compare the coefficients and intercepts obtained by different methods to see how CLT adds up.
End of explanation |
4,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook demonstrates how to use the classes in haloanalysis.model to perform likelihood scans of the IGMF parameter space. To start we construct the following
Step1: The CascLike class provides methods that can be used to evaluate the likelihood as a function of B-field parameters (p0 argument) or primary spectrum parameters (p1 argument). Here we perform a likelihood scan as a function of B-field strength for several values of the coherence length scale.
Step2: To derive constraints we need to profile the parameters of the primary spectrum at each point in the IGMF parameter space. Here we build a 2D grid in Lcoh and B for the likelihood scan. We fill an array with the cascade likelihood values by calling CascLike.fit at each point. This method finds the best-fit parameters of the primary spectrum given a set of IGMF parameters.
Step3: The following plot shows the negative delta-loglikelihood values for the scan. | Python Code:
%matplotlib inline
from astropy.table import Table
import matplotlib.pyplot as plt
import matplotlib.cm
from fermipy.spectrum import PLExpCutoff
from fermipy.castro import CastroData
from haloanalysis.model import make_prim_model, make_casc_model
from haloanalysis.model import CascModel, CascLike
from haloanalysis.utils import Axis, load_source_rows
from haloanalysis.sed import HaloSED, SED
import numpy as np
fn = PLExpCutoff([1E-13,-1.5,1E7],scale=1E3)
tab_tev = Table.read('../data/CompiledTeVSources.fits')
tab_tev = load_source_rows(tab_tev, ['1es0229+200'], key='SOURCE')
tab_casc = Table.read('1es0229_casc_sed.fits')
sed_prim = SED.create_from_row(tab_tev)
sed_casc = HaloSED.create_from_fits(tab_casc[0])
hmm = CascModel.create_from_fits('results.fits')
hl = CascLike(hmm, fn, sed_casc, sed_prim)
Explanation: This notebook demonstrates how to use the classes in haloanalysis.model to perform likelihood scans of the IGMF parameter space. To start we construct the following:
SED objects for the primary spectrum -- this potentially includes SEDs for both the TeV and GeV regimes
SED for the cascade spectrum -- these are derived by fitting halo templates of varying sizes to LAT data
A cascade model object -- this generates the predicted flux and angular size for a given choice of IGMF parameters and primary spectrum
A spectrum object for the primary spectrum parameterization
These objects are then used to instantiate a CascLike object which is responsible for computing the total model likelihood from the sum of the primary and cascade likelihood functions.
End of explanation
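In other words, the quantity evaluated below is the sum of the two terms, with p0 the IGMF (B-field) parameters and p1 the primary-spectrum parameters:
$$\ln L_{\mathrm{tot}}(p_0, p_1) = \ln L_{\mathrm{prim}}(p_1) + \ln L_{\mathrm{casc}}(p_0, p_1)$$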
lnl, p1 = hl.fit([0.0,-16.0],fn.params,method='SLSQP',casc_scale=1E-6)
igmf = np.linspace(-20.0,-12.0,30)
lnl0 = hl.lnl([-2.0,igmf],p1)
lnl1 = hl.lnl([0.0,igmf],p1)
lnl2 = hl.lnl([2.0,igmf],p1)
lnl_null = hl.lnl([2.0,igmf],p1,casc_scale=1E-6)
plt.figure()
plt.errorbar(sed_prim.ectr/1E6,
sed_prim.ectr*sed_prim.flux,
sed_prim.ectr*sed_prim.flux_err,
marker='o',linestyle='None')
plt.errorbar(sed_prim.ectr/1E6,
sed_prim.ectr*hmm.prim_flux(fn,[0.0,-16.0],p1,axis_eobs=hl._axis_prim))
plt.gca().set_yscale('log')
plt.gca().set_xscale('log')
plt.gca().set_xlabel('Energy [TeV]')
plt.figure()
plt.plot(igmf,lnl0,label='Lcoh = 0.01 Mpc')
plt.plot(igmf,lnl1,label='Lcoh = 1 Mpc')
plt.plot(igmf,lnl2,label='Lcoh = 100 Mpc')
plt.plot(igmf,lnl_null,label='No Cascade',color='k')
plt.gca().legend(frameon=False,loc='upper left')
plt.gca().set_ylabel('-Delta-LnL')
plt.gca().set_ylim(0,100)
Explanation: The CascLike class provides methods that can be used to evaluate the likelihood as a function of B-field parameters (p0 argument) or primary spectrum parameters (p1 argument). Here we perform a likelihood scan as a function of B-field strength for several values of the coherence length scale.
End of explanation
nstep = 11
lcoh_scan = np.linspace(-4,4,nstep)
igmf_scan = np.linspace(-20,-12,nstep)
bpars = np.meshgrid(lcoh_scan, igmf_scan)
model_lnl = np.zeros(bpars[0].shape)*np.nan
p1 = fn.params
for idx, x in np.ndenumerate(bpars[0]):
p0 = [bpars[0][idx], bpars[1][idx]]
lnl, p1 = hl.fit(p0,p1,method='SLSQP')
model_lnl[idx] = lnl
Explanation: To derive constraints we need to profile the parameters of the primary spectrum at each point in the IGMF parameter space. Here we build a 2D grid in Lcoh and B for the likelihood scan. We fill an array with the cascade likelihood values by calling CascLike.fit at each point. This method finds the best-fit parameters of the primary spectrum given a set of IGMF parameters.
End of explanation
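Once the grid is filled, one way (a sketch, assuming model_lnl holds a negative log-likelihood, as the '-Delta-LnL' axis labels above suggest) to turn it into approximate confidence contours is to apply the usual two-parameter Wilks thresholds:
delta = model_lnl - np.nanmin(model_lnl)
plt.figure()
plt.pcolormesh(bpars[0], bpars[1], delta)
plt.colorbar(label='Delta(-LnL)')
# 2*Delta(-lnL) thresholds for 2 free parameters: ~2.30 (68%) and ~5.99 (95%)
cs = plt.contour(bpars[0], bpars[1], 2.0 * delta, levels=[2.30, 5.99], colors=['w', 'k'])
plt.clabel(cs, fmt={2.30: '68%', 5.99: '95%'})
plt.gca().set_xlabel('log10(Lcoh/Mpc)')
plt.gca().set_ylabel('log10(B/G)')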
plt.figure()
plt.pcolormesh(bpars[0],bpars[1],model_lnl)
plt.gca().set_xlabel('log10(Lcoh/Mpc)')
plt.gca().set_ylabel('log10(B/G)')
plt.colorbar()
Explanation: The following plot shows the negative delta-loglikelihood values for the scan.
End of explanation |
4,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: 1. Collect Wikipedia data about Olympic Games 2020
The idea of this project is to create a question-answering model, based on a few paragraphs of provided text. Base GPT-3 models do a good job at answering questions when the answer is contained within the paragraph; however, if the answer isn't contained, the base models tend to try their best to answer anyway, often leading to confabulated answers.
To create a model which answers questions only if there is sufficient context for doing so, we first create a dataset of questions and answers based on paragraphs of text. In order to train the model to answer only when the answer is present, we also add adversarial examples, where the question doesn't match the context. In those cases, we ask the model to output "No sufficient context for answering the question".
We will perform this task in three notebooks
Step7: 1.2 Filtering the Wikipedia pages and splitting them into sections by headings
We remove sections unlikely to contain textual information, and ensure that each section is not longer than the token limit
Step8: 1.2.1 We create a dataset and filter out any sections with fewer than 40 tokens, as those are unlikely to contain enough context to ask a good question.
Step9: Save the section dataset
We will save the section dataset, for the next notebook
Step10: 1.3 (Optional) Exploring the data
Step11: There appear to be winter and summer Olympics 2020. We chose to leave a little ambiguity and noise in the dataset, even though we were interested in only Summer Olympics 2020. | Python Code:
import pandas as pd
import wikipedia
def filter_olympic_2020_titles(titles):
Get the titles which are related to Olympic games hosted in 2020, given a list of titles
titles = [title for title in titles if '2020' in title and 'olympi' in title.lower()]
return titles
def get_wiki_page(title):
Get the wikipedia page given a title
try:
return wikipedia.page(title)
except wikipedia.exceptions.DisambiguationError as e:
return wikipedia.page(e.options[0])
except wikipedia.exceptions.PageError as e:
return None
def recursively_find_all_pages(titles, titles_so_far=set()):
Recursively find all the pages that are linked to the Wikipedia titles in the list
all_pages = []
titles = list(set(titles) - titles_so_far)
titles = filter_olympic_2020_titles(titles)
titles_so_far.update(titles)
for title in titles:
page = get_wiki_page(title)
if page is None:
continue
all_pages.append(page)
new_pages = recursively_find_all_pages(page.links, titles_so_far)
for pg in new_pages:
if pg.title not in [p.title for p in all_pages]:
all_pages.append(pg)
titles_so_far.update(page.links)
return all_pages
pages = recursively_find_all_pages(["2020 Summer Olympics"])
len(pages)
Explanation: 1. Collect Wikipedia data about Olympic Games 2020
The idea of this project is to create a question-answering model, based on a few paragraphs of provided text. Base GPT-3 models do a good job at answering questions when the answer is contained within the paragraph; however, if the answer isn't contained, the base models tend to try their best to answer anyway, often leading to confabulated answers.
To create a model which answers questions only if there is sufficient context for doing so, we first create a dataset of questions and answers based on paragraphs of text. In order to train the model to answer only when the answer is present, we also add adversarial examples, where the question doesn't match the context. In those cases, we ask the model to output "No sufficient context for answering the question".
We will perform this task in three notebooks:
1. The first (this) notebook focuses on collecting recent data, which GPT-3 didn't see during its pre-training. We picked the topic of Olympic Games 2020 (which actually took place in the summer of 2021), and downloaded 713 unique pages. We organized the dataset by individual sections, which will serve as context for asking and answering the questions.
2. The second notebook will utilize Davinci-instruct to ask a few questions based on a Wikipedia section, as well as answer those questions, based on that section.
3. The third notebook will utilize the dataset of context, question and answer pairs to additionally create adversarial questions and context pairs, where the question was not generated on that context. In those cases the model will be prompted to answer "No sufficient context for answering the question". We will also train a discriminator model, which predicts whether the question can be answered based on the context or not.
1.1 Data extraction using the wikipedia API
Extracting the data will take about half an hour, and processing will likely take about as much.
End of explanation
import re
from typing import Set
from transformers import GPT2TokenizerFast
import numpy as np
from nltk.tokenize import sent_tokenize
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
def count_tokens(text: str) -> int:
count the number of tokens in a string
return len(tokenizer.encode(text))
def reduce_long(
long_text: str, long_text_tokens: bool = False, max_len: int = 590
) -> str:
Reduce a long text to a maximum of `max_len` tokens by potentially cutting at a sentence end
if not long_text_tokens:
long_text_tokens = count_tokens(long_text)
if long_text_tokens > max_len:
sentences = sent_tokenize(long_text.replace("\n", " "))
ntokens = 0
for i, sentence in enumerate(sentences):
ntokens += 1 + count_tokens(sentence)
if ntokens > max_len:
return ". ".join(sentences[:i][:-1]) + "."
return long_text
discard_categories = ['See also', 'References', 'External links', 'Further reading', "Footnotes",
"Bibliography", "Sources", "Citations", "Literature", "Footnotes", "Notes and references",
"Photo gallery", "Works cited", "Photos", "Gallery", "Notes", "References and sources",
"References and notes",]
def extract_sections(
wiki_text: str,
title: str,
max_len: int = 1500,
discard_categories: Set[str] = discard_categories,
) -> str:
    Extract the sections of a Wikipedia page, discarding the references and other low-information sections
if len(wiki_text) == 0:
return []
    # find all headings and the corresponding contents
headings = re.findall("==+ .* ==+", wiki_text)
for heading in headings:
wiki_text = wiki_text.replace(heading, "==+ !! ==+")
contents = wiki_text.split("==+ !! ==+")
contents = [c.strip() for c in contents]
assert len(headings) == len(contents) - 1
cont = contents.pop(0).strip()
outputs = [(title, "Summary", cont, count_tokens(cont)+4)]
# discard the discard categories, accounting for a tree structure
max_level = 100
keep_group_level = max_level
remove_group_level = max_level
nheadings, ncontents = [], []
for heading, content in zip(headings, contents):
plain_heading = " ".join(heading.split(" ")[1:-1])
num_equals = len(heading.split(" ")[0])
if num_equals <= keep_group_level:
keep_group_level = max_level
if num_equals > remove_group_level:
if (
num_equals <= keep_group_level
):
continue
keep_group_level = max_level
if plain_heading in discard_categories:
remove_group_level = num_equals
keep_group_level = max_level
continue
nheadings.append(heading.replace("=", "").strip())
ncontents.append(content)
remove_group_level = max_level
# count the tokens of each section
ncontent_ntokens = [
count_tokens(c)
+ 3
+ count_tokens(" ".join(h.split(" ")[1:-1]))
- (1 if len(c) == 0 else 0)
for h, c in zip(nheadings, ncontents)
]
# Create a tuple of (title, section_name, content, number of tokens)
outputs += [(title, h, c, t) if t<max_len
else (title, h, reduce_long(c, max_len), count_tokens(reduce_long(c,max_len)))
for h, c, t in zip(nheadings, ncontents, ncontent_ntokens)]
return outputs
# Example page being processed into sections
bermuda_page = get_wiki_page('Bermuda at the 2020 Summer Olympics')
ber = extract_sections(bermuda_page.content, bermuda_page.title)
# Example section
ber[-1]
Explanation: 1.2 Filtering the Wikipedia pages and splitting them into sections by headings
We remove sections unlikely to contain textual information, and ensure that each section is not longer than the token limit
End of explanation
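A quick illustration of the two helpers defined above (note that sent_tokenize requires the NLTK 'punkt' data to be available):
sample_text = "The 2020 Summer Olympics were postponed to 2021. " * 200
print(count_tokens(sample_text))                   # token count of the raw text
shortened = reduce_long(sample_text, max_len=100)  # truncated at a sentence boundary
print(count_tokens(shortened))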
res = []
for page in pages:
res += extract_sections(page.content, page.title)
df = pd.DataFrame(res, columns=["title", "heading", "content", "tokens"])
df = df[df.tokens>40]
df = df.drop_duplicates(['title','heading'])
df = df.reset_index().drop('index',axis=1) # reset index
df.head()
Explanation: 1.2.1 We create a dataset and filter out any sections with fewer than 40 tokens, as those are unlikely to contain enough context to ask a good question.
End of explanation
df.to_csv('olympics-data/olympics_sections.csv', index=False)
Explanation: Save the section dataset
We will save the section dataset, for the next notebook
End of explanation
df.title.value_counts().head()
Explanation: 1.3 (Optional) Exploring the data
End of explanation
df.title.str.contains('Summer').value_counts()
df.title.str.contains('Winter').value_counts()
import pandas as pd
from matplotlib import pyplot as plt
df = pd.read_csv('olympics-data/olympics_sections.csv')
df[['tokens']].hist()
# add axis descriptions and title
plt.xlabel('Number of tokens')
plt.ylabel('Number of Wikipedia sections')
plt.title('Distribution of number of tokens in Wikipedia sections')
plt.show()
Explanation: There appear to be winter and summer Olympics 2020. We chose to leave a little ambiguity and noise in the dataset, even though we were interested in only Summer Olympics 2020.
End of explanation |
4,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SNR Benefits of Non-Uniform Scalar Quantization
This code is provided as supplementary material of the lecture Quellencodierung.
This code illustrates
* Uniform scalar quantization of audio files with midrise and midtread characteristic
* Non-Uniform quantization with $\mu$-law characteristic
* Illustration of the segmental SNR benefits and constant SNR (almost) independent of the signal amplitude
Step1: Load and display wave file
Step2: Compute segmental SNR (with $n_\mathsf{s} = 256$)
$$
\textrm{segSNR}[k]\bigg|_{\mathrm{dB}} = 10\log_{10}\left(\frac{\sum_{i=1}^{n_\mathsf{s}}(x[(k-1)n_{\mathsf{s}}+i])^2}{\sum_{i=1}^{n_\mathsf{s}}(x[(k-1)n_{\mathsf{s}}+i]-\hat{x}[(k-1)n_{\mathsf{s}}+i])^2}\right)
$$
Step3: Also carry out quantization using uniform midtread quantizer
Step4: Non-Uniform Quantization with $\mu$-law characteristic | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import librosa
import librosa.display
import IPython.display as ipd
Explanation: SNR Benefits of Non-Uniform Scalar Quantization
This code is provided as supplementary material of the lecture Quellencodierung.
This code illustrates
* Uniform scalar quantization of audio files with midrise and midtread characteristic
* Non-Uniform quantization with $\mu$-law characteristic
* Illustration of the segmental SNR benefits and constant SNR (almost) independent of the signal amplitude
End of explanation
wave_filename = '../audio/33711__acclivity__excessiveexposure.wav'
# wave_filename = '../audio/E-Core - Pingouin-Banquise_45s.wav'
# wave_filename = '../audio/KIMIKO ISHIZAKA - Goldberg Variations BWV 988 - 01 - Aria_45s.wav'
x, sampling_rate = librosa.load(wave_filename)
# Sample to 5 bit ... 32 quantization levels
w = 5
# fix x_max based on the current signal, leave some room
x_max = np.max([np.max(x), -np.min(x)])
Delta_x = x_max / (2**(w-1))
xh_uniform_midrise = np.sign(x)*Delta_x*(np.floor(np.abs(x)/Delta_x)+0.5)
Explanation: Load and display wave file
End of explanation
# Compute segmental SNR
# number of samples used for segmentation
seg_len = 256
segments = int(np.floor(len(x)/seg_len))
x_seg = np.reshape(x[0:segments*seg_len],(segments,seg_len))
xh_seg = np.reshape(xh_uniform_midrise[0:segments*seg_len],(segments,seg_len))
snr_uniform_midrise_seg = 10*np.log10(np.mean(np.square(x_seg),axis=1) / np.mean(np.square(xh_seg - x_seg),axis=1))
Explanation: Compute segmental SNR (with $n_\mathsf{s} = 256$)
$$
\textrm{segSNR}[k]\bigg|_{\mathrm{dB}} = 10\log_{10}\left(\frac{\sum_{i=1}^{n_\mathsf{s}}(x[(k-1)n_{\mathsf{s}}+i])^2}{\sum_{i=1}^{n_\mathsf{s}}(x[(k-1)n_{\mathsf{s}}+i]-\hat{x}[(k-1)n_{\mathsf{s}}+i])^2}\right)
$$
End of explanation
# fix x_max based on the current signal, leave some room
x_max = np.max(x)
Delta_x = x_max / (2**(w-1))
xh_uniform_midtread = np.sign(x)*Delta_x*np.floor(np.abs(x)/Delta_x+0.5)
# saturate
xh_max = (2**(w-1)*Delta_x - Delta_x)
xh_min = -(xh_max + Delta_x)
xh_uniform_midtread[xh_uniform_midtread >= xh_max] = xh_max
xh_uniform_midtread[xh_uniform_midtread <= xh_min] = xh_min
# Compute segmental SNR
x_seg = np.reshape(x[0:segments*seg_len],(segments,seg_len))
xh_seg = np.reshape(xh_uniform_midtread[0:segments*seg_len],(segments,seg_len))
snr_uniform_midtread_seg = 10*np.log10(np.mean(np.square(x_seg),axis=1) / np.mean(np.square(xh_seg - x_seg),axis=1))
Explanation: Also carry out quantization using uniform midtread quantizer
End of explanation
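The non-uniform quantizer below relies on $\mu$-law companding; the compressor/expander pair implemented next (with $\mu = 255$) is
$$y = \mathrm{sgn}(x)\,\frac{\ln\left(1+\mu |x|\right)}{\ln(1+\mu)}, \qquad x = \mathrm{sgn}(y)\,\frac{(1+\mu)^{|y|}-1}{\mu}.$$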
def uLaw(x):
mu = 255
y = np.array([np.sign(t)*np.log(1+mu*np.abs(t))/np.log(1+mu) for t in x])
return y
def uLaw_inv(y):
mu = 255
x = np.array([np.sign(t)/mu*((1+mu)**(np.abs(t))-1) for t in y])
return x
# apply mu-law compression. First normalize signal to the range [-1,1] and then un-normalize
x_ulaw = uLaw(np.array(x)/x_max)
# quantize (attention, now input signal is in the range [-1,+1], i.e., x_max = 1)
Delta_x = 1 / (2**(w-1))
quantized = np.sign(x_ulaw)*Delta_x*(np.floor(np.abs(x_ulaw)/Delta_x)+0.5)
# apply inverse mu-law compression
xh_nonuniform = uLaw_inv(np.array(quantized))*x_max
# Compute segmental SNR
x_seg = np.reshape(x[0:segments*seg_len],(segments,seg_len))
xh_seg = np.reshape(xh_nonuniform[0:segments*seg_len],(segments,seg_len))
snr_nonuniform_seg = 10*np.log10(np.mean(np.square(x_seg),axis=1) / np.mean(np.square(xh_seg - x_seg),axis=1))
# plot segmental snr
font = {'size' : 18}
plt.rc('font', **font)
plt.rc('text', usetex=True)
plt.figure(figsize=(14, 8.5))
plt.subplot(2,1,1)
plt.plot(np.arange(segments), snr_uniform_midrise_seg, linewidth=2)
plt.plot(np.arange(segments), snr_uniform_midtread_seg, linewidth=2)
plt.plot(np.arange(segments), snr_nonuniform_seg, linewidth=2)
plt.xlabel('$k$ / %d' % seg_len, fontsize=22)
plt.ylabel('segmental SNR (dB)', fontsize=22)
plt.ylim((-30,30))
plt.grid()
x_start = 25600 // seg_len
x_stop = 72960 // seg_len
plt.xlim((x_start,x_stop))
plt.legend(['Midrise ($w=5$)','Midtread ($w=5$)','$\mu$-law non-uniform ($w=5$)'])
plt.subplot(2,1,2)
librosa.display.waveplot(x[(x_start*seg_len+seg_len//2):(x_stop*seg_len+seg_len//2)], sr=sampling_rate)
plt.tight_layout()
plt.savefig('nonuniform_segmentalSNR_w%d.pdf' % w, bbox_inches='tight')
Explanation: Non-Uniform Quantization with $\mu$-law characteristic
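In formulas (an added note, matching the uLaw and uLaw_inv helpers above, with $\mu = 255$ and the input normalized to $[-1, 1]$), the compressor and its inverse are
$$
y = \mathrm{sign}(x)\,\frac{\ln(1+\mu\,|x|)}{\ln(1+\mu)}, \qquad
x = \mathrm{sign}(y)\,\frac{(1+\mu)^{|y|}-1}{\mu}
$$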
End of explanation |
4,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This entire theory is built on the idea that everything is normalized as input into the brain, i.e. all values are between 0 and 1. This is necessary because the learning rule has an adaptive learning rate that is $\sigma^4$. If everything is normalized, the probability of $\sigma^2$ being greater than 1 is very low.
Step1: I can assume $q(x)$ has two forms
$$q(x) = \frac{1}{\sqrt{2 \pi \sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
or
$$q(x) = \exp\left(-\frac{(x-\mu)^2}{\sigma^2}\right)$$
When I assume the second form and remove the extra $\sigma$ term from the learning equations it no longer converges smoothly. However, if I add an 'astrocyte' to normalize all of them periodically by averaging over the output it works again. Perhaps astrocytes 'normalizing' the neurons is the biological mechanism for keeping the output roughly normal. | Python Code:
p = GMM([1.0], np.array([[0.5,0.05]]))
num_samples = 1000
beg = 0.0
end = 1.0
t = np.linspace(beg,end,num_samples)
num_neurons = len(p.pis)
colors = [np.random.rand(num_neurons,) for i in range(num_neurons)]
p_y = p(t)
p_max = p_y.max()
np.random.seed(20)
num_neurons = 9
network = Net(1,1,num_neurons, bias=0.0006, decay=[0.4]*num_neurons, kernels=[[1,1]], locs=[[0,0]], sleep_cycle=2000)
samples, labels = p.sample(10000)
ys = []
lbls = []
colors = [np.random.rand(3,) for i in range(num_neurons)]
def f(i=0):
x = np.array(samples[i])
l = labels[i]
y = network(x.reshape(1,1,1))
ys.append(y)
c = 'b' if l else 'g'
lbls.append(c)
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(t, p_y/p_max, c='r', lw=3, label='$p(x)$')
ax.plot([x,x],[0,p_max],label="$x\sim p(x)$", lw=4)
y = network(t.reshape(num_samples,1,1),update=0)
for j,yi in enumerate(y):
yj_max = y[j].max()
ax.plot(t, y[j]/yj_max, c=colors[j], lw=3, label="$q(x)$")
ax.set_ylim(0.,1.5)
ax.set_xlim(beg,end)
plt.savefig('for_colloquium/fig%03i.png'%(i))
plt.show()
interactive_plot = interactive(f, i=(0, 9999))
output = interactive_plot.children[-1]
output.layout.height = '450px'
interactive_plot
[n.weights for n in list(network.neurons.items())[0][1]]
[np.sqrt(n.bias) for n in list(network.neurons.items())[0][1]]
[n.pi for n in list(network.neurons.items())[0][1]]
Explanation: This entire theory is built on the idea that everything is normalized as input into the brain, i.e. all values are between 0 and 1. This is necessary because the learning rule has an adaptive learning rate that is $\sigma^4$. If everything is normalized, the probability of $\sigma^2$ being greater than 1 is very low.
End of explanation
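# Added illustrative check (not from the original notebook; assumes numpy is imported as np,
# as elsewhere in this notebook): data confined to [0, 1] can never have variance above 0.25,
# so sigma^2 > 1 cannot occur for normalized inputs.
x_check = np.random.random(100000)   # uniform samples in [0, 1]
print(x_check.var())                 # about 1/12 = 0.083, and at most 0.25 for any [0, 1] data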
def s(x):
return (1/(1+np.exp(-10*(x-0.25))))
x = np.linspace(0,1,100)
plt.plot(x,s(x))
plt.show()
Explanation: I can assume $q(x)$ has two forms
$$q(x) = \frac{1}{\sqrt{2 \pi \sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$$
or
$$q(x) = \exp\left(-\frac{(x-\mu)^2}{\sigma^2}\right)$$
When I assume the second form and remove the extra $\sigma$ term from the learning equations it no longer converges smoothly. However, if I add an 'astrocyte' to normalize all of them periodically by averaging over the output it works again. Perhaps astrocytes 'normalizing' the neurons is the biological mechanism for keeping the output roughly normal.
End of explanation |
4,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coverage characterization using GenomeCov class (sequana.bedtools module)
<center>http
Step1: Read a Coverage file in BED format
Step2: Select one chromosome (there is only one in this case)
Step3: Compute the running median and plot the results
Step4: Interactive view of the effects of the running window
Step5: Regions of interest
Step6: Some statistics
Step7: GC correlation | Python Code:
%pylab inline
from sequana import GenomeCov, sequana_data
rcParams['figure.figsize'] = (10,6)
Explanation: Coverage characterization using GenomeCov class (sequana.bedtools module)
<center>http://sequana.readthedocs.org</center>
Author: Thomas Cokelaer 2016-2018
Illustrative example of the Coverage module with interactive widget to see effect of the running median window length
First, let us import the functions of interest (sequana_data is optional and used to import some data from sequana)
End of explanation
gc = GenomeCov(sequana_data("virus.bed", "data"), low_threshold=-2.5, high_threshold=2.5)
Explanation: Read a Coverage file in BED format
End of explanation
chrom = gc[0]
Explanation: Select one chromosome (there is only one in this case)
End of explanation
N = 4001
chrom.running_median(N, circular=True)
chrom.compute_zscore()
chrom.plot_coverage()
Explanation: Compute the running median and plot the results
End of explanation
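# For intuition only (added; plain pandas, not sequana): a running median smooths a coverage
# profile while staying robust to isolated outliers, which is roughly what
# chrom.running_median(N) does along the genome with a window of N positions.
import pandas as pd
toy_cov = pd.Series([10, 12, 11, 500, 13, 12, 11])
print(toy_cov.rolling(window=3, center=True).median())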
from ipywidgets import interact, interactive, fixed
import ipywidgets as widgets
def f(N):
chrom.running_median(N, circular=True)
chrom.compute_zscore()
chrom.plot_coverage()
ylim([1000,5500])
plt.show()
# plt.show is to fix issue reported in :
# https://stackoverflow.com/questions/44329068/jupyter-notebook-interactive-plot-with-widgets
interact(f, N=widgets.IntSlider(min=501,max=8001, step=200))
Explanation: Interactive view of the effects of the running window
End of explanation
chrom.running_median(4101)
chrom.compute_zscore()
chrom.get_rois().get_low_rois()
Explanation: Regions of interest
End of explanation
print(chrom)
chrom.get_centralness()
print(chrom.get_stats())
Explanation: Some statistics
End of explanation
filename = sequana_data("JB409847.bed")
reference = sequana_data("JB409847.fasta")
gc = GenomeCov(filename)
gc.compute_gc_content(reference)
chrom = gc[0]
chrom.get_gc_correlation()
chrom.plot_gc_vs_coverage(cmap="BrBG", Nlevels=0, bins=[80,50])
Explanation: GC correlation
End of explanation |
4,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing In Context
Social Sciences Track
Lecture 4--topics, trends, and dimensional scaling
Matthew L. Jones
Step1: Reading at scale
Step2: IMPORTANT
Step3: Let's keep using the remarkable narratives available from Documenting the American South (http
Step4: back to boolean indexing!
Step5: For now, we'll play with the cool scientists and use the powerful and fast scikit learn package.
Let's look at Victorian novels for a little while
Step6: Our Zero-ith tool
Step7: back to vectorizer from scikit learn
Step8: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
Step9: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
Step10: can do lots of things with similarity matrix
you've already seen hierarchical clustering
Multidimension scaling
Technique to visualize distances in high dimensional spaces in ways we can cognize.
Keep distances but reduce dimensionality
Step11: It's an 11 by 2 matrix
OR
simply an (x,y) coordinate pair for each of our texts
Step13: What has this got us?
It suggests that even this crude measure of similarity is able to capture something significant.
Note
Step14: Now we are going to call the topic modeling black box
the key parameter is how many distinct topics we want the computer to find
this will take a while
Step15: So which topics most significant for each document? | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Computing In Context
Social Sciences Track
Lecture 4--topics, trends, and dimensional scaling
Matthew L. Jones
End of explanation
from IPython.display import Image
Image("http://journalofdigitalhumanities.org/wp-content/uploads/2013/02/blei_lda_illustration.png")
import textmining_blackboxes as tm
Explanation: Reading at scale:
Martha Ballard's Diary
http://dohistory.org/diary/index.html
http://www.cameronblevins.org/posts/topic-modeling-martha-ballards-diary/
Richmond Dispatch
https://dsl.richmond.edu/dispatch/pages/home
Source: http://dlxs.richmond.edu/d/ddr/index.html
End of explanation
#see if package imported correctly
tm.icantbelieve("butter")
Explanation: IMPORTANT: tm is our temporarily helper, not a standard python package!!
download it from my github:
https://github.com/matthewljones/computingincontext
End of explanation
title_info["Date"].str.replace("[^0-9]", "") #use regular expressions to clean up
title_info["Date"]=title_info["Date"].str.replace("\-\?", "5")
title_info["Date"]=title_info["Date"].str.replace("[^0-9]", "") # what assumptions have I made about the data?
title_info["Date"]=pd.to_datetime(title_info["Date"], errors="coerce")
title_info["Date"]<pd.datetime(1800,1,1)
title_info[title_info["Date"]<pd.datetime(1800,1,1)]
Explanation: Let's keep using the remarkable narratives available from Documenting the American South (http://docsouth.unc.edu/docsouthdata/)
Assuming that you are storing your data in a directory in the same place as your iPython notebook.
Put the slave narratives texts within a data directory in the same place as this notebook
End of explanation
#Let's use a brittle thing for reading in a directory of pure txt files.
our_texts=tm.readtextfiles('data/na-slave-narratives/data/texts')
#again, this is not a std python package
#returns a simple list of the document as very long strings
#note if you want the following notebook will work on any directory of text files.
Explanation: back to boolean indexing!
End of explanation
our_texts, names=tm.readtextfiles("data/british-fiction-corpus")
names
Explanation: For now, we'll play with the cool scientists and use the powerful and fast scikit learn package.
Let's look at Victorian novels for a little while
End of explanation
our_texts=tm.data_cleanse(our_texts)
#more necessary when have messy text
#eliminate escaped characters
Explanation: Our Zero-ith tool: cleaning up the text
I've included a little utility function in tm that takes a list of strings and cleans it up a bit
check out the code on your own time later
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer(min_df=0.5, stop_words='english', use_idf=True)
document_term_matrix=vectorizer.fit_transform(our_texts)
# now let's get our vocabulary--the names corresponding to the rows
vocab=vectorizer.get_feature_names()
len(vocab)
document_term_matrix.shape
document_term_matrix_dense=document_term_matrix.toarray()
dtmdf=pd.DataFrame(document_term_matrix_dense, columns=vocab)
dtmdf
Explanation: back to vectorizer from scikit learn
End of explanation
#easy to program, but let's use a robust version from sklearn!
from sklearn.metrics.pairwise import cosine_similarity
similarity=cosine_similarity(document_term_matrix)
#Note here that the `cosine_similarity` can take
#an entire matrix as its argument
pd.DataFrame(similarity, index=names, columns=names)
Explanation: While this data frame is lovely to look at and useful to think with, it's tough on your computer's memory
End of explanation
similarity_df = pd.DataFrame(similarity, index=names, columns=names)
similarity_df.iloc[1].sort_values(ascending=False)
Explanation: that is a symmetrical matrix relating each of the texts (rows) to another text (row)
End of explanation
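#added mini-example (not part of the original lecture): cosine similarity is just the dot
#product of two vectors divided by the product of their lengths
import numpy as np
u = np.array([1.0, 2.0, 0.0])
v = np.array([2.0, 4.0, 1.0])
print(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))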
#here's the blackbox
from sklearn.manifold import MDS
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
positions= mds.fit_transform(1-similarity)
positions.shape
Explanation: can do lots of things with similarity matrix
you've already seen hierarchical clustering
Multidimension scaling
Technique to visualize distances in high dimensional spaces in ways we can cognize.
Keep distances but reduce dimensionality
End of explanation
#let's plot it: I've set up a black box
tm.plot_mds(positions,names)
names=[name.replace(".txt", "") for name in names]
tm.plot_mds(positions,names)
Explanation: It's an 11 by 2 matrix
OR
simply an (x,y) coordinate pair for each of our texts
End of explanation
our_texts, names=tm.readtextfiles("Data/PCCIPtext")
our_texts=tm.data_cleanse(our_texts)
#improved stoplist--may be too complete
stop=[]
with open('data/stoplist-multilingual') as f:
stop=f.readlines()
stop=[word.strip('\n') for word in stop]
texts = [[word for word in document.lower().split() if word not in stop] for document in our_texts] #gensim requires list of list of words in documents
from gensim import corpora, models, similarities, matutils
# gensim includes its own vectorizing tools
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(text) for text in texts]
#doc2bow just means `doc`uments to `b`ag `o`f `w`ords
#ok, this has just vectorized our texts; it's another form
Explanation: What has this got us?
It suggests that even this crude measure of similarity is able to capture something significant.
Note: the axes don't really mean anything
interesting but what does it mean?
topic modeling
unsupervised algorithm for finding the major topics of texts
unlike hierarchical clustering, assumes texts spring from multiple sets of topics
the big thing in much text modeling, from humanities, to Facebook, to NSA
many variations
fantastic python package gensim
"corpora" = a collection of documents or texts
gensim likes its documents to be a list of lists of words, not a list of strings
Get the stoplist in the data directory in my github.
End of explanation
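#added illustration (not from the original lecture): what Dictionary and doc2bow produce
#for a tiny toy corpus
toy_texts = [["butter", "bread", "butter"], ["bread", "jam"]]
toy_dictionary = corpora.Dictionary(toy_texts)
print(toy_dictionary.token2id)
print([toy_dictionary.doc2bow(doc) for doc in toy_texts])  #(token_id, count) pairs per document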
number_topics=40
model = models.LdaModel(corpus, id2word=dictionary, num_topics=number_topics, passes=10) #use gensim multicore LDA
model.show_topics()
topics_indexed=[[b for (a,b) in topics] for topics in model.show_topics(number_topics,10,formatted=False)]
topics_indexed=pd.DataFrame(topics_indexed)
topics_indexed
Explanation: Now we are going to call the topic modeling black box
the key parameter is how many distinct topics we want the computer to find
this will take a while
End of explanation
model[dictionary.doc2bow(texts[1])]
Explanation: So which topics most significant for each document?
End of explanation |
4,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usage
Step1: Find from the texts
Step2: frequent serial episodes which consist of words üks and kaks.
Find frequent episodes
Let the width of the Winepi window be 31 characters and the minimal relative support of serial episodes be 30%.
Find frequent serial episodes.
Step3: It turns out that the episode ('kaks', 'üks', 'kaks') appears in 58 Winepi windows. Since the lengths of the first and the second text are 63 and 38 characters, respectively, the total number of Winepi windows is 63+31-1+38+31-1=161. Therefore the relative frequency of this episode is 58 / 161 = 36%.
Write the results to file.
Step4: If file==None (the default), then to_json returns the corresponding string.
Find episode examples
Find examples for the previously found frequent episodes.
Step5: Write the results to file.
Step6: The lines of the file of examples correspond to the lines of the file of episodes.
If file==None (the default), then examples_to_json returns the corresponding string.
Choose the frequent episode ('kaks', 'üks', 'kaks') and pretty print the examples
Step7: Find the support
Find the absolute and the relative support (frequency) for the episodes (üks, kaks, üks) and (kaks, kaks, üks). It is more efficient to find the support for a list of episodes, than for each episode separately. | Python Code:
from episode_miner import EventText, EventSequences, Episode, Episodes
from estnltk.taggers import EventTagger
from IPython.display import HTML, FileLink
Explanation: Usage
End of explanation
event_vocabulary = [{'term': 'üks'},
{'term': 'kaks'}]
event_tagger = EventTagger(event_vocabulary, case_sensitive=False, return_layer=True)
event_text1 = EventText('Üks kaks kolm neli kolm. Kaks üks kaks kolm neli kolm üks kaks.', event_tagger=event_tagger)
event_text2 = EventText('Kaks üks kaks kolm neli kolm üks kaks.', event_tagger=event_tagger)
texts = [event_text1, event_text2]
event_sequences = EventSequences(event_texts=texts, classificator='term', time_scale='start')
html = event_sequences.pretty_print()
HTML(html)
Explanation: Find from the texts
End of explanation
frequent_episodes = event_sequences.find_serial_episodes(window_width=31,
min_frequency=0.3,
only_full_windows=False,
allow_intermediate_events=True)
list(zip(frequent_episodes, frequent_episodes.abs_support(), frequent_episodes.rel_support()))
Explanation: frequent serial episodes which consist of words üks and kaks.
Find frequent episodes
Let the width of the Winepi window be 31 characters and the minimal relative support of serial episodes be 30%.
Find frequent serial episodes.
End of explanation
frequent_episodes.to_json(file='data/episodes.txt')
FileLink('data/episodes.txt')
Explanation: It turns out that the episode ('kaks', 'üks', 'kaks') appears in 58 Winepi windows. Since the lengths of the first and the second text are 63 and 38 characters, respectively, the total number of Winepi windows is 63+31-1+38+31-1=161. Therefore the relative frequency of this episode is 58 / 161 = 36%.
Write the results to file.
End of explanation
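# Quick arithmetic check of the figures quoted above (added, for illustration only):
n_windows = (63 + 31 - 1) + (38 + 31 - 1)
print(n_windows, 58 / n_windows)   # 161 windows, 58/161 is roughly 0.36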
event_sequences.find_episode_examples(frequent_episodes,
window_width=31,
allow_intermediate_events=True,
number_of_examples='ALL')
Explanation: If file==None (the default), then to_json returns the corresponding string.
Find episode examples
Find examples for the previously found frequent episodes.
End of explanation
frequent_episodes.examples_to_json(file='data/episode_examples.txt')
FileLink('data/episode_examples.txt')
Explanation: Write the results to file.
End of explanation
HTML(frequent_episodes[5].examples_pretty_print())
Explanation: The lines of the file of examples correspond to the lines of the file of episodes.
If file==None (the default), then examples_to_json returns the corresponding string.
Choose the frequent episode ('kaks', 'üks', 'kaks') and pretty print the examples:
End of explanation
episode1 = Episode(('üks', 'kaks', 'üks'))
episode2 = Episode(('kaks', 'kaks', 'üks'))
episodes = Episodes([episode1, episode2])
event_sequences.support(episodes=episodes,
window_width=31,
only_full_windows=False,
allow_intermediate_events=True)
episodes.abs_support(), episodes.rel_support()
Explanation: Find the support
Find the absolute and the relative support (frequency) for the episodes (üks, kaks, üks) and (kaks, kaks, üks). It is more efficient to find the support for a list of episodes, than for each episode separately.
End of explanation |
4,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Motivation" data-toc-modified-id="Motivation-1"><span class="toc-item-num">1 </span>Motivation</a></span><ul class="toc-item"><li><span><a href="#Shades-of-HERE-Aqua-and-HERE-Gray" data-toc-modified-id="Shades-of-HERE-Aqua-and-HERE-Gray-1.1"><span class="toc-item-num">1.1 </span>Shades of HERE Aqua and HERE Gray</a></span></li></ul></li><li><span><a href="#Discover-Ipyvolume" data-toc-modified-id="Discover-Ipyvolume-2"><span class="toc-item-num">2 </span>Discover Ipyvolume</a></span></li><li><span><a href="#Convert-Hex-to-RGB-Colors" data-toc-modified-id="Convert-Hex-to-RGB-Colors-3"><span class="toc-item-num">3 </span>Convert Hex to RGB Colors</a></span></li><li><span><a href="#Plot-Color-Schemes" data-toc-modified-id="Plot-Color-Schemes-4"><span class="toc-item-num">4 </span>Plot Color Schemes</a></span><ul class="toc-item"><li><span><a href="#All-'Scraped'-Colors" data-toc-modified-id="All-'Scraped'-Colors-4.1"><span class="toc-item-num">4.1 </span>All 'Scraped' Colors</a></span></li><li><span><a href="#Secondary-Colors-Only" data-toc-modified-id="Secondary-Colors-Only-4.2"><span class="toc-item-num">4.2 </span>Secondary Colors Only</a></span></li><li><span><a href="#Primary-Colors-Only" data-toc-modified-id="Primary-Colors-Only-4.3"><span class="toc-item-num">4.3 </span>Primary Colors Only</a></span></li></ul></li><li><span><a href="#First-grade-math-Analysis" data-toc-modified-id="First-grade-math-Analysis-5"><span class="toc-item-num">5 </span>First grade math Analysis</a></span></li><li><span><a href="#Hypothesis" data-toc-modified-id="Hypothesis-6"><span class="toc-item-num">6 </span>Hypothesis</a></span></li><li><span><a href="#Conclusions" data-toc-modified-id="Conclusions-7"><span class="toc-item-num">7 </span>Conclusions</a></span></li></ul></div>
Visualising Color Schemes in 3D
This is a decent experiment in using Ipyvolume (and Three.js) to visualize in 3D the color scheme of some corporate identity as implemented by a sample company, in this case HERE Technologies. Apart from this introduction there is not much more prose inside this notebook. This should be considered a short appetizer to see how easy it is to use Ipyvolume.
N.B.
Step1: To be explored further below…
Discover Ipyvolume
Step2: From here on we use other imports allowing to specify more aspects of the plots.
Step3: Now let's define a single function with some defaults to have a one-line callable, later.
Step5: Convert Hex to RGB Colors
Step6: Plot Color Schemes
All 'Scraped' Colors
Before you continue, have a look at the color scheme description to know what to expect for the primary and secondary colors used in the full color scheme!
Step7: Secondary Colors Only
Step8: Primary Colors Only | Python Code:
from collections import OrderedDict
# values entered manually from https://brandlive.here.com/colors
here_primary_cols = OrderedDict(
HERE_Aqua = '#48dad0',
HERE_Aqua_UNKNOWN = '#00908a', # unknown status, maybe an error?
HERE_Aqua_Dark = '#00afaa',
HERE_Aqua_75 = '#76e3dc',
HERE_Aqua_50 = '#a3ece7',
HERE_Aqua_25 = '#d1f6f3',
HERE_Gray = '#383c45',
HERE_Gray_Dark = '#0f1621',
HERE_Gray_75 = '#6a6d74',
HERE_Gray_50 = '#9b9da2',
HERE_Gray_25 = '#cdced0',
HERE_Gray_00 = '#ffffff' # a.k.a. white
)
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Motivation" data-toc-modified-id="Motivation-1"><span class="toc-item-num">1 </span>Motivation</a></span><ul class="toc-item"><li><span><a href="#Shades-of-HERE-Aqua-and-HERE-Gray" data-toc-modified-id="Shades-of-HERE-Aqua-and-HERE-Gray-1.1"><span class="toc-item-num">1.1 </span>Shades of HERE Aqua and HERE Gray</a></span></li></ul></li><li><span><a href="#Discover-Ipyvolume" data-toc-modified-id="Discover-Ipyvolume-2"><span class="toc-item-num">2 </span>Discover Ipyvolume</a></span></li><li><span><a href="#Convert-Hex-to-RGB-Colors" data-toc-modified-id="Convert-Hex-to-RGB-Colors-3"><span class="toc-item-num">3 </span>Convert Hex to RGB Colors</a></span></li><li><span><a href="#Plot-Color-Schemes" data-toc-modified-id="Plot-Color-Schemes-4"><span class="toc-item-num">4 </span>Plot Color Schemes</a></span><ul class="toc-item"><li><span><a href="#All-'Scraped'-Colors" data-toc-modified-id="All-'Scraped'-Colors-4.1"><span class="toc-item-num">4.1 </span>All 'Scraped' Colors</a></span></li><li><span><a href="#Secondary-Colors-Only" data-toc-modified-id="Secondary-Colors-Only-4.2"><span class="toc-item-num">4.2 </span>Secondary Colors Only</a></span></li><li><span><a href="#Primary-Colors-Only" data-toc-modified-id="Primary-Colors-Only-4.3"><span class="toc-item-num">4.3 </span>Primary Colors Only</a></span></li></ul></li><li><span><a href="#First-grade-math-Analysis" data-toc-modified-id="First-grade-math-Analysis-5"><span class="toc-item-num">5 </span>First grade math Analysis</a></span></li><li><span><a href="#Hypothesis" data-toc-modified-id="Hypothesis-6"><span class="toc-item-num">6 </span>Hypothesis</a></span></li><li><span><a href="#Conclusions" data-toc-modified-id="Conclusions-7"><span class="toc-item-num">7 </span>Conclusions</a></span></li></ul></div>
Visualising Color Schemes in 3D
This is a decent experiment in using Ipyvolume (and Three.js) to visualize in 3D the color scheme of some corporate identity as implemented by a sample company, in this case HERE Technologies. Apart from this introduction there is not much more prose inside this notebook. This should be considered a short appetizer to see how easy it is to use Ipyvolume.
N.B.: If you see this Jupyter notebook on GitHub or some online notebook viewer you will likely miss the whole point, namely the rendered 3D plots (Ipyvolume includes warnings where this happens, saying something like: "A Jupyter widget could not be displayed because the widget state could not be found. This could happen if the kernel storing the widget is no longer available, or if the widget state was not saved in the notebook. You may be able to create the widget by running the appropriate cells. [...]")! In this case download/install Ipyvolume and download/open this notebook locally! You might also want to run a local Jupyter inside Docker.
Motivation
Investigate some mysteries with a color scheme description, namely a possible buglet or inconsistency in the description of one shade of the two primary colors, HERE Aqua and Gray. Notice in the left image below how the second row of Aqua is unnamed, while the third is named HERE Dark Aqua. And compare to the shades of Gray on the right side.
Shades of HERE Aqua and HERE Gray
<p float="left">
<img src="here_aqua_gray.png" width="100%" />
</p>
End of explanation
import numpy as np
import ipyvolume as ipv
x, y, z = np.random.random((3, 1000))
ipv.quickscatter(x, y, z, size=1, marker="sphere")
Explanation: To be explored further below…
Discover Ipyvolume
End of explanation
import numpy as np
import ipyvolume.pylab as p3
f = p3.figure()
p3.xyzlabel('x1', 'y1', 'z1')
scale = np.linspace(0, 1, num=10)
p3.scatter(scale, scale, scale, size=3, marker="sphere")
p3.show()
Explanation: From here on we use other imports allowing to specify more aspects of the plots.
End of explanation
def scatter3d(points, **kwargs):
"Render a 3D scatter plot with some predefined defaults."
f = p3.figure()
p3.xyzlabel(*kwargs.get('xyzlabels', ['x1', 'y1', 'z1']))
kwargs1 = {}
if 'size' not in kwargs:
kwargs1.update(size=3)
if 'marker' not in kwargs:
kwargs1.update(marker='sphere')
kwargs1.update(kwargs)
p3.scatter(*points, **kwargs1)
p3.show()
points = [np.linspace(0, 1, num=10)] * 3
scatter3d(points)
scale = np.linspace(0, 1, num=10)
points = [scale] * 3
color = np.array(points)
scatter3d(points, color=color.T)
scale = np.linspace(0, 1, num=10)
points = [scale] * 3
color = np.array(points)
scatter3d(points, color=color.T, size=(1 - scale) * 8)
x, y, z = np.random.random((3, 1000))
points = [x, y, z]
color = np.array(points).T
scatter3d(points, color=color, size=3)
Explanation: Now let's define a single function with some defaults to have a one-line callable, later.
End of explanation
tuple(bytes.fromhex("aabbcc")) == (170, 187, 204)
def hex2rgb(hex_color):
    """Convert web hex color to normalized RGB tuple.
    E.g. hex2rgb('#aabbcc') -> (0.6666666666666666, 0.7333333333333333, 0.8)
    """
clean_hex_color = hex_color[1:] if hex_color.startswith('#') else hex_color
r, g, b = tuple(bytes.fromhex(clean_hex_color))
return (r/255., g/255., b/255.)
hex2rgb("aabbcc") == (170/255., 187/255., 204/255.)
Explanation: Convert Hex to RGB Colors
End of explanation
# Define a function with some defaults to make this a one-line call.
def scatter_colors_3d(colors, **kwargs):
"Render a 3D scatter plot with defaults for a list of hex colors."
rgb = np.array([hex2rgb(val) for val in colors])
r = rgb[:, 0]
g = rgb[:, 1]
b = rgb[:, 2]
color = np.array((r, g, b)).T
kwargs1 = {}
if 'size' not in kwargs:
kwargs1.update(size=3)
if 'marker' not in kwargs:
kwargs1.update(marker='sphere')
kwargs1.update(color=color)
kwargs1.update(kwargs)
f = p3.figure()
p3.xyzlabel(*kwargs.get('xyzlabels', ['red', 'green', 'blue']))
p3.scatter(r, g, b, **kwargs1)
p3.show()
import re
import requests
def scrape_hex_colors(url):
"Return all Hex web colors 'scraped' from some webpage."
html = requests.get(url).content.decode('utf-8')
return re.findall('#[0-9a-fA-F]{6,6}', html)
# Mind the 's' in https!
url = 'https://brandlive.here.com/colors'
here_all_colors = set(scrape_hex_colors(url))
# Just to be sure we have the values even if the website should be changed later:
# here_all_colors = '''#a3ece7 #c53580 #c41c33 #48dad0 #0f1621 #b7c99d #6f83bd #7dbae4
#00afaa #3f59a7 #d35566 #673a93 #6a6d74 #52a3db #f5b086 #00908a #8d6bae #d468a0
#383c45 #b39cc9 #cdced0 #f1894a #44ca9d #fbca40 #e29abf #06b87c #a8d1ed #fab800
#e18d99 #ec610e #76e3dc #ffffff #94af6d #9b9da2 #d1f6f3 #fcdb7f #9facd3 #70943c
#82dbbd'''.split()
scatter_colors_3d(here_all_colors)
Explanation: Plot Color Schemes
All 'Scraped' Colors
Before you continue, have a look at the color scheme description to know what to expect for the primary and secondary colors used in the full color scheme!
End of explanation
all_cols = here_all_colors
pri_cols = here_primary_cols.values()
sec_cols = {*all_cols}.difference(pri_cols)
scatter_colors_3d(sec_cols)
Explanation: Secondary Colors Only
End of explanation
here_primary_cols
scatter_colors_3d(here_primary_cols.values())
Explanation: Primary Colors Only
End of explanation |
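# An added, purely illustrative guess at the arithmetic behind the shades: each percentage
# shade listed above looks like a linear blend of the base color with white, e.g.
# HERE_Aqua_50 is close to 0.5*Aqua + 0.5*white.
aqua = np.array(hex2rgb('#48dad0'))
white = np.array([1.0, 1.0, 1.0])
print(0.5 * aqua + 0.5 * white)
print(np.array(hex2rgb('#a3ece7')))   # HERE_Aqua_50, for comparison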
4,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Original paper
Step1: Make sure that the values of real and generated data are of the same order - it is important for cooperative binarizing
Step2: 1. Binarize#
To understand how close the real and generated objects are to each other, we need to choose a space of features in which to look at these objects.
We go the easiest way and take pixels' values as features.
The code snippets for this task is provided here https
Step3: Create $\alpha-$ and $\beta-$ vectors as in
$\hat{PRD}(Q,P) = \{(\alpha(\lambda), \beta(\lambda)) \mid \lambda \in \Lambda\}$, where $\Lambda = \{\tan\left(\frac{i}{m+1} \frac{\pi}{2}\right) \mid i = 1, 2, \ldots, m\}$
Step4: For stability, take the average of several repetitions
Step5: 2. Apply it
Step6: 3. Make vectors for plot and plot
Step7: What curves were obtained for the first(VAE) and the second(GAN) models? What can we say about the advantages and disadvantages of each model?
$P$ - reference distribution, while $Q$ is an obtained one (i.e. generated)
Precision should measure how much of $Q$ can be generated by a “part” of $P$ while <br>
recall should measure how much of $P$ can be generated by a “part” of $Q$
If $P$ is bimodal and $Q$ only captures one of the modes, we should have perfect <Br>
precision but only limited recall.
So, speaking about the GAN, one can observe both high recall and high precision, which says that the model generates images from all classes (modes) with an acceptable level of blurriness. This might not have been the case: GAN models are prone to mode collapse, which would have led to decreased recall. One can see what happens when modes are dropped from or added to the target distribution in section 4.1 of the paper (Adding and dropping modes from the target distribution).
As for the VAE, the situation is the classical one: the model tries to cover more modes, but the images are blurry. In fact we obtained a mediocre level of precision as well as recall, which implies not only blurry generated images but also a lack of modes (and a prevalence of dark images).
The difference between generated images is very noticeable. So, this time GAN rules.
Bonus | Python Code:
import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
import matplotlib
import torch
import sklearn
CHANNEL_NUM = 3
PICTURE_SIZE = 36
class ParticleDataset():
def __init__(self, file):
self.data = np.load(file)
self.image = self.data['Pictures'].reshape(-1, CHANNEL_NUM*PICTURE_SIZE*PICTURE_SIZE)
def __len__(self):
return len(self.image)
def __getitem__(self, i):
return {
"Pictures": self.image[i],
}
class ParticleDatasetVAE():
def __init__(self, file):
self.data = np.load(file)
self.image = [x.reshape(-1, CHANNEL_NUM*PICTURE_SIZE*PICTURE_SIZE) for x in self.data['Pictures']]
self.image = torch.stack(self.image).squeeze_().cpu().numpy()
def __len__(self):
return len(self.image)
def __getitem__(self, i):
return {
"Pictures": self.image[i],
}
real_data = ParticleDataset('real.npz')
vae_data = ParticleDatasetVAE('vae.npz')
gan_data = ParticleDataset('gan.npz')
Explanation: Original paper: https://arxiv.org/abs/1806.00035
0. Read real and generated images
End of explanation
print (np.min(real_data.image), np.max(real_data.image))
print (np.min(gan_data.image), np.max(gan_data.image))
print (np.min(vae_data.image), np.max(vae_data.image))
Explanation: Make sure that the values of real and generated data are of the same order - it is important for cooperative binarizing
End of explanation
from sklearn.cluster import KMeans, MiniBatchKMeans
import math
## function which map object to probability distribution ##
def bin_counts (real_data, generated_data, number_of_bins=25):
cluster_data = np.vstack([generated_data, real_data])
kmeans = sklearn.cluster.MiniBatchKMeans(n_clusters=number_of_bins, n_init=10)
labels = kmeans.fit(cluster_data).labels_
generated_labels = labels[:len(generated_data)]
real_labels = labels[len(generated_data):]
gen_density = np.histogram(generated_labels, bins=number_of_bins,
range=[0, number_of_bins], density=True)[0]
real_density = np.histogram(real_labels, bins=number_of_bins,
range=[0, number_of_bins], density=True)[0]
return real_density, gen_density
Explanation: 1. Binarize
To understand how close the real and generated objects are to each other, we need to choose a space of features in which to look at these objects.
We go the easiest way and take pixels' values as features.
The code snippets for this task are provided here https://github.com/msmsajjadi/precision-recall-distributions by the authors of the article. <br> The idea of the algorithm is rather intuitive and well explained in the article.
End of explanation
def count_alpha_beta (real_density, gen_density, num_angles = 1000):
assert real_density.shape == gen_density.shape
alpha_vec = []
beta_vec = []
angles = np.linspace(1e-6, np.pi/2 - 1e-6, num=num_angles)
slopes = np.tan(angles)
slopes_2d = np.expand_dims(slopes, 1)
real_den_2d = np.expand_dims(real_density, 0)
gen_den_2d = np.expand_dims(gen_density, 0)
alpha_vec = np.minimum(real_den_2d*slopes_2d, gen_den_2d).sum(axis=1)
beta_vec = alpha_vec / slopes
    # handle numerical instabilities leading to precision/recall just above 1
max_val = max(np.max(alpha_vec), np.max(beta_vec))
if max_val > 1.001:
raise ValueError('Detected value > 1.001, this should not happen.')
alpha_vec = np.clip(alpha_vec, 0, 1)
beta_vec = np.clip(beta_vec, 0, 1)
return alpha_vec, beta_vec
Explanation: Create $\alpha-$ and $\beta-$ vectors as in
$\hat{PRD}(Q,P) = \{(\alpha(\lambda), \beta(\lambda)) \mid \lambda \in \Lambda\}$, where $\Lambda = \{\tan\left(\frac{i}{m+1} \frac{\pi}{2}\right) \mid i = 1, 2, \ldots, m\}$
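Written out explicitly (an added note, using the quantities from count_alpha_beta above, where $p$ is the binned real distribution and $q$ the binned generated one), each slope $\lambda \in \Lambda$ gives
$$
\alpha(\lambda) = \sum_{i} \min\left(\lambda\, p_i,\; q_i\right), \qquad
\beta(\lambda) = \frac{\alpha(\lambda)}{\lambda},
$$
and these $(\alpha(\lambda), \beta(\lambda))$ pairs trace out the precision-recall curve plotted below.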
End of explanation
def count_prd(reals, gens, repeat_number = 10):
vectors = [count_alpha_beta(reals, gens) for i in range(repeat_number)]
vectors = np.array(vectors).mean(axis=0)
print (vectors.shape)
return vectors
Explanation: For stability, take the average of several repetitions
End of explanation
a, b = bin_counts(real_data.image, vae_data.image)
c, d = bin_counts(real_data.image, gan_data.image)
Explanation: 2. Apply it
End of explanation
data_for_plots = count_prd(a, b)
data_for_plots2 = count_prd(c, d)
fig = plt.figure(figsize=(2.5,2.5), dpi=200)
fig.add_subplot(111).tick_params(axis='both', which='major', labelsize=8)
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.xlabel('Recall', fontsize=12)
plt.ylabel('Precision', fontsize=12)
plt.plot(data_for_plots[0], data_for_plots[1], label = "VAE")
plt.plot(data_for_plots2[0], data_for_plots2[1], label = "GAN")
plt.legend()
plt.show()
Explanation: 3. Make vectors for plot and plot
End of explanation
# if you came here and still alive, the implementation of idea above will give you extra points =)
Explanation: What curves were obtained for the first(VAE) and the second(GAN) models? What can we say about the advantages and disadvantages of each model?
$P$ - reference distribution, while $Q$ is an obtained one (i.e. generated)
Precision should measure how much of $Q$ can be generated by a “part” of $P$ while <br>
recall should measure how much of $P$ can be generated by a “part” of $Q$
If $P$ is bimodal and $Q$ only captures one of the modes, we should have perfect <Br>
precision but only limited recall.
So, speaking about the GAN, one can observe both high recall and high precision, which says that the model generates images from all classes (modes) with an acceptable level of blurriness. This might not have been the case: GAN models are prone to mode collapse, which would have led to decreased recall. One can see what happens when modes are dropped from or added to the target distribution in section 4.1 of the paper (Adding and dropping modes from the target distribution).
As for the VAE, the situation is the classical one: the model tries to cover more modes, but the images are blurry. In fact we obtained a mediocre level of precision as well as recall, which implies not only blurry generated images but also a lack of modes (and a prevalence of dark images).
The difference between generated images is very noticeable. So, this time GAN rules.
Bonus: about features' space
It is possible to map each picture to an embedding, for example by using the first part of an Inception network as a feature extractor. This embedding can then be used for the bin counts as well.
End of explanation |
4,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create a new Materials Commons project
This example demonstrates creating a new Materials Commons project from a Jupyter notebook. To try running it locally, download the notebook from here.
To install the necessary dependencies (requires Python 3)
Step1: Cloning a project
"Cloning" a project creates a local directory which is used as a place to upload and download project files. There are three construction options
Step2: Example 1
Step3: Example 2
Step4: Example 3
Step5: Using the ClonedProject
The ClonedProject instance provides access to Client and Project objects from the Materials Commons API (materials_commons.api) along with the location of the cloned local project directory local_path.
Step6: File transfer
The ClonedProject instance from the CLI also provides methods for uploading and downloading files using features beyond those included in materials_commons.api.
For example, transfers can include checks to skip transferring files that are equivalent, support recursive upload and download, globus transfer, etc.
Other methods implemented by the CLI will be added to ClonedProject in the future.
Setup for upload examples
Step7: Upload one file
Step8: By default, files that already exist will be skipped
Use no_compare=True to transfer without comparing checksums. Materials Commons will still check for file equivalence and only create a new file version if the file is different.
Step9: Upload multiple files
Step10: Upload files and directories, recursively
Use the recursive=True argument to transfer files and directories recursively
Step11: Uploading the notebook itself / use of "upload_as"
It is possible to upload a notebook, from within the notebook itself. To do so, we can use the "upload_as" option which allows uploading files that do not exist in the local cloned project directory. The following cells demonstrate getting the notebook's name, nb_name, and then uploading the notebook itself to the Materials Commons project. It is placed in a "notebooks" directory. Note that it uploads the last saved version of the notebook, not the current state.
<i> Note
Step12: Setup for download examples
Step13: Download one file
Step14: By default, files that already exist will be skipped
Use no_compare=True to transfer without comparing checksums.
Use force=True to force overwriting existing files without prompting.
Step15: Download multiple files
Step16: Download files and directories, recursively
Use the recursive=True argument to transfer files and directories recursively
Step17: Download with different name
Use the output argument to output one file or directory to a different location
Step18: Using Globus file transfer
Use the globus=True argument to perform the file transfer using the current globus upload or download directory
Use the label argument to give the transfer a label for easier tracking
Globus configuration and transfer management is not currently supported via ClonedProject but it can be done using shell commands or the Materials Commons website
See the online documentation for more information on Globus file transfers.
Step19: Monitor transfer status
Step20: Finish the transfer
Uploads must be "finished" to transfer files into the project. Once processing is finished, the upload directory will no longer appear in mc globus upload results, and all files should appear in the project directory. The processing time required before files appear in your project will depend on the size of the transfer.
Globus download directories should be "deleted" when download tasks are finished. The download directory may be left as long as desired, but it will not reflect any file or directory changes to the project since the time the download directory was created.
Step21: Example cleanup
The delete_project call will delete a project on Materials Commons.
Notes | Python Code:
# Login information (Edit here or be prompted by the next cell)
email = None
mcurl = "https://materialscommons.org/api"
# Construct a Materials Commons client
from materials_commons.cli.user_config import make_client_and_login_if_necessary
if email is None:
print("Account (email):")
email = input()
client = make_client_and_login_if_necessary(email=email, mcurl=mcurl)
import materials_commons.api as mcapi
# Project name
name = "ExampleProjectFromJupyter"
# Project summary - short description to show in tables
summary = "Example project created via Jupyter notebook"
# Project description - describes the project, may be more detailed
description = "This project was created as an example of how to create "\
"and use Materials Commons projects from within a Jupyter notebook"
# Create a new project (or return existing one with same name)
request = mcapi.CreateProjectRequest(description=description, summary=summary)
remote_mc_proj = client.create_project(name, request)
print(str(remote_mc_proj))
print("URL:", client.base_url)
print("Project ID:", remote_mc_proj.id)
print("Project name:", remote_mc_proj.name)
Explanation: Create a new Materials Commons project
This example demonstrates creating a new Materials Commons project from a Jupyter notebook. To try running it locally, download the notebook from here.
To install the necessary dependencies (requires Python 3):
pip install materials-commons-cli
Notes:
- If you do not yet have a Materials Commons account, it can be created here.
- If you have not yet configured the Materials Commons client to access your account, you will be prompted to enter a password.
- Only one project per owner may have the same name. If the same name as an existing project is given, then the existing project is returned instead of creating a new project.
End of explanation
import os
import pathlib
import shutil
from materials_commons.cli.cloned_project import ClonedProject
Explanation: Cloning a project
"Cloning" a project creates a local directory which is used as a place to upload and download project files. There are three construction options:
1. Clone the project in a temporary directory (default)
2. Clone the project in a particular location, or open the project if it already exists. This option makes use of the "parent_path" and "name" constructor arguments to specify where the local project directory will be constructed if it doesn't already exist.
3. Open an existing local cloned project. This option uses the "path" constructor argument to point at a local project directory that has already been cloned, at a location the user chooses to reuse.
End of explanation
cloned_mc_proj = ClonedProject(email=email, mcurl=mcurl, proj_id=remote_mc_proj.id)
print(str(cloned_mc_proj))
print("Cloned project local path:", cloned_mc_proj.local_path)
Explanation: Example 1: Clone the project - using a temporary directory
This example clones the project using a temporary directory. Downloaded files will eventually be cleaned up by the system when the ClonedProject instance is no longer in use.
End of explanation
parent_path = pathlib.Path.home() / "mc_projects"
os.makedirs(parent_path, exist_ok=True)
cloned_mc_proj = ClonedProject(email=email,
mcurl=mcurl,
proj_id=remote_mc_proj.id,
parent_path=parent_path, # must exist
name=None) # default uses project name
print(str(cloned_mc_proj))
print("Cloned project local path:", cloned_mc_proj.local_path)
Explanation: Example 2: Clone the project - specifying the location
This example clones the project to ~/mc_projects/ExampleProjectFromJupyter.
End of explanation
cloned_mc_proj = ClonedProject(email=email,
mcurl=mcurl,
proj_id=remote_mc_proj.id,
path=pathlib.Path.home() / "mc_projects" / "ExampleProjectFromJupyter")
print(str(cloned_mc_proj))
print("Cloned project local path:", cloned_mc_proj.local_path)
Explanation: Example 3: Open an existing cloned project
This example opens a local project that has already been cloned.
End of explanation
print(str(cloned_mc_proj.proj))
print(str(cloned_mc_proj.proj.remote))
print(type(cloned_mc_proj.local_path), str(cloned_mc_proj.local_path))
Explanation: Using the ClonedProject
The ClonedProject instance provides access to Client and Project objects from the Materials Commons API (materials_commons.api) along with the location of the cloned local project directory local_path.
End of explanation
example_file1 = cloned_mc_proj.local_path / "example_file1.txt"
with open(example_file1, 'w') as f:
f.write("Hello World!\n")
example_file2 = cloned_mc_proj.local_path / "example_file2.txt"
with open(example_file2, 'w') as f:
f.write("Hello World, again!\n")
example_dir = cloned_mc_proj.local_path / "dir"
os.makedirs(example_dir, exist_ok=True)
example_file3 = example_dir / "example_file3.txt"
with open(example_file3, 'w') as f:
f.write("Got some data here!\n")
example_file4 = example_dir / "example_file4.txt"
with open(example_file4, 'w') as f:
f.write("So much data!\n")
Explanation: File transfer
The ClonedProject instance from the CLI also provides methods for uploading and downloading files using features beyond those included in materials_commons.api.
For example, transfers can include checks to skip transferring files that are equivalent, support recursive upload and download, globus transfer, etc.
Other methods implemented by the CLI will be added to ClonedProject in the future.
Setup for upload examples:
This creates a directory and writes some files used in the upload examples.
End of explanation
cloned_mc_proj.upload(example_file1)
Explanation: Upload one file
End of explanation
cloned_mc_proj.upload(example_file1)
cloned_mc_proj.upload(example_file1, no_compare=True)
Explanation: By default, files that already exist will be skipped
Use no_compare=True to transfer without comparing checksums. Materials Commons will still check for file equivalence and only create a new file version if the file is different.
End of explanation
cloned_mc_proj.upload(example_file1, example_file2)
Explanation: Upload multiple files
End of explanation
cloned_mc_proj.upload(example_dir, recursive=True)
Explanation: Upload files and directories, recursively
Use the recursive=True argument to transfer files and directories recursively
End of explanation
nb_name = "MaterialsCommons-Project-Example.ipynb"
notebook_local_abspath = os.path.join(os.getcwd(), nb_name)
notebook_upload_as = cloned_mc_proj.local_path / "notebooks" / nb_name
cloned_mc_proj.upload(notebook_local_abspath, upload_as=notebook_upload_as)
Explanation: Uploading the notebook itself / use of "upload_as"
It is possible to upload a notebook, from within the notebook itself. To do so, we can use the "upload_as" option which allows uploading files that do not exist in the local cloned project directory. The following cells demonstrate getting the notebook's name, nb_name, and then uploading the notebook itself to the Materials Commons project. It is placed in a "notebooks" directory. Note that it uploads the last saved version of the notebook, not the current state.
<i> Note: Getting the notebook file path from os.path.join(os.getcwd(), nb_name) may not work in all cases </i>
End of explanation
for file in [example_file1, example_file2]:
if os.path.exists(file):
os.remove(file)
if os.path.exists(example_dir):
shutil.rmtree(example_dir)
print("Local project directory contents:", os.listdir(cloned_mc_proj.local_path))
Explanation: Setup for download examples:
This removes the existing local files and directories to demonstrate downloading from Materials Commons.
End of explanation
cloned_mc_proj.download(example_file1)
Explanation: Download one file
End of explanation
cloned_mc_proj.download(example_file1, no_compare=True)
cloned_mc_proj.download(example_file2)
shutil.copyfile(example_file2, example_file1)
cloned_mc_proj.download(example_file1, force=True)
Explanation: By default, files that already exist will be skipped
Use no_compare=True to transfer without comparing checksums.
Use force=True to force overwriting existing files without prompting.
End of explanation
cloned_mc_proj.download(example_file1, example_file2, force=True)
Explanation: Download multiple files
End of explanation
cloned_mc_proj.download(example_dir, recursive=True)
Explanation: Download files and directories, recursively
Use the recursive=True argument to transfer files and directories recursively
End of explanation
cloned_mc_proj.download(example_file1, output=cloned_mc_proj.local_path / "example_file3.txt")
Explanation: Download with different name
Use the output argument to output one file or directory to a different location
End of explanation
cloned_mc_proj.upload(example_file1, globus=True)
cloned_mc_proj.download(example_file2, globus=True, force=True)
Explanation: Using Globus file transfer
Use the globus=True argument to perform the file transfer using the current globus upload or download directory
Use the label argument to give the transfer a label for easier tracking
Globus configuration and transfer management is not currently supported via ClonedProject but it can be done using shell commands or the Materials Commons website
See the online documentation for more information on Globus file transfers.
End of explanation
! cd {cloned_mc_proj.local_path} && mc globus upload && mc globus download
! globus task list
Explanation: Monitor transfer status
End of explanation
from materials_commons.cli.functions import read_project_config
project_config = read_project_config(cloned_mc_proj.local_path)
! cd {cloned_mc_proj.local_path} && mc globus upload --id {project_config.globus_upload_id} --finish --force
! cd {cloned_mc_proj.local_path} && mc globus download --id {project_config.globus_download_id} --delete --force
Explanation: Finish the transfer
Uploads must be "finished" to transfer files into the project. Once processing is finished, the upload directory will no longer appear in mc globus upload results, and all files should appear in the project directory. The processing time required before files appear in your project will depend on the size of the transfer.
Globus download directories should be "deleted" when download tasks are finished. The download directory may be left as long as desired, but it will not reflect any file or directory changes to the project since the time the download directory was created.
End of explanation
# Delete the remote project
projs = client.get_all_projects()
for proj in projs:
if proj.name == "ExampleProjectFromJupyter":
client.delete_project(proj.id)
# Delete the local project
local_project_path = pathlib.Path.home() / "mc_projects" / "ExampleProjectFromJupyter"
if os.path.exists(local_project_path):
shutil.rmtree(local_project_path)
Explanation: Example cleanup
The delete_project call will delete a project on Materials Commons.
Notes:
- Only the project owner can delete a project
- A project that has published datasets may not be deleted
- Be careful, there is no undo! Deleting a project deletes all project files and data.
- Deleting the remote project does not delete the local project files.
- Deleting the local project files does not delete the remote project
End of explanation |
4,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CrowdTruth for Free Input Tasks
Step1: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class
Step2: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Type/Role Annotation in Video task
Step3: The complete configuration class is declared below
Step4: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data
Step5: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics
Step6: results is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers.
The video fragment metrics are stored in results["units"]
Step7: The uqs column in results["units"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment. Here we plot its histogram
Step8: The unit_annotation_score column in results["units"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-relation score.
Step9: The worker metrics are stored in results["workers"]
Step10: The wqs columns in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers. | Python Code:
import pandas as pd
test_data = pd.read_csv("../data/person-video-free-input.csv")
test_data.head()
Explanation: CrowdTruth for Free Input Tasks: Person Annotation in Video
In this tutorial, we will apply CrowdTruth metrics to a free input crowdsourcing task for Person Annotation from video fragments. The workers were asked to watch a video of about 3-5 seconds and then add tags that are relevant for the people that appear in the video fragment. The task was executed on FigureEight. For more crowdsourcing annotation task examples, click here.
To replicate this experiment, the code used to design and implement this crowdsourcing annotation template is available here: template, css, javascript.
This is a screenshot of the task as it appeared to workers:
A sample dataset for this task is available in this file, containing raw output from the crowd on FigureEight. Download the file and place it in a folder named data that has the same root as this notebook. Now you can check your data:
End of explanation
import crowdtruth
from crowdtruth.configuration import DefaultConfig
Explanation: Declaring a pre-processing configuration
The pre-processing configuration defines how to interpret the raw crowdsourcing input. To do this, we need to define a configuration class. First, we import the default CrowdTruth configuration class:
End of explanation
import nltk
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
nltk.download('wordnet')
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.corpus import wordnet
from autocorrect import spell
def correct_words(keywords, separator):
keywords_list = keywords.split(separator)
corrected_keywords = []
for keyword in keywords_list:
words_in_keyword = keyword.split(" ")
corrected_keyword = []
for word in words_in_keyword:
correct_word = spell(word)
corrected_keyword.append(correct_word)
corrected_keywords.append(" ".join(corrected_keyword))
return separator.join(corrected_keywords)
def cleanup_keywords(keywords, separator):
keywords_list = keywords.split(separator)
stopset = set(stopwords.words('english'))
filtered_keywords = []
for keyword in keywords_list:
tokens = nltk.word_tokenize(keyword)
cleanup = " ".join(filter(lambda word: str(word) not in stopset or str(word) == "no" or str(word) == "not", keyword.split()))
filtered_keywords.append(cleanup)
return separator.join(filtered_keywords)
def nltk2wn_tag(nltk_tag):
if nltk_tag.startswith('J'):
return wordnet.ADJ
elif nltk_tag.startswith('V'):
return wordnet.VERB
elif nltk_tag.startswith('N'):
return wordnet.NOUN
elif nltk_tag.startswith('R'):
return wordnet.ADV
else:
return None
def lemmatize_keywords(keywords, separator):
keywords_list = keywords.split(separator)
lematized_keywords = []
for keyword in keywords_list:
nltk_tagged = nltk.pos_tag(nltk.word_tokenize(str(keyword)))
wn_tagged = map(lambda x: (str(x[0]), nltk2wn_tag(x[1])), nltk_tagged)
res_words = []
for word, tag in wn_tagged:
if tag is None:
res_word = wordnet._morphy(str(word), wordnet.NOUN)
if res_word == []:
res_words.append(str(word))
else:
if len(res_word) == 1:
res_words.append(str(res_word[0]))
else:
res_words.append(str(res_word[1]))
else:
res_word = wordnet._morphy(str(word), tag)
if res_word == []:
res_words.append(str(word))
else:
if len(res_word) == 1:
res_words.append(str(res_word[0]))
else:
res_words.append(str(res_word[1]))
lematized_keyword = " ".join(res_words)
lematized_keywords.append(lematized_keyword)
return separator.join(lematized_keywords)
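# Illustrative check (added, not part of the original notebook): run the three
# helpers on a made-up worker answer to see what each stage does. Exact outputs
# depend on the installed autocorrect and nltk versions, so none are asserted.
example_answer = "two womans runing|a dogs|no"
print(correct_words(example_answer, "|"))
print(cleanup_keywords(example_answer, "|"))
print(lemmatize_keywords(example_answer, "|"))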
Explanation: Our test class inherits the default configuration DefaultConfig, while also declaring some additional attributes that are specific to the Person Type/Role Annotation in Video task:
inputColumns: list of input columns from the .csv file with the input data
outputColumns: list of output columns from the .csv file with the answers from the workers
annotation_separator: string that separates between the crowd annotations in outputColumns
open_ended_task: boolean variable defining whether the task is open-ended (i.e. the possible crowd annotations are not known beforehand, like in the case of free text input); the task we are processing asks workers to type in free-text tags, so it is open-ended and this variable is set to True
annotation_vector: list of possible crowd answers, mandatory to declare when open_ended_task is False; because our task is open-ended, this list is left empty
processJudgments: method that defines processing of the raw crowd data; for this task, we spell-correct, remove stopwords from, and lemmatize the tags entered by the workers
Some examples of possible processing functions of crowd answers are given below:
End of explanation
class TestConfig(DefaultConfig):
inputColumns = ["videolocation", "subtitles", "imagetags", "subtitletags"]
outputColumns = ["keywords"]
# processing of an open-ended (free input) task
open_ended_task = True
annotation_vector = []
def processJudgments(self, judgments):
# pre-process output to match the values in annotation_vector
for col in self.outputColumns:
# transform to lowercase
judgments[col] = judgments[col].apply(lambda x: str(x).lower())
# remove square brackets from annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('[]','no tags'))
judgments[col] = judgments[col].apply(lambda x: str(x).replace('[',''))
judgments[col] = judgments[col].apply(lambda x: str(x).replace(']',''))
# remove the quotes around the annotations
judgments[col] = judgments[col].apply(lambda x: str(x).replace('"',''))
# apply custom processing functions
judgments[col] = judgments[col].apply(lambda x: correct_words(str(x), self.annotation_separator))
judgments[col] = judgments[col].apply(lambda x: "no tag" if cleanup_keywords(str(x), self.annotation_separator) == '' else cleanup_keywords(str(x), self.annotation_separator))
judgments[col] = judgments[col].apply(lambda x: lemmatize_keywords(str(x), self.annotation_separator))
return judgments
Explanation: The complete configuration class is declared below:
End of explanation
data, config = crowdtruth.load(
file = "../data/person-video-free-input.csv",
config = TestConfig()
)
data['judgments'].head()
Explanation: Pre-processing the input data
After declaring the configuration of our input file, we are ready to pre-process the crowd data:
End of explanation
results = crowdtruth.run(data, config)
Explanation: Computing the CrowdTruth metrics
The pre-processed data can then be used to calculate the CrowdTruth metrics:
End of explanation
results["units"].head()
Explanation: results is a dict object that contains the quality metrics for the video fragments, annotations and crowd workers.
The video fragment metrics are stored in results["units"]:
End of explanation
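As a quick optional aside (not in the original tutorial), results["units"] behaves like a pandas DataFrame here (note the .head() call above), so the fragments can be ranked by their quality score:
results["units"].sort_values(by="uqs", ascending=False).head()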
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(results["units"]["uqs"])
plt.xlabel("Video Fragment Quality Score")
plt.ylabel("Video Fragment")
Explanation: The uqs column in results["units"] contains the video fragment quality scores, capturing the overall workers agreement over each video fragment. Here we plot its histogram:
End of explanation
results["units"]["unit_annotation_score"].head()
Explanation: The unit_annotation_score column in results["units"] contains the video fragment-annotation scores, capturing the likelihood that an annotation is expressed in a video fragment. For each video fragment, we store a dictionary mapping each annotation to its video fragment-annotation score.
End of explanation
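Another optional sketch, assuming each unit_annotation_score cell is a dict-like mapping from tag to score: the top-scoring tag per fragment can then be pulled out with a plain apply.
results["units"]["unit_annotation_score"].apply(
    lambda scores: max(scores, key=scores.get) if len(scores) > 0 else None).head()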
results["workers"].head()
Explanation: The worker metrics are stored in results["workers"]:
End of explanation
plt.hist(results["workers"]["wqs"])
plt.xlabel("Worker Quality Score")
plt.ylabel("Workers")
Explanation: The wqs column in results["workers"] contains the worker quality scores, capturing the overall agreement between one worker and all the other workers.
End of explanation |
4,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy Exercise 1
Imports
Step2: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0
Step3: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step4: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px. | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
Explanation: Numpy Exercise 1
Imports
End of explanation
def checkerboard(size):
    """Return a 2d checkerboard of 0.0 and 1.0 as a NumPy array"""
    a = np.zeros((size, size))
    a[::2, ::2] = 1.0    # every even row: 1.0 at every even column (first, third, ... rows)
    a[1::2, 1::2] = 1.0  # every odd row: 1.0 at every odd column (second, fourth, ... rows)
    return a
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
Explanation: Checkerboard
Write a Python function that creates a square (size,size) 2d Numpy array with the values 0.0 and 1.0:
Your function should work for both odd and even size.
The 0,0 element should be 1.0.
The dtype should be float.
End of explanation
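As an aside (not required by the exercise), the same board can be built from index parity; this sketch should agree with checkerboard above for both even and odd sizes.
def checkerboard_alt(size):
    """1.0 wherever the row index plus the column index is even."""
    return (np.indices((size, size)).sum(axis=0) % 2 == 0).astype(float)
assert np.array_equal(checkerboard_alt(5), checkerboard(5))
assert np.array_equal(checkerboard_alt(6), checkerboard(6))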
f=checkerboard(20)
va.set_block_size(10) #creating the checkerboard and setting pixel size to 10
va.enable()
f
va.disable()
assert True
Explanation: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
End of explanation
g=checkerboard(27)
va.set_block_size(5) #same as above process, with different pixel size
va.enable()
g
va.disable()
assert True
Explanation: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
End of explanation |
4,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-worker training with Keras
Learning Objectives
Multi-worker Configuration
Choose the right strategy
Train the model
Multi worker training in depth
Introduction
This notebook demonstrates multi-worker distributed training with Keras model using tf.distribute.Strategy API, specifically tf.distribute.MultiWorkerMirroredStrategy. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Setup
First, some necessary imports.
Step1: Before importing TensorFlow, make a few changes to the environment.
Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
Step2: Reset the TF_CONFIG environment variable, you'll see more about this later.
Step3: Be sure that the current directory is on python's path. This allows the notebook to import the files written by %%writefile later.
Step4: Now import TensorFlow.
Step5: Dataset and model definition
Next create an mnist.py file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial
Step6: Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
Step7: Multi-worker Configuration
Now let's enter the world of multi-worker training. In TensorFlow, the TF_CONFIG environment variable is required for training on multiple machines, each of which possibly has a different role. TF_CONFIG is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.
Here is an example configuration
Step8: Here is the same TF_CONFIG serialized as a JSON string
Step9: There are two components of TF_CONFIG
Step10: You can access the environment variable from a subprocess
Step11: In the next section, you'll use this to pass the TF_CONFIG to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial
Step12: Note
Step13: Note
Step14: In the code snippet above note that the global_batch_size, which gets passed to Dataset.batch, is set to per_worker_batch_size * num_workers. This ensures that each worker processes batches of per_worker_batch_size examples regardless of the number of workers.
The current directory now contains both Python files
Step15: So json-serialize the TF_CONFIG and add it to the environment variables
Step16: Now, you can launch a worker process that will run the main.py and use the TF_CONFIG
Step17: There are a few things to note about the above command
Step18: Now look what's been output to the worker's logfile so far
Step19: The last line of the log file should say
Step20: Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process)
Step21: Now if you recheck the logs written by the first worker you'll see that it participated in training that model
Step22: Unsurprisingly this ran slower than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
Step23: Multi worker training in depth
So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases.
Dataset sharding
In multi-worker training, dataset sharding is needed to ensure convergence and performance.
The example in the previous section relies on the default autosharding provided by the tf.distribute.Strategy API. You can control the sharding by setting the tf.data.experimental.AutoShardPolicy of the tf.data.experimental.DistributeOptions. To learn more about auto-sharding see the Distributed input guide.
Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended)
Step24: Evaluation
If you pass validation_data into model.fit, it will alternate between training and evaluation for each epoch. The evaluation taking validation_data is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set validation_steps. A repeated dataset is also recommended for evaluation.
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
Performance
You now have a Keras model that is all set up to run in multiple workers with MultiWorkerMirroredStrategy. You can try the following techniques to tweak performance of multi-worker training with MultiWorkerMirroredStrategy.
MultiWorkerMirroredStrategy provides multiple collective communication implementations. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify communication_options parameter of MultiWorkerMirroredStrategy's constructor, e.g. communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL).
Cast the variables to tf.float if possible. The official ResNet model includes an example of how this can be done.
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with tf.distribute.Strategy comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.
When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.
Note
Step25: With that, you're now ready to save
Step26: As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved
Step27: Now, when it's time to load, let's use convenient tf.keras.models.load_model API, and continue with further work. Here, assume only using single worker to load and continue training, in which case you do not call tf.keras.models.load_model within another strategy.scope().
Step28: Checkpoint saving and restoring
On the other hand, checkpointing allows you to save model's weights and restore them without having to save the whole model. Here, you'll create one tf.train.Checkpoint that tracks the model, which is managed by a tf.train.CheckpointManager so that only the latest checkpoint is preserved.
Step29: Once the CheckpointManager is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
Step30: Now, when you need to restore, you can find the latest checkpoint saved using the convenient tf.train.latest_checkpoint function. After restoring the checkpoint, you can continue with training.
Step31: BackupAndRestore callback
BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under backup_dir argument to BackupAndRestore. This is done at the end of each epoch.
Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.
To use it, provide an instance of tf.keras.callbacks.experimental.BackupAndRestore at the tf.keras.Model.fit() call.
With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.
BackupAndRestore callback uses CheckpointManager to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, backup_dir should not be re-used to store other checkpoints in order to avoid name collision.
Currently, BackupAndRestore callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.
Below are two examples for both multi-worker training and single worker training. | Python Code:
import json
import os
import sys
Explanation: Multi-worker training with Keras
Learning Objectives
Multi-worker Configuration
Choose the right strategy
Train the model
Multi worker training in depth
Introduction
This notebook demonstrates multi-worker distributed training with Keras model using tf.distribute.Strategy API, specifically tf.distribute.MultiWorkerMirroredStrategy. With the help of this strategy, a Keras model that was designed to run on single-worker can seamlessly work on multiple workers with minimal code change.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Setup
First, some necessary imports.
End of explanation
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
Explanation: Before importing TensorFlow, make a few changes to the environment.
Disable all GPUs. This prevents errors caused by the workers all trying to use the same GPU. For a real application each worker would be on a different machine.
End of explanation
os.environ.pop('TF_CONFIG', None)
Explanation: Reset the TF_CONFIG environment variable, you'll see more about this later.
End of explanation
if '.' not in sys.path:
sys.path.insert(0, '.')
Explanation: Be sure that the current directory is on python's path. This allows the notebook to import the files written by %%writefile later.
End of explanation
import tensorflow as tf
print(tf.__version__)
Explanation: Now import TensorFlow.
End of explanation
%%writefile mnist.py
import os
import tensorflow as tf
import numpy as np
def mnist_dataset(batch_size):
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
# The `x` arrays are in uint8 and have values in the range [0, 255].
# You need to convert them to float32 with values in the range [0, 1]
x_train = x_train / np.float32(255)
y_train = y_train.astype(np.int64)
train_dataset = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(60000).repeat().batch(batch_size)
return train_dataset
def build_and_compile_cnn_model():
model = tf.keras.Sequential([
tf.keras.Input(shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001),
metrics=['accuracy'])
return model
Explanation: Dataset and model definition
Next create an mnist.py file with a simple model and dataset setup. This python file will be used by the worker-processes in this tutorial:
End of explanation
import mnist
batch_size = 64
single_worker_dataset = mnist.mnist_dataset(batch_size)
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(single_worker_dataset, epochs=3, steps_per_epoch=70)
Explanation: Try training the model for a small number of epochs and observe the results of a single worker to make sure everything works correctly. As training progresses, the loss should drop and the accuracy should increase.
End of explanation
tf_config = {
'cluster': {
'worker': ['localhost:12345', 'localhost:23456']
},
'task': {'type': 'worker', 'index': 0}
}
Explanation: Multi-worker Configuration
Now let's enter the world of multi-worker training. In TensorFlow, the TF_CONFIG environment variable is required for training on multiple machines, each of which possibly has a different role. TF_CONFIG is a JSON string used to specify the cluster configuration on each worker that is part of the cluster.
Here is an example configuration:
End of explanation
# converts a Python object into a json string.
# TODO: Your code goes here
Explanation: Here is the same TF_CONFIG serialized as a JSON string:
End of explanation
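One possible completion of the TODO above (a sketch, not the official solution notebook): serialize the dict with the json module imported at the top, exactly as the tutorial itself does later when setting the environment variable.
json.dumps(tf_config)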
os.environ['GREETINGS'] = 'Hello TensorFlow!'
Explanation: There are two components of TF_CONFIG: cluster and task.
cluster is the same for all workers and provides information about the training cluster, which is a dict consisting of different types of jobs such as worker. In multi-worker training with MultiWorkerMirroredStrategy, there is usually one worker that takes on a little more responsibility like saving checkpoint and writing summary file for TensorBoard in addition to what a regular worker does. Such a worker is referred to as the chief worker, and it is customary that the worker with index 0 is appointed as the chief worker (in fact this is how tf.distribute.Strategy is implemented).
task provides information of the current task and is different on each worker. It specifies the type and index of that worker.
In this example, you set the task type to "worker" and the task index to 0. This machine is the first worker and will be appointed as the chief worker and do more work than the others. Note that other machines will need to have the TF_CONFIG environment variable set as well, and it should have the same cluster dict, but different task type or task index depending on what the roles of those machines are.
For illustration purposes, this tutorial shows how one may set a TF_CONFIG with 2 workers on localhost. In practice, users would create multiple workers on external IP addresses/ports, and set TF_CONFIG on each worker appropriately.
In this example you will use 2 workers, the first worker's TF_CONFIG is shown above. For the second worker you would set tf_config['task']['index']=1
Above, tf_config is just a local variable in python. To actually use it to configure training, this dictionary needs to be serialized as JSON, and placed in the TF_CONFIG environment variable.
Environment variables and subprocesses in notebooks
Subprocesses inherit environment variables from their parent. So if you set an environment variable in this jupyter notebook process:
End of explanation
%%bash
echo ${GREETINGS}
Explanation: You can access the environment variable from a subprocess:
End of explanation
# A distribution strategy for synchronous training on multiple workers.
strategy = # TODO: Your code goes here
Explanation: In the next section, you'll use this to pass the TF_CONFIG to the worker subprocesses. You would never really launch your jobs this way, but it's sufficient for the purposes of this tutorial: To demonstrate a minimal multi-worker example.
Choose the right strategy
In TensorFlow there are two main forms of distributed training:
Synchronous training, where the steps of training are synced across the workers and replicas, and
Asynchronous training, where the training steps are not strictly synced.
MultiWorkerMirroredStrategy, which is the recommended strategy for synchronous multi-worker training, will be demonstrated in this guide.
To train the model, use an instance of tf.distribute.MultiWorkerMirroredStrategy.
MultiWorkerMirroredStrategy creates copies of all variables in the model's layers on each device across all workers. It uses CollectiveOps, a TensorFlow op for collective communication, to aggregate gradients and keep the variables in sync. The tf.distribute.Strategy guide has more details about this strategy.
End of explanation
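One possible completion of the strategy TODO above (a sketch, not the official solution): the explanation calls for an instance of tf.distribute.MultiWorkerMirroredStrategy, and the main.py below uses the equivalent experimental spelling.
strategy = tf.distribute.MultiWorkerMirroredStrategy()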
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = # TODO: Your code goes here
Explanation: Note: TF_CONFIG is parsed and TensorFlow's GRPC servers are started at the time MultiWorkerMirroredStrategy() is called, so the TF_CONFIG environment variable must be set before a tf.distribute.Strategy instance is created. Since TF_CONFIG is not set yet the above strategy is effectively single-worker training.
MultiWorkerMirroredStrategy provides multiple implementations via the CommunicationOptions parameter. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster.
Train the model
With the integration of tf.distribute.Strategy API into tf.keras, the only change you will make to distribute the training to multiple-workers is enclosing the model building and model.compile() call inside strategy.scope(). The distribution strategy's scope dictates how and where the variables are created, and in the case of MultiWorkerMirroredStrategy, the variables created are MirroredVariables, and they are replicated on each of the workers.
End of explanation
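A possible completion of the model-building TODO (a sketch mirroring what main.py does below):
with strategy.scope():
  multi_worker_model = mnist.build_and_compile_cnn_model()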
%%writefile main.py
import os
import json
import tensorflow as tf
import mnist
per_worker_batch_size = 64
tf_config = json.loads(os.environ['TF_CONFIG'])
num_workers = len(tf_config['cluster']['worker'])
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
global_batch_size = per_worker_batch_size * num_workers
multi_worker_dataset = mnist.mnist_dataset(global_batch_size)
with strategy.scope():
# Model building/compiling need to be within `strategy.scope()`.
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset, epochs=3, steps_per_epoch=70)
Explanation: Note: Currently there is a limitation in MultiWorkerMirroredStrategy where TensorFlow ops need to be created after the instance of strategy is created. If you see RuntimeError: Collective ops must be configured at program startup, try creating the instance of MultiWorkerMirroredStrategy at the beginning of the program and put the code that may create ops after the strategy is instantiated.
To actually run with MultiWorkerMirroredStrategy you'll need to run worker processes and pass a TF_CONFIG to them.
Like the mnist.py file written earlier, here is the main.py that each of the workers will run:
End of explanation
%%bash
ls *.py
Explanation: In the code snippet above note that the global_batch_size, which gets passed to Dataset.batch, is set to per_worker_batch_size * num_workers. This ensures that each worker processes batches of per_worker_batch_size examples regardless of the number of workers.
The current directory now contains both Python files:
End of explanation
os.environ['TF_CONFIG'] = json.dumps(tf_config)
Explanation: So json-serialize the TF_CONFIG and add it to the environment variables:
End of explanation
# first kill any previous runs
%killbgscripts
%%bash --bg
python main.py &> job_0.log
Explanation: Now, you can launch a worker process that will run the main.py and use the TF_CONFIG:
End of explanation
import time
time.sleep(10)
Explanation: There are a few things to note about the above command:
It uses the %%bash which is a notebook "magic" to run some bash commands.
It uses the --bg flag to run the bash process in the background, because this worker will not terminate. It waits for all the workers before it starts.
The backgrounded worker process won't print output to this notebook, so the &> redirects its output to a file, so you can see what happened.
So, wait a few seconds for the process to start up:
End of explanation
%%bash
cat job_0.log
Explanation: Now look what's been output to the worker's logfile so far:
End of explanation
tf_config['task']['index'] = 1
os.environ['TF_CONFIG'] = json.dumps(tf_config)
Explanation: The last line of the log file should say: Started server with target: grpc://localhost:12345. The first worker is now ready, and is waiting for all the other worker(s) to be ready to proceed.
So update the tf_config for the second worker's process to pick up:
End of explanation
%%bash
python main.py
Explanation: Now launch the second worker. This will start the training since all the workers are active (so there's no need to background this process):
End of explanation
%%bash
cat job_0.log
Explanation: Now if you recheck the logs written by the first worker you'll see that it participated in training that model:
End of explanation
# Delete the `TF_CONFIG`, and kill any background tasks so they don't affect the next section.
os.environ.pop('TF_CONFIG', None)
%killbgscripts
Explanation: Unsurprisingly this ran slower than the test run at the beginning of this tutorial. Running multiple workers on a single machine only adds overhead. The goal here was not to improve the training time, but only to give an example of multi-worker training.
End of explanation
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = tf.data.experimental.AutoShardPolicy.OFF
global_batch_size = 64
multi_worker_dataset = mnist.mnist_dataset(batch_size=64)
dataset_no_auto_shard = multi_worker_dataset.with_options(options)
Explanation: Multi worker training in depth
So far this tutorial has demonstrated a basic multi-worker setup. The rest of this document looks in detail at other factors which may be useful or important for real use cases.
Dataset sharding
In multi-worker training, dataset sharding is needed to ensure convergence and performance.
The example in the previous section relies on the default autosharding provided by the tf.distribute.Strategy API. You can control the sharding by setting the tf.data.experimental.AutoShardPolicy of the tf.data.experimental.DistributeOptions. To learn more about auto-sharding see the Distributed input guide.
Here is a quick example of how to turn OFF the auto sharding, so each replica processes every example (not recommended):
End of explanation
model_path = '/tmp/keras-model'
def _is_chief(task_type, task_id):
# Note: there are two possible `TF_CONFIG` configuration.
# 1) In addition to `worker` tasks, a `chief` task type is use;
# in this case, this function should be modified to
# `return task_type == 'chief'`.
# 2) Only `worker` task type is used; in this case, worker 0 is
# regarded as the chief. The implementation demonstrated here
# is for this case.
# For the purpose of this colab section, we also add `task_type is None`
# case because it is effectively run with only single worker.
return (task_type == 'worker' and task_id == 0) or task_type is None
def _get_temp_dir(dirpath, task_id):
base_dirpath = 'workertemp_' + str(task_id)
temp_dir = os.path.join(dirpath, base_dirpath)
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_filepath(filepath, task_type, task_id):
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
if not _is_chief(task_type, task_id):
dirpath = _get_temp_dir(dirpath, task_id)
return os.path.join(dirpath, base)
task_type, task_id = (strategy.cluster_resolver.task_type,
strategy.cluster_resolver.task_id)
write_model_path = write_filepath(model_path, task_type, task_id)
Explanation: Evaluation
If you pass validation_data into model.fit, it will alternate between training and evaluation for each epoch. The evaluation taking validation_data is distributed across the same set of workers and the evaluation results are aggregated and available for all workers. Similar to training, the validation dataset is automatically sharded at the file level. You need to set a global batch size in the validation dataset and set validation_steps. A repeated dataset is also recommended for evaluation.
Alternatively, you can also create another task that periodically reads checkpoints and runs the evaluation. This is what Estimator does. But this is not a recommended way to perform evaluation and thus its details are omitted.
Performance
You now have a Keras model that is all set up to run in multiple workers with MultiWorkerMirroredStrategy. You can try the following techniques to tweak performance of multi-worker training with MultiWorkerMirroredStrategy.
MultiWorkerMirroredStrategy provides multiple collective communication implementations. RING implements ring-based collectives using gRPC as the cross-host communication layer. NCCL uses Nvidia's NCCL to implement collectives. AUTO defers the choice to the runtime. The best choice of collective implementation depends upon the number and kind of GPUs, and the network interconnect in the cluster. To override the automatic choice, specify communication_options parameter of MultiWorkerMirroredStrategy's constructor, e.g. communication_options=tf.distribute.experimental.CommunicationOptions(implementation=tf.distribute.experimental.CollectiveCommunication.NCCL).
Cast the variables to tf.float if possible. The official ResNet model includes an example of how this can be done.
Fault tolerance
In synchronous training, the cluster would fail if one of the workers fails and no failure-recovery mechanism exists. Using Keras with tf.distribute.Strategy comes with the advantage of fault tolerance in cases where workers die or are otherwise unstable. You do this by preserving training state in the distributed file system of your choice, such that upon restart of the instance that previously failed or preempted, the training state is recovered.
When a worker becomes unavailable, other workers will fail (possibly after a timeout). In such cases, the unavailable worker needs to be restarted, as well as other workers that have failed.
Note:
Previously, the ModelCheckpoint callback provided a mechanism to restore training state upon restart from job failure for multi-worker training. The TensorFlow team are introducing a new BackupAndRestore callback, to also add the support to single worker training for a consistent experience, and removed fault tolerance functionality from existing ModelCheckpoint callback. From now on, applications that rely on this behavior should migrate to the new callback.
ModelCheckpoint callback
ModelCheckpoint callback no longer provides fault tolerance functionality, please use BackupAndRestore callback instead.
The ModelCheckpoint callback can still be used to save checkpoints. But with this, if training was interrupted or successfully finished, in order to continue training from the checkpoint, the user is responsible to load the model manually.
Optionally the user can choose to save and restore model/weights outside ModelCheckpoint callback.
Model saving and loading
To save your model using model.save or tf.saved_model.save, the destination for saving needs to be different for each worker. On the non-chief workers, you will need to save the model to a temporary directory, and on the chief, you will need to save to the provided model directory. The temporary directories on the worker need to be unique to prevent errors resulting from multiple workers trying to write to the same location. The model saved in all the directories are identical and typically only the model saved by the chief should be referenced for restoring or serving. You should have some cleanup logic that deletes the temporary directories created by the workers once your training has completed.
The reason you need to save on the chief and workers at the same time is because you might be aggregating variables during checkpointing which requires both the chief and workers to participate in the allreduce communication protocol. On the other hand, letting chief and workers save to the same model directory will result in errors due to contention.
With MultiWorkerMirroredStrategy, the program is run on every worker, and in order to know whether the current worker is chief, it takes advantage of the cluster resolver object that has attributes task_type and task_id. task_type tells you what the current job is (e.g. 'worker'), and task_id tells you the identifier of the worker. The worker with id 0 is designated as the chief worker.
In the code snippet below, write_filepath provides the file path to write, which depends on the worker id. In the case of chief (worker with id 0), it writes to the original file path; for others, it creates a temporary directory (with id in the directory path) to write in:
End of explanation
multi_worker_model.save(write_model_path)
Explanation: With that, you're now ready to save:
End of explanation
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(os.path.dirname(write_model_path))
Explanation: As described above, later on the model should only be loaded from the path chief saved to, so let's remove the temporary ones the non-chief workers saved:
End of explanation
# load a model saved via model.save()
loaded_model = # TODO: Your code goes here
# Now that the model is restored, training can continue.
loaded_model.fit(single_worker_dataset, epochs=2, steps_per_epoch=20)
Explanation: Now, when it's time to load, let's use convenient tf.keras.models.load_model API, and continue with further work. Here, assume only using single worker to load and continue training, in which case you do not call tf.keras.models.load_model within another strategy.scope().
End of explanation
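A possible completion of the loading TODO above (a sketch, assuming the model was saved to the model_path defined earlier):
loaded_model = tf.keras.models.load_model(model_path)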
checkpoint_dir = '/tmp/ckpt'
checkpoint = tf.train.Checkpoint(model=multi_worker_model)
write_checkpoint_dir = write_filepath(checkpoint_dir, task_type, task_id)
checkpoint_manager = tf.train.CheckpointManager(
checkpoint, directory=write_checkpoint_dir, max_to_keep=1)
Explanation: Checkpoint saving and restoring
On the other hand, checkpointing allows you to save model's weights and restore them without having to save the whole model. Here, you'll create one tf.train.Checkpoint that tracks the model, which is managed by a tf.train.CheckpointManager so that only the latest checkpoint is preserved.
End of explanation
checkpoint_manager.save()
if not _is_chief(task_type, task_id):
tf.io.gfile.rmtree(write_checkpoint_dir)
Explanation: Once the CheckpointManager is set up, you're now ready to save, and remove the checkpoints non-chief workers saved.
End of explanation
latest_checkpoint = tf.train.latest_checkpoint(checkpoint_dir)
checkpoint.restore(latest_checkpoint)
multi_worker_model.fit(multi_worker_dataset, epochs=2, steps_per_epoch=20)
Explanation: Now, when you need to restore, you can find the latest checkpoint saved using the convenient tf.train.latest_checkpoint function. After restoring the checkpoint, you can continue with training.
End of explanation
# Multi-worker training with MultiWorkerMirroredStrategy.
callbacks = [tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup')]
with strategy.scope():
multi_worker_model = mnist.build_and_compile_cnn_model()
multi_worker_model.fit(multi_worker_dataset,
epochs=3,
steps_per_epoch=70,
callbacks=callbacks)
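# The accompanying explanation mentions a second, single-worker example that is
# not included in this excerpt. A minimal sketch of what it could look like (an
# assumption, not the original cell): the same callback type with its own
# backup directory, used with a plain single-worker Model.fit.
single_worker_callbacks = [
    tf.keras.callbacks.experimental.BackupAndRestore(backup_dir='/tmp/backup_single')]
single_worker_model = mnist.build_and_compile_cnn_model()
single_worker_model.fit(mnist.mnist_dataset(batch_size=64),
                        epochs=3,
                        steps_per_epoch=70,
                        callbacks=single_worker_callbacks)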
Explanation: BackupAndRestore callback
BackupAndRestore callback provides fault tolerance functionality, by backing up the model and current epoch number in a temporary checkpoint file under backup_dir argument to BackupAndRestore. This is done at the end of each epoch.
Once jobs get interrupted and restart, the callback restores the last checkpoint, and training continues from the beginning of the interrupted epoch. Any partial training already done in the unfinished epoch before interruption will be thrown away, so that it doesn't affect the final model state.
To use it, provide an instance of tf.keras.callbacks.experimental.BackupAndRestore at the tf.keras.Model.fit() call.
With MultiWorkerMirroredStrategy, if a worker gets interrupted, the whole cluster pauses until the interrupted worker is restarted. Other workers will also restart, and the interrupted worker rejoins the cluster. Then, every worker reads the checkpoint file that was previously saved and picks up its former state, thereby allowing the cluster to get back in sync. Then the training continues.
BackupAndRestore callback uses CheckpointManager to save and restore the training state, which generates a file called checkpoint that tracks existing checkpoints together with the latest one. For this reason, backup_dir should not be re-used to store other checkpoints in order to avoid name collision.
Currently, BackupAndRestore callback supports single worker with no strategy, MirroredStrategy, and multi-worker with MultiWorkerMirroredStrategy.
Below are two examples for both multi-worker training and single worker training.
End of explanation |
4,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Step1: Bazel
Step2: DeepMind Lab
Step3: Python dependencies
Step11: Imports and Utils
Step12: Experiment
Step13: Learning
Step14: Evaluation | Python Code:
!apt-get install libsdl2-dev
!apt-get install libosmesa6-dev
!apt-get install libffi-dev
!apt-get install gettext
!apt-get install python3-numpy-dev python3-dev
Explanation: Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
RL Unplugged: Offline R2D2 - DeepMind Lab
A Colab example of an Acme R2D2 agent on DeepMind Lab data.
<a href="https://colab.research.google.com/github/deepmind/deepmind_research/blob/master/rl_unplugged/dmlab_r2d2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Installation
External dependencies
End of explanation
BAZEL_VERSION = '3.6.0'
!wget https://github.com/bazelbuild/bazel/releases/download/{BAZEL_VERSION}/bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!chmod +x bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!./bazel-{BAZEL_VERSION}-installer-linux-x86_64.sh
!bazel --version
Explanation: Bazel
End of explanation
!git clone https://github.com/deepmind/lab.git
%%writefile lab/bazel/python.BUILD
# Description:
# Build rule for Python and Numpy.
# This rule works for Debian and Ubuntu. Other platforms might keep the
# headers in different places, cf. 'How to build DeepMind Lab' in build.md.
cc_library(
name = "python",
hdrs = select(
{
"@bazel_tools//tools/python:PY3": glob([
"usr/include/python3.6m/*.h",
"usr/local/lib/python3.6/dist-packages/numpy/core/include/numpy/*.h",
]),
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
includes = select(
{
"@bazel_tools//tools/python:PY3": [
"usr/include/python3.6m",
"usr/local/lib/python3.6/dist-packages/numpy/core/include",
],
},
no_match_error = "Internal error, Python version should be one of PY2 or PY3",
),
visibility = ["//visibility:public"],
)
alias(
name = "python_headers",
actual = ":python",
visibility = ["//visibility:public"],
)
!cd lab && bazel build -c opt --python_version=PY3 //python/pip_package:build_pip_package
!cd lab && ./bazel-bin/python/pip_package/build_pip_package /tmp/dmlab_pkg
!pip install /tmp/dmlab_pkg/deepmind_lab-1.0-py3-none-any.whl --force-reinstall
Explanation: DeepMind Lab
End of explanation
!pip install dm_env
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
# Upgrade to recent commit for latest R2D2 learner.
!pip install --upgrade git+https://github.com/deepmind/acme.git@3dfda9d392312d948906e6c567c7f56d8c911de5
Explanation: Python dependencies
End of explanation
# @title Imports
import copy
import functools
from acme import environment_loop
from acme import specs
from acme.adders import reverb as acme_reverb
from acme.agents.tf import actors
from acme.agents.tf.r2d2 import learning as r2d2
from acme.tf import utils as tf_utils
from acme.tf import networks
from acme.utils import loggers
from acme.wrappers import observation_action_reward
import tree
import deepmind_lab
import dm_env
import numpy as np
import reverb
import sonnet as snt
import tensorflow as tf
import trfl
# @title Environment
_ACTION_MAP = {
0: (0, 0, 0, 1, 0, 0, 0),
1: (0, 0, 0, -1, 0, 0, 0),
2: (0, 0, -1, 0, 0, 0, 0),
3: (0, 0, 1, 0, 0, 0, 0),
4: (-10, 0, 0, 0, 0, 0, 0),
5: (10, 0, 0, 0, 0, 0, 0),
6: (-60, 0, 0, 0, 0, 0, 0),
7: (60, 0, 0, 0, 0, 0, 0),
8: (0, 10, 0, 0, 0, 0, 0),
9: (0, -10, 0, 0, 0, 0, 0),
10: (-10, 0, 0, 1, 0, 0, 0),
11: (10, 0, 0, 1, 0, 0, 0),
12: (-60, 0, 0, 1, 0, 0, 0),
13: (60, 0, 0, 1, 0, 0, 0),
14: (0, 0, 0, 0, 1, 0, 0),
}
class DeepMindLabEnvironment(dm_env.Environment):
  """DeepMind Lab environment."""
def __init__(self, level_name: str, action_repeats: int = 4):
    """Construct environment.

    Args:
      level_name: DeepMind lab level name (e.g. 'rooms_watermaze').
      action_repeats: Number of times the same action is repeated on every
        step().
    """
config = dict(fps='30',
height='72',
width='96',
maxAltCameraHeight='1',
maxAltCameraWidth='1',
hasAltCameras='false')
# seekavoid_arena_01 is not part of dmlab30.
if level_name != 'seekavoid_arena_01':
level_name = 'contributed/dmlab30/{}'.format(level_name)
self._lab = deepmind_lab.Lab(level_name, ['RGB_INTERLEAVED'], config)
self._action_repeats = action_repeats
self._reward = 0
def _observation(self):
last_action = getattr(self, '_action', 0)
last_reward = getattr(self, '_reward', 0)
self._last_observation = observation_action_reward.OAR(
observation=self._lab.observations()['RGB_INTERLEAVED'],
action=np.array(last_action, dtype=np.int64),
reward=np.array(last_reward, dtype=np.float32))
return self._last_observation
def reset(self):
self._lab.reset()
return dm_env.restart(self._observation())
def step(self, action):
if not self._lab.is_running():
return dm_env.restart(self.reset())
self._action = action.item()
if self._action not in _ACTION_MAP:
raise ValueError('Action not available')
lab_action = np.array(_ACTION_MAP[self._action], dtype=np.intc)
self._reward = self._lab.step(lab_action, num_steps=self._action_repeats)
if self._lab.is_running():
return dm_env.transition(self._reward, self._observation())
return dm_env.termination(self._reward, self._last_observation)
def observation_spec(self):
return observation_action_reward.OAR(
observation=dm_env.specs.Array(shape=(72, 96, 3), dtype=np.uint8),
action=dm_env.specs.Array(shape=(), dtype=np.int64),
reward=dm_env.specs.Array(shape=(), dtype=np.float32))
def action_spec(self):
return dm_env.specs.DiscreteArray(num_values=15, dtype=np.int64)
# @title Dataset
def _decode_images(pngs):
  """Decode tensor of PNGs."""
decode_rgb_png = functools.partial(tf.io.decode_png, channels=3)
images = tf.map_fn(decode_rgb_png, pngs, dtype=tf.uint8,
parallel_iterations=10)
# [N, 72, 96, 3]
images.set_shape((pngs.shape[0], 72, 96, 3))
return images
def _tf_example_to_step_ds(tf_example: tf.train.Example,
episode_length: int) -> reverb.ReplaySample:
  """Create a Reverb replay sample from a TF example."""
# Parse tf.Example.
def sequence_feature(shape, dtype=tf.float32):
return tf.io.FixedLenFeature(shape=[episode_length] + shape, dtype=dtype)
feature_description = {
'episode_id': tf.io.FixedLenFeature([], tf.int64),
'start_idx': tf.io.FixedLenFeature([], tf.int64),
'episode_return': tf.io.FixedLenFeature([], tf.float32),
'observations_pixels': sequence_feature([], tf.string),
'observations_reward': sequence_feature([]),
# actions are one-hot arrays.
'observations_action': sequence_feature([15]),
'actions': sequence_feature([], tf.int64),
'rewards': sequence_feature([]),
'discounted_rewards': sequence_feature([]),
'discounts': sequence_feature([]),
}
data = tf.io.parse_single_example(tf_example, feature_description)
pixels = _decode_images(data['observations_pixels'])
observation = observation_action_reward.OAR(
observation=pixels,
action=tf.argmax(data['observations_action'],
axis=1, output_type=tf.int64),
reward=data['observations_reward'])
data = acme_reverb.Step(
observation=observation,
action=data['actions'],
reward=data['rewards'],
discount=data['discounts'],
start_of_episode=tf.zeros((episode_length,), tf.bool),
extras={})
# Keys are all zero and probabilities are all one.
info = reverb.SampleInfo(key=tf.zeros((episode_length,), tf.int64),
probability=tf.ones((episode_length,), tf.float32),
table_size=tf.zeros((episode_length,), tf.int64),
priority=tf.ones((episode_length,), tf.float32))
sample = reverb.ReplaySample(info=info, data=data)
return tf.data.Dataset.from_tensor_slices(sample)
def subsequences(step_ds: tf.data.Dataset,
length: int, shift: int = 1
) -> tf.data.Dataset:
  """Dataset of subsequences from a dataset of episode steps."""
window_ds = step_ds.window(length, shift=shift, stride=1)
return window_ds.interleave(_nest_ds).batch(length, drop_remainder=True)
def _nest_ds(nested_ds: tf.data.Dataset) -> tf.data.Dataset:
  """Produces a dataset of nests from a nest of datasets of the same size."""
flattened_ds = tuple(tree.flatten(nested_ds))
zipped_ds = tf.data.Dataset.zip(flattened_ds)
return zipped_ds.map(lambda *x: tree.unflatten_as(nested_ds, x))
def make_dataset(path: str,
episode_length: int,
sequence_length: int,
sequence_shift: int,
num_shards: int = 500) -> tf.data.Dataset:
  """Create dataset of DeepMind Lab sequences."""
filenames = [f'{path}/tfrecord-{i:05d}-of-{num_shards:05d}'
for i in range(num_shards)]
file_ds = tf.data.Dataset.from_tensor_slices(filenames)
file_ds = file_ds.repeat().shuffle(num_shards)
tfrecord_dataset = functools.partial(tf.data.TFRecordDataset,
compression_type='GZIP')
# Dataset of tf.Examples containing full episodes.
example_ds = file_ds.interleave(tfrecord_dataset)
# Dataset of episodes, each represented as a dataset of steps.
_tf_example_to_step_ds_with_length = functools.partial(
_tf_example_to_step_ds, episode_length=episode_length)
episode_ds = example_ds.map(_tf_example_to_step_ds_with_length,
num_parallel_calls=tf.data.experimental.AUTOTUNE)
# Dataset of sequences.
training_sequences = functools.partial(subsequences, length=sequence_length,
shift=sequence_shift)
return episode_ds.interleave(training_sequences)
Explanation: Imports and Utils
End of explanation
# task | episode length | run
# ----------------------------------------------------------------------------
# seekavoid_arena_01 | 301 | training_{0..2}
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.0
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.01
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.1
# seekavoid_arena_01 | 301 | snapshot_{0..1}_eps_0.25
# explore_object_rewards_few | 1351 | training_{0..2}
# explore_object_rewards_many | 1801 | training_{0..2}
# rooms_select_nonmatching_object | 181 | training_{0..2}
# rooms_watermaze | 1801 | training_{0..2}
TASK = 'seekavoid_arena_01'
RUN = 'training_0'
EPISODE_LENGTH = 301
BATCH_SIZE = 1
DATASET_PATH = f'gs://rl_unplugged/dmlab/{TASK}/{RUN}'
environment = DeepMindLabEnvironment(TASK, action_repeats=2)
dataset = make_dataset(DATASET_PATH, num_shards=500,
episode_length=EPISODE_LENGTH,
sequence_length=120,
sequence_shift=40)
dataset = dataset.padded_batch(BATCH_SIZE, drop_remainder=True)
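# Optional sanity check (not in the original colab): pull a single batch and
# print the component shapes to confirm the [batch, sequence, ...] layout.
example_batch = next(iter(dataset))
print(tree.map_structure(lambda t: t.shape, example_batch.data))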
Explanation: Experiment
End of explanation
# Create network.
def process_observations(x):
return x._replace(observation=tf.image.convert_image_dtype(x.observation, tf.float32))
environment_spec = specs.make_environment_spec(environment)
num_actions = environment_spec.actions.maximum + 1
network = snt.DeepRNN([
process_observations,
networks.R2D2AtariNetwork(num_actions=num_actions)
])
tf_utils.create_variables(network, [environment_spec.observations])
# Create a logger.
logger = loggers.TerminalLogger(label='learner', time_delta=1.)
# Create the R2D2 learner.
learner = r2d2.R2D2Learner(
environment_spec=environment_spec,
network=network,
target_network=copy.deepcopy(network),
discount=0.99,
learning_rate=1e-4,
importance_sampling_exponent=0.2,
target_update_period=100,
burn_in_length=0,
sequence_length=120,
store_lstm_state=False,
dataset=dataset,
logger=logger)
for _ in range(5):
learner.step()
Explanation: Learning
End of explanation
# Create a logger.
logger = loggers.TerminalLogger(label='evaluator', time_delta=1.)
# Create evaluation loop.
eval_network = snt.DeepRNN([
network,
lambda q: trfl.epsilon_greedy(q, epsilon=0.4**8).sample(),
])
eval_loop = environment_loop.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedRecurrentActor(policy_network=eval_network),
logger=logger)
eval_loop.run(2)
Explanation: Evaluation
End of explanation |
4,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A notebook to test and demonstrate the MMD test of Gretton et al., 2012 used as a goodness-of-fit test. It requires the ability to sample from the density p.
Step1: MMD test (as a goodness-of-fit test)
Step4: MMD test with parameter search
Step5: | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
#%config InlineBackend.figure_format = 'pdf'
import freqopttest.tst as tst
import kgof
import kgof.data as data
import kgof.density as density
import kgof.goftest as gof
import kgof.mmd as mgof
import kgof.kernel as ker
import kgof.util as util
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
# font options
font = {
#'family' : 'normal',
#'weight' : 'bold',
'size' : 16
}
plt.rc('font', **font)
plt.rc('lines', linewidth=2)
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
Explanation: A notebook to test and demonstrate the MMD test of Gretton et al., 2012 used as a goodness-of-fit test. It requires the ability to sample from the density p.
End of explanation
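For reference (an added note, not part of the original notebook), the statistic behind the test below is the squared kernel maximum mean discrepancy of Gretton et al. (2012),
$$\mathrm{MMD}^2(P, Q) = \mathbb{E}[k(x, x')] + \mathbb{E}[k(y, y')] - 2\,\mathbb{E}[k(x, y)],$$
estimated by averaging the three kernel matrices; the last cell of this notebook computes exactly that biased estimate as np.mean(Kxx + Kyy - 2*Kxy), and the n_permute argument below suggests that the null distribution of the statistic is calibrated by permutation.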
# true p
seed = 20
d = 2
# sample
n = 400
alpha = 0.05
mean = np.zeros(d)
variance = 1
p = density.IsotropicNormal(mean, variance)
q_mean = mean.copy()
q_variance = variance
# q_mean[0] = 1
ds = data.DSIsotropicNormal(q_mean+1, q_variance)
# q_means = np.array([ [0], [0]])
# q_variances = np.array([0.01, 1])
# ds = data.DSIsoGaussianMixture(q_means, q_variances, pmix=[0.2, 0.8])
# Test
dat = ds.sample(n, seed=seed+2)
X = dat.data()
# Use median heuristic to determine the Gaussian kernel width
sig2 = util.meddistance(X, subsample=1000)**2
k = ker.KGauss(sig2)
mmd_test = mgof.QuadMMDGof(p, k, n_permute=300, alpha=alpha, seed=seed)
mmd_result = mmd_test.perform_test(dat)
mmd_result
print('Reject H0?: {0}'.format(mmd_result['h0_rejected']))
sim_stats = mmd_result['list_permuted_mmd2']
stat = mmd_result['test_stat']
unif_weights = np.ones_like(sim_stats)/float(len(sim_stats))
plt.hist(sim_stats, label='Simulated', weights=unif_weights)
plt.plot([stat, stat], [0, 0], 'r*', markersize=30, label='Stat')
plt.legend(loc='best')
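# Illustrative addition (not in the original notebook): a Monte-Carlo p-value
# estimate from the permutation distribution, i.e. the fraction of permuted
# MMD^2 values at least as large as the observed test statistic.
p_val_mc = np.mean(np.array(sim_stats) >= stat)
print('Monte-Carlo p-value estimate: {0:.3f}'.format(p_val_mc))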
Explanation: MMD test (as a goodness-of-fit test)
End of explanation
def gbrbm_perturb(std_perturb_B, dx=50, dh=10):
    """Get a Gaussian-Bernoulli RBM problem where the first entry of the B matrix
    (the matrix linking the latent and the observation) is perturbed.

    - std_perturb_B: standard deviation of the Gaussian noise used to perturb B.
    - dx: observed dimension
    - dh: latent dimension

    Return p (density), data source
    """
    with util.NumpySeedContext(seed=10):
        B = np.random.randint(0, 2, (dx, dh))*2 - 1.0
        b = np.random.randn(dx)
        c = np.random.randn(dh)
        p = density.GaussBernRBM(B, b, c)

        B_perturb = B.copy()
        if std_perturb_B > 1e-7:
            B_perturb[0, 0] = B_perturb[0, 0] + \
                np.random.randn(1)*std_perturb_B
        ds = data.DSGaussBernRBM(B_perturb, b, c, burnin=2000)
    return p, ds
def gbrbm_perturb_all(std_perturb_B, dx=50, dh=10):
    """Get a Gaussian-Bernoulli RBM problem where all entries of B
    (the matrix linking the latent and the observation) are perturbed.

    - std_perturb_B: standard deviation of the Gaussian noise used to perturb B.
    - dx: observed dimension
    - dh: latent dimension

    Return p (density), data source
    """
    with util.NumpySeedContext(seed=11):
        B = np.random.randint(0, 2, (dx, dh))*2 - 1.0
        b = np.random.randn(dx)
        c = np.random.randn(dh)
        p = density.GaussBernRBM(B, b, c)

        # default to the unperturbed B so that ds is always well defined
        B_perturb = B
        if std_perturb_B > 1e-7:
            B_perturb = B + np.random.randn(dx, dh)*std_perturb_B
        ds = data.DSGaussBernRBM(B_perturb, b, c, burnin=2000)
    return p, ds
n = 1000
d = 50
seed = 991
# p, qds = gbrbm_perturb_all(0.06, dx=d, dh=10)
p, qds = gbrbm_perturb(np.sqrt(0.1), dx=d, dh=10)
qdat = qds.sample(n, seed=seed+3)
Y = qdat.data()
pds = p.get_datasource()
datX = pds.sample(n, seed=seed+1)
X = datX.data()
XY = np.vstack((X, Y))
np.var(X, 0)
np.var(Y, 0)
# Get the median heuristic for each dimension
med_factors = 2.0**np.linspace(-5, 5, 30)
meds = np.zeros(d)
for i in range(d):
    medi = util.meddistance(XY[:, [i]], subsample=1000)
    meds[i] = medi
candidate_kernels = []
for i in range(len(med_factors)):
    ki = ker.KDiagGauss( (meds**2)*med_factors[i] )
    candidate_kernels.append(ki)
# k = ker.KDiagGauss(2*meds**2)
# Construct a list of kernels to try based on multiples of the median
# heuristic
# med = util.meddistance(XY, subsample=1000)
# candidate_kernels = [ker.KGauss(f*med**2) for f in med_factors]
# k = ker.KGauss((2.0**-1)*med**2)
# candidate_kernels = [k]
mmd_opt = mgof.QuadMMDGofOpt(p, n_permute=300, alpha=alpha, seed=seed+3)
mmd_result = mmd_opt.perform_test(qdat,
candidate_kernels=candidate_kernels,
tr_proportion=0.2, reg=1e-3)
mmd_result
Explanation: MMD test with parameter search
End of explanation
Kxy = k.eval(X, Y)
Kxx = k.eval(X, X)
Kyy = k.eval(Y, Y)
plt.figure(figsize=(8, 8))
plt.imshow(Kxy)
plt.title('Kxy')
plt.colorbar()
plt.hist(Kxy.ravel(), bins=50)
plt.figure(figsize=(8, 8))
plt.imshow(Kxx)
plt.title('Kxx')
plt.colorbar()
plt.figure(figsize=(8, 8))
plt.imshow(Kyy)
plt.title('Kyy')
plt.colorbar()
mmd = np.mean(Kxx+Kyy-2*Kxy)
mmd
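# Sketch added for illustration: the value above is the biased (V-statistic)
# estimate of MMD^2. An unbiased U-statistic estimate excludes the diagonal
# terms of Kxx and Kyy; both Gram matrices are m x m with m samples per set.
m = Kxx.shape[0]
mmd2_unbiased = ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
                 + (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
                 - 2.0 * Kxy.mean())
mmd2_unbiased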
Explanation:
End of explanation |
4,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bins Mark
This Mark is essentially the same as the Hist Mark from a user point of view, but is actually a Bars instance that bins sample data.
The difference with Hist is that the binning is done in the backend, so it will work better for large data as it does not have to ship the whole data back and forth to the frontend.
Step1: Give the Hist mark the data you want to bin as the sample argument, and also give 'x' and 'y' scales.
Step2: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y
Step3: Tuning the bins
Under the hood, the Bins mark is really a Bars mark, with some additional magic to control the binning. The data in sample is binned into equal-width bins. The parameters controlling the binning are the following traits
Step4: Histogram Styling
The styling of Hist is identical to the one of Bars | Python Code:
# Create a sample of Gaussian draws
np.random.seed(0)
x_data = np.random.randn(1000)
Explanation: Bins Mark
This Mark is essentially the same as the Hist Mark from a user point of view, but is actually a Bars instance that bins sample data.
The difference with Hist is that the binning is done in the backend, so it will work better for large data as it does not have to ship the whole data back and forth to the frontend.
End of explanation
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, padding=0)
fig
Explanation: Give the Hist mark the data you want to bin as the sample argument, and also give 'x' and 'y' scales.
End of explanation
hist.x, hist.y
Explanation: The midpoints of the resulting bins and their number of elements can be recovered via the read-only traits x and y:
End of explanation
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, padding=0)
fig
# Changing the number of bins
hist.bins = 'sqrt'
# Changing the range
hist.min = 0
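# Short additional sketch based on the traits described in the explanation for
# this cell: a fixed integer number of bins and an upper range limit can be set
# in the same way (the exact values here are arbitrary).
hist.bins = 25
hist.max = 3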
Explanation: Tuning the bins
Under the hood, the Bins mark is really a Bars mark, with some additional magic to control the binning. The data in sample is binned into equal-width bins. The parameters controlling the binning are the following traits:
bins sets the number of bins. It is either a fixed integer (10 by default), or the name of a method to determine the number of bins in a smart way ('auto', 'fd', 'doane', 'scott', 'rice', 'sturges' or 'sqrt').
min and max set the range of the data (sample) to be binned
density, if set to True, normalizes the heights of the bars.
For more information, see the documentation of numpy's histogram
End of explanation
# Normalizing the count
fig = plt.figure(padding_y=0)
hist = plt.bin(x_data, density=True)
fig
# changing the color
hist.colors=['orangered']
# stroke and opacity update
hist.stroke = 'orange'
hist.opacities = [0.5] * len(hist.x)
# Laying the histogram on its side
hist.orientation = 'horizontal'
fig.axes[0].orientation = 'vertical'
fig.axes[1].orientation = 'horizontal'
Explanation: Histogram Styling
The styling of Hist is identical to the one of Bars
End of explanation |
4,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to Experimental Design with Emukit
Overview
Step1: Navigation
What is experimental design?
The ingredients of experimental design
Emukit's experimental design interface
References
1. What is experimental design?
Consider a function $f
Step2: The space object defines the input space $X = [0, 1]$, which in this case is purely continuous and only one dimensional. We may also apply experimental design in other domains that contain discrete or categorical parameters.
Of course in reality, evaluating $f$ on a grid wouldn't be possible, but since the forrester function is a synthetic function we can evaluate it here for visualization purposes.
Step3: <h4 id='bo_intro_init_design'> The Initial Design </h4>
Usually, before we start the actual ExpDesign loop we need to gather a few observations such that we can fit the model. This is called the initial design and common strategies are either a predefined grid or sampling points uniformly at random.
Step4: <h4 id='bo_intro_model'> The Model </h4>
Now we can start with the ExpDesign loop by first fitting a model on the collected data.
A popular model for ExpDesign is a Gaussian process (GP) which defines a probability distribution across classes of functions, typically smooth, such that each linear finite-dimensional restriction is multivariate Gaussian (Rasmussen and Williams, 2006). GPs are fully parametrized by a mean $\mu(x)$ and a covariance function $k(x,x')$. Without loss of generality $\mu(x)$ is assumed to be zero. The covariance function $k(x,x')$ characterizes the smoothness and other properties of $f$. It is known that the kernel of the process has to be continuous, symmetric and positive definite. A widely used kernel is the squared exponential or RBF kernel
Step5: <h4 id='bo_intro_acquisition'> The Acquisition Function </h4>
In the second step of our ExpDesign loop we use our model to compute the acquisition function. Two examples of ExpDesign acquisition functions are
Step6: <h4 id='bo_intro_eval'> Evaluating the objective function </h4>
To find the next point to evaluate we optimize the acquisition function using a standard gradient descent optimizer.
Step7: Afterwards we evaluate the true objective function and append it to our initial observations.
Step8: After updating the model, you can see that the uncertainty about the true objective function in this region decreases and our model becomes more certain.
Step9: 3. Emukit's experimental design interface
Of course in practice we don't want to implement all of these steps ourselves. Emukit provides a convenient and flexible interface to apply experimental design. Below we can see how to run experimental design on the exact same function for 10 iterations. | Python Code:
# General imports
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors as mcolors
# Figure config
LEGEND_SIZE = 15
Explanation: An Introduction to Experimental Design with Emukit
Overview
End of explanation
from emukit.test_functions import forrester_function
from emukit.core.loop.user_function import UserFunctionWrapper
from emukit.core import ContinuousParameter, ParameterSpace
target_function, space = forrester_function()
Explanation: Navigation
What is experimental design?
The ingredients of experimental design
Emukit's experimental design interface
References
1. What is experimental design?
Consider a function $f: \mathbb{X} \rightarrow \mathbb{R}$, $x\mapsto f(x)$ which is defined in some constrained input space $\mathbb{X}$. The function might be unknown, and we may only learn about it by querying it at some locations $x$ to obtain (possibly noisy) measurements $y(x) = f(x) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma^2_{noise})$.
Experimental design (ExpDesign) tries to predict the function $f(x)$ as accurately as possible also in locations where it has not been observed. This is especially useful if one needs to know the value of $f$ at a particular point $x_{new}$ but it would take too long in real life to evaluate $f$. This happens for example when $f(x)$ is the output of a time-consuming computer simulation, and a decision that needs to be made depends on the value $f(x_{new})$.
An example of such a scenario might be a simulation of a tsunami [Saito, 2019]
that is being run whenever an earthquake happens, in order to decide if inhabited regions need to be evacuated, and there is just not enough time to query the precise but expensive simulation. The function $f(x)$ in this case might describe the severity of the tsunami (wave height), and the inputs $x$ might describe physical measurement on the ocean ground.
An emulator for the function $f$ that can be queried instead of the simulation, and would give an approximate answer with a calibrated error bar which can be used to make the decision instead. For this, the emulator first needs to be trained on "datapoints" which are the results of previous simulation runs of $f$.
To make an emulator as reliable and functional as possible, the aim is to learn the function $f$ as well as possible given some limited number of function evaluations.
There are two crucial bits in experimental design:
A prior probability measure $p(f)$ which captures our prior beliefs on $f$, called the model. Everytime we observe new data $D$ the prior will be updated to a 'posterior' $p(f|D)$ using the available data. Obtaining the data $D$ would require running the costly simulation.
An acquisition function $a: \mathbb{X} \rightarrow \mathbb{R}$ which for each point in the input space quantifies the utility of evaluating this point. The central idea of the acquisition function is that the next point that will be acquired should be maximally informative to learn $f$.
Given these ingredients, ExpDesign essentially iterates the following three steps:
1. fit the model $p(f|D_{n})$ on the currently available data $D_{n}$.
2. find the most interesting point to evaluate by $x_{n+1} \in \operatorname*{arg\:max}{x \in \mathbb{X}} a_n(x)$
3. evaluate the objective function at $x{n+1}$, obtain $y_{n+1}$ and add the new observation to the data $D_{n+1} \leftarrow D_{n} \cup {x_{n+1}, y_{n+1}}$.
2. The ingredients of experimental design
<h4 id='bo_intro_objective'>The Objective Function and the Input Space</h4>
As an example let's assume we want to learn the one-dimensional forrester function:
$$
f(x) = (6x - 2)^2\sin(12x - 4)
$$
which is defined over the interval $x \in [0, 1]$. In this example we know the functional form of $f$, but in practice we may not.
Conviently, this function is already implemented in Emukit. Note that in order to pass it to other Emukit modules we wrap the function by the UserFunctionWrapper interface.
End of explanation
x_plot = np.linspace(space.parameters[0].min, space.parameters[0].max, 301)[:, None]
y_plot = target_function(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(x_plot, y_plot, "k", label="Target Function")
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: The space object defines the input space $X = [0, 1]$, which in this case is purely continuous and only one dimensional. We may also apply experimental design in other domains that contain discrete or categorical parameters.
Of course in reality, evaluating $f$ on a grid wouldn't be possible, but since the forrester function is a synthetic function we can evaluate it here for visualization purposes.
End of explanation
X_init = np.array([[0.2],[0.6], [0.9]])
Y_init = target_function(X_init)
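# Alternative sketch (not from the original notebook): an initial design drawn
# uniformly at random from the input domain [0, 1], as mentioned in the text.
# Plain numpy is used here to avoid assuming any particular Emukit helper.
X_init_random = np.random.rand(3, 1)
Y_init_random = target_function(X_init_random)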
plt.figure(figsize=(12, 8))
plt.plot(X_init, Y_init, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Target Function")
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_init_design'> The Initial Design </h4>
Usually, before we start the actual ExpDesign loop we need to gather a few observations such that we can fit the model. This is called the initial design and common strategies are either a predefined grid or sampling points uniformly at random.
End of explanation
import GPy
from emukit.model_wrappers.gpy_model_wrappers import GPyModelWrapper
gpy_model = GPy.models.GPRegression(X_init, Y_init, GPy.kern.RBF(1, lengthscale=0.08, variance=20), noise_var=1e-10)
emukit_model = GPyModelWrapper(gpy_model)
mu_plot, var_plot = emukit_model.predict(x_plot)
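# Small illustrative check: the wrapped model can also be queried at a single
# location; predict returns the posterior mean and variance at that point.
mu_single, var_single = emukit_model.predict(np.array([[0.5]]))
print('Posterior mean/variance at x=0.5:', mu_single[0, 0], var_single[0, 0])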
plt.figure(figsize=(12, 8))
plt.plot(X_init, Y_init, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_model'> The Model </h4>
Now we can start with the ExpDesign loop by first fitting a model on the collected data.
A popular model for ExpDesign is a Gaussian process (GP) which defines a probability distribution across classes of functions, typically smooth, such that each linear finite-dimensional restriction is multivariate Gaussian (Rasmussen and Williams, 2006). GPs are fully parametrized by a mean $\mu(x)$ and a covariance function $k(x,x')$. Without loss of generality $\mu(x)$ is assumed to be zero. The covariance function $k(x,x')$ characterizes the smoothness and other properties of $f$. It is known that the kernel of the process has to be continuous, symmetric and positive definite. A widely used kernel is the squared exponential or RBF kernel: $$ k(x,x') = \theta_0 \cdot \exp{ \left(-\frac{\|x-x'\|^2}{\theta_1}\right)} $$ where $\theta_0$ and and $\theta_1$ are hyperparameters.
To denote that $f$ is a sample from a GP with mean $\mu$ and covariance $k$ we write
$$f \sim \mathcal{GP}(\mu,k).$$
For regression tasks, the most important feature of GPs is that process priors are conjugate to the likelihood from finitely many observations $Y = (y_1,\dots,y_n)^T$ and $X ={x_1,...,x_n}$, $x_i\in \mathcal{X}$ of the form $y_i = f(x_i) + \epsilon_i$ where $\epsilon_i \sim \mathcal{N} (0,\sigma_{noise}^2)$ and we estimate $\sigma_{noise}$ by an additional hyperparameter $\theta_2$.
We obtain the Gaussian posterior $f(x^*)|X, Y, \theta \sim \mathcal{N}(\mu(x^*),\sigma^2(x^*))$, where $\mu(x^*)$ and $\sigma^2(x^*)$ have a closed form. See (Rasmussen and Williams, 2006) for more details.
Note that Gaussian processes are also characterized by hyperparameters $\theta = {\theta_0, ... \theta_k}$ such as for instance the kernel lengthscales. For simplicity we keep these hyperparameters fixed here. However, we usually either optimize or sample these hyperparameters using the marginal loglikelihood of the GP. Of course we could also use any other model that returns a mean $\mu(x)$ and variance $\sigma^2(x)$ at arbitrary input points $x$, such as Bayesian neural networks or random forests.
End of explanation
from emukit.experimental_design.acquisitions import IntegratedVarianceReduction, ModelVariance
us_acquisition = ModelVariance(emukit_model)
ivr_acquisition = IntegratedVarianceReduction(emukit_model, space)
us_plot = us_acquisition.evaluate(x_plot)
ivr_plot = ivr_acquisition.evaluate(x_plot)
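# Optional sanity check (illustration only): uncertainty sampling scores points
# by the marginal predictive variance, so it should agree with the model
# variance computed earlier, up to array shape.
print('US equals model variance:', np.allclose(np.ravel(us_plot), np.ravel(var_plot)))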
plt.figure(figsize=(12, 8))
plt.plot(x_plot, us_plot / np.max(us_plot), "green", label="US")
plt.plot(x_plot, ivr_plot / np.max(ivr_plot) , "purple", label="IVR")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: <h4 id='bo_intro_acquisition'> The Acquisition Function </h4>
In the second step of our ExpDesign loop we use our model to compute the acquisition function. Two examples of ExpDesign acquisition functions are:
Uncertainty Sampling (US): Choose the next value $x_{n+1}$ at the location where the model on $f(x)$ has the highest marginal predictive variance
$$
a_{US}(x) = \sigma^2(x)
$$
This makes sure, that we learn the function $f$ everywhere on $\mathbb{X}$ to a similar level of absolute error.
Integrated variance reduction (IVR): Choose the next value $x_{n+1}$ such that the total variance of the model is reduced maximally [Sacks et al. 1989].
$$
a_{IVR} = \int_{\mathbb{X}}[\sigma^2(x') - \sigma^2(x'; x)]\mathrm{d}x'\approx
\frac{1}{\# \text{samples}}\sum_i^{\# \text{samples}}[\sigma^2(x_i) - \sigma^2(x_i; x)].
$$
Here $\sigma^2(x'; x)$ is the predictive variance at $x'$ had $x$ been observed. Thus IVR computes the overall reduction in variance (for all points in $\mathbb{X}$) had $f$ been observed at $x$.
The finite sum approximation on the right hand side of the equation is usually used because the integral over $x'$ is not analytic. In that case $x_i$ are sampled randomly. For a GP model the right hand side simplifies to $a_{IVR} \approx \frac{1}{\# \text{samples}}\sum_i^{\# \text{samples}}\frac{k^2(x_i, x)}{\sigma^2(x)}$.
IVR is arguably the more principled approach, but often US is preferred over IVR simply because it lends itself to gradient based optimization more easily, is cheaper to compute, and is exact.
For both of them (stochastic) gradient-based optimizers are used to retrieve $x_{n+1} \in \operatorname*{arg\:max}_{x \in \mathbb{X}} a(x)$.
End of explanation
from emukit.core.optimization import GradientAcquisitionOptimizer
optimizer = GradientAcquisitionOptimizer(space)
x_new, _ = optimizer.optimize(us_acquisition)
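# Illustration: the acquisition value at the chosen point can be read off
# directly; it should lie at (approximately) the maximum of the plotted curve.
print('x_new =', x_new, 'with US value', us_acquisition.evaluate(x_new))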
plt.figure(figsize=(12, 8))
plt.plot(x_plot, us_plot / np.max(us_plot), "green", label="US")
plt.axvline(x_new, color="red", label="x_next", linestyle="--")
plt.legend(loc=1, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(-0.01, 1)
plt.show()
Explanation: <h4 id='bo_intro_eval'> Evaluating the objective function </h4>
To find the next point to evaluate we optimize the acquisition function using a standard gradient descent optimizer.
End of explanation
y_new = target_function(x_new)
X = np.append(X_init, x_new, axis=0)
Y = np.append(Y_init, y_new, axis=0)
Explanation: Afterwards we evaluate the true objective function and append it to our initial observations.
End of explanation
emukit_model.set_data(X, Y)
mu_plot, var_plot = emukit_model.predict(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(emukit_model.X, emukit_model.Y, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Target Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: After updating the model, you can see that the uncertainty about the true objective function in this region decreases and our model becomes more certain.
End of explanation
from emukit.experimental_design.experimental_design_loop import ExperimentalDesignLoop
ed = ExperimentalDesignLoop(space=space, model=emukit_model)
ed.run_loop(target_function, 10)
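# Quick check (added for illustration): after 10 iterations the loop state
# should hold the previously collected observations plus the 10 new evaluations.
print('Number of collected points:', ed.loop_state.X.shape[0])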
mu_plot, var_plot = ed.model.predict(x_plot)
plt.figure(figsize=(12, 8))
plt.plot(ed.loop_state.X, ed.loop_state.Y, "ro", markersize=10, label="Observations")
plt.plot(x_plot, y_plot, "k", label="Objective Function")
plt.plot(x_plot, mu_plot, "C0", label="Model")
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - np.sqrt(var_plot)[:, 0], color="C0", alpha=0.6)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 2 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 2 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.4)
plt.fill_between(x_plot[:, 0],
mu_plot[:, 0] + 3 * np.sqrt(var_plot)[:, 0],
mu_plot[:, 0] - 3 * np.sqrt(var_plot)[:, 0], color="C0", alpha=0.2)
plt.legend(loc=2, prop={'size': LEGEND_SIZE})
plt.xlabel(r"$x$")
plt.ylabel(r"$f(x)$")
plt.grid(True)
plt.xlim(0, 1)
plt.show()
Explanation: 3. Emukit's experimental design interface
Of course in practice we don't want to implement all of these steps ourselves. Emukit provides a convenient and flexible interface to apply experimental design. Below we can see how to run experimental design on the exact same function for 10 iterations.
End of explanation |
4,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python
Basic Functions
Step1: Built-in Constants
<ul>
<li>False</li>
<li>True</li>
<li>None -> Represents absence of a value</li>
<li>NotImplemented -> Returned by some functions when the arithmetic operation is not implemented, like \__add\__</li>
</ul>
Comparison
Step2: Lists
Step3: Tuples
Immutable sequences to store heterogeneous data
Step4: Sets
Unordered collection of elements and thus they do not support indexing, slicing.<br>
There are two types of sets, namely
<ol>
<li>Sets -> They are mutable and thus not hashable. Can use operations like add() or remove().</li>
<li>Frozenset -> Immutable and hashable and thus it can be used as a dictionary key.</li>
</ol>
Step5: Dictionary
Objects that are not hashable, like lists and sets, cannot become keys of a dict (a tuple can be a key as long as all its elements are hashable).
Step6: Exceptions
All exceptions are instances of a class that derives from BaseException.<br>
General approach to handle exceptions<br>
<pre>
try
Step7: Various exceptions that you may face
<ul>
<li>__AssertionError__ -> When an assert statement fails</li>
<li>__AttributeError__ -> When an assignment or attribute reference fails</li>
<li>__EOFError__ -> Most probably when you miss a bracket.</li>
<li>__FloatingPointError__ -> When a floating point operation fails</li>
<li>__ImportError__</li>
<li>__ModuleNotFoundError__</li>
<li>__IndexError__ -> During incorrect indexing of a sequence</li>
<li>__KeyError__ -> When a mapping dictionary key is not found in set of existing keys</li>
<li>__KeyboardInterrupt__</li>
<li>__MemoryError__ -> When an operation runs out of memory but it can be overcome by deleting some objects</li>
<li>__NameError__ -> When a local or global name is not found</li>
<li>__OSError__ -> When a system related error occurs like failed file I/O operation or disk full.</li>
<li>__OverflowError__ -> When the result of an arithmetic operation is too large to be represented.</li>
<li>__RecursionError__ -> When maximum recursion depth is exceeded.</li>
<li>__ReferenceError__ -> When we try to access an attribute of the referent after it has been garbage collected.</li>
</ul>
Step8: Functional Programming
It is a process of building software by composing pure functions, avoiding shared state, mutable data and side effects. Contrast this with object oriented programming, where application state is usually shared and colocated with methods in objects.
itertools -- Functions for creating iterators for efficient looping | Python Code:
abs(-3) # Gives the absolute value of a variable
all([1,2,3,0]) # Returns True if all iterators are true.
any([1,2,3,4,0]) # If any value is true among the iterators
def prime():
    pass
print(callable(prime))
# callable() returns True if the object can be called, e.g. functions and classes
# instances of classes are callable if they define a __call__() method
a = 3
callable(a)
chr(49) # Returns the character for this code point, here '1'
complex('3')
dir() # Returns a list of names in the current local scope
import numpy as np
dir(np)
divmod(10,3) # Returns the quotient and the remainder
temp = ['a', 'b', 'f']
list(enumerate(temp, start = 6))
# We use enumerate because it lets us use both the index and the value of the sequence
for index, value in enumerate(temp):
    print('The index is %d and the letter is %c'%(index, value))
format(45, 'b') # To convert in different integer representations
globals() # returns a dictionary of current global symbol table
# You can use to see the globals of a function or class
a = 3
id(a) # Kind of address in the memory
input('Give me something as input')
locals() # returns the free variables
a = 2.3412456432
round(a, 3)
Explanation: Python
Basic Functions
End of explanation
if(3 or 0):
    print('Or is True')
if(3 and 0):
    print('And is True')
if 3 is 3:
    print('The world is round')
if 3 is not 4:
    print('The world is round')
a = 3
b = 3.0
if (a == b):
    print('Double = works')
if (a is b): # Is does the strict comparison
    print('Is works')
# Bitwise operations
x | y # The or operation
x ^ y # The xor operation
x & y # And operation
x << y # Multiplying by 2**y
~x # The bits of x reversed
# Some sequence operations
s.count(x) # Return the total number of occurrences of x
s.index(x[, i[, j]]) # First occurrence of x at or after i and before index j
x in s
x not in s
s+t # Concatenation of two sequences
Explanation: Built-in Constants
<ul>
<li>False</li>
<li>True</li>
<li>None -> Represents absence of a value</li>
<li>NotImplemented -> Returned by some functions when the arithmetic operation is not implemented, like \__add\__</li>
</ul>
Comparison
End of explanation
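# Sketch (added to illustrate the NotImplemented constant listed above): an
# __add__ that returns NotImplemented when it cannot handle the other operand.
class Meters:
    def __init__(self, value):
        self.value = value
    def __add__(self, other):
        if isinstance(other, Meters):
            return Meters(self.value + other.value)
        return NotImplemented
print((Meters(2) + Meters(3)).value)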
a = list(('a','b','c','d'))
print(a)
a = ['a','b','c','d']
print(a)
myList = [4,5,8,1,23,4,6,8]
myList.sort(reverse = False)
print(myList)
myList.append('a')
print(myList)
myList.remove(1)
print(myList)
a = myList.pop()
print(a)
myList.insert(3, 'b')
print(myList)
Explanation: Lists
End of explanation
myTuple = tuple((1,2,3,'a'))
print(myTuple)
myTuple = (1,2,3,'a', 1+2j)
print(myTuple[3])
# you cannot now add or remove elements from a tuple after its creation
Explanation: Tuples
Immutable sequences to store heterogeneous data
End of explanation
mySet = set((1,2,3,4,'a'))
print(mySet)
mySet[1]
mySet.remove(1)
mySet = {1,2,3,'a',1+2j}
print(mySet)
len(mySet)
2 in mySet
mySet2 = {'a','b','c','d'}
mySet.isdisjoint(mySet2) # Tells whether the two sets have all different elements (no elements in common)
mySet3 = {1,2}
mySet3.issubset(mySet)
mySet.union(mySet2)
myFrozenSet = frozenset((1,2,3,4))
myFrozenSet
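# Small illustrative addition: because a frozenset is hashable, it can be used
# as a dictionary key, unlike a regular (mutable) set.
lookup = {myFrozenSet: 'immutable set as key'}
print(lookup[frozenset((1, 2, 3, 4))])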
Explanation: Sets
Unordered collection of elements and thus they do not support indexing, slicing.<br>
There are two types of sets, namely
<ol>
<li>Sets -> They are mutable and thus not hashable. Can use operations like add() or remove().</li>
<li>Frozenset -> Immutable and hashable and thus it can be used as a dictionary key.</li>
</ol>
End of explanation
myDict = dict(one=1, two=2, three=3, four=4)
myDict
len(myDict)
myDict['one']
# You can use __missing__ to specify what to do when a key is not found
class Counter(dict):
    def __missing__(self, key):
        print('The key is not present')
        return 0
myDict = Counter()
print(myDict['red'])
myDict = {'one':1, 'two':2, 'three':3}
myDict
# You can check for keys
'one' in myDict
# If you want to iterate over the keys in a dictionary
for key in iter(myDict):
    print(key)
myDict.get('one')
print(myDict.get('zero'))
for values in myDict.items():
    print(values)
# Best way to iterate over a dictionary
for key, value in myDict.items():
    print('For key = ', key, ' the value is ', value)
myDict.update({'zero':0})
myDict
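# Illustrative addition: a tuple of hashable elements can serve as a dict key,
# while a list in the same position would raise TypeError.
grid = {(0, 0): 'origin'}
print(grid[(0, 0)])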
Explanation: Dictionary
Objects that are not hashable, like lists and sets, cannot become keys of a dict (a tuple can be a key as long as all its elements are hashable).
End of explanation
for _ in range(2):
    try:
        x = int(input('Enter a number '))
    except ValueError:
        print('You did not enter a number')
    finally:
        print('This line will always be executed')
# To create new exceptions
class B(Exception):
    pass
try:
    pass
except B:
    pass
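# Sketch tying the pieces together: raising and catching the user-defined
# exception B defined above.
try:
    raise B('something went wrong')
except B as err:
    print('Caught B:', err)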
a = 3
assert (a == 4)
# We would get an exception as a is 3 not 4
Explanation: Exceptions
All exceptions are instances of a class that derives from BaseException.<br>
General approach to handle exceptions<br>
<pre>
try:
...
except SomeExceptionName:
What you want to do if you face that exception
</pre>
Various base classes
<ul>
<li>BaseException -> Base class for all built in exceptions. It is not directly inherited by user-defined exceptions.</li>
<li>Exception -> All built-in, user-defined exceptions are derived from this class</li>
<li>ArithmeticError -> Base Class for those exceptions that are raised during arithmetic operations, like OverflowError, ZeroDivisionError, FloatingPointError.</li>
<li>BufferError</li>
<li>LookupError -> When a key or index used on a mapping or sequence is invalid like IndexError, KeyError</li>
</ul>
End of explanation
import sys
sys.getrecursionlimit()
# Always remember that tail recursion is not efficient in Python, so
# always try to solve the problem iteratively rather than recursively in Python.
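# Short illustration of the advice in the comments above: an iterative version
# (here a factorial) avoids the recursion-depth limit entirely.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
print(factorial_iterative(10))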
Explanation: Various exceptions that you may face
<ul>
<li>__AssertionError__ -> When an assert statement fails</li>
<li>__AttributeError__ -> When an assignment or attribute reference fails</li>
<li>__EOFError__ -> Most probably when you miss a bracket.</li>
<li>__FloatingPointError__ -> When a floating point operation fails</li>
<li>__ImportError__</li>
<li>__ModuleNotFoundError__</li>
<li>__IndexError__ -> During incorrect indexing of a sequence</li>
<li>__KeyError__ -> When a mapping dictionary key is not found in set of existing keys</li>
<li>__KeyboardInterrupt__</li>
<li>__MemoryError__ -> When an operation runs out of memory but it can be overcome by deleting some objects</li>
<li>__NameError__ -> When a local or global name is not found</li>
<li>__OSError__ -> When a system related error occurs like failed file I/O operation or disk full.</li>
<li>__OverflowError__ -> When the result of an arithmetic operation is too large to be represented.</li>
<li>__RecursionError__ -> When maximum recursion depth is exceeded.</li>
<li>__ReferenceError__ -> When we try to access an attribute of the referent after it has been garbage collected.</li>
</ul>
End of explanation
import itertools
import operator
data = [3,4,6,1,9,0,7,5,8]
print(data)
# Accumulate -> To make an iterator that returns accumulated sums
list(itertools.accumulate(data, func=operator.add))
# Chain -> Make an iterator that returns elements from the first iterable until it is exhausted, then continues with the next iterable.
list1 = [1,2,3,'a']
list2 = ['b', 'c', 'd']
list(itertools.chain(list1, list2))
# Combinations -> Returns subsequences of length r from input iterables
list(itertools.combinations(data,3))
# Combinations with replacements
list(itertools.combinations_with_replacement(data, 3))
# cycle -> To cycle through the sequence. Note it is infinite
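# Sketch: cycle repeats a sequence forever, so it is usually combined with
# islice (or a break) to take only finitely many elements.
list(itertools.islice(itertools.cycle(['a', 'b', 'c']), 7))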
Explanation: Functional Programming
It is a process of building software by composing pure functions, avoiding shared state, mutable data and side effects. Contrast this with object oriented programming, where application state is usually shared and colocated with methods in objects.
itertools -- Functions for creating iterators for efficient looping
End of explanation |
4,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: Viscoelastic stress-strain relations
We compare different visco-elastic stress-strain relations in the frequency-domain to achieve a constant Q(ω) behavior.
1D isotropic SH problem
Step2: In the viscoelastic case stress and strain are out of phase. As we will see later, the phase shift $\phi$ can be related to the quality factor $Q$ by
\begin{equation}
\phi = \arctan{\frac{1}{Q}} \notag
\end{equation}
which leads to the viscoelastic stress-strain relation
Step3: The energy loss $\Delta E$ for example via heat, is defined by the area within the hysteresis loop, so we can calculate $\Delta E$ by
Step4: As you can see, the Maxwell-model is not useful to model viscoelastic seismic wave propagation, because we have small damping at high frequencies.
Kelvin–Voigt model
What happens if we connect the Hooke and Newton elements leading to the Kelvin-Voigt model?
<img src="images/Kelvin_Voigt_model.png" width="70%">
In a parallel connection of elements, the effective stress can be calculated by the sum of the basic element stresses
\begin{equation}
\sigma = \sigma_1 + \sigma_2 \notag
\end{equation}
For the Maxwell-model we have the stresses
\begin{align}
\sigma_{Hooke} &= \mu \epsilon\notag \
\sigma_{Newton} &= \eta \dot{\epsilon}\notag \
\end{align}
which yields the ordinary differential equation for the Kelvin-Voigt model
\begin{equation}
\sigma_{KV} = \sigma_{Hooke} + \sigma_{Newton} = \mu \epsilon + \eta \dot{\epsilon} \notag
\end{equation}
After Fourier transform and some rearrangement we get
\begin{equation}
\tilde{\sigma}_{KV}(\omega) = (\mu + \eta i \omega)\; \tilde{\epsilon}(\omega) \notag
\end{equation}
where the complex shear modulus for the Kelvin-Voigt model
\begin{equation}
\tilde{\mu}_{KV} = (\mu + \eta i \omega) \notag
\end{equation}
can be identified. Let's calculate the $1/Q(\omega)$ behavior from real and imaginary part of the complex Kelvin-Voigt shear modulus
Step5: Unfortunately, the Kelvin-Voigt model is also not suitable to model viscoelastic wave propagation, because the damping increases linearly with frequency.
Standard Linear Solid (SLS) - Maxwell
Another idea to construct a viscoelastic model is to combine the Maxwell element with a Hooke element in parallel. This is denoted as Standard Linear Solid (SLS) in Maxwell representation
<img src="images/SLS.png" width="70%">
Because we have to combine elements in parallel, the stresses of components have to be added
Step6: This result looks promising. While $1/Q$ is not frequency independent, we see for the Maxwell SLS model a peak at a given frequency and decreases at lower and higher frequencies. For completeness, I note that we could also build a Kelvin-Voigt SLS model by connecting a Hooke element with a Kelvin-Voigt element in parallel. However, in the following we will further focus on the potential of the Maxwell SLS model.
Generalized Maxwell-model
Based on the promising results of the Maxwell SLS model, we add multiple Maxwell models in parallel, which yields the Generalized Maxwell model or Generalized Maxwell body (GMB), also known as Maxwell-Wiechert model. By the superposition of multiple Maxwell models with different elastic moduli $\mu_l$ and viscosities $\eta_l$, we can achieve a constant Q-value over a given frequency range.
<img src="images/GMB.png" width="70%">
Because we assemble the Maxwell SLS model with additional L Maxwell bodies in parallel, we have to add the stresses in frequency domain | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Stress-strain relation for linear elastic medium
# -----------------------------------------------
t = np.arange(0.0,10.0) # time (s)
sigma0 = 1.0 # maximum stress (Pa)
epsilon0 = 1.0 # maximum strain ()
omega = 1.0 # Circular frequency (rad/s)
# Calculate temporal changes of sigma and epsilon
# -----------------------------------------------
sigma = sigma0 * np.cos(omega*t)
epsilon = epsilon0 * np.cos(omega*t)
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(epsilon, sigma, 'b-',lw=3,label="linear elastic medium")
plt.title('Stress-Strain relation')
plt.xlabel('Strain []')
plt.ylabel('Stress [Pa]')
plt.legend()
plt.grid()
plt.show()
Explanation: Viscoelastic stress-strain relations
We compare different visco-elastic stress-strain relations in the frequency-domain to achieve a constant Q(ω) behavior.
1D isotropic SH problem: linear elastic vs. viscoelastic medium
To understand the difference between the linear elastic and visco-elastic 1D SH problem, we first review the partial differential equations of the 1D elastic SH problem. These consist of the conservation of momentum:
\begin{equation}
\rho \frac{\partial^2 u_y}{\partial t^2} = \frac{\partial \sigma_{yx}}{\partial x} + f_y\notag
\end{equation}
To describe the deformation $\epsilon_{yx}$ within the medium as a response to a given shear stress $\sigma_{yx}$, we use the linear stress-strain relation:
\begin{equation}
\sigma_{yx} = 2\mu \epsilon_{yx}\notag
\end{equation}
with the shear modulus $\mu$. To simplify notation, I define
\begin{equation}
\mu' = 2\mu\notag
\end{equation}
and
\begin{equation}
\mu = \mu'\notag
\end{equation}
Furthermore, I replace:
\begin{align}
\sigma_{yx} &\rightarrow \sigma\notag\
\epsilon_{yx} &\rightarrow \epsilon\notag\
\end{align}
yielding:
\begin{equation}
\sigma = \mu \epsilon\notag
\end{equation}
To describe a linear viscoelastic medium, we only have to modfiy the stress-strain relation, because the conservation of momentum is independent of the material behavior. The viscoelastic stress-strain relation can be described by the Boltzmann superposition and causality principle:
\begin{equation}
\sigma = \int_{-\infty}^{t}\Psi(t-t')\dot{\epsilon}(t')dt'\notag
\end{equation}
with the relaxation function $\Psi$. The relaxation function, together with the integral bounds impose causality. You can see that the viscoelastic medium has a "fading" memory incorporating the strain-rate history. By definition, we can rewrite the stress-strain relation as convolution of the relaxation function with the strain rate $\dot{\epsilon}$
\begin{equation}
\sigma = \Psi*\dot{\epsilon}\notag
\end{equation}
Using the property of the convolution time-derivative:
\begin{equation}
\frac{\partial}{\partial t} (\Psi*\epsilon) = \frac{\partial \Psi}{\partial t} * \epsilon = \Psi * \frac{\partial \epsilon}{\partial t}\notag
\end{equation}
we can rewrite the stress-strain relation for the viscoelastic medium to
\begin{equation}
\sigma = \dot{\Psi}*\epsilon\notag
\end{equation}
Let's compare the stress-strain relation of an elastic and visco-elastic medium, assuming harmonic changes of stress and strain.
In the elastic case both changes are in phase:
End of explanation
# Stress-strain relation for linear viscoelastic medium
# ----------------------------------------------------
t = np.arange(0.0,10.0,0.1) # time (s)
sigma0 = 1.0 # maximum stress (Pa)
epsilon0 = 1.0 # maximum strain ()
omega = 1.0 # Circular frequency (rad/s)
# define phase shift of strain
Q = 10 # quality factor ()
phi = np.arctan(1/Q) # phase shift (rad)
# Calculate temporal changes of sigma and epsilon
# -----------------------------------------------
sigma = sigma0 * np.cos(omega*t)
epsilon = epsilon0 * np.cos(omega*t - phi)
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(epsilon, sigma, 'r-',lw=3,label="linear viscoelastic medium")
plt.title('Stress-Strain relation')
plt.xlabel('Strain []')
plt.ylabel('Stress [Pa]')
plt.legend()
plt.grid()
plt.show()
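# Illustrative sketch (added): the energy loss per cycle equals the area of the
# hysteresis loop; analytically Delta E = pi * sigma0 * epsilon0 * sin(phi).
# Here this is checked numerically over exactly one full period.
t_cycle = np.linspace(0.0, 2.0 * np.pi / omega, 2000)
sigma_c = sigma0 * np.cos(omega * t_cycle)
epsilon_c = epsilon0 * np.cos(omega * t_cycle - phi)
dE_numeric = np.abs(np.trapz(sigma_c, epsilon_c))
print(dE_numeric, np.pi * sigma0 * epsilon0 * np.sin(phi))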
Explanation: In the viscoelastic case stress and strain are out of phase. As we will see later, the phase shift $\phi$ can be related to the quality factor $Q$ by
\begin{equation}
\phi = \arctan{\frac{1}{Q}} \notag
\end{equation}
which leads to the viscoelastic stress-strain relation:
End of explanation
# 1/Q(omega) for the Maxwell-model
# --------------------------------
f = np.arange(5.0,100.0,0.5) # frequency (Hz)
omega = 2 * np.pi * f
mu = 4e8 # shear modulus (Pa) for soil from Dokter et al. (2017)
eta = 1.0 # viscosity (Pa s)
# Define complex shear modulus for Maxwell-model
# ----------------------------------------------
muM = (1/mu) + (1/(1j*omega*eta))
# Calculate 1/Q(omega)
# --------------------
Qinv = np.abs(np.imag(muM)/np.real(muM))
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(f, Qinv, 'r-',lw=3,label="Maxwell-model")
plt.title(r'$\frac{1}{Q}(\omega)$ for the Maxwell-model')
plt.xlabel('Frequency f [Hz]')
plt.ylabel(r'$\frac{1}{Q}$ []')
plt.grid()
plt.show()
Explanation: The energy loss $\Delta E$ for example via heat, is defined by the area within the hysteresis loop, so we can calculate $\Delta E$ by:
\begin{equation}
\Delta E = \oint \sigma(\epsilon') d\epsilon' \notag
\end{equation}
Furthermore, we can distinguish two special cases of linear viscoelasticity:
The case above is irreversible (inelastic)
A reversible (anelastic) medium can recover its original state after removing the load
Viscoelastic models
So far the details of the relaxation function are not defined. Therefore, the next step is to find a relaxation function with a frequency-independent $Q(\omega)$-value. Similar to electrical networks, we can construct viscoelastic models composed of two basic elements.
Basic elements
These basic elements are
<img src="images/Hooke_Newton_model.png" width="60%">
The Hooke element (spring), representing the linear elastic medium
\begin{align}
\sigma_{Hooke} &= \mu \epsilon \notag\
&\text{or} \notag\
\epsilon_{Hooke} &= \frac{\sigma}{\mu}\notag\
\end{align}
The Newton element (dashpot), representing the viscous damping part with the stress-strainrate relation:
\begin{align}
\sigma_{Newton} &= \eta \dot{\epsilon} \notag\
&\text{or} \notag\
\dot{\epsilon}_{Newton} &= \frac{\sigma}{\eta}\notag\
\end{align}
where $\eta$ denotes the viscosity of the medium.
Maxwell-model
To realize different viscoelastic media we can combine the basic elements in different networks. A serial connection of the Hooke with the Newton element yields the Maxwell-model:
<img src="images/Maxwell_model.png" width="70%">
In a serial connection, the effective deformation can be calculated by the sum of the basic element deformations
\begin{equation}
\epsilon = \epsilon_1 + \epsilon_2 \notag
\end{equation}
or by taking the time derivative
\begin{equation}
\dot{\epsilon} = \dot{\epsilon}_1 + \dot{\epsilon}_2 \notag
\end{equation}
For the Maxwell-model we get the following ordinary differential equation for $\epsilon$ and $\sigma$
\begin{equation}
\dot{\epsilon}{M} = \dot{\epsilon}{Newton} + \dot{\epsilon}_{Hooke} = \frac{\sigma}{\eta} + \frac{\dot{\sigma}}{\mu}\notag
\end{equation}
We solve this problem by Fourier transform, where the wavefields in the frequency domain are defined as
\begin{equation}
\tilde{f}(\omega) = \int_{-\infty}^{\infty} f(t) exp(-i\omega t) dt\notag
\end{equation}
Time-derivatives are transformed according to:
\begin{equation}
i \omega \tilde{f}(\omega) = \int_{-\infty}^{\infty} \dot{f}(t) exp(-i\omega t) dt\notag
\end{equation}
Therefore, the Maxwell-model in the frequency domain is:
\begin{equation}
i \omega \tilde{\epsilon}_{M} = \frac{\tilde{\sigma}}{\eta} + \frac{i \omega \tilde{\sigma}}{\mu}\notag
\end{equation}
After some rearrangements, we get the stress-strain relation in the frequency domain
\begin{equation}
\tilde{\epsilon}_{M} = \biggl(\frac{1}{\mu}+\frac{1}{i \omega \eta}\biggr) \tilde{\sigma}\notag
\end{equation}
where we can connect strain with stress via a complex shear modulus $\tilde{\mu}_M$:
\begin{equation}
\tilde{\epsilon}_{M} = \tilde{\mu_M} \tilde{\sigma}\notag
\end{equation}
with
\begin{equation}
\tilde{\mu}_M = \frac{1}{\mu}+\frac{1}{i \omega \eta}\notag
\end{equation}
For the application in seismic modelling, it is important that the visco-elastic model can describe a frequency-independent $Q(\omega)$. This question can be quite easily answered, because we can connect the real and imaginary parts of the complex shear modulus $\tilde{\mu}_M$ for the Maxwell-model with the quality factor $Q$ by:
\begin{equation}
\frac{1}{Q} = \frac{{\frak{I}}{\tilde{\mu}_M}}{{\frak{R}}{\tilde{\mu}_M}} = \tan{\phi},\notag
\end{equation}
which also explains why we could relate the phase angle $\phi$ with $1/Q$, when we investigated the stress-strain relations for harmonic stress/strain above. So let's plot $1/Q$ as a function of $\omega$:
End of explanation
# 1/Q(omega) for the Kelvin-Voigt model
# -------------------------------------
f = np.arange(5.0,100.0,0.5) # frequency (Hz)
omega = 2 * np.pi * f
mu = 4e8 # shear modulus (Pa) for soil from Dokter et al. (2017)
eta = 1.0 # viscosity (Pa s)
# Define complex shear modulus for Kelvin-Voigt model
# ---------------------------------------------------
muKV = mu + 1j * omega * eta
# Calculate 1/Q(omega)
# --------------------
Qinv = np.imag(muKV)/np.real(muKV)
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(f, Qinv, 'r-',lw=3,label="Kelvin-Voigt model")
plt.title(r'$\frac{1}{Q}(\omega)$ for the Kelvin-Voigt model')
plt.xlabel('Frequency f [Hz]')
plt.ylabel(r'$\frac{1}{Q}$ []')
plt.grid()
plt.show()
Explanation: As you can see, the Maxwell-model is not useful to model viscoelastic seismic wave propagation, because we have small damping at high frequencies.
Kelvin–Voigt model
What happens if we connect the Hooke and Newton elements leading to the Kelvin-Voigt model?
<img src="images/Kelvin_Voigt_model.png" width="70%">
In a parallel connection of elements, the effective stress can be calculated by the sum of the basic element stresses
\begin{equation}
\sigma = \sigma_1 + \sigma_2 \notag
\end{equation}
For the Maxwell-model we have the stresses
\begin{align}
\sigma_{Hooke} &= \mu \epsilon\notag \
\sigma_{Newton} &= \eta \dot{\epsilon}\notag \
\end{align}
which yields the ordinary differential equation for the Kelvin-Voigt model
\begin{equation}
\sigma_{KV} = \sigma_{Hooke} + \sigma_{Newton} = \mu \epsilon + \eta \dot{\epsilon} \notag
\end{equation}
After Fourier transform and some rearrangement we get
\begin{equation}
\tilde{\sigma}_{KV}(\omega) = (\mu + \eta i \omega)\; \tilde{\epsilon}(\omega) \notag
\end{equation}
where the complex shear modulus for the Kelvin-Voigt model
\begin{equation}
\tilde{\mu}_{KV} = (\mu + \eta i \omega) \notag
\end{equation}
can be identified. Let's calculate the $1/Q(\omega)$ behavior from real and imaginary part of the complex Kelvin-Voigt shear modulus:
End of explanation
# 1/Q(omega) for the Maxwell SLS
# ------------------------------
f = np.arange(5.0,100.0,0.5) # frequency (Hz)
omega = 2 * np.pi * f
mu0 = 4e8 # shear modulus (Pa) for soil from Dokter et al. (2017)
mu1 = 1 # shear modulus (Pa)
f1 = 50.0
omega1 = 2 * np.pi * f1
# Define complex shear modulus for Maxwell SLS model
# ---------------------------------------------------
muSLSM = mu0 + (1j * mu1 * omega)/((1j * omega) + omega1)
# Calculate 1/Q(omega)
# --------------------
Qinv = np.imag(muSLSM)/np.real(muSLSM)
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(f, Qinv, 'r-',lw=3,label="Maxwell SLS model")
plt.title(r'$\frac{1}{Q}(\omega)$ for Maxwell SLS model')
plt.xlabel('Frequency f [Hz]')
plt.ylabel(r'$\frac{1}{Q}$ []')
plt.grid()
plt.show()
Explanation: Unfortunately, the Kelvin-Voigt model is also not suitable to model viscoelastic wave propagation, because the damping increases linearly with frequency.
Standard Linear Solid (SLS) - Maxwell
Another idea to construct a viscoelastic model is to combine the Maxwell element with a Hooke element in parallel. This is denoted as Standard Linear Solid (SLS) in Maxwell representation
<img src="images/SLS.png" width="70%">
Because we have to combine elements in parallel, the stresses of components have to be added:
\begin{equation}
\sigma_{SLSM} = \sigma_{Hooke} + \sigma_{Maxwell} \notag
\end{equation}
For simplicity we will do this directly in the frequency domain:
\begin{equation}
\tilde{\sigma}{SLSM} = \tilde{\sigma}{Hooke} + \tilde{\sigma}_{Maxwell} \notag
\end{equation}
Inserting the stresses
\begin{align}
\tilde{\sigma}{Hooke} &= \mu_0 \tilde{\epsilon}\notag \
\tilde{\sigma}{Maxwell} &= \frac{i \mu_1 \omega \eta}{i \omega \eta + \mu_1} \tilde{\epsilon}\notag \
\end{align}
yields the frequency-domain stress-strain relation for the SLS in Maxwell representation:
\begin{equation}
\tilde{\sigma}_{SLSM} = \biggl(\mu_0 + \frac{i \mu_1 \omega \eta}{i \omega \eta + \mu_1}\biggr) \tilde{\epsilon} \notag
\end{equation}
The complex SLS shear modulus $\tilde{\mu}_{SLSM}$ can be rewritten as
\begin{equation}
\tilde{\mu}_{SLSM} = \mu_0 + \frac{i \mu_1 \omega}{i \omega + \frac{\mu_1}{\eta}} \notag
\end{equation}
Notice that the quantity $\frac{\mu_1}{\eta}$ has the dimension $\left[\frac{Pa}{Pa\; s}\right] = \left[\frac{1}{s}\right]$ of a frequency. A more detailed analysis reveals that this is the relaxation frequency
\begin{equation}
\omega_1 :=\frac{\mu_1}{\eta} \notag
\end{equation}
Therefore, we get:
\begin{equation}
\tilde{\mu}_{SLSM} = \mu_0 + \frac{i \mu_1 \omega}{i \omega + \omega_1} \notag
\end{equation}
How does the $1/Q(\omega)$ frequency spectrum look like?
End of explanation
# Q(omega) for the GMB with 4 Maxwell bodies
# --------------------------------------------
f = np.arange(5.0,100.0,0.5) # frequency (Hz)
nf = len(f)
omega = 2 * np.pi * f
# relaxation frequencies
L = 4 # number of Maxwell bodies
fl = np.linspace(5.0,100.0,L)
omegal = 2. * np.pi * fl
# Define relaxed shear modulus and defect for GMB
mu0 = 4e8 # shear modulus (Pa) for soil from Dokter et al. (2017)
dmu = 1.4e8 # shear moduli of Maxwell bodies (Pa)
# Simply set a_l = 1.0 / L
a_l = 1.0 / L
# Define complex shear modulus for GMB model
# ------------------------------------------
muM = np.zeros(nf,dtype='complex128')
for l in range(0,L):
    muM += (1j * a_l * omega)/((1j * omega) + omegal[l])
muGMB = mu0 + dmu * muM
# Calculate Q(omega)
# --------------------
Q = np.real(muGMB)/np.imag(muGMB)
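# Quick numerical check (added for illustration): how flat is Q over the band?
print('Q between %.1f Hz and %.1f Hz ranges from %.2f to %.2f' % (f[0], f[-1], Q.min(), Q.max()))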
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot stress-strain relation
plt.plot(f, Q, 'r-',lw=3,label="General Maxwell model")
plt.title(r'$Q(\omega)$ for General Maxwell model')
plt.xlabel('Frequency f [Hz]')
plt.ylabel(r'$Q$ []')
plt.ylim(0,20)
plt.grid()
plt.show()
Explanation: This result looks promising. While $1/Q$ is not frequency independent, we see for the Maxwell SLS model a peak at a given frequency and decreases at lower and higher frequencies. For completeness, I note that we could also build a Kelvin-Voigt SLS model by connecting a Hooke element with a Kelvin-Voigt element in parallel. However, in the following we will further focus on the potential of the Maxwell SLS model.
Generalized Maxwell-model
Based on the promising results of the Maxwell SLS model, we add multiple Maxwell models in parallel, which yields the Generalized Maxwell model or Generalized Maxwell body (GMB), also known as Maxwell-Wiechert model. By the superposition of multiple Maxwell models with different elastic moduli $\mu_l$ and viscosities $\eta_l$, we can achieve a constant Q-value over a given frequency range.
<img src="images/GMB.png" width="70%">
Because we assemble the Maxwell SLS model with additional L Maxwell bodies in parallel, we have to add the stresses in frequency domain:
\begin{equation}
\tilde{\sigma}{GMB} = \tilde{\sigma}{SLSM} + \sum_{l=2}^{L} \tilde{\sigma}_{Maxwell, l} \notag
\end{equation}
Inserting the stresses
\begin{align}
\tilde{\sigma}{SLSM} &= \biggl(\mu_0 + \frac{i \mu_1 \omega \eta_1}{i \omega \eta_1 + \mu_1}\biggr) \tilde{\epsilon} \notag \
\tilde{\sigma}{Maxwell, l} &= \frac{i \mu_l \omega \eta_l}{i \omega \eta_l + \mu_l} \tilde{\epsilon}\notag \
\end{align}
yields the frequency-domain stress-strain relation for the GMB
\begin{equation}
\tilde{\sigma}{GMB} = \biggl(\mu_0 + \frac{i \mu_1 \omega \eta_1}{i \omega \eta_1 + \mu_1} + \sum{l=2}^{L} \frac{i \mu_l \omega \eta_l}{i \omega \eta_l + \mu_l}\biggr) \tilde{\epsilon} \notag
\end{equation}
We can move the second term into the sum over the L Maxwell-models:
\begin{equation}
\tilde{\sigma}{GMB} = \biggl(\mu_0 + \sum{l=1}^{L} \frac{i \mu_l \omega \eta_l}{i \omega \eta_l + \mu_l}\biggr) \tilde{\epsilon} \notag
\end{equation}
Introducing the relaxation frequencies
\begin{equation}
\omega_l :=\frac{\mu_1}{\eta_l} \notag
\end{equation}
leads to
\begin{equation}
\tilde{\sigma}{GMB} = \biggl(\mu_0 + \sum{l=1}^{L} \frac{i \mu_l \omega}{i \omega + \omega_l}\biggr) \tilde{\epsilon} \notag
\end{equation}
I want to simplify the complex modulus
\begin{equation}
\tilde{\mu}{GMB} = \mu_0 + \sum{l=1}^{L} \frac{i \mu_l \omega}{i \omega + \omega_l}. \notag
\end{equation}
First we estimate the relaxed shear modulus:
\begin{equation}
\tilde{\mu}{GMB,R} = \lim{\omega\rightarrow 0} \tilde{\mu}_{GMB} = \mu_0\notag
\end{equation}
and unrelaxed shear modulus:
\begin{equation}
\tilde{\mu}{GMB,U} = \lim{\omega\rightarrow \infty} \tilde{\mu}{GMB} = \mu_0 + \sum{l=1}^{L} \mu_l\notag
\end{equation}
With the modulus defect or relaxation of modulus
\begin{equation}
\delta \mu = \tilde{\mu}{GMB,U} - \tilde{\mu}{GMB,R} = \sum_{l=1}^{L} \mu_l\notag
\end{equation}
we get
\begin{equation}
\delta \mu_l = \mu_l\notag
\end{equation}
Without any simplification we can consider
\begin{equation}
\delta \mu_l = a_l \delta \mu\notag
\end{equation}
with the normalization
\begin{equation}
\sum_{l=1}^L a_l = 1\notag
\end{equation}
This yields
\begin{equation}
\tilde{\mu}{GMB} = \mu_0 + \delta \mu \sum{l=1}^{L} \frac{i a_l \omega}{i \omega + \omega_l}. \notag
\end{equation}
Let's try to approximate a constant $Q(\omega) = 10$ model for frequencies between 5 and 100 Hz by using 4 Maxwell bodies and setting all shear moduli of the Maxwell bodies to a constant value.
End of explanation |
4,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Python and MySQL tutorial </center>
<center> Author: Cheng Nie </center>
Step1: Calculator
Step2: Strings
Step3: show ' and " in a string
Step4: span multiple lines
Step5: slice and index
Step6: Index in the Python way
Step7: List
Step8: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Mutable
Step9: Nest lists
Step10: tuple
similar to list, but immutable (element cannot be changed)
Step11: dict
Step12: Quiz
Step13: while
Fibonacci series
Step14: for
Step16: Define function
Step17: Data I/O
Create some data in Python and populate the database with the created data. We want to create a table with 3 columns
Step18: MySQL
Install MySQL 5.7 Workbench first following this link. You might also need to install the prerequisites listed here before you can install the Workbench. The Workbench is an interface to interact with MySQL database. The actual MySQL database server requires a second step
Step19: Quiz
Step20: Target
Step21: Negative index
Step22: More about list
Step23: Versatile features
Step24: Target
Step25: Use if in list comprehension
Target
Step26: Use Python to access MySQL database
Since the official MySQL 5.7.11 provides support for Python upto Version 3.4, we need to install a package to provide to support the Python 3.5. Execute the following line in Windows command line to install it.
Step27: To get better understanding of the table we just created. We will use MySQL command line again.
Step28: Regular expression in Python | Python Code:
width = 20
height = 5*9
width * height
Explanation: <center> Python and MySQL tutorial </center>
<center> Author: Cheng Nie </center>
<center> Check chengnie.com for the most recent version </center>
<center> Current Version: Feb 12, 2016</center>
Python Setup
Since most students in this class use Windows 7, I will use Windows 7 for illustration of the setup. Setting up the environment in Mac OS and Linux should be similar. Please note that the code should produce the same results on whichever operating system (even on your smart phone) you use, because Python is platform independent.
Download the Python 3.5 version of Anaconda that matches your operating system from this link. You can accept the default options during installation. To see if your Windows is 32 bit or 64 bit, check here
You can save and run this document using the Jupyter notebook (previously known as IPython notebook). Another tool that I recommend would be PyCharm, which has a free community edition.
This is a tutorial based on the official Python Tutorial for Python 3.5.1. If you need a little more motivation to learn this programming language, consider reading this article.
Numbers
End of explanation
tax = 8.25 / 100
price = 100.50
price * tax
price + _
round(_, 2)
Explanation: Calculator
End of explanation
print('spam email')
Explanation: Strings
End of explanation
# This would cause an error
print('doesn't')
# One way of doing it correctly
print('doesn\'t')
# Another way of doing it correctly
print("doesn't")
Explanation: show ' and " in a string
End of explanation
print('''
Usage: thingy [OPTIONS]
-h Display this usage message
-H hostname Hostname to connect to
''')
print('''Cheng highly recommends Python programming language''')
Explanation: span multiple lines
End of explanation
word = 'HELP' + 'A'
word
Explanation: slice and index
End of explanation
word[0]
word[4]
# endding index not included
word[0:2]
word[2:4]
# length of a string
len(word)
Explanation: Index in the Python way
End of explanation
a = ['spam', 'eggs', 100, 1234]
a
a[0]
a[3]
a[2:4]
sum(a[2:4])
Explanation: List
End of explanation
a
a[2] = a[2] + 23
a
Explanation: Built-in functions like sum and len are explained in the document too. Here is a link to it.
Mutable
End of explanation
q = [2, 3]
p = [1, q, 4]
p
len(p)
p[1]
p[1][0]
Explanation: Nest lists
End of explanation
x=(1,2,3,4)
x[0]
x[0] = 7 # this will raise an error since tuples are immutable
Explanation: tuple
similar to list, but immutable (element cannot be changed)
End of explanation
tel = {'jack': 4098, 'sam': 4139}
tel['dan'] = 4127
tel
tel['jack']
del tel['sam']
tel
tel['mike'] = 4127
tel
# Is dan in the dict?
'dan' in tel
for key in tel:
print('key:', key, '; value:', tel[key])
Explanation: dict
End of explanation
x = int(input("Please enter an integer for x: "))
if x < 0:
x = 0
print('Negative; changed to zero')
elif x == 0:
print('Zero')
elif x == 1:
print('Single')
else:
print('More')
Explanation: Quiz: how to print the tel dict sorted by the key?
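One possible answer (a sketch, reusing the loop style above):
for key in sorted(tel):
    print('key:', key, '; value:', tel[key])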
Control of flow
if
Ask a user to input a number, if it's negative, x=0, else if it's 1
End of explanation
a, b = 0, 1 # multiple assignment
while a < 10:
print(a)
a, b = b, a+b
Explanation: while
Fibonacci series: the sum of two elements defines the next, with the first two elements being 0 and 1.
End of explanation
# Measure some strings:
words = ['cat', 'window', 'defenestrate']
for i in words:
print(i, len(i))
Explanation: for
End of explanation
def fib(n): # write Fibonacci series up to n
    """Print a Fibonacci series up to n."""
a, b = 0, 1
while a < n:
print(a)
a, b = b, a+b
fib(200)
fib(2000000000000000) # do not need to worry about the type of a,b
Explanation: Define function
End of explanation
# output for viewing first
import string
import random
# fix the pseudo-random sequences for easy replication
# It will generate the same random sequences
# of nubmers/letters with the same seed.
random.seed(123)
for i in range(50):
# Data values separated by comma(csv file)
print(i+1,random.choice(string.ascii_uppercase),
random.choice(range(6)), sep=',')
# write the data to a file
random.seed(123)
out_file=open('data.csv','w')
columns=['id','name','age']
out_file.write(','.join(columns)+'\n')
for i in range(50):
row=[str(i+1),random.choice(string.ascii_uppercase),
str(random.choice(range(6)))]
out_file.write(','.join(row)+'\n')
else:
out_file.close()
# read data into Python
for line in open('data.csv', 'r'):
print(line)
Explanation: Data I/O
Create some data in Python and populate the database with the created data. We want to create a table with 3 columns: id, name, and age to store information about 50 kids in a day care.
The various modules that extend the basic Python funtions are indexed here.
End of explanation
# crawl_UTD_reviews
# Author: Cheng Nie
# Email: [email protected]
# Date: Feb 8, 2016
# Updated: Feb 12, 2016
from urllib.request import urlopen
num_pages = 2
reviews_per_page = 20
# the file we will save the rating and date
out_file = open('UTD_reviews.csv', 'w')
# the url that we need to locate the page for UTD reviews
url = 'http://www.yelp.com/biz/university-of-texas-at-dallas-\
richardson?start={start_number}'
# the three string patterns we just explained
review_start_pattern = '<div class="review-wrapper">'
rating_pattern = '<i class="star-img stars_'
date_pattern = '"datePublished" content="'
reviews_count = 0
for page in range(num_pages):
print('processing page', page)
# open the url and save the source code string to page_content
html = urlopen(url.format(start_number = page * reviews_per_page))
page_content = html.read().decode('utf-8')
# locate the beginning of an individual review
review_start = page_content.find(review_start_pattern)
while review_start != -1:
# it means there at least one more review to be crawled
reviews_count += 1
# get the rating
cut_front = page_content.find(rating_pattern, review_start) \
+ len(rating_pattern)
cut_end = page_content.find('" title="', cut_front)
rating = page_content[cut_front:cut_end]
# get the date
cut_front = page_content.find(date_pattern, cut_end) \
+ len(date_pattern)
cut_end = page_content.find('">', cut_front)
date = page_content[cut_front:cut_end]
# save the data into out_file
out_file.write(','.join([rating, date]) + '\n')
review_start = page_content.find(review_start_pattern, cut_end)
print('crawled', reviews_count, 'reviews so far')
out_file.close()
Explanation: MySQL
Install MySQL 5.7 Workbench first following this link. You might also need to install the prerequisites listed here before you can install the Workbench. The Workbench is an interface for interacting with the MySQL database. The actual MySQL database server requires a second step: run the MySQL Installer, then add and install the MySQL servers using the Installer. You can accept the default options during installation. Later, you will connect to MySQL using the password you set during the installation and configuration. I set the password to be pythonClass.
The documentation for MySQL is here.
To get comfortable with it, you might find this tutorial of Structured Query Language(SQL) to be helpful.
Crawl the reviews for UT Dallas at Yelp.com
The University of Texas at Dallas is reviewed on Yelp.com. This page shows that it has attracted 38 reviews so far from various reviewers. You learn from the webpage that Yelp displays at most 20 recommended reviews per page, so we need to go to page 2 to see reviews 21 through 38. You notice that the URL in the address box of your browser changes when you click on the Next page. Previously, on page 1, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson
On page 2, the URL is:
http://www.yelp.com/biz/university-of-texas-at-dallas-richardson?start=20
You learn that Yelp probably uses this ?start=20 to skip (or offset, in MySQL language) the first 20 records and show you the next 18 reviews. You can use this pattern of going to the next page to enumerate all pages of a business on Yelp.com.
In this example, we are going to get the rating (number of stars) and the date for each of these 38 reviews.
The general procedure to crawl any web page is the following:
Look for the string patterns preceding and succeeding the information you are looking for in the source code of the page (the html file).
Write a program to enumerate (for or while loop) all the pages.
For this example, I did a screenshot with my annotation to illustrate the critical patterns in the Yelp page for UTD reviews.
review_start_pattern is a variable to store the string '<div class="review-wrapper">' to locate the beginning of an individual review.
rating_pattern is a variable to store the string '<i class="star-img stars_' to locate the rating.
date_pattern is a variable to store the string '"datePublished" content="' to locate the date of the rating.
It takes some trial and error to figure out which string patterns work well for locating the information you need in an html page. For example, I found that '<div class="review-wrapper">' appeared exactly 20 times in the webpage, which is a good indication that it corresponds to the 20 individual reviews on the page (the review-wrapper tag seems to imply that too).
End of explanation
word
# first index default to 0 and second index default to the size
word[:2]
# It's equivalent to
word[0:2]
# Everything except the first two characters
word[2:]
# It's equivalent to
word[2:len(word)]
# start: end: step
word[0::2]
Explanation: Quiz: import the crawled file into a table in your database.
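One possible approach (a sketch, assuming the cursor/cnx connection that is set up later in this tutorial; the table and column names are illustrative):
cursor.execute('create table utd_reviews (rating int, review_date date);')
insert_review = 'insert into utd_reviews (rating, review_date) values ({rating}, "{date}");'
for line in open('UTD_reviews.csv', 'r'):
    rating, date = line.strip().split(',')
    cursor.execute(insert_review.format(rating=rating, date=date))
cnx.commit()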
More about index
End of explanation
word[0:len(word):2]
Explanation: Target: "HLA", select every other character
End of explanation
word[-1] # The last character
word[-2] # The last-but-one character
word[-2:] # The last two characters
word[:-2] # Everything except the last two characters
Explanation: Negative index
End of explanation
a
a[-2]
a[1:-1]
a[:2] + ['bacon', 2*2]
3*a[:3] + ['Boo!']
Explanation: More about list
End of explanation
# Replace some items:
a[0:2] = [1, 12]
a
# Remove some:
a[0:2] = [] # or del a[0:2]
a
# Insert some:
a[1:1] = ['insert', 'some']
a
# inserting at one position is not the same as changing one element
# a=[1, 12, 100, 1234]
a = [123, 1234]
sum(a)
a[1] = ['insert', 'some']
a
Explanation: Versatile features
End of explanation
# loop way
cubes = []
for x in range(11):
cubes.append(x**3)
cubes
# map way
def cube(x):
return x*x*x
list(map(cube, range(11)))
# list comprehension way
[x**3 for x in range(11)]
Explanation: Target: Get the third power of integers between 0 and 10.
End of explanation
result = []
for i in range(11):
if i%2 == 0:
result.append(i)
else:
print(result)
[i for i in range(11) if i%2==0]
l=[1,3,5,6,8,10]
[i for i in l if i%2==0]
Explanation: Use if in list comprehension
Target: find the even number below 10
End of explanation
#
# ----------------------- In Python ------------------
# access table from Python
# connect to MySQL in Python
import mysql.connector
cnx = mysql.connector.connect(user='root',
password='pythonClass',
database='test')
# All DDL (Data Definition Language) statements are
# executed using a handle structure known as a cursor
cursor = cnx.cursor()
#cursor.execute("")
# write the same data to the example table
query0 = '''insert into example (id, name, age) \
values ({id_num},"{c_name}",{c_age});'''
random.seed(123)
for i in range(50):
query1 = query0.format(id_num = i+1,
c_name = random.choice(string.ascii_uppercase),
c_age = random.choice(range(6)))
print(query1)
cursor.execute(query1)
cnx.commit()
Explanation: Use Python to access MySQL database
Since the official MySQL 5.7.11 release provides support for Python only up to version 3.4, we need to install a package to support Python 3.5. Execute the following line in the Windows command line to install it.
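A typical command would look something like pip install mysql-connector-python; the exact package name and version depend on the connector release you use, so treat this as an illustrative example rather than the exact command.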
End of explanation
#
# ----------------------- In Python ------------------
#
cursor.execute('select * from e_copy;')
for i in cursor:
print(i)
#
# ----------------------- In Python ------------------
#
# # example for adding new info for existing record
# cursor.execute('alter table e_copy add mother_name varchar(1) default null')
query='update e_copy set mother_name="{m_name}" where id={id_num};'
# random.seed(333)
for i in range(50):
query1=query.format(m_name = random.choice(string.ascii_uppercase),id_num = i+1)
print(query1)
cursor.execute(query1)
cnx.commit()
#
# ----------------------- In Python ------------------
#
# example for insert new records
query2='insert into e_copy (id, name,age,mother_name) \
values ({id_num},"{c_name}",{c_age},"{m_name}")'
for i in range(10):
query3=query2.format(id_num = i+60,
c_name = random.choice(string.ascii_uppercase),
c_age = random.randint(0,6),
m_name = random.choice(string.ascii_uppercase))
print(query3)
cursor.execute(query3)
cnx.commit()
# check if you've updated the data successfully in MySQL
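# One way to check from Python (a sketch): re-read the table and confirm the new rows
# and the mother_name column before switching to the MySQL command line.
cursor.execute('select * from e_copy where id >= 60;')
for row in cursor:
    print(row)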
Explanation: To get a better understanding of the table we just created, we will use the MySQL command line again.
End of explanation
import re
# digits
# find all the numbers
infile=open('digits.txt','r')
content=infile.read()
print(content)
# Find all the numbers in the file
numbers=re.findall(r'\d+',content)
for n in numbers:
print(n)
# find equations
equations=re.findall(r'(\d+)=\d+',content)
for e in equations:
print(e)
# substitute equations to correct them
print(re.sub(r'(\d+)=\d+',r'\1=\1',content))
# Save to file
print(re.sub(r'(\d+)=\d+',r'\1=\1',content), file = open('digits_corrected.txt', 'w'))
Explanation: Regular expression in Python
End of explanation |
4,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To run this sample on Google Cloud Platform with various accelerator setups
Step1: Overview
This notebook is a fork of the Getting started notebook for the Jigsaw Multilingual Toxic Comment classification competition by Ian Kivlichan.
It only takes one toxic comment to sour an online discussion. The Conversation AI team, a research initiative founded by Jigsaw and Google, builds technology to protect voices in conversation. A main area of focus is machine learning models that can identify toxicity in online conversations, where toxicity is defined as anything rude, disrespectful or otherwise likely to make someone leave a discussion. Our API, Perspective, serves these models and others in a growing set of languages (see our documentation for the full list). If these toxic contributions can be identified, we could have a safer, more collaborative internet.
In this competition, we'll explore how models for recognizing toxicity in online conversations might generalize across different languages. Specifically, in this notebook, we'll demonstrate this with a multilingual BERT (m-BERT) model. Multilingual BERT is pretrained on monolingual data in a variety of languages, and through this learns multilingual representations of text. These multilingual representations enable zero-shot cross-lingual transfer, that is, by fine-tuning on a task in one language, m-BERT can learn to perform that same task in another language (for some examples, see e.g. How multilingual is Multilingual BERT?).
We'll study this zero-shot transfer in the context of toxicity in online conversations, similar to past competitions we've hosted ([1], [2]). But rather than analyzing toxicity in English as in those competitions, here we'll ask you to do it in several different languages. For training, we're including the (English) datasets from our earlier competitions, as well as a small amount of new toxicity data in other languages.
Step2: TPU or GPU detection
Step3: Configuration
Set maximum sequence length and path variables.
Step5: Model
Define the model. We convert m-BERT's output to a final probabilty estimate. We're using an m-BERT model from TensorFlow Hub.
Step6: Dataset
Load the preprocessed dataset. See the demo notebook for sample code for performing this preprocessing.
Step8: Set up our data pipelines for training and evaluation.
Step9: Instantiate the model
Compile our model. We will fine-tune the multilingual model on one of our English datasets, and then evaluate its performance on the new multilingual toxicity data. As our metric, we'll use the AUC. | Python Code:
# When not running on Kaggle, comment out this import
from kaggle_datasets import KaggleDatasets
# When not running on Kaggle, set a fixed GCS path here
GCS_PATH = KaggleDatasets().get_gcs_path('jigsaw-multilingual-toxic-comment-classification')
print(GCS_PATH)
Explanation: To run this sample on Google Cloud Platform with various accelerator setups:
1. Download this notebook
1. Create a Cloud AI Platform Notebook VM with your choice of accelerator.
* V100 GPU (AI Platform Notebook UI > New Instance > Tensorflow 2.2 > Customize > V100 x1)
* 4x V100 GPU (AI Platform Notebook UI > New Instance > Tensorflow 2.2 > Customize > V100 x 4)
* 8x V100 GPU (AI Platform Notebook UI > New Instance > Tensorflow 2.2 > Customize > V100 x 8)
* TPU v3-8 (use create-tpu-deep-learning-vm.sh script from this page with --tpu-type v3-8)
* TPU v3-32 pod (use create-tpu-deep-learning-vm.sh script from this page with --tpu-type v3-32)
1. Get the data from Kaggle. The easiest is to run the cell below on Kaggle and copy the name of the GCS bucket where the dataset is cached. This bucket is a cache and will expire after a couple of days but it should be enough to run the notebook. Optionnally, for best performance, copy the data to your own bucket located in the same region as your TPU.
1. adjust the import and the GCS_PATH in the cell below.
End of explanation
import os, time, logging
import tensorflow as tf
import tensorflow_hub as hub
from matplotlib import pyplot as plt
print(tf.version.VERSION)
tf.get_logger().setLevel(logging.ERROR)
Explanation: Overview
This notebook is a fork of the Getting started notebook for the Jigsaw Multilingual Toxic Comment classification competition by Ian Kivlichan.
It only takes one toxic comment to sour an online discussion. The Conversation AI team, a research initiative founded by Jigsaw and Google, builds technology to protect voices in conversation. A main area of focus is machine learning models that can identify toxicity in online conversations, where toxicity is defined as anything rude, disrespectful or otherwise likely to make someone leave a discussion. Our API, Perspective, serves these models and others in a growing set of languages (see our documentation for the full list). If these toxic contributions can be identified, we could have a safer, more collaborative internet.
In this competition, we'll explore how models for recognizing toxicity in online conversations might generalize across different languages. Specifically, in this notebook, we'll demonstrate this with a multilingual BERT (m-BERT) model. Multilingual BERT is pretrained on monolingual data in a variety of languages, and through this learns multilingual representations of text. These multilingual representations enable zero-shot cross-lingual transfer, that is, by fine-tuning on a task in one language, m-BERT can learn to perform that same task in another language (for some examples, see e.g. How multilingual is Multilingual BERT?).
We'll study this zero-shot transfer in the context of toxicity in online conversations, similar to past competitions we've hosted ([1], [2]). But rather than analyzing toxicity in English as in those competitions, here we'll ask you to do it in several different languages. For training, we're including the (English) datasets from our earlier competitions, as well as a small amount of new toxicity data in other languages.
End of explanation
try: # detect TPU
tpu = None
tpu = tf.distribute.cluster_resolver.TPUClusterResolver() # TPU detection
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
except ValueError: # detect GPU(s) and enable mixed precision
strategy = tf.distribute.MirroredStrategy() # works on GPU and multi-GPU
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
tf.config.optimizer.set_jit(True) # XLA compilation
tf.keras.mixed_precision.experimental.set_policy(policy)
print('Mixed precision enabled')
print("REPLICAS: ", strategy.num_replicas_in_sync)
# mixed precision
# On TPU, bfloat16/float32 mixed precision is automatically used in TPU computations.
# Enabling it in Keras also stores relevant variables in bfloat16 format (memory optimization).
# This additional optimization was not used for TPUs in this sample.
# On GPU, specifically V100, mixed precision must be enabled for hardware TensorCores to be used.
# XLA compilation must be enabled for this to work. (On TPU, XLA compilation is the default and cannot be turned off)
Explanation: TPU or GPU detection
End of explanation
SEQUENCE_LENGTH = 128
# Copy of the TF Hub model at https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2
BERT_GCS_PATH = 'gs://bert_multilingual_public/bert_multi_cased_L-12_H-768_A-12_2/'
EPOCHS = 6
if tpu:
BATCH_SIZE = 128 * strategy.num_replicas_in_sync
else:
BATCH_SIZE = 64 * strategy.num_replicas_in_sync
TRAIN_DATA = GCS_PATH + "/jigsaw-toxic-comment-train-processed-seqlen{}.csv".format(SEQUENCE_LENGTH)
TRAIN_DATA_LENGTH = 223549 # rows
VALID_DATA = GCS_PATH + "/validation-processed-seqlen{}.csv".format(SEQUENCE_LENGTH)
STEPS_PER_EPOCH = TRAIN_DATA_LENGTH // BATCH_SIZE
LR_MAX = 0.001 * strategy.num_replicas_in_sync
LR_EXP_DECAY = .9
LR_MIN = 0.0001
@tf.function
def lr_fn(epoch):
lr = (LR_MAX - LR_MIN) * LR_EXP_DECAY**(epoch) + LR_MIN
return lr
print("Learning rate schedule:")
rng = [i for i in range(EPOCHS)]
y = [lr_fn(x) for x in rng]
plt.plot(rng, [lr_fn(x) for x in rng])
plt.show()
Explanation: Configuration
Set maximum sequence length and path variables.
End of explanation
def multilingual_bert_model(max_seq_length=SEQUENCE_LENGTH):
    """Build and return a multilingual BERT model and tokenizer."""
input_word_ids = tf.keras.layers.Input(
shape=(max_seq_length,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.layers.Input(
shape=(max_seq_length,), dtype=tf.int32, name="input_mask")
segment_ids = tf.keras.layers.Input(
shape=(max_seq_length,), dtype=tf.int32, name="all_segment_id")
bert_layer = tf.saved_model.load(BERT_GCS_PATH) # copy of TF Hub model 'https://tfhub.dev/tensorflow/bert_multi_cased_L-12_H-768_A-12/2'
bert_layer = hub.KerasLayer(bert_layer, trainable=True)
pooled_output, _ = bert_layer([input_word_ids, input_mask, segment_ids])
output = tf.keras.layers.Dense(32, activation='relu')(pooled_output)
output = tf.keras.layers.Dense(1, activation='sigmoid', name='labels', dtype=tf.float32)(output)
return tf.keras.Model(inputs={'input_word_ids': input_word_ids,
'input_mask': input_mask,
'all_segment_id': segment_ids},
outputs=output)
Explanation: Model
Define the model. We convert m-BERT's output to a final probability estimate. We're using an m-BERT model from TensorFlow Hub.
End of explanation
def parse_string_list_into_ints(strlist):
s = tf.strings.strip(strlist)
s = tf.strings.substr(
strlist, 1, tf.strings.length(s) - 2) # Remove parentheses around list
s = tf.strings.split(s, ',', maxsplit=SEQUENCE_LENGTH)
s = tf.strings.to_number(s, tf.int32)
s = tf.reshape(s, [SEQUENCE_LENGTH]) # Force shape here needed for XLA compilation (TPU)
return s
def format_sentences(data, label='toxic', remove_language=False):
labels = {'labels': data.pop(label)}
if remove_language:
languages = {'language': data.pop('lang')}
# The remaining three items in the dict parsed from the CSV are lists of integers
for k,v in data.items(): # "input_word_ids", "input_mask", "all_segment_id"
data[k] = parse_string_list_into_ints(v)
return data, labels
def make_sentence_dataset_from_csv(filename, label='toxic', language_to_filter=None):
# This assumes the column order label, input_word_ids, input_mask, segment_ids
SELECTED_COLUMNS = [label, "input_word_ids", "input_mask", "all_segment_id"]
label_default = tf.int32 if label == 'id' else tf.float32
COLUMN_DEFAULTS = [label_default, tf.string, tf.string, tf.string]
if language_to_filter:
insert_pos = 0 if label != 'id' else 1
SELECTED_COLUMNS.insert(insert_pos, 'lang')
COLUMN_DEFAULTS.insert(insert_pos, tf.string)
preprocessed_sentences_dataset = tf.data.experimental.make_csv_dataset(
filename, column_defaults=COLUMN_DEFAULTS, select_columns=SELECTED_COLUMNS,
batch_size=1, num_epochs=1, shuffle=False) # We'll do repeating and shuffling ourselves
# make_csv_dataset required a batch size, but we want to batch later
preprocessed_sentences_dataset = preprocessed_sentences_dataset.unbatch()
if language_to_filter:
preprocessed_sentences_dataset = preprocessed_sentences_dataset.filter(
lambda data: tf.math.equal(data['lang'], tf.constant(language_to_filter)))
#preprocessed_sentences.pop('lang')
preprocessed_sentences_dataset = preprocessed_sentences_dataset.map(
lambda data: format_sentences(data, label=label,
remove_language=language_to_filter))
return preprocessed_sentences_dataset
Explanation: Dataset
Load the preprocessed dataset. See the demo notebook for sample code for performing this preprocessing.
End of explanation
def make_dataset_pipeline(dataset, repeat_and_shuffle=True):
    """Set up the pipeline for the given dataset.

    Caches, repeats, shuffles, and sets the pipeline up to prefetch batches.
    """
cached_dataset = dataset.cache()
if repeat_and_shuffle:
cached_dataset = cached_dataset.repeat().shuffle(2048)
cached_dataset = cached_dataset.batch(BATCH_SIZE, drop_remainder=True) # no remainder on repeated dataset
else:
cached_dataset = cached_dataset.batch(BATCH_SIZE)
cached_dataset = cached_dataset.prefetch(tf.data.experimental.AUTOTUNE)
return cached_dataset
# Load the preprocessed English dataframe.
preprocessed_en_filename = TRAIN_DATA
# Set up the dataset and pipeline.
english_train_dataset = make_dataset_pipeline(
make_sentence_dataset_from_csv(preprocessed_en_filename))
# Process the new datasets by language.
preprocessed_val_filename = VALID_DATA
nonenglish_val_datasets = {}
for language_name, language_label in [('Spanish', 'es'), ('Italian', 'it'),
('Turkish', 'tr')]:
nonenglish_val_datasets[language_name] = make_sentence_dataset_from_csv(
preprocessed_val_filename, language_to_filter=language_label)
nonenglish_val_datasets[language_name] = make_dataset_pipeline(
nonenglish_val_datasets[language_name], repeat_and_shuffle=False)
nonenglish_val_datasets['Combined'] = make_sentence_dataset_from_csv(preprocessed_val_filename)
nonenglish_val_datasets['Combined'] = make_dataset_pipeline(nonenglish_val_datasets['Combined'], repeat_and_shuffle=False)
Explanation: Set up our data pipelines for training and evaluation.
End of explanation
with strategy.scope():
multilingual_bert = multilingual_bert_model()
# Compile the model. Optimize using stochastic gradient descent.
multilingual_bert.compile(
loss=tf.keras.losses.BinaryCrossentropy(),
optimizer=tf.keras.optimizers.SGD(learning_rate=0.001*strategy.num_replicas_in_sync),
metrics=[tf.keras.metrics.AUC()])
multilingual_bert.summary()
%%time
# Train on English Wikipedia comment data.
lr_callback = tf.keras.callbacks.LearningRateScheduler(lr_fn)
history = multilingual_bert.fit(
english_train_dataset, steps_per_epoch=STEPS_PER_EPOCH, epochs=EPOCHS,
#validation_data=nonenglish_val_datasets['Combined'],
callbacks=[lr_callback])
# Performance on non-English comments after training.
for language in nonenglish_val_datasets:
results = multilingual_bert.evaluate(nonenglish_val_datasets[language], verbose=0)
print('{} loss, AUC after training:'.format(language), results)
Explanation: Instantiate the model
Compile our model. We will fine-tune the multilingual model on one of our English datasets, and then evaluate its performance on the new multilingual toxicity data. As our metric, we'll use the AUC.
End of explanation |
4,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Fisheries competition
In this notebook we're going to investigate a range of different architectures for the Kaggle fisheries competition. We use VGG with batch normalization throughout this notebook.
Step2: Basic VGG
We start with our usual VGG approach. We will be using VGG with batch normalization. For more information about batch normalization please see this notebook.
Initial model
First we create a simple fine-tuned VGG model to be our starting point.
Step3: Precompute convolutional output
We pre-compute the output of the last convolution layer of VGG, since we're unlikely to need to fine-tune those layers. (All following analysis will be done on just the pre-computed convolutional features.)
Step4: Train model
We can now create our first baseline model - a simple 3-layer FC net.
Step5: Multi-input
The images are of different sizes, which are likely to represent the boat they came from (since different boats will use different cameras). Perhaps this creates some data leakage that we can take advantage of to get a better Kaggle leaderboard position? To find out, first we create arrays of the file sizes for each image
Step6: Then we one-hot encode them (since we want to treat them as categorical) and normalize the data.
Step7: The model did not show an improvement by using the leakage, other than in the early epochs. This is most likely because the information about what boat the picture came from is readily identified from the image itself, so the meta-data turned out not to add any additional information.
Bounding boxes & multi output
Import / view bounding boxes
A kaggle user has created bounding box annotations for each fish in each training set image. You can download them from here. We will see if we can utilize this additional information. First, we'll load in the data, and keep just the largest bounding box for each image.
Step8: For any images that have no annotations, we'll create an empty bounding box.
Step9: Finally, we convert the dictionary into an array, and convert the coordinates to our resized 224x224 images.
Step10: Now we can check our work by drawing one of the annotations.
Step11: Create & train model
Since we're not allowed (by the kaggle rules) to manually annotate the test set, we'll need to create a model that predicts the locations of the bounding box on each image. To do so, we create a model with multiple outputs
Step12: Since we have multiple outputs, we need to provide them to the model constructor in an array, and we also need to say what loss function to use for each. We also weight the bounding box loss function down by 1000x since the scale of the cross-entropy loss and the MSE is very different.
Step13: Excitingly, it turned out that the classification model is much improved by giving it this additional task. Let's see how well the bounding box model did by taking a look at its output.
Step14: Larger size
Set up data
Let's see if we get better results if we use larger images. We'll use 640x360, since it's the same shape as the most common size we saw earlier (1280x720), without being too big.
Step15: The image shows that things are much clearer at this size.
Step16: We can now create our VGG model - we'll need to tell it we're not using the normal 224x224 images, which also means it won't include the fully connected layers (since they don't make sense for non-default sizes). We will also remove the last max pooling layer, since we don't want to throw away information yet.
Step17: Fully convolutional net (FCN)
Since we're using a larger input, the output of the final convolutional layer is also larger. So we probably don't want to put a dense layer there - that would be a lot of parameters! Instead, let's use a fully convolutional net (FCN); this also has the benefit that they tend to generalize well, and also seems like a good fit for our problem (since the fish are a small part of the image).
Step18: I'm not using any dropout, since I found I got better results without it.
Step19: Another benefit of this kind of model is that the last convolutional layer has to learn to classify each part of the image (since there's only an average pooling layer after). Let's create a function that grabs the output of this layer (which is the 4th-last layer of our model).
We have to add an extra dimension to our input since the CNN expects a 'batch' (even if it's just a batch of one).
Step20: The heatmap shows that (at very low resolution) the model is finding the fish!
Step21: All convolutional net heatmap
To create a higher resolution heatmap, we'll remove all the max pooling layers, and repeat the previous steps.
Step22: Create heatmap
Step23: Inception mini-net
Here's an example of how to create and use "inception blocks" - as you see, they use multiple different convolution filter sizes and concatenate the results together. We'll talk more about these next year.
Step24: Pseudo-labeling
Step25: Submit | Python Code:
import torch
import torchvision.models as models
import torchvision.transforms as transforms
import torchvision.datasets as datasets
from torchvision.utils import make_grid
from PIL import Image
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.optim as optim
import torch.utils.trainer as trainer
import torch.utils.trainer.plugins
from torch.autograd import Variable
import numpy as np
import pandas as pd
import os
from torchsample.modules import ModuleTrainer
from torchsample.metrics import CategoricalAccuracy
import glob
import PIL
import matplotlib.pyplot as plt
import scipy.misc
%load_ext autoreload
%autoreload 2
%matplotlib inline
def denorm(tensor):
# Undo the image normalization + clamp between 0 and 1 to avoid image artifacts
for t, m, s in zip(tensor, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]):
t.mul_(s).add_(m).clamp_(0, 1)
return tensor
def get_images_to_plot(images_tensor):
denormalize = transforms.Compose([
transforms.Lambda(denorm)
])
return denormalize(images_tensor)
def show(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1,2,0)), interpolation='nearest')
data_path = "data/fish/"
# data_path = "data/fish/sample/"
use_cuda = True
batch_size = 64
print('Using CUDA:', use_cuda)
cuda_device = -1
if use_cuda:
cuda_device = 0
# Change to True to create & populate the validation directory
if False:
# Create validation directory
%cd data/fish/train
%mkdir -p ../valid
%cd ../../../
# Create a folder for each category of fish
for d in glob.glob('*'): os.mkdir('../valid/' + d)
# Copy some random images from each class
shuf = np.random.permutation(glob.glob('*/*.jpg'))
for i in range(500): os.rename(shuf[i], '../valid/' + shuf[i])
# Change to True to create the sample dir
# Manually inspect all classes have at least one fish
if False:
%cd data/fish/train
%mkdir -p ../sample
%mkdir -p ../sample/train
%mkdir -p ../sample/valid
from shutil import copyfile
# Create a folder for each category of fish
for d in glob.glob('*'):
os.mkdir('../sample/train/' + d)
os.mkdir('../sample/valid/' + d)
# Copy a few samples per fish
shuf = np.random.permutation(glob.glob('*/*.jpg'))
for i in range(60): copyfile(shuf[i], '../sample/train/' + shuf[i])
%cd ../valid
shuf = np.random.permutation(glob.glob('*/*.jpg'))
for i in range(50): copyfile(shuf[i], '../sample/valid/' + shuf[i])
%cd ../../../
# This class is required so we can easily extract the labels of the training dataset
class ShuffleOnceSampler(torch.utils.data.sampler.Sampler):
    """Randomly shuffles the data source on creation, without replacement.
    Returns the same sequential order on every epoch.

    Arguments:
        data_source (Dataset): dataset to sample from
    """
def __init__(self, data_source):
self.shuffled_order = torch.randperm(len(data_source)).long()
def __iter__(self):
return iter(self.shuffled_order)
def __len__(self):
return len(self.shuffled_order)
# Data loading code
traindir = os.path.join(data_path, 'train')
valdir = os.path.join(data_path, 'valid')
testdir = os.path.join(data_path, 'test')
# pytorch way of implementing fastai's get_batches, (utils.py)
def get_data_loader(dirname, batch_size=64, shuffle_once=False, image_size=(224, 224)):
# pytorch's VGG requires images to be 224x224 and normalized using https://github.com/pytorch/vision#models
normalize = transforms.Compose([
transforms.Lambda(lambda img: img.resize(image_size, Image.BILINEAR)),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406],
std=[0.229, 0.224, 0.225]),
])
image_folder = datasets.ImageFolder(dirname, normalize)
sampler = None
if shuffle_once:
sampler = ShuffleOnceSampler(image_folder)
return torch.utils.data.DataLoader(image_folder, batch_size=batch_size,
shuffle=False, pin_memory=use_cuda, sampler=sampler), image_folder
train_loader, train_folder = get_data_loader(traindir, batch_size=batch_size, shuffle_once=True)
val_loader, val_folder = get_data_loader(valdir, batch_size=batch_size)
test_loader, test_folder = get_data_loader(testdir, batch_size=batch_size)
print('Images in train folder:', len(train_folder.imgs))
print('Images in val folder:', len(val_folder.imgs))
print('Images in test folder:', len(test_folder.imgs))
Explanation: Fisheries competition
In this notebook we're going to investigate a range of different architectures for the Kaggle fisheries competition. We use VGG with batch normalization throughout this notebook.
End of explanation
# Monkey patch the parameters() to return trainable weights only
import types
def parameters(self):
p = filter(lambda p: p.requires_grad, nn.Module.parameters(self))
return p
# TODO create a utiliy class that inits models correctly
# Keras inits the model with sensible defaults, PyTorch does not
def init_model(model):
for m in model.modules():
if isinstance(m, nn.Conv2d):
nn.init.xavier_uniform(m.weight)
if m.bias is not None:
nn.init.constant(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant(m.weight, 1)
nn.init.constant(m.bias, 0)
elif isinstance(m, nn.BatchNorm1d):
nn.init.constant(m.weight, 1)
nn.init.constant(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.normal(m.weight, mean=0, std=0.01)
nn.init.constant(m.bias, 0)
# Load the model
model = models.vgg16_bn(pretrained=True)
# Finetune by replacing the last fully connected layer and freezing all network parameters
for param in model.parameters():
param.requires_grad = False
model.parameters = types.MethodType(parameters, model)
# Replace the last fully-connected layer matching the new class count
classes = train_loader.dataset.classes
num_classes = len(classes)
print('Using {:d} classes: {}'.format(num_classes, classes))
model.classifier = nn.Sequential(
nn.Linear(512 * 7 * 7, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(True),
nn.Dropout(),
nn.Linear(4096, num_classes),
)
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss()
# enable cuda if available
if(use_cuda):
model.cuda()
criterion.cuda()
def getTrainer(model):
trainer = ModuleTrainer(model)
trainer.compile(optimizer='adam', loss=criterion, metrics=[CategoricalAccuracy()])
return trainer
trainer = getTrainer(model)
# TODO fix this: 'ImageFolder' object has no attribute 'num_inputs', module_trainer.py (318)
# trainer.fit_loader(train_loader, val_loader=val_loader, num_epoch=1, cuda_device=cuda_device)
Explanation: Basic VGG
We start with our usual VGG approach. We will be using VGG with batch normalization. For more information about batch normalization please see this notebook.
Initial model
First we create a simple fine-tuned VGG model to be our starting point.
End of explanation
class VggNoClassifier(nn.Module):
def __init__(self, vgg):
super(VggNoClassifier, self).__init__()
# The last feature is a Max Pooling layer, remove it
num_features = len(vgg.features._modules)
print(num_features, type(vgg.features[num_features - 2]), type(vgg.features[num_features - 1]))
self.features = nn.Sequential(*[vgg.features[idx] for idx in range(num_features - 1)])
def forward(self, x):
x = self.features(x)
return x
vgg = VggNoClassifier(model)
if(use_cuda):
vgg.cuda()
trainer = ModuleTrainer(vgg)
if False:
%time conv_train = trainer.predict_loader(train_loader, cuda_device=cuda_device).data # Extract Tensor from Variable
%time conv_val = trainer.predict_loader(val_loader, cuda_device=cuda_device).data
%time conv_test = trainer.predict_loader(test_loader, cuda_device=cuda_device).data
labels_train = torch.cat([labels for (batch, labels) in train_loader])
labels_val = torch.cat([labels for (batch, labels) in val_loader])
%mkdir -p data/fish/results
torch.save(conv_train, data_path + 'results/conv_train_224.pth')
torch.save(conv_val, data_path + 'results/conv_val_224.pth')
torch.save(conv_test, data_path + 'results/conv_test_224.pth')
torch.save(labels_train, data_path + 'results/labels_train_224.pth')
torch.save(labels_val, data_path + 'results/labels_val_224.pth')
else:
conv_train = torch.load(data_path + 'results/conv_train_224.pth')
conv_val = torch.load(data_path + 'results/conv_val_224.pth')
conv_test = torch.load(data_path + 'results/conv_test_224.pth')
labels_train = torch.load(data_path + 'results/labels_train_224.pth')
labels_val = torch.load(data_path + 'results/labels_val_224.pth')
conv_train.size(), labels_train.size()
Explanation: Precompute convolutional output
We pre-compute the output of the last convolution layer of VGG, since we're unlikely to need to fine-tune those layers. (All following analysis will be done on just the pre-computed convolutional features.)
End of explanation
class FCNet3LayerClassifer(nn.Module):
def __init__(self, p):
super(FCNet3LayerClassifer, self).__init__()
size_after_pool = 512 * 7 * 7 # 7 = 14 / 2
feature_size = 512
self.maxPool = nn.Sequential(nn.MaxPool2d((2, 2)),
nn.BatchNorm2d(feature_size),
nn.Dropout2d(p / 4))
self.linear = nn.Sequential(nn.Linear(size_after_pool, feature_size),
nn.ReLU(inplace=True),
nn.BatchNorm1d(feature_size),
nn.Dropout(p),
nn.Linear(feature_size, feature_size),
nn.ReLU(inplace=True),
nn.BatchNorm1d(feature_size),
nn.Dropout(p / 2))
self.classifier = nn.Linear(feature_size, num_classes)
init_model(self)
def forward(self, x):
x = self.maxPool(x)
x = x.view(x.size(0), -1)
x = self.linear(x)
x = self.classifier(x)
return x
model = FCNet3LayerClassifer(0.6)
if(use_cuda):
model.cuda()
trainer = getTrainer(model)
trainer.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=3, batch_size=batch_size, cuda_device=cuda_device)
trainer.adjust_learning_rate(1e-4)
trainer.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=7, batch_size=batch_size, cuda_device=cuda_device)
Explanation: Train model
We can now create our first baseline model - a simple 3-layer FC net.
End of explanation
filenames_train = [filename for filename, _ in train_loader.dataset.imgs]
sizes_tuples_train = [PIL.Image.open(f).size for f in filenames_train]
unique_sizes = list(set(sizes_tuples_train))
size_to_idx = {size : idx for idx, size in enumerate(unique_sizes)}
size_to_idx[(1334, 750)] = 8 # Add any other sizes not present in training set, but present in val or test sets
image_sizes_count = len(size_to_idx)
import collections
collections.Counter(sizes_tuples_train)
def one_hot_encode_normalize(list_indexes, index_size):
hot_encoded = torch.FloatTensor(len(list_indexes), index_size).zero_()
idx = torch.LongTensor(list_indexes).view(-1, 1)
hot_encoded.scatter_(1, idx, 1.0)
return hot_encoded - hot_encoded.mean() / hot_encoded.std()
Explanation: Multi-input
The images are of different sizes, which are likely to represent the boat they came from (since different boats will use different cameras). Perhaps this creates some data leakage that we can take advantage of to get a better Kaggle leaderboard position? To find out, first we create arrays of the file sizes for each image:
End of explanation
sizes_train = one_hot_encode_normalize(list(map(size_to_idx.__getitem__, sizes_tuples_train)), image_sizes_count)
sizes_train.size()
filenames_val = [filename for filename, _ in val_loader.dataset.imgs]
sizes_tuples_val = [PIL.Image.open(f).size for f in filenames_val]
sizes_val = one_hot_encode_normalize(list(map(size_to_idx.__getitem__, sizes_tuples_val)), image_sizes_count)
sizes_val.size()
class MultiInput3LayerFCNetClassifer(FCNet3LayerClassifer):
def __init__(self, p):
super(MultiInput3LayerFCNetClassifer, self).__init__(p)
image_size_feature = image_sizes_count
feature_size = 512 + image_size_feature
self.batchnorm = nn.BatchNorm1d(image_sizes_count)
self.classifier = nn.Linear(feature_size, num_classes)
init_model(self)
def forward(self, x, x_image_sizes):
x_bn = self.batchnorm(x_image_sizes)
x = self.maxPool(x)
x = x.view(x.size(0), -1)
x = self.linear(x)
x = torch.cat([x, x_bn], dim=1)
x = self.classifier(x)
return x
model = MultiInput3LayerFCNetClassifer(0.6)
if(use_cuda):
model.cuda()
trainer = getTrainer(model)
trainer.fit((conv_train, sizes_train), labels_train, val_data=((conv_val, sizes_val), labels_val), num_epoch=3, batch_size=batch_size, cuda_device=cuda_device)
trainer.adjust_learning_rate(1e-4)
trainer.fit((conv_train, sizes_train), labels_train, val_data=((conv_val, sizes_val), labels_val), num_epoch=7, batch_size=batch_size, cuda_device=cuda_device)
Explanation: Then we one-hot encode them (since we want to treat them as categorical) and normalize the data.
End of explanation
import json
anno_classes = ['alb', 'bet', 'dol', 'lag', 'other', 'shark', 'yft']
bb_json = {}
for c in anno_classes:
j = json.load(open('data/fish/annotations/{}_labels.json'.format(c), 'r'))
for l in j:
if 'annotations' in l.keys() and len(l['annotations'])>0:
bb_json[l['filename'].split('/')[-1]] = sorted(
l['annotations'], key=lambda x: x['height']*x['width'])[-1]
bb_json['img_04908.jpg']
Explanation: The model did not show an improvement by using the leakage, other than in the early epochs. This is most likely because the information about what boat the picture came from is readily identified from the image itself, so the meta-data turned out not to add any additional information.
Bounding boxes & multi output
Import / view bounding boxes
A kaggle user has created bounding box annotations for each fish in each training set image. You can download them from here. We will see if we can utilize this additional information. First, we'll load in the data, and keep just the largest bounding box for each image.
End of explanation
empty_bbox = {'height': 0., 'width': 0., 'x': 0., 'y': 0.}
Explanation: For any images that have no annotations, we'll create an empty bounding box.
End of explanation
bb_params = ['height', 'width', 'x', 'y']
def convert_bb(filename, size):
bb = bb_json.get(no_folders(filename), empty_bbox)
bb = [bb[p] for p in bb_params]
conv_x = (224. / size[0])
conv_y = (224. / size[1])
bb[0] = bb[0] * conv_y
bb[1] = bb[1] * conv_x
bb[2] = max(bb[2] * conv_x, 0)
bb[3] = max(bb[3] * conv_y, 0)
return torch.FloatTensor(bb)
def no_folders(filename):
return filename.split('/')[-1]
bbox_train = torch.stack([convert_bb(filename, size) for filename, size in zip(filenames_train, sizes_tuples_train)])
bbox_val = torch.stack([convert_bb(filename, size) for filename, size in zip(filenames_val, sizes_tuples_val)])
Explanation: Finally, we convert the dictionary into an array, and convert the coordinates to our resized 224x224 images.
End of explanation
def create_rect(bb, color='red'):
return plt.Rectangle((bb[2], bb[3]), bb[1], bb[0], color=color, fill=False, lw=3)
def show_bb(i):
bb = bbox_val[i]
show(get_images_to_plot(val_loader.dataset[i][0]))
plt.gca().add_patch(create_rect(bb))
show_bb(0)
Explanation: Now we can check our work by drawing one of the annotations.
End of explanation
class MultiOutput3LayerFCNetClassifer(FCNet3LayerClassifer):
def __init__(self, p):
super(MultiOutput3LayerFCNetClassifer, self).__init__(p)
feature_size = 512
bbox_corners_count = 4
self.bbox_regressor = nn.Linear(feature_size, bbox_corners_count)
init_model(self)
def forward(self, x):
x = self.maxPool(x)
x = x.view(x.size(0), -1)
x = self.linear(x)
x_bb = self.bbox_regressor(x)
x = self.classifier(x)
return x, x_bb
Explanation: Create & train model
Since we're not allowed (by the kaggle rules) to manually annotate the test set, we'll need to create a model that predicts the locations of the bounding box on each image. To do so, we create a model with multiple outputs: it will predict both the type of fish (the 'class'), and the 4 bounding box coordinates. We prefer this approach to only predicting the bounding box coordinates, since we hope that giving the model more context about what it's looking for will help it with both tasks.
End of explanation
# TODO how to pass a weight to each loss function?
model = MultiOutput3LayerFCNetClassifer(0.6)
mse_loss = nn.MSELoss()
if(use_cuda):
model.cuda()
mse_loss.cuda()
trainer = getTrainer(model)
trainer.compile(optimizer='adam', loss=[criterion, mse_loss], metrics=[CategoricalAccuracy(), None], loss_weights=[1., 0.001])
trainer.fit(conv_train, (labels_train, bbox_train), val_data=(conv_val, (labels_val, bbox_val)), num_epoch=3, batch_size=batch_size, cuda_device=cuda_device)
trainer.adjust_learning_rate(1e-5)
trainer.fit(conv_train, (labels_train, bbox_train), val_data=(conv_val, (labels_val, bbox_val)), num_epoch=10, batch_size=batch_size, cuda_device=cuda_device)
# TODO This model does not seem to converge on a solution that accurately finds the bounding boxes.
# (It tends to get stucked and always give the same or similar result)
Explanation: Since we have multiple outputs, we need to provide them to the model constructor in an array, and we also need to say what loss function to use for each. We also weight the bounding box loss function down by 1000x since the scale of the cross-entropy loss and the MSE is very different.
End of explanation
predictions = trainer.predict(conv_train, batch_size=batch_size, cuda_device=cuda_device)
def show_bb_pred(i):
bb = bbox_val[i]
bb_pred = predictions[1][i].data
plt.figure(figsize=(6,6))
show(get_images_to_plot(val_loader.dataset[i][0]))
ax = plt.gca()
ax.add_patch(create_rect(bb_pred, 'yellow'))
ax.add_patch(create_rect(bb))
_, class_id = predictions[0][i].max(0)
class_id_number = torch.max(class_id.data) # From Tensor to Number
print(classes[class_id_number])
print(bb_pred, bb)
show_bb_pred(6)
Explanation: Excitingly, it turned out that the classification model is much improved by giving it this additional task. Let's see how well the bounding box model did by taking a look at its output.
End of explanation
train_loader, train_folder = get_data_loader(traindir, batch_size=32, shuffle_once=True, image_size=(640, 360))
val_loader, val_folder = get_data_loader(valdir, batch_size=32, image_size=(640, 360))
test_loader, test_folder = get_data_loader(testdir, batch_size=32, image_size=(640, 360))
print('Images in train folder:', len(train_folder.imgs))
print('Images in val folder:', len(val_folder.imgs))
print('Images in test folder:', len(test_folder.imgs))
Explanation: Larger size
Set up data
Let's see if we get better results if we use larger images. We'll use 640x360, since it's the same shape as the most common size we saw earlier (1280x720), without being too big.
End of explanation
show(get_images_to_plot(train_loader.dataset[0][0]))
Explanation: The image shows that things are much clearer at this size.
End of explanation
# Load the model
model = models.vgg16_bn(pretrained=True)
vgg = VggNoClassifier(model)
if(use_cuda):
vgg.cuda()
trainer = ModuleTrainer(vgg)
if False:
%time conv_train = trainer.predict_loader(train_loader, cuda_device=cuda_device).data # Extract Tensor from Variable
%time conv_val = trainer.predict_loader(val_loader, cuda_device=cuda_device).data
%time conv_test = trainer.predict_loader(test_loader, cuda_device=cuda_device).data
labels_train = torch.cat([labels for (batch, labels) in train_loader])
labels_val = torch.cat([labels for (batch, labels) in val_loader])
%mkdir -p data/fish/results
torch.save(conv_train, data_path + 'results/conv_train_640.pth')
torch.save(conv_val, data_path + 'results/conv_val_640.pth')
torch.save(conv_test, data_path + 'results/conv_test_640.pth')
torch.save(labels_train, data_path + 'results/labels_train_640.pth')
torch.save(labels_val, data_path + 'results/labels_val_640.pth')
else:
conv_train = torch.load(data_path + 'results/conv_train_640.pth')
conv_val = torch.load(data_path + 'results/conv_val_640.pth')
conv_test = torch.load(data_path + 'results/conv_test_640.pth')
labels_train = torch.load(data_path + 'results/labels_train_640.pth')
labels_val = torch.load(data_path + 'results/labels_val_640.pth')
conv_train.size(), labels_train.size(), conv_test.size()
Explanation: We can now create our VGG model - we'll need to tell it we're not using the normal 224x224 images, which also means it won't include the fully connected layers (since they don't make sense for non-default sizes). We will also remove the last max pooling layer, since we don't want to throw away information yet.
End of explanation
import torch.nn.functional as F
class FCNClassifer(nn.Module):
def __init__(self, p):
super(FCNClassifer, self).__init__()
feature_size = 512
feature_size_conv = 128
kernel_size = (3, 3)
padding = (1, 1)
self.fcn = nn.Sequential(nn.BatchNorm2d(feature_size),
nn.Conv2d(feature_size, feature_size_conv, kernel_size, padding=padding),
nn.ReLU(inplace=True),
nn.BatchNorm2d(feature_size_conv),
nn.MaxPool2d((2, 2)),
nn.Conv2d(feature_size_conv, feature_size_conv, kernel_size, padding=padding),
nn.ReLU(inplace=True),
nn.BatchNorm2d(feature_size_conv),
nn.MaxPool2d((2, 2)),
nn.Conv2d(feature_size_conv, feature_size_conv, kernel_size, padding=padding),
nn.ReLU(inplace=True),
nn.BatchNorm2d(feature_size_conv),
nn.MaxPool2d((1, 2)),
nn.Conv2d(feature_size_conv, num_classes, kernel_size, padding=padding),
)
self.dropout = nn.Dropout2d(p)
init_model(self)
def forward(self, x):
x = self.fcn(x)
x = self.dropout(x)
h_x_w = x.size()[2:] # h x w = 5x5
x = F.avg_pool2d(x, kernel_size=h_x_w)
x = x.view(-1, num_classes)
return x
Explanation: Fully convolutional net (FCN)
Since we're using a larger input, the output of the final convolutional layer is also larger. So we probably don't want to put a dense layer there - that would be a lot of parameters! Instead, let's use a fully convolutional net (FCN); this also has the benefit that they tend to generalize well, and also seems like a good fit for our problem (since the fish are a small part of the image).
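For a rough sense of scale (assuming the 640x360 inputs used in this section), the convolutional feature map is about 512 x 22 x 40 ≈ 450,000 activations, so even a single 4096-unit dense layer on top of it would need on the order of 1.8 billion weights.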
End of explanation
model_fc = FCNClassifer(0.0)
if(use_cuda):
model_fc.cuda()
trainer_fc = getTrainer(model_fc)
trainer_fc.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=3, batch_size=batch_size, cuda_device=cuda_device)
trainer_fc.adjust_learning_rate(1e-5)
trainer_fc.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=4, batch_size=batch_size, cuda_device=cuda_device)
Explanation: I'm not using any dropout, since I found I got better results without it.
End of explanation
def get_convolution_image(model, image_index, channel):
image = Variable(conv_val[image_index], volatile=True)
x = image.unsqueeze(0) # Add the extra dimension
if(use_cuda):
x = x.cuda()
print(x.size())
conv = model.fcn(x)
print(conv.size())
# Get first result of batch, then grab one of the filters out of the 8 prediction ones
print('Predicted class:', torch.max(model.forward(x), 1)[1])
conv = conv.data[0][channel].cpu().numpy()
return scipy.misc.imresize(conv, (360,640), interp='nearest')
image_index = 88
predicted_class = val_loader.dataset[image_index][1]
print('Class =', predicted_class)
show(get_images_to_plot(val_loader.dataset[image_index][0]))
Explanation: Another benefit of this kind of model is that the last convolutional layer has to learn to classify each part of the image (since there's only an average pooling layer after). Let's create a function that grabs the output of this layer (which is the 4th-last layer of our model).
We have to add an extra dimension to our input since the CNN expects a 'batch' (even if it's just a batch of one).
End of explanation
plt.imshow(get_convolution_image(model_fc, image_index, channel=predicted_class), cmap='cool')
Explanation: The heatmap shows that (at very low resolution) the model is finding the fish!
End of explanation
import torch.nn.functional as F
class FCNClassiferNoMaxPooling(nn.Module):
def __init__(self, p):
super(FCNClassiferNoMaxPooling, self).__init__()
self.fcn_module = FCNClassifer(p)
self.fcn_module.fcn = nn.Sequential(* list(filter(lambda module: not isinstance(module, nn.MaxPool2d), self.fcn_module.fcn)))
init_model(self)
def forward(self, x):
return self.fcn_module.forward(x)
model_heatmap = FCNClassiferNoMaxPooling(0)
if(use_cuda):
model_heatmap.cuda()
trainer_heatmap = getTrainer(model_heatmap)
trainer_heatmap.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=2, batch_size=batch_size, cuda_device=cuda_device)
trainer_heatmap.adjust_learning_rate(1e-5)
trainer_heatmap.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=6, batch_size=batch_size, cuda_device=cuda_device)
Explanation: All convolutional net heatmap
To create a higher resolution heatmap, we'll remove all the max pooling layers, and repeat the previous steps.
End of explanation
image_index = 88
predicted_class = val_loader.dataset[image_index][1]
print('Class =', predicted_class)
show(get_images_to_plot(val_loader.dataset[image_index][0]))
convolution_map = get_convolution_image(model_heatmap.fcn_module, image_index, channel=predicted_class)
plt.imshow(convolution_map, cmap='cool')
plt.figure(figsize=(10,10))
show(get_images_to_plot(val_loader.dataset[image_index][0]))
plt.imshow(convolution_map, cmap="cool", alpha=0.5)
Explanation: Create heatmap
End of explanation
class BasicConv2d(nn.Module):
def __init__(self, in_channels, out_channels, **kwargs):
super(BasicConv2d, self).__init__()
self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
def forward(self, x):
x = self.conv(x)
x = self.bn(x)
return F.relu(x, inplace=True)
class InceptionBlock(nn.Module):
def __init__(self, in_channels, **kwargs):
super(InceptionBlock, self).__init__()
self.conv2d = BasicConv2d(in_channels, 16, kernel_size=1)
self.branch1x1 = BasicConv2d(in_channels, 32, kernel_size=1, stride=2)
self.branch5x5 = nn.Sequential(
BasicConv2d(in_channels, 24, kernel_size=1),
BasicConv2d(24, 32, kernel_size=5, stride=2, padding=2))
self.branch3x3dbl = nn.Sequential(
BasicConv2d(in_channels, 31, kernel_size=1),
BasicConv2d(31, 48, kernel_size=3),
BasicConv2d(48, 48, kernel_size=3, stride=2, padding=2))
def forward(self, x):
branch1x1 = self.branch1x1(x)
branch5x5 = self.branch5x5(x)
branch3x3dbl = self.branch3x3dbl(x)
branch_pool = F.avg_pool2d(x, kernel_size=3, stride=2, padding=1)
branch_pool = self.conv2d(branch_pool)
outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool]
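# channel bookkeeping for the concatenation below: 32 (1x1) + 32 (5x5) + 48 (double 3x3) + 16 (pooled 1x1) = 128, matching in_channels_inception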
# print(list(map(lambda x: x.size(), outputs)))
return torch.cat(outputs, 1)
class InceptionModule(nn.Module):
def __init__(self, p, **kwargs):
super(InceptionModule, self).__init__()
in_channels = 512
in_channels_inception = 128
self.batchnorm = nn.BatchNorm2d(in_channels)
self.inception = nn.Sequential(
InceptionBlock(in_channels),
InceptionBlock(in_channels_inception),
InceptionBlock(in_channels_inception))
self.dropout = nn.Dropout(p)
self.classifier = nn.Conv2d(in_channels_inception, num_classes, (3, 3), padding=(1, 1))
def forward(self, x):
x = self.batchnorm(x)
x = self.inception(x)
x = self.dropout(x)
x = self.classifier(x)
h_x_w = x.size()[2:] # h x w = 5x5
x = F.avg_pool2d(x, kernel_size=h_x_w)
x = x.view(-1, num_classes)
return x
model_inception = InceptionModule(0.08)
if(use_cuda):
model_inception.cuda()
trainer_inception = getTrainer(model_inception)
trainer_inception.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=2, batch_size=batch_size, cuda_device=cuda_device)
trainer_inception.adjust_learning_rate(1e-5)
trainer_inception.fit(conv_train, labels_train, val_data=(conv_val, labels_val), num_epoch=6, batch_size=batch_size, cuda_device=cuda_device)
Explanation: Inception mini-net
Here's an example of how to create and use "inception blocks" - as you see, they use multiple different convolution filter sizes and concatenate the results together. We'll talk more about these next year.
End of explanation
kaggle_trainer = trainer_fc
conv_val_test = torch.cat([conv_val, conv_test[:2000]]) # The 13K Test samples don't fit in ram :(
predictions_val_test_float = kaggle_trainer.predict(conv_val_test, batch_size=batch_size, cuda_device=cuda_device)
_, predictions_val_test = torch.max(predictions_val_test_float.data, 1)
predictions_val_test = predictions_val_test.view(-1)
conv_train_val_test = torch.cat([conv_train, conv_val_test])
labels_train_val_test = torch.cat([labels_train, predictions_val_test])
print(conv_train_val_test.size(), labels_train_val_test.size())
# Need to create a Dataset and DataLoader as using kaggle_trainer.fit() runs out of memory
train = torch.utils.data.TensorDataset(conv_train_val_test, labels_train_val_test)
train_loader = torch.utils.data.DataLoader(train, batch_size=batch_size, shuffle=True)
val = torch.utils.data.TensorDataset(conv_val, labels_val)
val_loader = torch.utils.data.DataLoader(val, batch_size=batch_size, shuffle=False)
kaggle_trainer.fit_loader(train_loader, val_loader=val_loader, num_epoch=8, cuda_device=cuda_device)
Explanation: Pseudo-labeling
The idea: use the current model's predictions on the unlabelled validation/test images as if they were true labels, append those examples to the training set, and keep training on the combined data.
End of explanation
predictions_kaggle = kaggle_trainer.predict(conv_test, batch_size=batch_size, cuda_device=cuda_device)
predictions_kaggle = F.softmax(predictions_kaggle).data
len(predictions_kaggle)
def get_csv_filename(filename):
file = filename.split('/')[-1]
if 'test_stg2' in filename:
return 'test_stg2/' + file
else:
return file
filenames_test = [ get_csv_filename(filename) for filename, _ in test_loader.dataset.imgs]
print(len(filenames_test))
classes
max_value = 0.85
min_value = (1. - max_value) / 8.
predictions_csv = torch.clamp(predictions_kaggle, min_value, max_value).numpy()
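# clamping hedges the log-loss metric against over-confident mistakes: probabilities are capped at 0.85 and floored at (1 - 0.85) / 8 across the 8 classes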
submission = pd.DataFrame(predictions_csv, columns=classes)
submission.insert(0, 'image', filenames_test)
submission.head()
submission_name = data_path + 'results/submission_fc.gz'
submission.to_csv(submission_name, index=False, compression='gzip')
from IPython.display import FileLink
FileLink(submission_name)
Explanation: Submit
End of explanation |
4,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
From v2.8.0, pymatgen comes with a fairly robust system of managing units. In essence, subclasses of float and numpy array are provided to attach units to any quantity, as well as provide for conversions. These are loaded at the root level of pymatgen and some properties (e.g., atomic masses, final energies) are returned with attached units. This demo provides an outline of some of the capabilities.
Let's start with some common units, like Energy.
Step1: Units support all functionality that is supported by floats. Unit combinations are automatically taken care of.
Step2: Note that complex units are specified as space-separated powers of units. Powers are specified using "^". E.g., "kg m s^-1". Only integer powers are supported.
Now, let's do some basic science.
Step3: Some highly complex conversions are possible with this system. Let's do some made up units. We will also demonstrate pymatgen's internal unit consistency checks.
Step4: For arrays, we have the equivalent EnergyArray, ... and ArrayWithUnit classes. All other functionality remains the same. | Python Code:
import pymatgen as mg
#The constructor is simply the value + a string unit.
e = mg.Energy(1000, "Ha")
#Let's perform a conversion. Note that when printing, the units are printed as well.
print "{} = {}".format(e, e.to("eV"))
#To check what units are supported
print "Supported energy units are {}".format(e.supported_units)
Explanation: Introduction
From v2.8.0, pymatgen comes with a fairly robust system of managing units. In essence, subclasses of float and numpy array are provided to attach units to any quantity, as well as provide for conversions. These are loaded at the root level of pymatgen and some properties (e.g., atomic masses, final energies) are returned with attached units. This demo provides an outline of some of the capabilities.
Let's start with some common units, like Energy.
End of explanation
dist = mg.Length(65, "mile")
time = mg.Time(30, "min")
speed = dist / time
print "The speed is {}".format(speed)
#Let's do a more sensible unit.
print "The speed is {}".format(speed.to("mile h^-1"))
Explanation: Units support all functionality that is supported by floats. Unit combinations are automatically taken care of.
End of explanation
g = mg.FloatWithUnit(9.81, "m s^-2") #Acceleration due to gravity
m = mg.Mass(2, "kg")
h = mg.Length(10, "m")
print "The force is {}".format(m * g)
print "The potential energy is force is {}".format((m * g * h).to("J"))
Explanation: Note that complex units are specified as space-separated powers of units. Powers are specified using "^". E.g., "kg m s^-1". Only integer powers are supported.
Now, let's do some basic science.
End of explanation
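As an extra illustration (not in the original demo), compound units also fall out of ordinary arithmetic on the quantities used above:
momentum = mg.Mass(2, "kg") * mg.Length(3, "m") / mg.Time(1, "s")
print "A momentum-like quantity: {}".format(momentum)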
made_up = mg.FloatWithUnit(100, "Ha^3 bohr^-2")
print made_up.to("J^3 ang^-2")
try:
made_up.to("J^2")
except mg.UnitError as ex:
print ex
Explanation: Some highly complex conversions are possible with this system. Let's do some made up units. We will also demonstrate pymatgen's internal unit consistency checks.
End of explanation
dists = mg.LengthArray([1, 2, 3], "mile")
times = mg.TimeArray([0.11, 0.12, 0.23], "h")
print "Speeds are {}".format(dists / times)
Explanation: For arrays, we have the equivalent EnergyArray, ... and ArrayWithUnit classes. All other functionality remains the same.
End of explanation |
4,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non linear curve fitting with python
Germain Salvato Vallverdu [email protected]
This cookbook presents how to fit a non linear model on a set of data using python. Two kinds of algorithms will be presented. First a standard least squares approach using the curve_fit function of scipy.optimize, in which we will take into account the uncertainties on the response, that is y. Second a fit with an orthogonal distance regression (ODR) using scipy.odr, in which we will take into account the uncertainties on both x and y.
Python set up
Step1: Read and plot data
Read the data from a csv file with pandas.
Step2: Plot the data with error bars.
Step3: Fit a model on the data
We want to fit the following model, with parameters $a$ and $c$, on the above data.
$$f(x) = \ln \dfrac{(a + x)^2}{(x-c)^2}$$
First step
Step4: Second step
Step5: Now plot your first estimation of the model.
Step6: Third step
Step7: That's it !
Fourth step
Step8: You can compute a standard deviation error from pcov
Step9: You can compute the determination coefficient with
Step10: Make a plot
Now, see the results on a plot
Step11: Or using more x values for the model, in order to get a smoother curve
Step12: Uncertainties on both x and y
x and y are called the independent (or explanatory) and the dependent (the response) variables, respectively. As in the above example, uncertainties are often only taken into account on the response variable (y). Here, we will do the same fit but with uncertainties on both the x and y variables.
In least-squares approaches one minimizes, for each value of x, the distance between the response of the model and the data. Because you do this for each specific value of x, you cannot include the x uncertainties. In order to include them, we will use an orthogonal distance regression (ODR) approach.
Look at this stackoverflow question from which the following was written.
Add x uncertainties
Add, artificially, random normal uncertainties on x.
Step13: Make the fits
1) Define the model
The model function has to be defined in a slightly different way. The first argument (called beta here) must be the list of the parameters
Step14: Define the data and the model
Step15: 2) Run the algorithms
Two calculations will be done
Step16: Now the explicit ODR approach with fit_type=0.
Step17: Plot the results
Plot the different results. | Python Code:
# manage data and fit
import pandas as pd
import numpy as np
# first part with least squares
from scipy.optimize import curve_fit
# second part about ODR
from scipy.odr import ODR, Model, Data, RealData
# style and notebook integration of the plots
import seaborn as sns
%matplotlib inline
sns.set(style="whitegrid")
Explanation: Non linear curve fitting with python
Germain Salvato Vallverdu [email protected]
This cookbook presents how to fit a non linear model on a set of data using python. Two kinds of algorithms will be presented. First a standard least squares approach using the curve_fit function of scipy.optimize, in which we will take into account the uncertainties on the response, that is y. Second a fit with an orthogonal distance regression (ODR) using scipy.odr, in which we will take into account the uncertainties on both x and y.
Python set up
End of explanation
df = pd.read_csv("donnees_exo9.csv", sep=";")
df.head(8) # first 8 lines
Explanation: Read and plot data
Read the data from a csv file with pandas.
End of explanation
ax = df.plot(
x="x", y="y",
kind="line", yerr="Dy", title="Some experimetal data",
linestyle="", marker=".",
capthick=1, ecolor="gray", linewidth=1
)
Explanation: Plot the data with error bars.
End of explanation
def f_model(x, a, c):
return pd.np.log((a + x)**2 / (x - c)**2)
Explanation: Fit a model on the data
We want to fit the following model, with parameters $a$ and $c$, on the above data.
$$f(x) = \ln \dfrac{(a + x)^2}{(x-c)^2}$$
First step : the function
First, we define a function corresponding to the model :
End of explanation
df["model"] = f_model(df["x"], 3, -2)
df.head(8)
Explanation: Second step : initialisation of parameters
Compute y values for the model with an estimate.
End of explanation
ax = df.plot(
x="x", y="y",
kind="line", yerr="Dy", title="Some experimetal data",
linestyle="", marker=".",
capthick=1, ecolor="gray", linewidth=1
)
ax = df.plot(
x="x", y="model",
kind="line", ax=ax, linewidth=1
)
Explanation: Now plot your first estimation of the model.
End of explanation
help(curve_fit)
popt, pcov = curve_fit(
f=f_model, # model function
xdata=df["x"], # x data
ydata=df["y"], # y data
p0=(3, -2), # initial value of the parameters
sigma=df["Dy"] # uncertainties on y
)
print(popt)
Explanation: Third step : Do the fit
Now we explicitly do the fit with curve_fit using our f_model() function and the initial guess for the parameters. Run help(curve_fit) and read the documentation about the function. curve_fit follows a least-squares approach and will minimize :
$$\sum_k \dfrac{\left(f(\text{xdata}_k, \texttt{*popt}) - \text{ydata}_k\right)^2}{\sigma_k^2}$$
End of explanation
a_opt, c_opt = popt
print("a = ", a_opt)
print("c = ", c_opt)
Explanation: That's it !
Fourth step : Results of the fit
Parameters are in popt :
End of explanation
perr = np.sqrt(np.diag(pcov))
Da, Dc = perr
print("a = %6.2f +/- %4.2f" % (a_opt, Da))
print("c = %6.2f +/- %4.2f" % (c_opt, Dc))
Explanation: You can compute a standard deviation error from pcov :
End of explanation
R2 = np.sum((f_model(df.x, a_opt, c_opt) - df.y.mean())**2) / np.sum((df.y - df.y.mean())**2)
print("r^2 = %10.6f" % R2)
Explanation: You can compute the determination coefficient with :
\begin{equation}
R^2 = \frac{\sum_k (y^{calc}_k - \overline{y})^2}{\sum_k (y_k - \overline{y})^2}
\end{equation}
End of explanation
df["model"] = f_model(df.x, a_opt, c_opt)
df.head()
ax = df.plot(
x="x", y="y",
kind="line", yerr="Dy", title="Some experimetal data",
linestyle="", marker=".",
capthick=1, ecolor="gray", linewidth=1
)
ax = df.plot(
x="x", y="model",
kind="line", ax=ax, linewidth=1
)
Explanation: Make a plot
Now, see the results on a plot :
End of explanation
x = np.linspace(0, 20, 200)
ax = df.plot(
x="x", y="y",
kind="line", yerr="Dy", title="Some experimetal data",
linestyle="", marker=".",
capthick=1, ecolor="gray", linewidth=1
)
ax.plot(x, f_model(x, a_opt, c_opt), linewidth=1)
Explanation: Or using more x values for the model, in order to get a smoother curve :
End of explanation
nval = len(df)
del df["model"]
Dx = [np.random.normal(0.3, 0.2) for i in range(nval)]
df["Dx"] = Dx
df.head()
ax = df.plot(
x="x", y="y",
kind="line", yerr="Dy", xerr="Dx",
title="Some experimetal data",
linestyle="", marker=".",
capthick=1, ecolor="gray", linewidth=1
)
Explanation: Uncertainties on both x and y
x and y are called the independent (or explanatory) and the dependent (the response) variables, respectively. As in the above example, uncertainties are often only taken into account on the response variable (y). Here, we will do the same fit but with uncertainties on both the x and y variables.
In least-squares approaches one minimizes, for each value of x, the distance between the response of the model and the data. Because you do this for each specific value of x, you cannot include the x uncertainties. In order to include them, we will use an orthogonal distance regression (ODR) approach.
Look at this stackoverflow question from which the following was written.
Add x uncertainties
Add, artificially, random normal uncertainties on x.
End of explanation
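Schematically (a sketch of the idea rather than scipy.odr's exact internal objective), ODR fits the parameters $\beta$ by also allowing a shift $\delta_k$ of each $x_k$ and minimizing
$$\sum_k \left[ \dfrac{\left(f(x_k + \delta_k, \beta) - y_k\right)^2}{\sigma_{y,k}^2} + \dfrac{\delta_k^2}{\sigma_{x,k}^2} \right]$$
so that both error bars enter the fit, whereas the least-squares criterion above only weights the residuals in y.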
def fxy_model(beta, x):
a, c = beta
return pd.np.log((a + x)**2 / (x - c)**2)
Explanation: Make the fits
1) Define the model
The model function has to be defined in a slightly different way. The first argument (called beta here) must be the list of the parameters :
End of explanation
data = RealData(df.x, df.y, df.Dx, df.Dy)
model = Model(fxy_model)
Explanation: Define the data and the model
End of explanation
odr = ODR(data, model, [3, -2])
odr.set_job(fit_type=2)
lsq_output = odr.run()
print("Iteration 1:")
print("------------")
print(" stop reason:", lsq_output.stopreason)
print(" params:", lsq_output.beta)
print(" info:", lsq_output.info)
print(" sd_beta:", lsq_output.sd_beta)
print("sqrt(diag(cov):", np.sqrt(np.diag(lsq_output.cov_beta)))
# if convergence is not reached, run again the algorithm
if lsq_output.info != 1:
print("\nRestart ODR till convergence is reached")
i = 1
while lsq_output.info != 1 and i < 100:
print("restart", i)
lsq_output = odr.restart()
i += 1
print(" stop reason:", lsq_output.stopreason)
print(" params:", lsq_output.beta)
print(" info:", lsq_output.info)
print(" sd_beta:", lsq_output.sd_beta)
print("sqrt(diag(cov):", np.sqrt(np.diag(lsq_output.cov_beta)))
a_lsq, c_lsq = lsq_output.beta
print(" ODR(lsq) curve_fit")
print("------------------------------")
print("a = %12.7f %12.7f" % (a_lsq, a_opt))
print("c = %12.7f %12.7f" % (c_lsq, c_opt))
Explanation: 2) Run the algorithms
Two calculations will be done :
fit_type=2 is a least squares approach and considers only the y uncertainties.
fit_type=0 is the explicit ODR
For each calculation, we make a first iteration and check whether convergence is reached with output.info. If not, we run the algorithm at most 100 more times until convergence is reached.
First you can see that the least squares approach gives the same results as the curve_fit function used above.
End of explanation
odr = ODR(data, model, [3, -2])
odr.set_job(fit_type=0)
odr_output = odr.run()
print("Iteration 1:")
print("------------")
print(" stop reason:", odr_output.stopreason)
print(" params:", odr_output.beta)
print(" info:", odr_output.info)
print(" sd_beta:", odr_output.sd_beta)
print("sqrt(diag(cov):", np.sqrt(np.diag(odr_output.cov_beta)))
# if convergence is not reached, run again the algorithm
if odr_output.info != 1:
print("\nRestart ODR till convergence is reached")
i = 1
while odr_output.info != 1 and i < 100:
print("restart", i)
odr_output = odr.restart()
i += 1
print(" stop reason:", odr_output.stopreason)
print(" params:", odr_output.beta)
print(" info:", odr_output.info)
print(" sd_beta:", odr_output.sd_beta)
print("sqrt(diag(cov):", np.sqrt(np.diag(odr_output.cov_beta)))
# Print the results and compare to least square
a_odr, c_odr = odr_output.beta
print("\n ODR(lsq) curve_fit True ODR")
print("--------------------------------------------")
print("a = %12.7f %12.7f %12.7f" % (a_lsq, a_opt, a_odr))
print("c = %12.7f %12.7f %12.7f" % (c_lsq, c_opt, c_odr))
Explanation: Now the explicit ODR approach with fit_type=0.
End of explanation
x = np.linspace(0, 20, 200)
ax = df.plot(
x="x", y="y",
kind="line", yerr="Dy", xerr="Dx",
title="Some experimetal data",
linestyle="", marker=".",
capthick=1, ecolor="gray", linewidth=1
)
ax.plot(x, f_model(x, a_lsq, c_lsq), linewidth=1, label="least square")
ax.plot(x, f_model(x, a_odr, c_odr), linewidth=1, label="ODR")
ax.legend(fontsize=14, frameon=True)
Explanation: Plot the results
Plot the different results.
End of explanation |
4,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab Part II
Step1: Our model will be like this
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Solutions | Python Code:
import math
import pickle as p
import tensorflow as tf
import numpy as np
import utils
import json
Explanation: Lab Part II: RNN Sentiment Classifier
In the previous lab, you built a tweet sentiment classifier with a simple feedforward neural network. Now we ask you to improve on this by representing each tweet as a sequence of words and classifying it with a recurrent neural network.
First import some things:
End of explanation
# set variables
tweet_size = 20
hidden_size = 100
vocab_size = 7597
batch_size = 64
# this just makes sure that all our following operations will be placed in the right graph.
tf.reset_default_graph()
# create a session variable that we can run later.
session = tf.Session()
Explanation: Our model will be like this:
We feed the words one by one into LSTM layers. After feeding in all the words, we take the final state of the LSTM and run it through one fully connected layer to multiply it by a final set of weights. We specify that this fully connected layer should have a single output, which, once sigmoid-ed, is the probability that the tweet is positive!
Step 1: Set up our Model Parameters
Similarly to the last lab, we'll be training using batches. Our hidden layer will have 100 units, and we have 7597 words in the vocabulary.
End of explanation
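As a quick shape sanity check using the constants above (a worked example, not extra required code): each one-hot batch of tweets has shape [batch_size, tweet_size, vocab_size] = [64, 20, 7597], the final LSTM hidden state has shape [64, hidden_size] = [64, 100], and the last fully connected layer maps that to one logit per tweet, i.e. shape [64].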
# the placeholder for tweets has first dimension batch_size for each tweet in a batch,
# second dimension tweet_size for each word in the tweet, and third dimension vocab_size
# since each word itself is represented by a one-hot vector of size vocab_size.
# Note that we use 'None' instead of batch_size for the first dimsension. This allows us
# to deal with variable batch sizes
tweets = tf.placeholder(tf.float32, [None, tweet_size, vocab_size])
'''TODO: create a placeholder for the labels (our predictions).
This should be a 1D vector with size = None,
since we are predicting one value for each tweet in the batch,
but we want to be able to deal with variable batch sizes.''';
labels = #todo
Explanation: Step 2: Create Placeholders
We need to create placeholders for variable data that we will feed in ourselves (aka our tweets). Placeholders allow us to incorporate this data into the graph even though we don't know what it is yet.
End of explanation
'''TODO: create an LSTM Cell using BasicLSTMCell. Note that this creates a *layer* of LSTM
cells, not just a single one.''';
lstm_cell = #todo
'''TODO: create three LSTM layers by wrapping three instances of
lstm_cell from above in tf.contrib.rnn.MultiRNNCell. Note that
you can create multiple cells by doing [lstm_cell] * 2. Also note
that you should use state_is_tuple=True as an argument. This will allow
us to access the part of the cell state that we need later on.''';
multi_lstm_cells = #todo
'''TODO: define the operation to create the RNN graph across time.
tf.nn.dynamic_rnn dynamically constructs the graph when it is executed,
and returns the final cell state.''';
_, final_state = #todo
Explanation: Step 3: Build the LSTM Layers
We want to feed the input sequence, word by word, into an LSTM layer, or multiple LSTM layers (we could also call this an LSTM encoder). At each "timestep", we feed in the next word, and the LSTM updates its cell state. The final LSTM cell state can then be fed through a final classification layer(s) to get our sentiment prediction.
Now let's make our LSTM layer. The steps for this are:
1. Create an LSTM Cell using tf.contrib.rnn.LSTMCell
Wrap a couple of these cells in tf.nn.rnn_cell.MultiRNNCell to create multiple LSTM layers.
Define the operation to run these layers with dynamic_rnn.
End of explanation
## We define this function that creates a weight matrix + bias parameter
## and uses them to do a matrix multiplication.
def linear(input_, output_size, name, init_bias=0.0):
shape = input_.get_shape().as_list()
with tf.variable_scope(name):
W = tf.get_variable("weights", [shape[-1], output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(shape[-1])))
if init_bias is None:
return tf.matmul(input_, W)
with tf.variable_scope(name):
b = tf.get_variable("bias", [output_size], initializer=tf.constant_initializer(init_bias))
return tf.matmul(input_, W) + b
'''TODO: pass the final state into this linear function to multiply it
by the weights and add bias to get our output.
{Quick note that we need to feed in final_state[-1][-1] into linear since
final_state is actually a tuple consisting of the cell state
(used internally for the cell to keep track of things)
as well as the hidden state (the output of the cell), and one of these
tuples for each layer. We want the hidden state for the last layer, so we use
final_state[-1][-1]}''';
sentiment = #todo
Explanation: Step 4: Classification Layer
Now we have the final state of the LSTM layers after feeding in the tweet word by word. We can take this final state and feed it into a simple classfication layer that takes the cell state, multiplies it by some weight matrix (with bias) and outputs a single value corresponding to whether it thinks the tweet is overall positive or not.
End of explanation
sentiment = tf.squeeze(sentiment, [1])
'''TODO: define our loss function.
We will use tf.nn.sigmoid_cross_entropy_with_logits, which will apply a sigmoid to our
prediction logits (sentiment from above) and compare the result to the ground truth (labels).''';
loss = #todo
# our loss with sigmoid_cross_entropy_with_logits gives us a loss for each
# example in the batch. We take the mean of all these losses.
loss = tf.reduce_mean(loss)
# to get actual results like 'positive' or 'negative' ,
# we round the prediction probability to 0 or 1.
prediction = tf.to_float(tf.greater_equal(tf.nn.sigmoid(sentiment), 0.5))
# calculate the error based on which predictions were actually correct.
pred_err = tf.to_float(tf.not_equal(prediction, labels))
pred_err = tf.reduce_sum(pred_err)
Explanation: Step 5: Define Loss
Now we define a loss function that we'll use to determine the difference between what we predicted and what's actually correct. We'll want to use cross entropy, since we can take into account what probability the model gave to the a tweet being positive.
The output we just got from the linear classification layer is called a 'logit' -- the raw value before transforming it into a probability between 0 and 1. We can feed these logits to tf.nn.sigmoid_cross_entropy_with_logits, which will take the sigmoid of these logits (making them between 0 and 1) and then calculate the cross-entropy with the ground truth labels.
End of explanation
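For reference, for a label y in {0, 1} and a logit z this loss is -[y * log(sigmoid(z)) + (1 - y) * log(1 - sigmoid(z))]; TensorFlow evaluates it in a numerically stable form rather than literally applying the sigmoid first.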
'''Define the operation that specifies the AdamOptimizer and tells
it to minimize the loss.''';
optimizer = #todo
Explanation: Step 6: Train
Now we define the operation that actually changes the weights by minimizing the loss.
tf.train.AdamOptimizer is a gradient descent variant that adapts the learning rate per parameter, which usually makes it converge faster and more reliably.
We specify this optimizer and then call its minimize function on the loss we defined above, so the resulting training op knows exactly what to minimize.
End of explanation
# initialize any variables
tf.global_variables_initializer().run(session=session)
# load our data and separate it into tweets and labels
train_data = json.load(open('data/trainTweets_preprocessed.json', 'r'))
train_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),train_data))
train_tweets = np.array([t[0] for t in train_data])
train_labels = np.array([int(t[1]) for t in train_data])
test_data = json.load(open('data/testTweets_preprocessed.json', 'r'))
test_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),test_data))
# we are just taking the first 1000 things from the test set for faster evaluation
test_data = test_data[0:1000]
test_tweets = np.array([t[0] for t in test_data])
one_hot_test_tweets = utils.one_hot(test_tweets, vocab_size)
test_labels = np.array([int(t[1]) for t in test_data])
# we'll train with batches of size 128. This means that we run
# our model on 128 examples and then do gradient descent based on the loss
# over those 128 examples.
num_steps = 1000
for step in range(num_steps):
# get data for a batch
offset = (step * batch_size) % (len(train_data) - batch_size)
batch_tweets = utils.one_hot(train_tweets[offset : (offset + batch_size)], vocab_size)
batch_labels = train_labels[offset : (offset + batch_size)]
# put this data into a dictionary that we feed in when we run
# the graph. this data fills in the placeholders we made in the graph.
data = {tweets: batch_tweets, labels: batch_labels}
# run the 'optimizer', 'loss', and 'pred_err' operations in the graph
_, loss_value_train, error_value_train = session.run(
[optimizer, loss, pred_err], feed_dict=data)
# print stuff every 50 steps to see how we are doing
if (step % 50 == 0):
print("Minibatch train loss at step", step, ":", loss_value_train)
print("Minibatch train error: %.3f%%" % error_value_train)
# get test evaluation
test_loss = []
test_error = []
for batch_num in range(int(len(test_data)/batch_size)):
test_offset = (batch_num * batch_size) % (len(test_data) - batch_size)
test_batch_tweets = one_hot_test_tweets[test_offset : (test_offset + batch_size)]
test_batch_labels = test_labels[test_offset : (test_offset + batch_size)]
data_testing = {tweets: test_batch_tweets, labels: test_batch_labels}
loss_value_test, error_value_test = session.run([loss, pred_err], feed_dict=data_testing)
test_loss.append(loss_value_test)
test_error.append(error_value_test)
print("Test loss: %.3f" % np.mean(test_loss))
print("Test error: %.3f%%" % np.mean(test_error))
Explanation: Step 7: Run Session!
Now that we've made all the variable and operations in our graph, we can load the data, feed it in, and run the model!
End of explanation
import math
import pickle as p
import tensorflow as tf
import numpy as np
import utils
# set variables
tweet_size = 20
hidden_size = 100
vocab_size = 7597
batch_size = 64
# this just makes sure that all our following operations will be placed in the right graph.
tf.reset_default_graph()
# create a session variable that we can run later.
session = tf.Session()
# make placeholders for data we'll feed in
tweets = tf.placeholder(tf.float32, [None, tweet_size, vocab_size])
labels = tf.placeholder(tf.float32, [None])
# make the lstm cells, and wrap them in MultiRNNCell for multiple layers
lstm_cell = tf.contrib.rnn.LSTMCell(hidden_size)
multi_lstm_cells = tf.contrib.rnn.MultiRNNCell(cells=[lstm_cell] * 2, state_is_tuple=True)
# define the op that runs the LSTM, across time, on the data
_, final_state = tf.nn.dynamic_rnn(multi_lstm_cells, tweets, dtype=tf.float32)
# a useful function that takes an input and what size we want the output
# to be, and multiples the input by a weight matrix plus bias (also creating
# these variables)
def linear(input_, output_size, name, init_bias=0.0):
shape = input_.get_shape().as_list()
with tf.variable_scope(name):
W = tf.get_variable("weight_matrix", [shape[-1], output_size], tf.float32, tf.random_normal_initializer(stddev=1.0 / math.sqrt(shape[-1])))
if init_bias is None:
return tf.matmul(input_, W)
with tf.variable_scope(name):
b = tf.get_variable("bias", [output_size], initializer=tf.constant_initializer(init_bias))
return tf.matmul(input_, W) + b
# define that our final sentiment logit is a linear function of the final state
# of the LSTM
sentiment = linear(final_state[-1][-1], 1, name="output")
sentiment = tf.squeeze(sentiment, [1])
# define cross entropy loss function
loss = tf.nn.sigmoid_cross_entropy_with_logits(logits=sentiment, labels=labels)
loss = tf.reduce_mean(loss)
# round our actual probabilities to compute error
prob = tf.nn.sigmoid(sentiment)
prediction = tf.to_float(tf.greater_equal(prob, 0.5))
pred_err = tf.to_float(tf.not_equal(prediction, labels))
pred_err = tf.reduce_sum(pred_err)
# define our optimizer to minimize the loss
optimizer = tf.train.AdamOptimizer().minimize(loss)
# initialize any variables
tf.global_variables_initializer().run(session=session)
# load our data and separate it into tweets and labels
train_data = json.load(open('data/trainTweets_preprocessed.json', 'r'))
train_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),train_data))
train_tweets = np.array([t[0] for t in train_data])
train_labels = np.array([int(t[1]) for t in train_data])
test_data = json.load(open('data/testTweets_preprocessed.json', 'r'))
test_data = list(map(lambda row:(np.array(row[0],dtype=np.int32),str(row[1])),test_data))
# we are just taking the first 1000 things from the test set for faster evaluation
test_data = test_data[0:1000]
test_tweets = np.array([t[0] for t in test_data])
one_hot_test_tweets = utils.one_hot(test_tweets, vocab_size)
test_labels = np.array([int(t[1]) for t in test_data])
# we'll train with batches of size 128. This means that we run
# our model on 128 examples and then do gradient descent based on the loss
# over those 128 examples.
num_steps = 1000
for step in range(num_steps):
# get data for a batch
offset = (step * batch_size) % (len(train_data) - batch_size)
batch_tweets = utils.one_hot(train_tweets[offset : (offset + batch_size)], vocab_size)
batch_labels = train_labels[offset : (offset + batch_size)]
# put this data into a dictionary that we feed in when we run
# the graph. this data fills in the placeholders we made in the graph.
data = {tweets: batch_tweets, labels: batch_labels}
# run the 'optimizer', 'loss', and 'pred_err' operations in the graph
_, loss_value_train, error_value_train = session.run(
[optimizer, loss, pred_err], feed_dict=data)
# print stuff every 50 steps to see how we are doing
if (step % 50 == 0):
print("Minibatch train loss at step", step, ":", loss_value_train)
print("Minibatch train error: %.3f%%" % error_value_train)
# get test evaluation
test_loss = []
test_error = []
for batch_num in range(int(len(test_data)/batch_size)):
test_offset = (batch_num * batch_size) % (len(test_data) - batch_size)
test_batch_tweets = one_hot_test_tweets[test_offset : (test_offset + batch_size)]
test_batch_labels = test_labels[test_offset : (test_offset + batch_size)]
data_testing = {tweets: test_batch_tweets, labels: test_batch_labels}
loss_value_test, error_value_test = session.run([loss, pred_err], feed_dict=data_testing)
test_loss.append(loss_value_test)
test_error.append(error_value_test)
print("Test loss: %.3f" % np.mean(test_loss))
print("Test error: %.3f%%" % np.mean(test_error))
Explanation: Solutions
End of explanation |
4,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Download Test Data for XBRAIN
This notebook walks through how to download the test data necessary for the demo
Last Update
Step1: Methods
Step2: Parameters to pass to Google Drive
Step3: Unzip the data to a directory | Python Code:
# imports
import requests
import zipfile
import os
Explanation: Download Test Data for XBRAIN
This notebook walks through how to download the test data necessary for the demo
Last Update: 10/12/2017
End of explanation
# Methods to pull from google drive
def download_file_from_google_drive(id, destination):
URL = "https://docs.google.com/uc?export=download"
session = requests.Session()
response = session.get(URL, params = { 'id' : id }, stream = True)
token = get_confirm_token(response)
if token:
params = { 'id' : id, 'confirm' : token }
response = session.get(URL, params = params, stream = True)
save_response_content(response, destination)
def get_confirm_token(response):
for key, value in response.cookies.items():
if key.startswith('download_warning'):
return value
return None
def save_response_content(response, destination):
CHUNK_SIZE = 32768 # stream the download in 32 KB chunks so the whole file never has to be held in memory at once
with open(destination, "wb") as f:
for chunk in response.iter_content(CHUNK_SIZE):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
Explanation: Methods
End of explanation
# download the zip
file_id = '0Bx1nyj-4aTk9RnNnS3JfRDZ6MVU' # the ID to the file on Google Drive
destination = 'public_data.zip'
# pass in the arguments
download_file_from_google_drive(file_id, destination)
Explanation: Parameters to pass to Google Drive
End of explanation
# unzip the data
zip_ref = zipfile.ZipFile(destination, 'r')
zip_ref.extractall(os.getcwd())
zip_ref.close()
Explanation: Unzip the data to a directory
End of explanation |
4,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook contains code to train a fully connected neural network on MNIST using tf.contrib.learn. At the end is a short exercise.
Step1: Import the dataset
Step2: There are 55k examples in train, and 10k in eval. You may wish to limit the size to experiment faster.
Step3: Display some digits
Step4: These digits are clearly drawn. Here's one that's not.
Step5: Now let's take a look at how many features we have.
Step6: Fit a Linear Classifier
Our goal here is to get about 90% accuracy with this simple classifier.
Step7: Evaluate accuracy
Step8: Classify a few examples
We can make predictions on individual images as well. Note
Step9: Visualize learned weights
Let's see if we can reproduce the pictures of the weights in the TensorFlow Basic MNIST <a href="https
Step10: Exercise
Step11: Has our accuracy improved? | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import tensorflow as tf
learn = tf.contrib.learn
tf.logging.set_verbosity(tf.logging.ERROR)
Explanation: This notebook contains code to train a fully connected neural network on MNIST using tf.contrib.learn. At the end is a short exercise.
End of explanation
mnist = learn.datasets.load_dataset('mnist')
data = mnist.train.images
labels = np.asarray(mnist.train.labels, dtype=np.int32)
test_data = mnist.test.images
test_labels = np.asarray(mnist.test.labels, dtype=np.int32)
Explanation: Import the dataset
End of explanation
max_examples = 10000
data = data[:max_examples]
labels = labels[:max_examples]
Explanation: There are 55k examples in train, and 10k in eval. You may wish to limit the size to experiment faster.
End of explanation
def display(i):
img = test_data[i]
plt.title('Example %d. Label: %d' % (i, test_labels[i]))
plt.imshow(img.reshape((28,28)), cmap=plt.cm.gray_r)
display(0)
display(1)
Explanation: Display some digits
End of explanation
display(8)
Explanation: These digits are clearly drawn. Here's one that's not.
End of explanation
print(len(data[0]))
Explanation: Now let's take a look at how many features we have.
End of explanation
feature_columns = learn.infer_real_valued_columns_from_input(data)
classifier = learn.LinearClassifier(feature_columns=feature_columns, n_classes=10)
classifier.fit(data, labels, batch_size=100, steps=1000)
Explanation: Fit a Linear Classifier
Our goal here is to get about 90% accuracy with this simple classifier.
End of explanation
classifier.evaluate(test_data, test_labels)["accuracy"]
Explanation: Evaluate accuracy
End of explanation
# here's one it gets right
print ("Predicted %d, Label: %d" % (list(classifier.predict(test_data[0:1]))[0], test_labels[0]))
display(0)
# and one it gets wrong
print ("Predicted %d, Label: %d" % (list(classifier.predict(test_data[8:9]))[0], test_labels[8]))
display(8)
Explanation: Classify a few examples
We can make predictions on individual images as well. Note: the predict method accepts an array of samples as input, and returns a generator.
End of explanation
weights = classifier.weights_
f, axes = plt.subplots(2, 5, figsize=(10,4))
axes = axes.reshape(-1)
for i in range(len(axes)):
a = axes[i]
a.imshow(weights.T[i].reshape(28, 28), cmap=plt.cm.seismic)
a.set_title(i)
a.set_xticks(()) # ticks be gone
a.set_yticks(())
plt.show()
Explanation: Visualize learned weights
Let's see if we can reproduce the pictures of the weights in the TensorFlow Basic MNIST <a href="https://www.tensorflow.org/tutorials/mnist/beginners/index.html#mnist-for-ml-beginners">tutorial</a>.
End of explanation
# Build 2 layer DNN with 128, 32 units respectively.
# Play with these parameters to see if you can do better
# How? See https://www.tensorflow.org/versions/r0.12/tutorials/tflearn/index.html#tf-contrib-learn-quickstart
Explanation: Exercise: switch the estimator to a DNN
End of explanation
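One possible way to fill in the cell above (a sketch only -- the layer sizes are just a starting point, and it reuses the feature_columns defined earlier; it also rebinds classifier so the evaluation cell below scores the DNN):
classifier = learn.DNNClassifier(feature_columns=feature_columns, hidden_units=[128, 32], n_classes=10)
classifier.fit(data, labels, batch_size=100, steps=1000)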
classifier.evaluate(test_data, test_labels)["accuracy"]
Explanation: Has our accuracy improved?
End of explanation |
4,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NumPy is the foundation of data analysis in Python; much of the Python data-analysis stack is built on top of it.
1 ndarray
The whole of NumPy is built on the ndarray (n-dimensional array), a multidimensional array whose elements all share the same type.
+ dtype
the data type of the elements
+ shape
a tuple of N positive integers, one per dimension, giving the size of each dimension
+ axes
each dimension of the array is called an axis, and the number of axes is the rank
NumPy arrays have a fixed size: once created, the shape cannot grow or shrink in place
Step1: Creating a two-dimensional array
Step2: Arrays can be created from lists, from tuples, or from a mix of lists and tuples
Step3: NumPy also ships with several array-creation helpers
Step4: 2 Operations
Arithmetic operations
Step5: Mathematical functions
Step6: As the output above shows, the common operators $+ \space - \space \times $ are applied element by element
The matrix product
Matrix multiplication: $$C=A \times B$$
where $A_{m \times n},B_{n \times p}$, so the result is $C_{m \times p}$ with $$C_{ij}=\sum_{k=1}^{n}A_{ik}*B_{kj}$$
Step7: In-place (increment) operators
Step8: Universal functions
A universal function operates on every element of an array
Step9: Aggregate functions
These operate on an array and return a single value
Step10: 3 Indexing, slicing and iterating
Indexing
Indexing is done with square brackets []
1 One-dimensional arrays
Step11: Pass several indices inside the brackets to select multiple values at once
Step12: 2 Two-dimensional arrays
Step13: Slicing
Slicing extracts part of an array to form a new array. Slicing a Python list returns a copy of the original, whereas in NumPy the slice is a view onto the original data.
Step14: Slice syntax: omitting the first element means starting from 0; omitting the second means going up to the largest index; omitting the third means a step of 1.
Slicing a two-dimensional array works the same way, except that you give a slice for each dimension.
Step15: For non-contiguous indices, put the desired indices in an array
Step16: Iterating over arrays
As in plain Python, you can process the data with a for loop, and NumPy supports the same pattern
Step17: If you do not want to traverse the array with nested loops, you can do the following
Step18: NumPy also offers other ways to iterate over an array, such as the apply_along_axis function, which takes three arguments: an aggregate function, the axis along which to apply it, and the array.
Step19: You can of course also apply a custom function
Step20: 4 Conditions and Boolean arrays
Select a subset of the elements of an array according to a condition
Step21: 5 Shape manipulation
The reshape() function changes the shape of an array and returns a new array
Step22: To change the shape without producing a new array, modify the array's shape attribute directly
Step23: 6 Array manipulation
Joining arrays
The vstack() function stacks arrays vertically, so the result grows downwards, while hstack() stacks them horizontally, so the result grows to the right.
Step24: 7 Splitting arrays
Splitting is the inverse of joining, and likewise comes in the two flavours hsplit() and vsplit()
Step25: The split function is more flexible: it can split an array into unequal parts and takes the indices at which to split; axis=0 splits by rows, axis=1 splits by columns
Step26: 8 Other concepts
Copies and views
None of the NumPy assignment or slicing operations creates a copy of an array or of its elements; if you want a copy, use the copy() function
Step27: Broadcasting
Broadcasting lets operations and functions act on two or more arrays, provided that, dimension by dimension:
1. the two arrays have the same length, or
2. where they differ, one of them has length 1
Step28: Array A is $4 \times 4$ while B is a one-dimensional array of length 4, yet together they satisfy the broadcasting rules
Step29: 9 Structured arrays
The data type of an array is not limited to the built-in types; you can define custom data types that describe records (structures) rather than single scalar elements, by passing a dtype object
bytes ---> b1
int ---> i1,i2,i4,i8
unsigned ints ---> u1,u2,u4,u8
floats ---> f2,f4,f8
complex ---> c8,c16
fixed-length strings ---> a<n>
Step30: Reading and writing array files
Reading and writing binary files
Step31: Reading and writing CSV files
data.csv
id,value1,value2,value3
1,123,1.4,23
2,110,0.5,18
3,164,2.1,19 | Python Code:
import numpy as np
a = np.array([1,2,3])
print a
type(a)
type(a)
a.dtype
print a.ndim
print a.size
print a.shape
Explanation: NumPy is the foundation of data analysis in Python; much of the Python data-analysis stack is built on top of it.
1 ndarray
The whole of NumPy is built on the ndarray (n-dimensional array), a multidimensional array whose elements all share the same type.
+ dtype
the data type of the elements
+ shape
a tuple of N positive integers, one per dimension, giving the size of each dimension
+ axes
each dimension of the array is called an axis, and the number of axes is the rank
NumPy arrays have a fixed size: once created, the shape cannot grow or shrink in place
End of explanation
b = np.array([[1.3,2.4],[0.3,4.1]])
print b.dtype
print b.ndim
print b.size
print b.shape
Explanation: Creating a two-dimensional array
End of explanation
import numpy as np
c = np.array([[1,2,3],[4,5,6]])
d = np.array([(7,8,9),(10,11,12)])
e = np.array([[13,14],(12,31),[12,21]])
print c
print d
print e
Explanation: Arrays can be created from lists, from tuples, or from a mix of lists and tuples
End of explanation
np.zeros((3,3))
np.ones((3,3))
np.arange(4,10)
np.arange(0,12,3)
np.arange(0,6,0.6)
np.arange(0,12).reshape(3,4)
np.linspace(0,10,5)
np.linspace(0,10,5,endpoint=True)
np.linspace(0,10,5,endpoint=False)
np.random.random(3)
np.random.random((3,4))
Explanation: NumPy also ships with several array-creation helpers
End of explanation
import numpy as np
a = np.arange(4)
print a
print a+4
print a*2
b = np.arange(4,8)
print b
print a + b
print a - b
print a * b
Explanation: 2 Operations
Arithmetic operations
End of explanation
print a * np.sin(b)
print a * np.sqrt(b)
Explanation: Mathematical functions
End of explanation
A = np.arange(0,9).reshape(3,3)
B = np.ones((3,3))
print A.dot(B)
print np.dot(A,B)
Explanation: As the output above shows, the common operators $+ \space - \space \times $ are applied element by element
The matrix product
Matrix multiplication: $$C=A \times B$$
where $A_{m \times n},B_{n \times p}$, so the result is $C_{m \times p}$ with $$C_{ij}=\sum_{k=1}^{n}A_{ik}*B_{kj}$$
End of explanation
a = np.arange(4)
print a
a +=1
print a
a -=2
print a
Explanation: In-place (increment) operators
End of explanation
a = np.arange(1,5)
print a
print np.sqrt(a)
print np.log(a)
print np.sin(a)
Explanation: Universal functions
A universal function operates on every element of an array
End of explanation
import numpy as np
a = np.array([3.3,4.5,5.7,0.3])
print a.sum()
print a.max()
print a.min()
print a.std()
Explanation: Aggregate functions
These operate on an array and return a single value
End of explanation
a = np.arange(10,16)
print a
print a[1]
print a[-1]
Explanation: 3 Indexing, slicing and iterating
Indexing
Indexing is done with square brackets []
1 One-dimensional arrays
End of explanation
print a[[1,3,4]]
Explanation: Pass several indices inside the brackets to select multiple values at once
End of explanation
A = np.arange(10,19).reshape((3,3))
print A
print A[1,2]
Explanation: 2 Two-dimensional arrays
End of explanation
a = range(5)
print a
a_a = a[1:3]
a_a[0]=10
print a_a
print a
b = np.arange(10,16)
b_b1 = b[1:5]
print b_b1
b_b1[0]=20
print b_b1
print b
Explanation: Slicing
Slicing extracts part of an array to form a new array. Slicing a Python list returns a copy of the original, whereas in NumPy the slice is a view onto the original data.
End of explanation
A = np.arange(10,19).reshape((3,3))
print A
print A[0,:]
print A[0:2,0:2]
Explanation: Slice syntax: omitting the first element means starting from 0; omitting the second means going up to the largest index; omitting the third means a step of 1.
Slicing a two-dimensional array works the same way, except that you give a slice for each dimension.
End of explanation
print A[[0,2],0:2]
Explanation: For non-contiguous indices, put the desired indices in an array
End of explanation
for row in A:
print row
Explanation: Iterating over arrays
As in plain Python, you can process the data with a for loop, and NumPy supports the same pattern
End of explanation
for item in A.flat:
print item,
Explanation: If you do not want to traverse the array with nested loops, you can do the following
End of explanation
# axis=0 applies the function column by column
print np.apply_along_axis(np.mean,axis=0,arr=A)
# axis=1 applies the function row by row
print np.apply_along_axis(np.mean,axis=1,arr=A)
Explanation: NumPy also offers other ways to iterate over an array, such as the apply_along_axis function, which takes three arguments: an aggregate function, the axis along which to apply it, and the array.
End of explanation
def half(x):
return x/2
print np.apply_along_axis(half,axis=0,arr=A)
print np.apply_along_axis(half,axis=1,arr=A)
Explanation: You can of course also apply a custom function
End of explanation
A = np.random.random((4,4))
A
A <0.5
# select the elements that satisfy the condition
A[A<0.5]
Explanation: 4 Conditions and Boolean arrays
Select a subset of the elements of an array according to a condition
End of explanation
a = np.random.random(12)
print a
A = a.reshape((3,4))
print A
print a
Explanation: 5 Shape manipulation
The reshape() function changes the shape of an array and returns a new array
End of explanation
a.shape=(4,3)
a
Explanation: To change the shape without producing a new array, modify the array's shape attribute directly
End of explanation
A = np.ones((3,3))
B = np.zeros((3,3))
print np.vstack((A,B))
print np.hstack((A,B))
Explanation: 6 Array manipulation
Joining arrays
The vstack() function stacks arrays vertically, so the result grows downwards, while hstack() stacks them horizontally, so the result grows to the right.
End of explanation
A = np.arange(16).reshape((4,4))
print A
# split into two halves along the horizontal direction (by columns)
B,C=np.hsplit(A,2)
print B
print C
B,C=np.vsplit(A,2)
print B
print C
Explanation: 7 Splitting arrays
Splitting is the inverse of joining, and likewise comes in the two flavours hsplit() and vsplit()
End of explanation
A1,A2,A3 = np.split(A,[1,3],axis=1)
print A1
print A2
print A3
A1,A2,A3 = np.split(A,[1,3],axis=0)
print A1
print A2
print A3
Explanation: The split function is more flexible: it can split an array into unequal parts and takes the indices at which to split; axis=0 splits by rows, axis=1 splits by columns
End of explanation
a = np.arange(4)
b = a
b[0]=10
a
c = a[0:2]
c[0]=-1
a
d=a.copy()
d[0]=-10
a
Explanation: 8 Other concepts
Copies and views
None of the NumPy assignment or slicing operations creates a copy of an array or of its elements; if you want a copy, use the copy() function
End of explanation
A = np.arange(16).reshape((4,4))
B = np.arange(4)
print A
print B
Explanation: Broadcasting
Broadcasting lets operations and functions act on two or more arrays, provided that, dimension by dimension:
1. the two arrays have the same length, or
2. where they differ, one of them has length 1
End of explanation
A+B
Explanation: Array A is $4 \times 4$ while B is a one-dimensional array of length 4, yet together they satisfy the broadcasting rules
End of explanation
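A small extra sketch (assuming the same A as above): a column vector broadcasts along the other axis in just the same way.
C = np.arange(4).reshape((4,1))
print A + C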
structured = np.array([(1,'first',0.5,1+2j),(2,'second',1.3,2-2j),(3,'third',0.8,1+3j)],dtype=('i2,a6,f4,c8'))
structured
structured[0]
# access a whole field (column) by its auto-generated name
structured['f1']
# give each field (column) an explicit name
structured = np.array([(1,'first',0.5,1+2j),(2,'second',1.3,2-2j),(3,'third',0.8,1+3j)],dtype=[('id','i2'),('position','a6'),('value','f4'),('complex','c8')])
structured
structured.dtype.names=('id','order','value','complex')
structured['order']
Explanation: 9 Structured arrays
The data type of an array is not limited to the built-in types; you can define custom data types that describe records (structures) rather than single scalar elements, by passing a dtype object
bytes ---> b1
int ---> i1,i2,i4,i8
unsigned ints ---> u1,u2,u4,u8
floats ---> f2,f4,f8
complex ---> c8,c16
fixed-length strings ---> a<n>
End of explanation
data = np.random.random((4,4))
data
np.save('saved_data',data)
loaded_data=np.load('saved_data.npy')
loaded_data
Explanation: Reading and writing array files
Reading and writing binary files
End of explanation
import numpy as np
data = np.genfromtxt('data.csv',delimiter=',',names=True)
data
data['id']
Explanation: Reading and writing CSV files
data.csv
id,value1,value2,value3
1,123,1.4,23
2,110,0.5,18
3,164,2.1,19
End of explanation |
4,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2018-01-12 / FMA sub-sampling
Problem statement
Step1: Binary thresholding
Step2:
Step3: Multi-valued thresholds | Python Code:
import numpy as np
import pandas as pd
import entrofy
import matplotlib.pyplot as plt
%matplotlib nbagg
df = pd.read_csv('/home/bmcfee/data/vggish-likelihoods-a226b3-maxagg10.csv.gz', index_col=0)
df.head(5)
(df >= 0.5).describe().T.sort_values('freq')
df.median()
Explanation: 2018-01-12 / FMA sub-sampling
Problem statement:
Input:
C csv files
each file has n rows. Each row in file c encodes the prediction for class c on a 1sec segment.
A target number k
Target fractions for class representations p[c].
Output:
A set of k clips, each 10 seconds in duration
Aggregate predicted likelihoods for each class c on each clip k
Each class c has aggregate likelihood at least p[c] * k
Method:
drop edge effects from the beginning and end of tracks: remove the first and last frames from each track.
window the frame observations into 10sec clips with aggregate labels
threshold the aggregate likelihoods to binarize the representation
subsample the 10sec clips using entrofy
Questions:
How should likelihoods be aggregated within a segment?
Mean? Max? Quartile?
Mean makes sense from the perspective of random frame sampling
Quartile makes sense wrt sparse events
Max makes sense wrt extremely sparse events
How should likelihoods be thresholded? 0.5? Empirical average over X?
$p[y] = \sum_x p[y|x] * p[x] \approx \sum_{x \in X} p[y|x] /|X| $
But that doesn't matter really. Threshold should be bayes optimal (=> 0.5)
What's the target number of positives per class k * p[c]?
Maybe that should be determined by the base rate estimation p[y]?
Next step: Question scheduling on CF.
Idea: cluster the tracks according to aggregated likelihood vectors
Or maybe by their thresholded likelihoods?
Set the number of clusters to be relatively large (say, 23^2 ~= 512)
When generating questions for an annotator, assign them to a cluster and only generate questions from that cluster
Reasoning: this will keep the labels consistent from one question to the next
UPDATE:
Windowing and aggregation is happening upstream of this
Aggregation is max over the middle 8 frames
2018-01-19
Eric has provided the per-fragment aggregated estimates as one giant table
So what are our entrofy parameters?
attribute thresholds
Do we only do <>0.5?
Or break likelihood into quartiles?
Sounds like quartiles are the way to go
target proportions per class?
we can try to preserve the empirical distribution
or a biased distribution achieved by grouping on the track ids?
or uniform?
Uniform across quartiles for each instrument
output set size?
20-50 positives per instrument?
say, 16 * 4 * n_classes
Maybe round up to 1K to start
If we only want one example per track, we can make an aux categorical column that's the track index, and set the target number to 1
2018-02-02
Turns out we didn't get the data transferred in time on 01/19, so still waiting
output set size: 500-1000 positives per class
try both hard threshold and quartile sampling
End of explanation
N_OUT = 23 * 100
mappers = {col: entrofy.mappers.ContinuousMapper(df[col],
prefix=col,
n_out=2,
boundaries=[0.0, 0.5, 1.0]) for col in df}
idx, score = entrofy.entrofy(df, N_OUT, mappers=mappers,
seed=20180205,
quantile=0.05,
n_trials=10)
df.loc[idx].head(10)
(df.loc[idx] >= 0.5).describe().T.sort_values('freq')
Explanation: Binary thresholding
End of explanation
!pwd
idx.to_series().to_json('subsample_idx.json')
Explanation:
End of explanation
mappers = {col: entrofy.mappers.ContinuousMapper(df[col], n_out=4,
boundaries=[0.0, 0.25, 0.5, 0.75, 1.0]) for col in df}
idx, score = entrofy.entrofy(df, 1000, mappers=mappers, n_trials=100)
Explanation: Multi-valued thresholds
End of explanation |
4,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step1: Download and prepare the dataset
We'll use a language dataset provided by http
Step2: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data)
Step3: Create a tf.data dataset
Step4: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism which is then used by the decoder to predict the next word in the sentence.
<img src="https
Step5: Define the optimizer and the loss function
Step6: Checkpoints (Object-based saving)
Step7: Training
Pass the input through the encoder, which returns the encoder output and the encoder hidden state.
The encoder output, the encoder hidden state and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients, apply them to the optimizer and backpropagate.
Step8: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note
Step9: Restore the latest checkpoint and test | Python Code:
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.10 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import time
print(tf.__version__)
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using tf.keras and eager execution. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
End of explanation
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
    # so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return word_pairs
# This class creates a word -> index mapping (e.g., "dad" -> 5) and vice-versa
# (e.g., 5 -> "dad") for each language,
class LanguageIndex():
def __init__(self, lang):
self.lang = lang
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for phrase in self.lang:
self.vocab.update(phrase.split(' '))
self.vocab = sorted(self.vocab)
self.word2idx['<pad>'] = 0
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1
for word, index in self.word2idx.items():
self.idx2word[index] = word
def max_length(tensor):
return max(len(t) for t in tensor)
def load_dataset(path, num_examples):
# creating cleaned input, output pairs
pairs = create_dataset(path, num_examples)
# index language using the class defined above
inp_lang = LanguageIndex(sp for en, sp in pairs)
targ_lang = LanguageIndex(en for en, sp in pairs)
# Vectorize the input and target languages
# Spanish sentences
input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]
# English sentences
target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]
# Calculate max_length of input and output tensor
# Here, we'll set those to the longest sentence in the dataset
max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)
# Padding the input and output tensor to the maximum length
input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
maxlen=max_length_inp,
padding='post')
target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
maxlen=max_length_tar,
padding='post')
return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
Explanation: Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
Pad each sentence to a maximum length.
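To make the preprocessing concrete, here is a small, hypothetical usage sketch of the helpers above (the exact first pair depends on the downloaded file):

```python
# Inspect one cleaned pair and one hand-picked sentence (illustrative output only).
en, sp = create_dataset(path_to_file, 1)[0]
print(en)   # e.g. "<start> go . <end>"
print(sp)   # e.g. "<start> ve . <end>"
print(preprocess_sentence(u"¿Puedo tomar prestado este libro?"))
# expected form: "<start> ¿ puedo tomar prestado este libro ? <end>"
```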
End of explanation
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)
Explanation: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
End of explanation
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
N_BATCH = BUFFER_SIZE//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word2idx)
vocab_tar_size = len(targ_lang.word2idx)
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
Explanation: Create a tf.data dataset
End of explanation
def gru(units):
    # If you have a GPU, we recommend using CuDNNGRU (it provides a 3x speedup over GRU);
# the code automatically does that.
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.enc_units)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.dec_units)
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.W1 = tf.keras.layers.Dense(self.dec_units)
self.W2 = tf.keras.layers.Dense(self.dec_units)
self.V = tf.keras.layers.Dense(1)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# we are doing this to perform addition to calculate the score
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, max_length, hidden_size)
score = tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size * 1, vocab)
x = self.fc(output)
return x, state, attention_weights
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.dec_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
Explanation: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size).
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
We're using Bahdanau attention. Let's decide on notation before writing the simplified form:
FC = Fully connected (dense) layer
EO = Encoder output
H = hidden state
X = input to the decoder
And the pseudo-code:
score = FC(tanh(FC(EO) + FC(H)))
attention weights = softmax(score, axis = 1). Softmax by default is applied on the last axis but here we want to apply it on the 1st axis, since the shape of score is (batch_size, max_length, hidden_size). Max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
context vector = sum(attention weights * EO, axis = 1). Same reason as above for choosing axis as 1.
embedding output = The input to the decoder X is passed through an embedding layer.
merged vector = concat(embedding output, context vector)
This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
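As a quick, hypothetical sanity check of the axis=1 choice, the attention weights for each sequence should sum to one across the input positions:

```python
# Illustrative only: random scores stand in for the real attention scores.
scores = tf.random_normal((BATCH_SIZE, max_length_inp, 1))
weights = tf.nn.softmax(scores, axis=1)
print(tf.reduce_sum(weights, axis=1)[:2])   # ~1.0 for every sequence in the batch
```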
End of explanation
optimizer = tf.train.AdamOptimizer()
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
Explanation: Define the optimizer and the loss function
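The mask in loss_function zeroes out the contribution of padded positions (label id 0). A small, hypothetical sanity check of that behaviour:

```python
real = tf.constant([[5, 3, 0, 0]])               # last two positions are <pad>
pred = tf.random_normal((1, 4, vocab_tar_size))  # fake logits, for illustration only
print(loss_function(real[:, 1], pred[:, 1]))     # real token -> non-zero loss
print(loss_function(real[:, 2], pred[:, 2]))     # padded position -> masked to 0.0
```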
End of explanation
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
Explanation: Checkpoints (Object-based saving)
End of explanation
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
total_loss += batch_loss
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
# saving (checkpoint) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / N_BATCH))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
Explanation: Training
Pass the input through the encoder, which returns the encoder output and the encoder hidden state.
The encoder output, encoder hidden state and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients via backpropagation and apply them to the optimizer.
End of explanation
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
        # storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.idx2word[predicted_id] + ' '
if targ_lang.idx2word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
plt.show()
def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
Explanation: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note: The encoder output is calculated only once for one input.
End of explanation
# restoring the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate('hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate('esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate('¿todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
# wrong translation
translate('trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
Explanation: Restore the latest checkpoint and test
End of explanation |
4,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Breast Cancer Proliferation Scores with Apache Spark and Apache SystemML
Machine Learning
Setup
Step1: Read in train & val data
Step2: Extract X and Y matrices
Step4: Convert to SystemML Matrices
Note
Step6: Trigger Caching (Optional)
Note
Step8: Save Matrices (Optional)
Step10: Softmax Classifier
Sanity Check
Step12: Train
Step14: Eval
Step16: LeNet-like ConvNet
Sanity Check
Step18: Hyperparameter Search
Step20: Train
Step22: Eval | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import matplotlib.pyplot as plt
import numpy as np
from pyspark.sql.functions import col, max
import systemml # pip3 install systemml
from systemml import MLContext, dml
plt.rcParams['figure.figsize'] = (10, 6)
ml = MLContext(sc)
Explanation: Predicting Breast Cancer Proliferation Scores with Apache Spark and Apache SystemML
Machine Learning
Setup
End of explanation
# Settings
size=64
grayscale = True
c = 1 if grayscale else 3
p = 0.01
tr_sample_filename = os.path.join("data", "train_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
val_sample_filename = os.path.join("data", "val_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
train_df = sqlContext.read.load(tr_sample_filename)
val_df = sqlContext.read.load(val_sample_filename)
train_df, val_df
tc = train_df.count()
vc = val_df.count()
tc, vc, tc + vc
train_df.select(max(col("__INDEX"))).show()
train_df.groupBy("tumor_score").count().show()
val_df.groupBy("tumor_score").count().show()
Explanation: Read in train & val data
End of explanation
# Note: Must use the row index column, or X may not
# necessarily correspond correctly to Y
X_df = train_df.select("__INDEX", "sample")
X_val_df = val_df.select("__INDEX", "sample")
y_df = train_df.select("__INDEX", "tumor_score")
y_val_df = val_df.select("__INDEX", "tumor_score")
X_df, X_val_df, y_df, y_val_df
Explanation: Extract X and Y matrices
End of explanation
script = """
# Scale images to [-1,1]
X = X / 255
X_val = X_val / 255
X = X * 2 - 1
X_val = X_val * 2 - 1
# One-hot encode the labels
num_tumor_classes = 3
n = nrow(y)
n_val = nrow(y_val)
Y = table(seq(1, n), y, n, num_tumor_classes)
Y_val = table(seq(1, n_val), y_val, n_val, num_tumor_classes)
"""
outputs = ("X", "X_val", "Y", "Y_val")
script = dml(script).input(X=X_df, X_val=X_val_df, y=y_df, y_val=y_val_df).output(*outputs)
X, X_val, Y, Y_val = ml.execute(script).get(*outputs)
X, X_val, Y, Y_val
Explanation: Convert to SystemML Matrices
Note: This allows for reuse of the matrices on multiple
subsequent script invocations with only a single
conversion. Additionally, since the underlying RDDs
backing the SystemML matrices are maintained, any
caching will also be maintained.
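As a small, hypothetical illustration of that reuse, the converted matrices can be passed straight into further scripts without re-converting the DataFrames:

```python
# Illustrative only: count the rows of the already-converted SystemML matrix X.
sanity = dml("n = nrow(X); print(n)").input(X=X)
ml.execute(sanity)
```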
End of explanation
# script =
# # Trigger conversions and caching
# # Note: This may take a while, but will enable faster iteration later
# print(sum(X))
# print(sum(Y))
# print(sum(X_val))
# print(sum(Y_val))
#
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val)
# ml.execute(script)
Explanation: Trigger Caching (Optional)
Note: This will take a while and is not necessary, but doing it
once will speed up the training below. Otherwise, the cost of
caching will be spread across the first full loop through the
data during training.
End of explanation
# script =
# write(X, "data/X_"+p+"_sample_binary", format="binary")
# write(Y, "data/Y_"+p+"_sample_binary", format="binary")
# write(X_val, "data/X_val_"+p+"_sample_binary", format="binary")
# write(Y_val, "data/Y_val_"+p+"_sample_binary", format="binary")
#
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, p=p)
# ml.execute(script)
Explanation: Save Matrices (Optional)
End of explanation
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 50
epochs = 500
log_interval = 1
n = 200 # sample size for overfitting sanity check
# Train
[W, b] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
Explanation: Softmax Classifier
Sanity Check: Overfit Small Portion
End of explanation
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 5e-7 # learning rate
mu = 0.5 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 50
epochs = 1
log_interval = 10
# Train
[W, b] = clf::train(X, Y, X_val, Y_val, lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
Explanation: Train
End of explanation
script = """
source("softmax_clf.dml") as clf
# Eval
probs = clf::predict(X, W, b)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, W, b)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val, W=W, b=b).output(*outputs)
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
Explanation: Eval
End of explanation
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
lambda = 0 #5e-04
batch_size = 50
epochs = 300
log_interval = 1
dir = "models/lenet-cnn/sanity/"
n = 200 # sample size for overfitting sanity check
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
Explanation: LeNet-like ConvNet
Sanity Check: Overfit Small Portion
End of explanation
script = """
source("convnet.dml") as clf
dir = "models/lenet-cnn/hyperparam-search/"
# TODO: Fix `parfor` so that it can be efficiently used for hyperparameter tuning
j = 1
while(j < 2) {
#parfor(j in 1:10000, par=6) {
# Hyperparameter Sampling & Settings
lr = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # learning rate
mu = as.scalar(rand(rows=1, cols=1, min=0.5, max=0.9)) # momentum
decay = as.scalar(rand(rows=1, cols=1, min=0.9, max=1)) # learning rate decay constant
lambda = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # regularization constant
batch_size = 50
epochs = 1
log_interval = 10
trial_dir = dir + "j/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, trial_dir)
# Eval
#probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
#[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
# Save hyperparams
str = "lr: " + lr + ", mu: " + mu + ", decay: " + decay + ", lambda: " + lambda + ", batch_size: " + batch_size
name = dir + accuracy_val + "," + j #+","+accuracy+","+j
write(str, name)
j = j + 1
}
"""
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size))
ml.execute(script)
Explanation: Hyperparameter Search
End of explanation
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 0.00205 # learning rate
mu = 0.632 # momentum
decay = 0.99 # learning rate decay constant
lambda = 0.00385
batch_size = 50
epochs = 1
log_interval = 10
dir = "models/lenet-cnn/train/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
Explanation: Train
End of explanation
script = """
source("convnet.dml") as clf
# Eval
probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size,
Wc1=Wc1, bc1=bc1,
Wc2=Wc2, bc2=bc2,
Wc3=Wc3, bc3=bc3,
Wa1=Wa1, ba1=ba1,
Wa2=Wa2, ba2=ba2)
.output(*outputs))
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
Explanation: Eval
End of explanation |
4,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-3', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: BNU
Source ID: SANDBOX-3
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
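For example (hypothetical values only, not a statement about this model), a completed ENUM cell of cardinality 1.N could read:

```python
DOC.set_value("prescribed")
DOC.set_value("function of ice age")
```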
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so, details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
4,768 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Implementing a denoising autoencoder
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the HW page on the course website.
In this exercise we will develop a denoising autoencoder, and test it out on the MNIST dataset.
Step2: We will use the class DenoisingAutoencoder in the file METU/denoising_autoencoder.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
Step3: Forward pass
Step4: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check
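The numeric check compares the analytic gradients against finite differences; the provided eval_numerical_gradient helper does this for you, but a minimal sketch of the underlying idea is:

```python
def numeric_grad(f, w, h=1e-5):
    # Central differences: df/dw_i ~ (f(w + h*e_i) - f(w - h*e_i)) / (2h)
    grad = np.zeros_like(w)
    it = np.nditer(w, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        old = w[i]
        w[i] = old + h; fp = f(w)
        w[i] = old - h; fm = f(w)
        w[i] = old
        grad[i] = (fp - fm) / (2 * h)
        it.iternext()
    return grad
```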
Step5: Train the network
To train the network we will use stochastic gradient descent (SGD). Look at the function DenoisingAutoencoder.train_with_SGD and fill in the missing sections to implement the training procedure. This should be very similar to the training procedures you used in the first HW.
Once you have implemented the method, run the code below to train the network on toy data. You should achieve a training loss less than 2.0.
Step6: Load the data
Now that you have implemented a DAE network that passes gradient checks and works on toy data, it's time to load up the MNIST dataset so we can use it to train DAE on a real dataset. Make sure that you have run "cs231n/datasets/get_datasets.sh" script before you continue with this step.
Step7: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
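Conceptually (a rough, self-contained sketch, not the actual train_with_SGD internals), one parameter update with classic momentum plus an exponential per-epoch decay looks like:

```python
w, v, lr, mu, decay = 5.0, 0.0, 0.4, 0.9, 0.95   # hypothetical values
for epoch in range(100):
    grad = 2 * w             # gradient of the toy objective f(w) = w**2
    v = mu * v - lr * grad   # classic momentum update
    w = w + v
    lr *= decay              # exponential learning rate decay per epoch
print(round(w, 4))           # w has decayed close to 0
```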
Step8: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized. | Python Code:
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
    """ returns relative error """
    return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
Explanation: Implementing a denoising autoencoder
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the HW page on the course website.
In this exercise we will develop a denoising autoencoder, and test it out on the MNIST dataset.
End of explanation
from METU.denoising_autoencoder import DenoisingAutoencoder
from METU.Noise import Noise, GaussianNoise
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 2
num_inputs = 100
# Outputs are equal to the inputs
network_size = (input_size, hidden_size, input_size)
def init_toy_model(num_inputs, input_size):
np.random.seed(0)
net = DenoisingAutoencoder((input_size, hidden_size, input_size))
net.init_weights()
return net
def init_toy_data(num_inputs, input_size):
np.random.seed(1)
X = np.random.randn(num_inputs, input_size)
return X
net = init_toy_model(num_inputs, input_size)
X = init_toy_data(num_inputs, input_size)
print "Ok, now we have a toy network"
Explanation: We will use the class DenoisingAutoencoder in the file METU/denoising_autoencoder.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
End of explanation
loss,_ = net.loss(GaussianNoise(0.5)(X), X, reg=3e-3, activation_function='sigmoid')
correct_loss = 2.42210627243
print 'Your loss value:' + str(loss)
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
Explanation: Forward pass: compute loss
Open the file METU/denoising_autoencoder.py and look at the method DenoisingAutoencoder.loss. This function is very similar to the loss functions you have written in the first HW: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass which uses the weights and biases to compute the scores for the corrupted input. In the same function, implement the second part that computes the data and the regularization losses.
End of explanation
from METU.gradient_check import eval_numerical_gradient
reg = 3e-3
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
net.init_weights()
noisy_X = GaussianNoise(0.5)(X)
loss, grads = net.loss(noisy_X, X, reg, activation_function='tanh')
# these should all be less than 1e-5 or so
f = lambda W: net.loss(noisy_X, X, reg, activation_function='tanh')[0]
W1_grad = eval_numerical_gradient(f, net.weights[1]['W'], verbose=False)
print '%s max relative error: %e' % ("W1", rel_error(W1_grad, grads[1]['W']))
W0_grad = eval_numerical_gradient(f, net.weights[0]['W'], verbose=False)
print '%s max relative error: %e' % ("W0", rel_error(W0_grad, grads[0]['W']))
b1_grad = eval_numerical_gradient(f, net.weights[1]['b'], verbose=False)
print '%s max relative error: %e' % ("b1", rel_error(b1_grad, grads[1]['b']))
b0_grad = eval_numerical_gradient(f, net.weights[0]['b'], verbose=False)
print '%s max relative error: %e' % ("b0", rel_error(b0_grad, grads[0]['b']))
Explanation: Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
End of explanation
net = init_toy_model(num_inputs, input_size)
reg = 3e-3
stats = net.train_with_SGD(X, noise=GaussianNoise(sd=0.5),
learning_rate=0.02, learning_rate_decay=0.95,
reg=reg, batchsize=100, num_iters=500, verbose=False, activation_function='sigmoid')
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
Explanation: Train the network
To train the network we will use stochastic gradient descent (SGD). Look at the function DenoisingAutoencoder.train_with_SGD and fill in the missing sections to implement the training procedure. This should be very similar to the training procedures you used in the first HW.
Once you have implemented the method, run the code below to train the network on toy data. You should achieve a training loss less than 2.0.
End of explanation
from cs231n.data_utils import load_mnist
X_train, y_train, X_val, y_val, X_test, y_test = load_mnist()
X_train = X_train.reshape(X_train.shape[0], -1)
X_val = X_val.reshape(X_val.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
#Visualize some samples
x = np.reshape(X_train[100], (28,28))
plt.imshow(x)
plt.title(y_train[0])
plt.show()
plt.imshow(GaussianNoise(rate=0.5,sd=0.5)(x))
plt.show()
# Yes, DAE will learn to reconstruct from such corrupted data
Explanation: Load the data
Now that you have implemented a DAE network that passes gradient checks and works on toy data, it's time to load up the MNIST dataset so we can use it to train DAE on a real dataset. Make sure that you have run "cs231n/datasets/get_datasets.sh" script before you continue with this step.
End of explanation
import time
input_size = 28 * 28
hidden_size = 300 # Try also sizes bigger than 28*28
reg = 0.003 # 3e-3
net = DenoisingAutoencoder((input_size, hidden_size, input_size))
net.init_weights()
# Train with SGD
tic = time.time()
stats = net.train_with_SGD(X_train, noise=GaussianNoise(rate=0.5,sd=0.5),
learning_rate=0.4, learning_rate_decay=0.99,
reg=reg, num_iters=1000, batchsize=128, momentum='classic', mu=0.9, verbose=True,
activation_function='sigmoid')
toc = time.time()
print toc-tic, 'sec elapsed'
Explanation: Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
End of explanation
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.show()
#from cs231n.vis_utils import visualize_grid
#from cs231n.vis_utils import visualize_grid_2D
# SHOW SOME WEIGHTS
W0 = net.weights[0]['W']
W0 = W0.T
num_of_samples=100
for i in range(0,10):
for j in range(0,10):
plt.subplot(10, 10, i*10+j+1)
rand_index = np.random.randint(0,W0.shape[0]-1,1)
plt.imshow(W0[rand_index].reshape(28,28))
plt.axis('off')
plt.show()
# SHOW SOME RECONSTRUCTIONS
plt_index=1
for i in range(0,10):
rand_index = np.random.randint(0,X_train.shape[0]-1,1)
x = X_train[rand_index]
x_noisy = GaussianNoise(rate=0.5,sd=0.5)(x)
x_recon = net.predict(x_noisy)
#x_loss,_ = net.loss(x_noisy, x, reg=0.0, activation_function='sigmoid')
plt.subplot(10,3,plt_index)
plt.imshow(x.reshape(28,28))
plt.axis('off')
if i == 0: plt.title('input')
plt_index+=1
plt.subplot(10,3,plt_index)
plt.imshow(x_noisy.reshape(28,28))
plt.axis('off')
if i == 0: plt.title('corrupted input')
plt_index+=1
plt.subplot(10,3,plt_index)
plt.imshow(x_recon.reshape(28,28))
plt.axis('off')
if i == 0: plt.title('reconstruction')
plt_index+=1
Explanation: Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
End of explanation |
4,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automated finite difference operators from symbolic equations
This notebook is the first in a series of hands-on tutorial notebooks that are intended to give a brief practical overview of the Devito finite difference framework. We will present an overview of the symbolic layers of Devito and solve a set of small computational science problems that covers a range of partial differential equations (PDEs).
But before we start, let's import Devito and a few SymPy utilities
Step1: From equation to stencil code in a few lines of Python
Today's objective is to demonstrate how Devito and its SymPy-powered symbolic API can be used to solve partial differential equations using the finite difference method with highly optimized stencils in a few lines of Python. We will show how to derive computational stencils directly from the equation in an automated fashion and how we can use Devito to generate and execute optimized C code at runtime to solve our problem.
Defining the physical domain
Before we can start creating stencils we will need to give Devito a few details about the computational domain in which we want to solve our problem. For this purpose we create a Grid object that stores the physical extent (the size) of our domain and knows how many points we want to use in each dimension to discretize our data.
<img src="figures/grid.png" style="width
Step2: Functions and data
To express our equation in symbolic form and discretize it using finite differences, Devito provides a set of Function types. A Function object created from these does two things
Step3: Ok, let's create a function $f(x, y)$ and look at the data Devito has associated with it. Please note that it is important to use explicit keywords, such as name or grid when creating Devitos Function objects.
Step4: By default Devito's Function objects will use the spatial dimensions (x, y) for 2D grids and (x, y, z) for 3D grids. To solve a PDE for several timesteps, we need a time dimension for our symbolic function. For this Devito provides a second function type, TimeFunction, that provides the correct dimension and some other intricacies needed to create a time stepping scheme.
Step5: What does the shape of the associated data look like? Can you guess why?
<button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>Solution</button>
<div id="sol1" class="collapse">
```
The shape is (2, 5, 6). Devito has allocated two buffers to represent g(t, x, y) and g(t + dt, x, y).
```
## Exercise 1
Step6: Next, we want to discretize our governing equation so that we can create a functional Operator from it. We can start by simply writing out the equation as a symbolic expression, while using the shorthand expressions for derivatives that the Function objects provide. This will create a symbolic object of the discretized equation.
Can you write out the governing equation using the Devito shorthand expressions? Remember, the governing equation is given as
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$
<button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>Solution</button>
<div id="sol3" class="collapse">
```
eq = Eq(u.dt + c * u.dxl + c * u.dyl)
eq
```
As we can see, SymPy has kindly resolved our derivatives. Next, we need to rearrange our equation so that the term $u(t+dt, x, y)$ is on the left-hand side, since it represents the next point in time for our state variable $u$. We can use a SymPy utility called `solve` to rearrange our equation for us, so that it represents a valid state update for $u$.
Can you use `solve` to create a valid stencil for our update to $u(t+dt, x, y)$? Hint
Step7: Please note that the Operator is where all the Devito power is hidden, as it will automatically generate and compile optimized C stencil code. We can look at this code - although we don't need to execute it.
Step8: Second derivatives and high-order stencils
For the above example all we had to do was combine some first derivatives. However, lots of common scientific problems require second derivative, most notably any PDE including diffusion. To generate second order derivatives we need to give the devito.Function object another piece of information
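For instance (a small sketch of the extra argument being described), requesting a higher space_order unlocks higher derivatives:

```python
u2 = TimeFunction(name='u2', grid=grid, space_order=2)
u2.dx2   # second derivative in x, available once space_order >= 2
```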
Step9: We can arbitrarily drive the discretization order up if we require higher-order stencils.
Step10: To implement diffusion or wave equations, we need to take the Laplacian $\nabla^2 u$, which is simply the second derivative in all space dimensions. For this, Devito also provides a shorthand expression, which means we do not have to hard-code the problem dimension (2D or 3D) in the code. To change the problem dimension we can create another Grid object and use this to re-define our Functions.
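A short sketch of the dimension-independent shorthand (hypothetical 3D grid sizes):

```python
grid_3d = Grid(shape=(5, 6, 7), extent=(1., 1., 1.))
v = TimeFunction(name='v', grid=grid_3d, space_order=2)
v.laplace   # expands to the sum of second derivatives in all space dimensions
```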
Step11: Exercise 3
Step12: To remind ourselves, the governing equation we want to implement is
$$m \frac{\partial^2 u}{\partial t^2} = \nabla^2 u$$
Please have a go and try to implement the operator below. You will need to follow the same strategy to discretize the equation and create a symbolic stencil expression that updates $u(t + dt, x, y)$. Once we apply our Operator for nt timesteps we should see that the wave has expanded homogeneously.
Step13: <button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>Solution</button>
<div id="sol6" class="collapse">
```python
eqn = Eq(m * u.dt2 - u.laplace)
stencil = solve(eqn, u.forward)[0]
update = Eq(u.forward, stencil)
op = Operator(update)
op(t=nt, dt=dt)
```
Now, let's see what happens if we change the square slowness field `m` by increasing the wave speed to $2.5$ in the bottom half of the domain. | Python Code:
from devito import *
from sympy import init_printing, symbols, solve
init_printing(use_latex=True)
Explanation: Automated finite difference operators from symbolic equations
This notebook is the first in a series of hands-on tutorial notebooks that are intended to give a brief practical overview of the Devito finite difference framework. We will present an overview of the symbolic layers of Devito and solve a set of small computational science problems that covers a range of partial differential equations (PDEs).
But before we start, let's import Devito and a few SymPy utilities:
End of explanation
grid = Grid(shape=(5, 6), extent=(1., 1.))
grid
Explanation: From equation to stencil code in a few lines of Python
Today's objective is to demonstrate how Devito and its SymPy-powered symbolic API can be used to solve partial differential equations using the finite difference method with highly optimized stencils in a few lines of Python. We will show how to derive computational stencils directly from the equation in an automated fashion and how we can use Devito to generate and execute optimized C code at runtime to solve our problem.
Defining the physical domain
Before we can start creating stencils we will need to give Devito a few details about the computational domain in which we want to solve our problem. For this purpose we create a Grid object that stores the physical extent (the size) of our domain and knows how many points we want to use in each dimension to discretize our data.
<img src="figures/grid.png" style="width: 220px;"/>
End of explanation
?Function
Explanation: Functions and data
To express our equation in symbolic form and discretize it using finite differences, Devito provides a set of Function types. A Function object created from these does two things:
It behaves like a sympy.Function symbol
It manages data associated with the symbol
To get more information on how to create and use a Function object, or any type provided by Devito, we can use the magic function ? to look at its documentation from within our notebook.
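Outside a notebook, the plain-Python equivalent is the built-in help (a trivial sketch):
```python
from devito import Function

help(Function)  # same documentation, without IPython's ? magic
```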
End of explanation
f = Function(name='g', grid=grid)
f
f.data
Explanation: Ok, let's create a function $f(x, y)$ and look at the data Devito has associated with it. Please note that it is important to use explicit keywords, such as name or grid, when creating Devito's Function objects.
End of explanation
g = TimeFunction(name='g', grid=grid)
g
Explanation: By default Devito's Function objects will use the spatial dimensions (x, y) for 2D grids and (x, y, z) for 3D grids. To solve a PDE for several timesteps, we need a time dimension for our symbolic function. For this Devito provides a second function type, TimeFunction, that provides the correct dimension and some other intricacies needed to create a time stepping scheme.
End of explanation
from examples.cfd import init_smooth, plot_field
nt = 100 # Number of timesteps
dt = 0.2 * 2. / 80 # Timestep size (sigma=0.2)
c = 1 # Value for c
# Then we create a grid and our function
grid = Grid(shape=(81, 81), extent=(2., 2.))
u = TimeFunction(name='u', grid=grid)
# We can now set the initial condition and plot it
init_smooth(field=u.data[0], dx=grid.spacing[0], dy=grid.spacing[1])
init_smooth(field=u.data[1], dx=grid.spacing[0], dy=grid.spacing[1])
plot_field(u.data[0])
Explanation: What does the shape of the associated data look like? Can you guess why?
<button data-toggle="collapse" data-target="#sol1" class='btn btn-primary'>Solution</button>
<div id="sol1" class="collapse">
```
The shape is (2, 5, 6). Devito has allocated two buffers to represent g(t, x, y) and g(t + dt, x, y).
```
## Exercise 1: Derivatives of symbolic functions
The Devito functions we have created so far all act as `sympy.Function` objects, which means that we can form symbolic derivative expressions for them. Devito provides a set of shorthand expressions (implemented as Python properties) that allow us to generate finite differences in symbolic form. For example, the property `f.dx` denotes $\frac{\partial}{\partial x} f(x, y)$ - only that Devito has already discretized it with a finite difference expression. There are also a set of shorthand expressions for left (backward) and right (forward) derivatives:
| Derivative | Shorthand | Discretized | Stencil |
| ---------- |:---------:|:-----------:|:-------:|
| $\frac{\partial}{\partial x}f(x, y)$ (right) | `f.dxr` | $\frac{f(x+h_x,y)}{h_x} - \frac{f(x,y)}{h_x}$ | <img src="figures/stencil_forward.png" style="width: 180px;"/> |
| $\frac{\partial}{\partial x}f(x, y)$ (left) | `f.dxl` | $\frac{f(x,y)}{h_x} - \frac{f(x-h_x,y)}{h_x}$ | <img src="figures/stencil_backward.png" style="width: 180px;"/> |
A similar set of expressions exist for each spatial dimension defined on our grid, for example `f.dy` and `f.dyl`. For this exercise, please have a go at creating some derivatives and see if the resulting symbolic output matches what you expect.
Can you take similar derivatives in time using $g(t, x, y)$? Can you spot anything different? What does the shorthand `g.forward` denote?
<button data-toggle="collapse" data-target="#sol2" class='btn btn-primary'>Solution</button>
<div id="sol2" class="collapse">
```
The first derivative in time is g.dt, and g.forward represents the forward stencil point g(t + dt, x, y).
```
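For reference, a minimal sketch of these shorthands in use, re-creating small stand-in functions rather than reusing the notebook state:
```python
from devito import Grid, Function, TimeFunction

grid = Grid(shape=(5, 6), extent=(1., 1.))
f = Function(name='f', grid=grid)
g = TimeFunction(name='g', grid=grid)

print(f.dx)       # first derivative in x, discretized at f's space_order
print(f.dxl)      # left (backward) difference in x
print(g.dt)       # forward difference in time
print(g.forward)  # the symbol g(t + dt, x, y)
```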
## Exercise 2: A linear convection operator
**Note:** The following example is derived from [step 5](http://nbviewer.ipython.org/github/barbagroup/CFDPython/blob/master/lessons/07_Step_5.ipynb) of the tutorials in the excellent tutorial series [CFD Python: 12 steps to Navier-Stokes](http://lorenabarba.com/blog/cfd-python-12-steps-to-navier-stokes/).
In this simple example we will show how to derive a very simple convection operator from a high-level description of the governing equation. We will go through the process of deriving a discretized finite difference formulation of the state update for the field variable $u$, before creating a callable `Operator` object. Luckily, the automation provided by SymPy makes the derivation very nice and easy.
The governing equation we want to implement is the linear convection equation:
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$
Before we start, we need to define some parameters, such as the grid, the number of timesteps and the timestep size. We will also initialize our initial velocity field `u` with a smooth initial condition.
End of explanation
op = Operator(update)
op(time=nt+1, dt=dt)
plot_field(u.data[0])
Explanation: Next, we want to discretize our governing equation so that we can create a functional Operator from it. We can start by simply writing out the equation as a symbolic expression, while using the shorthand expressions for derivatives that the Function objects provide. This will create a symbolic object of the discretized equation.
Can you write out the governing equation using the Devito shorthand expressions? Remember, the governing equation is given as
$$\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} + c\frac{\partial u}{\partial y} = 0$$
<button data-toggle="collapse" data-target="#sol3" class='btn btn-primary'>Solution</button>
<div id="sol3" class="collapse">
```
eq = Eq(u.dt + c * u.dxl + c * u.dyl)
eq
```
As we can see, SymPy has kindly resolved our derivatives. Next, we need to rearrange our equation so that the term $u(t+dt, x, y)$ is on the left-hand side, since it represents the next point in time for our state variable $u$. We can use a SymPy utility called `solve` to rearrange our equation for us, so that it represents a valid state update for $u$.
Can you use `solve` to create a valid stencil for our update to $u(t+dt, x, y)$? Hint: `solve` always returns a list of potential solutions, even if there is only one.
Can you then create a SymPy `Eq` object to represent a valid state update for the variable $u$?
<button data-toggle="collapse" data-target="#sol4" class='btn btn-primary'>Solution</button>
<div id="sol4" class="collapse">
```
stencil = solve(eq, u.forward)[0]
update = Eq(u.forward, stencil)
update
```
The right-hand side of the update equation should be a stencil of the shape
<img src="figures/stencil_convection.png" style="width: 160px;"/>
Once we have created this update expression, we can create a Devito `Operator`. This `Operator` will basically behave like a Python function that we can call to apply the created stencil over our associated data, as long as we provide all necessary unknowns. In this case we need to provide the number of timesteps to compute via the keyword `time` and the timestep size to use via `dt` (both have been defined above).
End of explanation
print(op.ccode)
Explanation: Please note that the Operator is where all the Devito power is hidden, as it will automatically generate and compile optimized C stencil code. We can look at this code - although we don't need to execute it.
End of explanation
u = TimeFunction(name='u', grid=grid, space_order=2)
u.dx2
Explanation: Second derivatives and high-order stencils
For the above example all we had to do was combine some first derivatives. However, lots of common scientific problems require second derivatives, most notably any PDE including diffusion. To generate second-order derivatives we need to give the devito.Function object another piece of information: the desired discretization order of the stencils.
First, let's do a simple second derivative in $x$, for which we need to give $u$ at least a space_order of 2. The shorthand for the second derivative is then u.dx2.
End of explanation
u = TimeFunction(name='u', grid=grid, space_order=4)
u.dx2
Explanation: We can arbitrarily drive the discretization order up if we require higher-order stencils.
End of explanation
grid_3d = Grid(shape=(5, 6, 7), extent=(1., 1., 1.))
u = TimeFunction(name='u', grid=grid_3d, space_order=2)
u
Explanation: To implement diffusion or wave equations, we need to take the Laplacian $\nabla^2 u$, which is simply the second derivative in all space dimensions. For this, Devito also provides a shorthand expression, which means we do not have to hard-code the problem dimension (2D or 3D) in the code. To change the problem dimension we can create another Grid object and use this to re-define our Functions.
End of explanation
import numpy as np
from examples.seismic import plot_image
t0, tn, dt = 214., 400, 4.2 # Start, end and timestep size
nt = int(1 + (tn - t0) / dt) # Number of timesteps
# A 120x120 grid that defines our square domain
grid = Grid(shape=(120, 120), extent=(1800., 1800.))
# Load and plot the initial "warmed-up" wavefield
u = TimeFunction(name='u', grid=grid, space_order=2, time_order=2)
u.data[:] = np.load('wavefield.npy')
plot_image(u.data[0])
# Square slowness for a constant wave speed of 1.5 km/s
m = Function(name='m', grid=grid)
m.data[:] = 1. / 1.5**2
Explanation: Exercise 3: Higher order derivatives
We can re-define our function u with a different space_order argument to change the discretization order of the created stencil expression. Using the grid_3d object, can you derive an expression of the 12th-order Laplacian $\nabla^2 u$? What about the 16th-order stencil for the Laplacian?
Hint: Devito functions provide a .laplace shorthand expression that will work in 2D and 3D.
<button data-toggle="collapse" data-target="#sol5" class='btn btn-primary'>Solution</button>
<div id="sol5" class="collapse">
```
u = TimeFunction(name='u', grid=grid_3d, space_order=12)
u.laplace
```
## Exercise 4: Making a wave
In the final exercise of the introduction we will implement a simple wave equation operator similar to the ones used in seismic imaging. For this we will implement the isotropic wave equation without boundary conditions. The equation describes the propagation of a wave in an isotropic medium and is defined as
$$m \frac{\partial^2 u}{\partial t^2} = \nabla^2 u$$
where $m$ is the square slowness of the wave, defined in terms of the wave speed $c$ as $m = 1 / c^2$. For the purpose of this exercise, we will ignore any source terms and instead use a "warmed-up" wavefield from file.
In the cell below we define the time parameters of our simulation, as well as the spatial dimensions and the shape of our computational grid with a `Grid` object. Using this grid object we can define two functions:
* The wavefield $u(t, x, y)$ which we initialise from the file `wavefield.npy`
* The square slowness $m(x, y)$ which, for now we will keep constant, for $c = 1.5km/s$.
End of explanation
# Reset the wavefield, so that we can run the cell multiple times
u.data[:] = np.load('wavefield.npy')
# Please implement your wave equation operator here
plot_image(u.data[0])
Explanation: To remind ourselves, the governing equation we want to implement is
$$m \frac{\partial^2 u}{\partial t^2} = \nabla^2 u$$
Please have a go and try to implement the operator below. You will need to follow the same strategy to discretize the equation and create a symbolic stencil expression that updates $u(t + dt, x, y)$. Once we apply our Operator for nt timesteps we should see that the wave has expanded homogeneously.
End of explanation
m.data[:, 60:] = 1. / 2.5**2 # Set a new wave speed
plot_image(m.data)
u.data[:] = np.load('wavefield.npy') # Reset our wave field u
plot_image(u.data[0])
op(t=60, dt=dt)
plot_image(u.data[0])
Explanation: <button data-toggle="collapse" data-target="#sol6" class='btn btn-primary'>Solution</button>
<div id="sol6" class="collapse">
```python
eqn = Eq(m * u.dt2 - u.laplace)
stencil = solve(eqn, u.forward)[0]
update = Eq(u.forward, stencil)
op = Operator(update)
op(t=nt, dt=dt)
```
Now, let's see what happens if we change the square slowness field `m` by increasing the wave speed to $2.5$ in the bottom half of the domain.
End of explanation |
4,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Datasets
Step1: The default query terms are 'big data', 'data science' and 'machine learning'. The dictionary returned from the call contains the standard 'X' and 'Y' keys that are ready to be used in the GPy toolkit as inputs to the Gaussian process. In this case the 'X' variables are the time (first column) and an index representing the query.
Step2: So the 284th row of X is the 34th time point of query term 2, which in this case is the 34th time point of the 'machine learning' time series. The value of the time series at that point is given by the corresponding row of Y
Step3: The dictionary also contains a pandas data frame of the trend data, which is in line with what sahuguet originally returned.
Step4: And we can plot the trends data to see what the effect is.
Step5: Dogs, Cats and Rabbits
Another data set we might consider downloading from google trends is different pets. Below we consider cats, dogs and rabbits.
Step6: Here we've plotted the data in the same manner as sahuguet suggested in his original notebook, using the plotting facility of pandas.
Games Consoles
Finally we can try and compare different games console popularity. | Python Code:
import pods
%matplotlib inline
# calling without arguments uses the default query terms
data = pods.datasets.google_trends()
Explanation: Datasets: Downloading Data from Google Trends
28th May 2014
Neil Lawrence
This data set collection was inspired by an IPython notebook from sahuguet which made queries to Google Trends and downloaded the results. We've modified the download to cache the results of a query: making multiple calls to the Google API results in a block due to terms of service violations, and caching the data locally prevents this from happening.
End of explanation
print(data['X'][284, :])
Explanation: The default query terms are 'big data', 'data science' and 'machine learning'. The dictionary returned from the call contains the standard 'X' and 'Y' keys that are ready to be used in the GPy toolkit as inputs to the Gaussian process. In this case the 'X' variables are the time (first column) and an index representing the query.
End of explanation
print(data['Y'][284, :])
Explanation: So the 284th row of X is the 34th time point of query term 2, which in this case is the 34th time point of the 'machine learning' time series. The value of the time series at that point is given by the corresponding row of Y
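A quick way to double-check that interpretation programmatically (a hypothetical sketch; data is the dictionary loaded above, and column 1 of X is assumed to hold the query index):
```python
import numpy as np

# Rows of X belonging to query term 2 ('machine learning')
rows = np.where(data['X'][:, 1] == 2)[0]
print(rows[33], data['Y'][rows[33], :])  # its 34th time point and the value there
```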
End of explanation
data['data frame'].describe()
Explanation: The dictionary also contains a pandas data frame of the trend data, which is in line with what sahuguet originally returned.
End of explanation
data['data frame'].set_index('Date', inplace=True) # Set date column as index
data['data frame'].plot()
Explanation: And we can plot the trends data to see what the effect is.
End of explanation
data = pods.datasets.google_trends(['cats', 'dogs', 'rabbits'])
data['data frame'].set_index('Date', inplace=True)
data['data frame'].plot()
Explanation: Dogs, Cats and Rabbits
Another data set we might consider downloading from google trends is different pets. Below we consider cats, dogs and rabbits.
End of explanation
data = pods.datasets.google_trends(['xbox one', 'wii u', 'ps4'])
data['data frame'].set_index('Date', inplace=True)
data['data frame'].plot()
Explanation: Here we've plotted the data in the same manner as sahuguet suggested in his original notebook, using the plotting facility of pandas.
Games Consoles
Finally we can try and compare different games console popularity.
End of explanation |
4,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example for using the WindpowerlibTurbine model
The WindpowerlibTurbine model can be used to determine the feed-in of a wind turbine using the windpowerlib.
The windpowerlib is a python library for simulating the performance of wind turbines and farms. For more information about the model check the documentation of the windpowerlib.
The following example shows you how to use the WindpowerlibTurbine model.
Set up WindPowerPlant object
Get weather data
Calculate feed-in
Set up WindPowerPlant object <a class="anchor" id="windpowerplant_object"></a>
To calculate the feed-in using the WindpowerlibTurbine model you have to set up a WindPowerPlant object. You can import it as follows
Step1: The wind power plant must have all power plant parameters required by the WindpowerlibTurbine model. The required parameters can be looked up in the model's documentation.
The WindpowerlibTurbine model requires you to provide the turbine's hub height as well as the turbine's power curve or power coefficient curve. Alternatively to providing the curve(s) directly you can provide the turbine type which will retrieve the turbine's power and/or power coefficient curve from a wind turbine library provided along with the windpowerlib. For an overview of the provided wind turbines you can use the function get_power_plant_data().
Step2: Now you can set up a wind turbine to calculate feed-in for
Step3: Get weather data <a class="anchor" id="weather_data"></a>
Besides setting up your wind turbine you have to provide weather data the feed-in is calculated with.
This example uses open_FRED weather data. For more information on the data and download see the load_open_fred_weather_data Notebook.
Step4: Calculate feed-in <a class="anchor" id="feedin"></a>
The feed-in can be calculated by calling the WindPowerPlant's feedin method with the weather data.
Step5: Scaled feed-in
The wind turbine feed-in can also be automatically scaled by the turbine's nominal power.
Step6: The turbine's nominal power can be retrieved as follows
Step7: Feed-in with optional model parameters
In order to change the default calculation configurations of the WindpowerlibTurbine model to e.g. use the turbine's power coefficient curve instead of power curve you can pass further parameters to the feedin method. An overview of which further parameters may be provided is documented under the feedin method's kwargs. | Python Code:
from feedinlib import WindPowerPlant
Explanation: Example for using the WindpowerlibTurbine model
The WindpowerlibTurbine model can be used to determine the feed-in of a wind turbine using the windpowerlib.
The windpowerlib is a python library for simulating the performance of wind turbines and farms. For more information about the model check the documentation of the windpowerlib.
The following example shows you how to use the WindpowerlibTurbine model.
Set up WindPowerPlant object
Get weather data
Calculate feed-in
Set up WindPowerPlant object <a class="anchor" id="windpowerplant_object"></a>
To calculate the feed-in using the WindpowerlibTurbine model you have to set up a WindPowerPlant object. You can import it as follows:
End of explanation
from feedinlib import get_power_plant_data
# get wind turbines
turbine_df = get_power_plant_data(dataset='oedb_turbine_library')
# print the first four turbines
turbine_df.iloc[1:5, :]
Explanation: The wind power plant must have all power plant parameters required by the WindpowerlibTurbine model. The required parameters can be looked up in the model's documentation.
The WindpowerlibTurbine model requires you to provide the turbine's hub height as well as the turbine's power curve or power coefficient curve. Alternatively to providing the curve(s) directly you can provide the turbine type which will retrieve the turbine's power and/or power coefficient curve from a wind turbine library provided along with the windpowerlib. For an overview of the provided wind turbines you can use the function get_power_plant_data().
End of explanation
# set up wind turbine using the wind turbine library
turbine_data = {
'turbine_type': 'E-101/3050', # turbine name as in turbine library
'hub_height': 135 # in m
}
wind_turbine = WindPowerPlant(**turbine_data)
Explanation: Now you can set up a wind turbine to calculate feed-in for:
End of explanation
from feedinlib.open_FRED import Weather
from feedinlib.open_FRED import defaultdb
from shapely.geometry import Point
# specify latitude and longitude of wind turbine location
location = Point(13.5, 52.4)
# download weather data for June 2017
open_FRED_weather_data = Weather(
start='2017-06-01', stop='2017-07-01',
locations=[location],
heights=[140, 160],
variables="windpowerlib",
**defaultdb())
# get weather data in windpowerlib format
weather_df = open_FRED_weather_data.df(location=location, lib="windpowerlib")
# plot wind speed
import matplotlib.pyplot as plt
%matplotlib inline
weather_df.loc[:, ['wind_speed']].plot(title='Wind speed')
plt.xlabel('Time')
plt.ylabel('Wind speed in m/s');
Explanation: Get weather data <a class="anchor" id="weather_data"></a>
Besides setting up your wind turbine you have to provide weather data the feed-in is calculated with.
This example uses open_FRED weather data. For more information on the data and download see the load_open_fred_weather_data Notebook.
End of explanation
feedin = wind_turbine.feedin(
weather=weather_df)
# plot calculated feed-in
import matplotlib.pyplot as plt
%matplotlib inline
feedin.plot(title='Wind turbine feed-in')
plt.xlabel('Time')
plt.ylabel('Power in W');
Explanation: Calculate feed-in <a class="anchor" id="feedin"></a>
The feed-in can be calculated by calling the WindPowerPlant's feedin method with the weather data.
End of explanation
# calculate scaled feed-in
feedin_scaled = wind_turbine.feedin(
weather=weather_df,
scaling='nominal_power')
Explanation: Scaled feed-in
The wind turbine feed-in can also be automatically scaled by the turbine's nominal power.
End of explanation
wind_turbine.nominal_power
# plot calculated feed-in
import matplotlib.pyplot as plt
%matplotlib inline
feedin_scaled.plot(title='Scaled wind turbine feed-in')
plt.xlabel('Time')
plt.ylabel('Power in W');
Explanation: The turbine's nominal power can be retrieved as follows:
End of explanation
# use density corrected power curve to calculate feed-in
feedin_density_corrected = wind_turbine.feedin(
weather=weather_df,
density_correction=True)
# plot calculated feed-in
import matplotlib.pyplot as plt
%matplotlib inline
feedin_density_corrected.plot(title='Wind turbine feed-in', legend=True,
label='density corrected power curve')
feedin.plot(legend=True, label='power curve')
plt.xlabel('Time')
plt.ylabel('Power in W');
# use power coefficient curve to calculate feed-in
feedin_coefficient_curve = wind_turbine.feedin(
weather=weather_df,
power_output_model='power_coefficient_curve')
# plot calculated feed-in
import matplotlib.pyplot as plt
%matplotlib inline
feedin_coefficient_curve.plot(title='Wind turbine feed-in', legend=True,
label='power coefficient curve')
feedin.plot(legend=True, label='power curve')
plt.xlabel('Time')
plt.ylabel('Power in W');
Explanation: Feed-in with optional model parameters
In order to change the default calculation configurations of the WindpowerlibTurbine model to e.g. use the turbine's power coefficient curve instead of power curve you can pass further parameters to the feedin method. An overview of which further parameters may be provided is documented under the feedin method's kwargs.
End of explanation |
4,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summarizing Multiple Graphs Together
Author
Step1: Environment
Step2: Dependencies
Step3: Setup
Step4: Data
In this notebook, pickled instances of networks from the Causal Biological Networks database are used.
Step5: Processing
The graphs are combine with the union function, which retains all node and edges from each graph
Step6: The info_str function creates a short text summary of the network. The information is generated with info_json which is more useful programatically. | Python Code:
import os
import time
import sys
import pybel
import pybel_tools
from pybel_tools.summary import info_str
Explanation: Summarizing Multiple Graphs Together
Author: Charles Tapley Hoyt
Estimated Run Time: 45 seconds
This notebook shows how to combine multiple graphs from different sources and summarize them together. This might be useful during projects where multiple curators are creating BEL scripts that should be joined for scientific use, but for provenance, should be kept separate.
Imports
End of explanation
print(sys.version)
print(time.asctime())
Explanation: Environment
End of explanation
pybel.utils.get_version()
pybel_tools.utils.get_version()
Explanation: Dependencies
End of explanation
bms_base = os.environ['BMS_BASE']
human_dir = os.path.join(bms_base, 'cbn', 'Human-2.0')
mouse_dir = os.path.join(bms_base, 'cbn', 'Mouse-2.0')
rat_dir = os.path.join(bms_base, 'cbn', 'Rat-2.0')
Explanation: Setup
End of explanation
%%time
graphs = []
for d in (human_dir, mouse_dir, rat_dir):
for p in os.listdir(d):
if not p.endswith('gpickle'):
continue
path = os.path.join(d, p)
g = pybel.from_pickle(path)
graphs.append(g)
len(graphs)
Explanation: Data
In this notebook, pickled instances of networks from the Causal Biological Networks database are used.
End of explanation
%%time
combine = pybel.struct.union(graphs)
Explanation: Processing
The graphs are combine with the union function, which retains all node and edges from each graph
End of explanation
print(info_str(combine))
Explanation: The info_str function creates a short text summary of the network. The information is generated with info_json which is more useful programatically.
End of explanation |
4,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In the previous tutorial, you learned how to build an agent with one-step lookahead. This agent performs reasonably well, but definitely still has room for improvement! For instance, consider the potential moves in the figure below. (Note that we use zero-based numbering for the columns, so the leftmost column corresponds to col=0, the next column corresponds to col=1, and so on.)
<center>
<img src="https
Step1: We'll also need to slightly modify the heuristic from the previous tutorial, since the opponent is now able to modify the game board.
<center>
<img src="https
Step2: In the next code cell, we define a few additional functions that we'll need for the minimax agent.
Step3: We won't describe the minimax implementation in detail, but if you want to read more technical pseudocode, here's the description from Wikipedia. (Note that the pseudocode can be safely skipped!)
<center>
<img src="https
Step4: In the next code cell, we see the outcome of one game round against a random agent.
Step5: And we check how we can expect it to perform on average. | Python Code:
#$HIDE_INPUT$
import random
import numpy as np
# Gets board at next step if agent drops piece in selected column
def drop_piece(grid, col, mark, config):
next_grid = grid.copy()
for row in range(config.rows-1, -1, -1):
if next_grid[row][col] == 0:
break
next_grid[row][col] = mark
return next_grid
# Helper function for get_heuristic: checks if window satisfies heuristic conditions
def check_window(window, num_discs, piece, config):
return (window.count(piece) == num_discs and window.count(0) == config.inarow-num_discs)
# Helper function for get_heuristic: counts number of windows satisfying specified heuristic conditions
def count_windows(grid, num_discs, piece, config):
num_windows = 0
# horizontal
for row in range(config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[row, col:col+config.inarow])
if check_window(window, num_discs, piece, config):
num_windows += 1
# vertical
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns):
window = list(grid[row:row+config.inarow, col])
if check_window(window, num_discs, piece, config):
num_windows += 1
# positive diagonal
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])
if check_window(window, num_discs, piece, config):
num_windows += 1
# negative diagonal
for row in range(config.inarow-1, config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])
if check_window(window, num_discs, piece, config):
num_windows += 1
return num_windows
Explanation: Introduction
In the previous tutorial, you learned how to build an agent with one-step lookahead. This agent performs reasonably well, but definitely still has room for improvement! For instance, consider the potential moves in the figure below. (Note that we use zero-based numbering for the columns, so the leftmost column corresponds to col=0, the next column corresponds to col=1, and so on.)
<center>
<img src="https://i.imgur.com/aAYyy2I.png" width=90%><br/>
</center>
With one-step lookahead, the red player picks one of column 5 or 6, each with 50% probability. But, column 5 is clearly a bad move, as it lets the opponent win the game in only one more turn. Unfortunately, the agent doesn't know this, because it can only look one move into the future.
In this tutorial, you'll use the minimax algorithm to help the agent look farther into the future and make better-informed decisions.
Minimax
We'd like to leverage information from deeper in the game tree. For now, assume we work with a depth of 3. This way, when deciding its move, the agent considers all possible game boards that can result from
1. the agent's move,
2. the opponent's move, and
3. the agent's next move.
We'll work with a visual example. For simplicity, we assume that at each turn, both the agent and opponent have only two possible moves. Each of the blue rectangles in the figure below corresponds to a different game board.
<center>
<img src="https://i.imgur.com/BrRe7Bu.png" width=90%><br/>
</center>
We have labeled each of the "leaf nodes" at the bottom of the tree with the score from the heuristic. (We use made-up scores in the figure. In the code, we'll use the same heuristic from the previous tutorial.) As before, the current game board is at the top of the figure, and the agent's goal is to end up with a score that's as high as possible.
But notice that the agent no longer has complete control over its score -- after the agent makes its move, the opponent selects its own move. And, the opponent's selection can prove disastrous for the agent! In particular,
- If the agent chooses the left branch, the opponent can force a score of -1.
- If the agent chooses the right branch, the opponent can force a score of +10.
Take the time now to check this in the figure, to make sure it makes sense to you!
With this in mind, you might argue that the right branch is the better choice for the agent, since it is the less risky option. Sure, it gives up the possibility of getting the large score (+40) that can only be accessed on the left branch, but it also guarantees that the agent gets at least +10 points.
This is the main idea behind the minimax algorithm: the agent chooses moves to get a score that is as high as possible, and it assumes the opponent will counteract this by choosing moves to force the score to be as low as possible. That is, the agent and opponent have opposing goals, and we assume the opponent plays optimally.
So, in practice, how does the agent use this assumption to select a move? We illustrate the agent's thought process in the figure below.
<center>
<img src="https://i.imgur.com/bWezUC3.png" width=90%><br/>
</center>
In the example, minimax assigns the move on the left a score of -1, and the move on the right is assigned a score of +10. So, the agent will select the move on the right.
Code
We'll use several functions from the previous tutorial. These are defined in the hidden code cell below. (Click on the "Code" button below if you'd like to view them.)
End of explanation
# Helper function for minimax: calculates value of heuristic for grid
def get_heuristic(grid, mark, config):
num_threes = count_windows(grid, 3, mark, config)
num_fours = count_windows(grid, 4, mark, config)
num_threes_opp = count_windows(grid, 3, mark%2+1, config)
num_fours_opp = count_windows(grid, 4, mark%2+1, config)
score = num_threes - 1e2*num_threes_opp - 1e4*num_fours_opp + 1e6*num_fours
return score
Explanation: We'll also need to slightly modify the heuristic from the previous tutorial, since the opponent is now able to modify the game board.
<center>
<img src="https://i.imgur.com/vQ8b1aX.png" width=70%><br/>
</center>
In particular, we need to check if the opponent has won the game by playing a disc. The new heuristic looks at each group of four adjacent locations in a (horizontal, vertical, or diagonal) line and assigns:
- 1000000 (1e6) points if the agent has four discs in a row (the agent won),
- 1 point if the agent filled three spots, and the remaining spot is empty (the agent wins if it fills in the empty spot),
- -100 points if the opponent filled three spots, and the remaining spot is empty (the opponent wins by filling in the empty spot), and
- -10000 (-1e4) points if the opponent has four discs in a row (the opponent won).
This is defined in the code cell below.
End of explanation
# Uses minimax to calculate value of dropping piece in selected column
def score_move(grid, col, mark, config, nsteps):
next_grid = drop_piece(grid, col, mark, config)
score = minimax(next_grid, nsteps-1, False, mark, config)
return score
# Helper function for minimax: checks if agent or opponent has four in a row in the window
def is_terminal_window(window, config):
return window.count(1) == config.inarow or window.count(2) == config.inarow
# Helper function for minimax: checks if game has ended
def is_terminal_node(grid, config):
# Check for draw
if list(grid[0, :]).count(0) == 0:
return True
# Check for win: horizontal, vertical, or diagonal
# horizontal
for row in range(config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[row, col:col+config.inarow])
if is_terminal_window(window, config):
return True
# vertical
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns):
window = list(grid[row:row+config.inarow, col])
if is_terminal_window(window, config):
return True
# positive diagonal
for row in range(config.rows-(config.inarow-1)):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row+config.inarow), range(col, col+config.inarow)])
if is_terminal_window(window, config):
return True
# negative diagonal
for row in range(config.inarow-1, config.rows):
for col in range(config.columns-(config.inarow-1)):
window = list(grid[range(row, row-config.inarow, -1), range(col, col+config.inarow)])
if is_terminal_window(window, config):
return True
return False
# Minimax implementation
def minimax(node, depth, maximizingPlayer, mark, config):
is_terminal = is_terminal_node(node, config)
valid_moves = [c for c in range(config.columns) if node[0][c] == 0]
if depth == 0 or is_terminal:
return get_heuristic(node, mark, config)
if maximizingPlayer:
value = -np.Inf
for col in valid_moves:
child = drop_piece(node, col, mark, config)
value = max(value, minimax(child, depth-1, False, mark, config))
return value
else:
value = np.Inf
for col in valid_moves:
child = drop_piece(node, col, mark%2+1, config)
value = min(value, minimax(child, depth-1, True, mark, config))
return value
Explanation: In the next code cell, we define a few additional functions that we'll need for the minimax agent.
End of explanation
# How deep to make the game tree: higher values take longer to run!
N_STEPS = 3
def agent(obs, config):
# Get list of valid moves
valid_moves = [c for c in range(config.columns) if obs.board[c] == 0]
# Convert the board to a 2D grid
grid = np.asarray(obs.board).reshape(config.rows, config.columns)
# Use the heuristic to assign a score to each possible board in the next step
scores = dict(zip(valid_moves, [score_move(grid, col, obs.mark, config, N_STEPS) for col in valid_moves]))
# Get a list of columns (moves) that maximize the heuristic
max_cols = [key for key in scores.keys() if scores[key] == max(scores.values())]
# Select at random from the maximizing columns
return random.choice(max_cols)
Explanation: We won't describe the minimax implementation in detail, but if you want to read more technical pseudocode, here's the description from Wikipedia. (Note that the pseudocode can be safely skipped!)
<center>
<img src="https://i.imgur.com/BwP9tMD.png" width=60%>
</center>
Finally, we implement the minimax agent in the competition format. The N_STEPS variable is used to set the depth of the tree.
End of explanation
from kaggle_environments import make, evaluate
# Create the game environment
env = make("connectx")
# Two random agents play one game round
env.run([agent, "random"])
# Show the game
env.render(mode="ipython")
Explanation: In the next code cell, we see the outcome of one game round against a random agent.
End of explanation
#$HIDE_INPUT$
def get_win_percentages(agent1, agent2, n_rounds=100):
# Use default Connect Four setup
config = {'rows': 6, 'columns': 7, 'inarow': 4}
# Agent 1 goes first (roughly) half the time
outcomes = evaluate("connectx", [agent1, agent2], config, [], n_rounds//2)
# Agent 2 goes first (roughly) half the time
outcomes += [[b,a] for [a,b] in evaluate("connectx", [agent2, agent1], config, [], n_rounds-n_rounds//2)]
print("Agent 1 Win Percentage:", np.round(outcomes.count([1,-1])/len(outcomes), 2))
print("Agent 2 Win Percentage:", np.round(outcomes.count([-1,1])/len(outcomes), 2))
print("Number of Invalid Plays by Agent 1:", outcomes.count([None, 0]))
print("Number of Invalid Plays by Agent 2:", outcomes.count([0, None]))
get_win_percentages(agent1=agent, agent2="random", n_rounds=50)
Explanation: And we check how we can expect it to perform on average.
End of explanation |
4,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow
TensorFlow is an open source library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation on almost any platforms.
Programming Models for Deep Learning
Symbolic v.s. Imperative style programs
If you are a python or C++ programmer, then you are already familiar with imperative programs. Imperative style programs conduct the computation as we run them. Most of the code you write in python is imperative, for example
Step1: Get familiar with the following basic tensorflow methods
Step2: Linear Regression example | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
Explanation: TensorFlow
TensorFlow is an open source library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation on almost any platforms.
Programming Models for Deep Learning
Symbolic v.s. Imperative style programs
If you are a python or C++ programmer, then you are already familiar with imperative programs. Imperative style programs conduct the computation as we run them. Most of the code you write in python is imperative, for example:
import numpy as np
a = np.ones(10)
b = np.ones(10) * 2
c = b * a
Symbolic programs are different. The following lines are an equivalent symbolic style program that achieves the same goal:
A = Variable()
B = Constant()
C = B * A
# compiles the function
f = compile(C)
# run the function
c = f.run(A=np.ones(10), B=np.ones(10)*2)
When C = B * A is executed, there is no actual computation happening. Instead, these operations generate a computation graph (symbolic graph) that represents the computation. Symbolic programs separate the (1) definition, (2) compilation, and (3) execution of the computation graph into distinct steps.
Generally speaking, imperative programs are more flexible, while symbolic programs are more efficient (graph optimizations, better garbage collection).
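To see this in TensorFlow itself, note that printing the Python object before running a session only shows a symbolic tensor (a small TF1 graph-mode sketch, mirroring the cell below):
```python
import numpy as np
import tensorflow as tf

A = tf.Variable(tf.ones([10]))
B = tf.constant(np.ones(10) * 2, tf.float32)
C = tf.multiply(A, B)
print(C)  # a symbolic Tensor; no values have been computed yet

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(C))  # the graph is only executed here
```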
End of explanation
# Define C=B*A in a symbolic way
A = tf.Variable(tf.ones([10]))
B = tf.constant(np.ones(10)*2, tf.float32)
C = tf.multiply(A, B)
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
# initialize variables
sess.run(init)
# run the graph and evaluate C
c = sess.run([C])
print('c:', c)
Explanation: Get familiar with the following basic tensorflow methods:
# define constant
tf.Constant()
# define trainable parameters
tf.Variable()
# holding mini-batch input data to the graph
tf.placeholder()
# common neural network layers
tf.nn.*()
# Launch the existing graph
tf.Session()
Now let's first implement 'C=B*A' in TensorFlow!
End of explanation
# Generate ground truth 100 x, y data points in NumPy, y = 3.0 * x + 1.0
# Regress for W and b that compute y_data = W * x_data + b
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 1.0
plt.plot(x_data, y_data)
# define trainable variables
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
# define graph operations
y = tf.multiply(W, x_data) + b
# define loss, L2
loss = tf.reduce_mean(tf.square(y - y_data))
# define optimizer for training
train_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss)
# define the operation that initializes variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
# initialization
sess.run(init)
# starting training
training_iters = 100
for step in range(training_iters):
if step % 20 == 0 or (step+1)==training_iters:
print(step, sess.run(W), sess.run(b))
# run optimizer during training
_ = sess.run([train_optimizer])
Explanation: Linear Regression example
End of explanation |
4,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute seed based time-frequency connectivity in sensor space
Computes the connectivity between a seed-gradiometer close to the visual cortex
and all other gradiometers. The connectivity is computed in the time-frequency
domain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index
[1]_ is used as connectivity metric.
.. [1] Vinck et al. "An improved index of phase-synchronization for electro-
physiological data in the presence of volume-conduction, noise and
sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011.
Step1: Set parameters | Python Code:
# Author: Martin Luessi <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne import io
from mne.connectivity import spectral_connectivity, seed_target_indices
from mne.datasets import sample
from mne.time_frequency import AverageTFR
print(__doc__)
Explanation: Compute seed based time-frequency connectivity in sensor space
Computes the connectivity between a seed-gradiometer close to the visual cortex
and all other gradiometers. The connectivity is computed in the time-frequency
domain using Morlet wavelets and the debiased Squared Weighted Phase Lag Index
[1]_ is used as connectivity metric.
.. [1] Vinck et al. "An improved index of phase-synchronization for electro-
physiological data in the presence of volume-conduction, noise and
sample-size bias" NeuroImage, vol. 55, no. 4, pp. 1548-1565, Apr. 2011.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for left-visual condition
event_id, tmin, tmax = 3, -0.2, 0.5
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6),
preload=True)
# Use 'MEG 2343' as seed
seed_ch = 'MEG 2343'
picks_ch_names = [raw.ch_names[i] for i in picks]
# Create seed-target indices for connectivity computation
seed = picks_ch_names.index(seed_ch)
targets = np.arange(len(picks))
indices = seed_target_indices(seed, targets)
# Define wavelet frequencies and number of cycles
cwt_frequencies = np.arange(7, 30, 2)
cwt_n_cycles = cwt_frequencies / 7.
# Run the connectivity analysis using 2 parallel jobs
sfreq = raw.info['sfreq'] # the sampling frequency
con, freqs, times, _, _ = spectral_connectivity(
epochs, indices=indices,
method='wpli2_debiased', mode='cwt_morlet', sfreq=sfreq,
cwt_frequencies=cwt_frequencies, cwt_n_cycles=cwt_n_cycles, n_jobs=1)
# Mark the seed channel with a value of 1.0, so we can see it in the plot
con[np.where(indices[1] == seed)] = 1.0
# Show topography of connectivity from seed
title = 'WPLI2 - Visual - Seed %s' % seed_ch
layout = mne.find_layout(epochs.info, 'meg') # use full layout
tfr = AverageTFR(epochs.info, con, times, freqs, len(epochs))
tfr.plot_topo(fig_facecolor='w', font_color='k', border='k')
Explanation: Set parameters
End of explanation |
4,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Elements are the basic building blocks for any HoloViews visualization. These are the objects that can be composed together using the various Container types.
Here in this overview, we show an example of how to build each of these Elements directly out of Python or Numpy data structures. An even more powerful way to use them is by collecting similar Elements into a HoloMap, as described in Exploring Data, so that you can explore, select, slice, and animate them flexibly, but here we focus on having small, self-contained examples. Complete reference material for each type can be accessed using our documentation system. This tutorial uses the default matplotlib plotting backend; see the Bokeh Elements tutorial for the corresponding bokeh plots.
Element types
This class hierarchy shows each of the Element types.
Each type is named for the default or expected way that the underlying data can be visualized. E.g., if your data is wrapped into a Surface object, it will display as a 3D surface by default, whereas the same data embedded in an Image object will display as a 2D raster image. But please note that the specification and implementation for each Element type does not actually include any such visualization -- the name merely serves as a semantic indication that you ordinarily think of the data as being laid out visually in that way. The actual plotting is done by a separate plotting subsystem, while the objects themselves focus on storing your data and the metadata needed to describe and use it.
This separation of data and visualization is described in detail in the Options tutorial, which describes all about how to find out the options available for each Element type and change them if necessary, from either Python or IPython Notebook. When using this tutorial interactively in an IPython/Jupyter notebook session, we suggest adding %output info=True after the call to notebook_extension below, which will pop up a detailed list and explanation of the available options for visualizing each Element type, after that notebook cell is executed. Then, to find out all the options for any of these Element types, just press <Shift-Enter> on the corresponding cell in the live notebook.
The types available
Step1: In addition, Element has key dimensions (kdims), value dimensions (vdims), and constant dimensions (cdims) to describe the semantics of indexing within the Element, the semantics of the underlying data contained by the Element, and any constant parameters associated with the object, respectively.
Dimensions are described in the Introduction.
The remaining Element types each have a rich, graphical display as shown below.
Chart Elements <a id='Chart Elements'></a>
Visualization of a dependent variable against an independent variable
The first large class of Elements is the Chart elements. These objects have at least one fully indexable, sliceable key dimension (typically the x axis in a plot), and usually have one or more value dimension(s) (often the y axis) that may or may not be indexable depending on the implementation. The key dimensions are normally the parameter settings for which things are measured, and the value dimensions are the data points recorded at those settings.
As described in the Columnar Data tutorial, the data can be stored in several different internal formats, such as a NumPy array of shape (N, D), where N is the number of samples and D the number of dimensions. A somewhat larger list of formats can be accepted, including any of the supported internal formats, or
As a list of length N containing tuples of length D.
As a tuple of length D containing iterables of length N.
Curve <a id='Curve'></a>
Step2: A Curve is a set of values provided for some set of keys from a continuously indexable 1D coordinate system, where the plotted values will be connected up because they are assumed to be samples from a continuous relation.
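A minimal sketch (assuming holoviews is imported as hv, as in the rest of this tutorial):
```python
import numpy as np
import holoviews as hv

xs = np.linspace(0, 2 * np.pi, 100)
curve = hv.Curve((xs, np.sin(xs)))  # key dimension x, value dimension y
```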
ErrorBars <a id='ErrorBars'></a>
Step3: ErrorBars is a set of x-/y-coordinates with associated error values. Error values may be either symmetric or asymmetric, and thus can be supplied as an Nx3 or Nx4 array (or any of the alternative constructors Chart Elements allow).
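For example, a symmetric Nx3 specification might look like this (an illustrative sketch, not the tutorial's own data):
```python
import numpy as np
import holoviews as hv

xs = np.linspace(0, 2 * np.pi, 10)
errorbars = hv.ErrorBars((xs, np.sin(xs), 0.2 * np.ones(10)))  # x, y, symmetric error
```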
Step4: Spread <a id='Spread'></a>
Spread elements have the same data format as the ErrorBars element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as Curve is the continuous version of Scatter). These are often paired with an overlaid Curve to show both the mean (as a curve) and the spread of values; see the Columnar Data tutorial for examples.
Symmetric
Step5: Asymmetric
Step6: Area <a id='Area'></a>
Area under the curve
By default the Area Element draws just the area under the curve, i.e. the region between the curve and the origin.
Step7: Area between curves
When supplied a second value dimension the area is defined as the area between two curves.
Step8: Bars <a id='Bars'></a>
Step9: Bars is an NdElement type, so by default it is sorted. To preserve the initial ordering specify the Dimension with values set to 'initial', or you can supply an explicit list of valid dimension keys.
Bars support up to three key dimensions which can be laid by 'group', 'category', and 'stack' dimensions. By default the key dimensions are mapped onto the first, second, and third Dimension of the Bars object, but this behavior can be overridden via the group_index, category_index, and stack_index options. You can also style each bar the way you want by creating style groups for any combination of the three dimensions. Here we color_by 'category' and 'stack', so that a given color represents some combination of those two values (according to the key shown).
Step10: BoxWhisker <a id='BoxWhisker'></a>
The BoxWhisker Element allows representing distributions of data varying by 0-N key dimensions. To represent the distribution of a single variable, we can create a BoxWhisker Element with no key dimensions and a single value dimension
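For instance (a sketch with made-up samples):
```python
import numpy as np
import holoviews as hv

box = hv.BoxWhisker(np.random.randn(200), vdims=['Value'])  # no key dimensions
```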
Step11: BoxWhisker Elements support any number of dimensions and may also be rotated. To style the boxes and whiskers, supply boxprops, whiskerprops, and flierprops.
Step12: BoxWhisker Elements may also be used to represent a distribution as a marginal plot by adjoining it using <<.
Step13: Histogram <a id='Histogram'></a>
Step14: Histograms partition the x axis into discrete (but not necessarily regular) bins, showing counts in each as a bar.
Almost all Element types, including Histogram, may be projected onto a polar axis by supplying projection='polar' as a plot option.
Step15: Scatter <a id='Scatter'></a>
Step16: Scatter is the discrete equivalent of Curve, showing y values for discrete x values selected. See Points for more information.
The marker shape specified above can be any supported by matplotlib, e.g. s, d, or o; the other options select the color and size of the marker. For convenience with the bokeh backend, the matplotlib marker options are supported using a compatibility function in HoloViews.
Points <a id='Points'></a>
Step17: As you can see, Points is very similar to Scatter, and can produce some plots that look identical. However, the two Elements are very different semantically. For Scatter, the dots each show a dependent variable y for some x, such as in the Scatter example above where we selected regularly spaced values of x and then created a random number as the corresponding y. I.e., for Scatter, the y values are the data; the xs are just where the data values are located. For Points, both x and y are independent variables, known as key_dimensions in HoloViews
Step18: The Scatter object expresses a dependent relationship between x and y, making it useful for combining with other similar Chart types, while the Points object expresses the relationship of two independent keys x and y with optional vdims (zero in this case), which makes Points objects meaningful to combine with the Raster types below.
Of course, the vdims need not be empty for Points; here is an example with two additional quantities for each point, as value_dimensions z and α visualized as the color and size of the dots, respectively. The point sizes can be tweaked using the option scaling_factor, which determines the amount by which each point width or area is scaled, depending on the value of scaling_method.
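A sketch of the data layout for such a plot (mapping colour and size onto z and α is then done through the plot options discussed in the Options tutorial):
```python
import numpy as np
import holoviews as hv

data = np.random.rand(100, 4)                   # columns: x, y, z, alpha
points = hv.Points(data, vdims=['z', 'alpha'])  # two extra value dimensions
```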
Step19: Such a plot wouldn't be meaningful for Scatter, but is a valid use for Points, where the x and y locations are independent variables representing coordinates, and the "data" is conveyed by the size and color of the dots.
Spikes <a id='Spikes'></a>
Spikes represent any number of horizontal or vertical line segments with fixed or variable heights. There are a number of disparate uses for this type. First of all, they may be used as a rugplot to give an overview of a one-dimensional distribution. They may also be useful in more domain-specific cases, such as visualizing spike trains for neurophysiology or spectrograms in physics and chemistry applications.
In the simplest case, a Spikes object represents coordinates in a 1D distribution
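For example (a minimal sketch):
```python
import numpy as np
import holoviews as hv

spikes = hv.Spikes(np.random.rand(25))  # 25 positions along a single key dimension
```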
Step20: When supplying two dimensions to the Spikes object, the second dimension will be mapped onto the line height. Optionally, you may also supply a cmap and color_index to map color onto one of the dimensions. This way we can, for example, plot a mass spectrogram
Step21: Another possibility is to draw a number of spike trains as you would encounter in neuroscience. Here we generate 10 separate random spike trains and distribute them evenly across the space by setting their position. By also declaring some yticks, each spike train can be labeled individually
Step22: Finally, we may use Spikes to visualize marginal distributions as adjoined plots using the << adjoin operator
Step23: VectorField <a id='VectorField'></a>
Step24: As you can see above, here the x and y positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each x,y position).
Using the IPython %%opts cell-magic (described in the Options tutorial, along with the Python equivalent), we can also use color as a redundant indicator to the direction or magnitude
Step25: The vector fields above were sampled on a regular grid, but any collection of x,y values is allowed
Step26: SideHistogram <a id='SideHistogram'></a>
The .hist method conveniently adjoins a histogram to the side of any Chart, Surface, or Raster component, as well as many of the container types (though it would be reporting data from one of these underlying Element types). For a Raster using color or grayscale to show values (see Raster section below), the side histogram doubles as a color bar or key.
Step27: Chart3D Elements <a id='Chart3D Elements'></a>
Surface <a id='Surface'></a>
Step28: Surface is used for a set of gridded points whose associated value dimension represents samples from a continuous surface; it is the equivalent of a Curve but with two key dimensions instead of just one.
Scatter3D <a id='Scatter3D'></a>
Step29: Scatter3D is the equivalent of Scatter but for two key dimensions, rather than just one.
Trisurface <a id='Trisurface'></a>
The Trisurface Element renders any collection of 3D points as a Surface by applying Delaunay triangulation. It thus supports arbitrary, non-gridded data, but it does not support indexing to find data values, since finding the closest ones would require a search.
Step30: Raster Elements <a id='Raster Elements'></a>
A collection of raster image types
The second large class of Elements is the raster elements. Like Points and unlike the other Chart elements, Raster Elements live in a 2D key-dimensions space. For the Image, RGB, and HSV elements, the coordinates of this two-dimensional key space are defined in a continuously indexable coordinate system. We can use np.meshgrid to define the appropriate sampling along the x and y dimensions
Step31: Raster <a id='Raster'></a>
A Raster is the base class for image-like Elements, but may be used directly to visualize 2D arrays using a color map. The coordinate system of a Raster is the raw indexes of the underlying array, with integer values always starting from (0,0) in the top left, with default extents corresponding to the shape of the array. The Image subclass visualizes similarly, but using a continuous Cartesian coordinate system suitable for an array that represents some underlying continuous region.
Step32: QuadMesh <a id='QuadMesh'></a>
The basic QuadMesh is a 2D grid of bins specified by x-/y-values (either a regular sampling or arbitrarily spaced bin edges) and an associated 2D array containing the bin values. The coordinate system of a QuadMesh is defined by the bin edges; therefore any index falling into a binned region will return the appropriate value. Unlike Image objects, slices must be inclusive of the bin edges.
Step33: QuadMesh may also be used to represent an arbitrary mesh of quadrilaterals by supplying three separate 2D arrays representing the coordinates of each quadrilateral in a 2D space. Note that when using QuadMesh in this mode, slicing and indexing semantics and most operations will currently not work.
Step34: HeatMap <a id='HeatMap'></a>
A HeatMap displays like a typical raster image, but the input is a dictionary indexed with two-dimensional keys, not a Numpy array or Pandas dataframe. As many rows and columns as required will be created to display the values in an appropriate grid format. Values unspecified are left blank, and the keys can be any Python datatype (not necessarily numeric). One typical usage is to show values from a set of experiments, such as a parameter space exploration, and many other such visualizations are shown in the Containers and Exploring Data tutorials. Each value in a HeatMap is labeled explicitly by default, and so this component is not meant for very large numbers of samples. With the default color map, high values (in the upper half of the range present) are colored orange and red, while low values (in the lower half of the range present) are colored shades of blue.
Step35: Image <a id='Image'></a>
Like Raster, a HoloViews Image allows you to view 2D arrays using an arbitrary color map. Unlike Raster, an Image is associated with a 2D coordinate system in continuous space, which is appropriate for values sampled from some underlying continuous distribution (as in a photograph or other measurements from locations in real space). Slicing, sampling, etc. on an Image all use this continuous space, whereas the corresponding operations on a Raster work on the raw array coordinates.
To make the coordinate system clear, we'll define two arrays called xs and ys with a non-square aspect and map them through a simple function that illustrate how these inputs relate to the coordinate system
Step36: Notice how, because our declared coordinate system is continuous, we can slice with any floating-point value we choose. The appropriate range of the samples in the input numpy array will always be displayed, whether or not there are samples at those specific floating-point values.
It is also worth noting that the name Image can clash with other common libraries, which is one reason to avoid unqualified imports like from holoviews import *. For instance, the Python Imaging Library provides an Image module, and IPython itself supplies an Image class in IPython.display. Python namespaces allow you to avoid such problems, e.g. using from PIL import Image as PILImage or using import holoviews as hv and then hv.Image(), as we do in these tutorials.
RGB <a id='RGB'></a>
The RGB element is an Image that supports red, green, blue channels
Step37: You can see how the RGB object is created from the original channels
Step38: RGB also supports an optional alpha channel, which will be used as a mask revealing or hiding any Elements it is overlaid on top of
Step39: HSV <a id='HSV'></a>
HoloViews makes it trivial to work in any color space that can be converted to RGB by making a simple subclass of RGB as appropriate. For instance, we also provide the HSV (hue, saturation, value) color space, which is useful for plotting cyclic data (as the Hue) along with two additional dimensions (controlling the saturation and value of the color, respectively)
Step40: You can see how this is created from the original channels
Step41: Tabular Elements <a id='Tabular Elements'></a>
General data structures for holding arbitrary information
ItemTable <a id='ItemTable'></a>
An ItemTable is an ordered collection of key, value pairs. It can be used to directly visualize items in a tabular format where the items may be supplied as an OrderedDict or a list of (key,value) pairs. A standard Python dictionary can be easily visualized using a call to the .items() method, though the entries in such a dictionary are not kept in any particular order, and so you may wish to sort them before display. One typical usage for an ItemTable is to list parameter values or measurements associated with an adjacent Element.
Step42: Table <a id='Table'></a>
A table is more general than an ItemTable, as it allows multi-dimensional keys and multidimensional values.
Step43: Note that you can use select using tables, and once you select using a full, multidimensional key, you get an ItemTable (shown on the right)
Step44: The Table is used as a common data structure that may be converted to any other HoloViews data structure using the TableConversion class.
The functionality of the TableConversion class may be conveniently accessed using the .to property. For more extended usage of table conversion see the Columnar Data and Pandas Conversion Tutorials.
Step45: Annotation Elements <a id='Annotation Elements'></a>
Useful information that can be overlaid onto other components
Annotations are components designed to be overlaid on top of other Element objects. To demonstrate annotation and paths, we will be drawing many of our elements on top of an RGB Image
Step46: VLine and HLine <a id='VLine'></a><a id='HLine'></a>
Step47: Spline <a id='Spline'></a>
The Spline annotation is used to draw Bezier splines using the same semantics as matplotlib splines. In the overlay below, the spline is in dark blue and the control points are in light blue.
Step48: Text and Arrow <a id='Text'></a><a id='Arrow'></a>
Step49: Paths <a id='Path Elements'></a>
Line-based components that can be overlaid onto other components
Paths are a subclass of annotations that involve drawing line-based components on top of other elements. Internally, Path Element types hold a list of Nx2 arrays, specifying the x/y-coordinates along each path. The data may be supplied in a number of ways, including
Step50: Contours <a id='Contours'></a>
A Contours object is similar to Path object except each of the path elements is associated with a numeric value, called the level. Sadly, our penguins are too complicated to give a simple example so instead we will simply mark the first couple of rings of our earlier ring pattern
Step51: Polygons <a id='Polygons'></a>
A Polygons object is similar to a Contours object except that each supplied path is closed and filled. Just like Contours, optionally a level may be supplied; the Polygons will then be colored according to the supplied cmap. Non-finite values such as np.NaN or np.inf will default to the supplied facecolor.
Polygons with values can be used to build heatmaps with arbitrary shapes.
Step52: Polygons without a value are useful as annotation, but also allow us to draw arbitrary shapes.
Step53: Bounds <a id='Bounds'></a>
A bounds is a rectangular area specified as a tuple in (left, bottom, right, top) format. It is useful for denoting a region of interest defined by some bounds, whereas Box (below) is useful for drawing a box at a specific location.
Step54: Box <a id='Box'></a> and Ellipse <a id='Ellipse'></a>
A Box is similar to a Bounds except you specify the box position, width, and aspect ratio instead of the coordinates of the box corners. An Ellipse is specified just as for Box, but has a rounded shape. | Python Code:
import holoviews as hv
hv.notebook_extension()
hv.Element(None, group='Value', label='Label')
Explanation: Elements are the basic building blocks for any HoloViews visualization. These are the objects that can be composed together using the various Container types.
Here in this overview, we show an example of how to build each of these Elements directly out of Python or Numpy data structures. An even more powerful way to use them is by collecting similar Elements into a HoloMap, as described in Exploring Data, so that you can explore, select, slice, and animate them flexibly, but here we focus on having small, self-contained examples. Complete reference material for each type can be accessed using our documentation system. This tutorial uses the default matplotlib plotting backend; see the Bokeh Elements tutorial for the corresponding bokeh plots.
Element types
This class hierarchy shows each of the Element types.
Each type is named for the default or expected way that the underlying data can be visualized. E.g., if your data is wrapped into a Surface object, it will display as a 3D surface by default, whereas the same data embedded in an Image object will display as a 2D raster image. But please note that the specification and implementation for each Element type does not actually include any such visualization -- the name merely serves as a semantic indication that you ordinarily think of the data as being laid out visually in that way. The actual plotting is done by a separate plotting subsystem, while the objects themselves focus on storing your data and the metadata needed to describe and use it.
This separation of data and visualization is described in detail in the Options tutorial, which describes all about how to find out the options available for each Element type and change them if necessary, from either Python or IPython Notebook. When using this tutorial interactively in an IPython/Jupyter notebook session, we suggest adding %output info=True after the call to notebook_extension below, which will pop up a detailed list and explanation of the available options for visualizing each Element type, after that notebook cell is executed. Then, to find out all the options for any of these Element types, just press <Shift-Enter> on the corresponding cell in the live notebook.
The types available:
<dl class="dl-horizontal">
<dt><a href="#Element"><code>Element</code></a></dt><dd>The base class of all <code>Elements</code>.</dd>
</dl>
<a id='ChartIndex'></a> <a href="#Chart Elements"><code>Charts:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Curve"><code>Curve</code></a></dt><dd>A continuous relation between a dependent and an independent variable.</dd>
<dt><a href="#ErrorBars"><code>ErrorBars</code></a></dt><dd>A collection of x-/y-coordinates with associated error magnitudes.</dd>
<dt><a href="#Spread"><code>Spread</code></a></dt><dd>Continuous version of ErrorBars.</dd>
<dt><a href="#Area"><code>Area</code></a></dt><dd></dd>
<dt><a href="#Bars"><code>Bars</code></a></dt><dd>Data collected and binned into categories.</dd>
<dt><a href="#Histogram"><code>Histogram</code></a></dt><dd>Data collected and binned in a continuous space using specified bin edges.</dd>
<dt><a href="#BoxWhisker"><code>BoxWhisker</code></a></dt><dd>Distributions of data varying by 0-N key dimensions.</dd>
<dt><a href="#Scatter"><code>Scatter</code></a></dt><dd>Discontinuous collection of points indexed over a single dimension.</dd>
<dt><a href="#Points"><code>Points</code></a></dt><dd>Discontinuous collection of points indexed over two dimensions.</dd>
<dt><a href="#VectorField"><code>VectorField</code></a></dt><dd>Cyclic variable (and optional auxiliary data) distributed over two-dimensional space.</dd>
<dt><a href="#Spikes"><code>Spikes</code></a></dt><dd>A collection of horizontal or vertical lines at various locations with fixed height (1D) or variable height (2D).</dd>
<dt><a href="#SideHistogram"><code>SideHistogram</code></a></dt><dd>Histogram binning data contained by some other <code>Element</code>.</dd>
</dl>
<a id='Chart3DIndex'></a> <a href="#Chart3D Elements"><code>Chart3D Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Surface"><code>Surface</code></a></dt><dd>Continuous collection of points in a three-dimensional space.</dd>
<dt><a href="#Scatter3D"><code>Scatter3D</code></a></dt><dd>Discontinuous collection of points in a three-dimensional space.</dd>
<dt><a href="#Trisurface"><code>Trisurface</code></a></dt><dd>Continuous but irregular collection of points interpolated into a Surface using Delaunay triangulation.</dd>
</dl>
<a id='RasterIndex'></a> <a href="#Raster Elements"><code>Raster Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Raster"><code>Raster</code></a></dt><dd>The base class of all rasters containing two-dimensional arrays.</dd>
<dt><a href="#QuadMesh"><code>QuadMesh</code></a></dt><dd>Raster type specifying 2D bins with two-dimensional array of values.</dd>
<dt><a href="#HeatMap"><code>HeatMap</code></a></dt><dd>Raster displaying sparse, discontinuous data collected in a two-dimensional space.</dd>
<dt><a href="#Image"><code>Image</code></a></dt><dd>Raster containing a two-dimensional array covering a continuous space (sliceable).</dd>
<dt><a href="#RGB"><code>RGB</code></a></dt><dd>Image with 3 (R,G,B) or 4 (R,G,B,Alpha) color channels.</dd>
<dt><a href="#HSV"><code>HSV</code></a></dt><dd>Image with 3 (Hue, Saturation, Value) or 4 channels.</dd>
</dl>
<a id='TabularIndex'></a> <a href="#Tabular Elements"><code>Tabular Elements:</code></a>
<dl class="dl-horizontal">
<dt><a href="#ItemTable"><code>ItemTable</code></a></dt><dd>Ordered collection of key-value pairs (ordered dictionary).</dd>
<dt><a href="#Table"><code>Table</code></a></dt><dd>Collection of arbitrary data with arbitrary key and value dimensions.</dd>
</dl>
<a id='AnnotationIndex'></a> <a href="#Annotation Elements"><code>Annotations:</code></a>
<dl class="dl-horizontal">
<dt><a href="#VLine"><code>VLine</code></a></dt><dd>Vertical line annotation.</dd>
<dt><a href="#HLine"><code>HLine</code></a></dt><dd>Horizontal line annotation.</dd>
<dt><a href="#Spline"><code>Spline</code></a></dt><dd>Bezier spline (arbitrary curves).</dd>
<dt><a href="#Text"><code>Text</code></a></dt><dd>Text annotation on an <code>Element</code>.</dd>
<dt><a href="#Arrow"><code>Arrow</code></a></dt><dd>Arrow on an <code>Element</code> with optional text label.</dd>
</dl>
<a id='PathIndex'></a> <a href="#Path Elements"><code>Paths:</code></a>
<dl class="dl-horizontal">
<dt><a href="#Path"><code>Path</code></a></dt><dd>Collection of paths.</dd>
<dt><a href="#Contours"><code>Contours</code></a></dt><dd>Collection of paths, each with an associated value.</dd>
<dt><a href="#Polygons"><code>Polygons</code></a></dt><dd>Collection of filled, closed paths with an associated value.</dd>
<dt><a href="#Bounds"><code>Bounds</code></a></dt><dd>Box specified by corner positions.</dd>
<dt><a href="#Box"><code>Box</code></a></dt><dd>Box specified by center position, radius, and aspect ratio.</dd>
<dt><a href="#Ellipse"><code>Ellipse</code></a></dt><dd>Ellipse specified by center position, radius, and aspect ratio.</dd>
</dl>
Element <a id='Element'></a>
The basic or fundamental types of data that can be visualized.
Element is the base class for all the other HoloViews objects shown in this section.
All Element objects accept data as the first argument to define the contents of that element. In addition to its implicit type, each element object has a group string defining its category, and a label naming this particular item, as described in the Introduction.
When rich display is off, or if no visualization has been defined for that type of Element, the Element is presented with a default textual representation:
End of explanation
import numpy as np
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
hv.Curve(points)
Explanation: In addition, Element has key dimensions (kdims), value dimensions (vdims), and constant dimensions (cdims) to describe the semantics of indexing within the Element, the semantics of the underlying data contained by the Element, and any constant parameters associated with the object, respectively.
Dimensions are described in the Introduction.
The remaining Element types each have a rich, graphical display as shown below.
Chart Elements <a id='Chart Elements'></a>
Visualization of a dependent variable against an independent variable
The first large class of Elements is the Chart elements. These objects have at least one fully indexable, sliceable key dimension (typically the x axis in a plot), and usually have one or more value dimension(s) (often the y axis) that may or may not be indexable depending on the implementation. The key dimensions are normally the parameter settings for which things are measured, and the value dimensions are the data points recorded at those settings.
As described in the Columnar Data tutorial, the data can be stored in several different internal formats, such as a NumPy array of shape (N, D), where N is the number of samples and D the number of dimensions. A somewhat larger list of formats can be accepted, including any of the supported internal formats, or
As a list of length N containing tuples of length D.
As a tuple of length D containing iterables of length N.
Curve <a id='Curve'></a>
End of explanation
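As a quick illustration of the accepted formats listed above, the same Curve can be built from a list of (x, y) tuples or from a tuple of column arrays; this is only a minimal sketch using the imports already shown:
xs = np.linspace(0, 10, 100)
curve_from_tuples = hv.Curve([(x, np.sin(x)) for x in xs])    # a list of length N containing tuples of length D
curve_from_columns = hv.Curve((xs, np.sin(xs)))               # a tuple of length D containing iterables of length N
curve_from_tuples + curve_from_columns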
np.random.seed(7)
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2) for i in np.linspace(0, 100, 11)]
hv.Curve(points) * hv.ErrorBars(errors)
Explanation: A Curve is a set of values provided for some set of keys from a continuously indexable 1D coordinate system, where the plotted values will be connected up because they are assumed to be samples from a continuous relation.
ErrorBars <a id='ErrorBars'></a>
End of explanation
%%opts ErrorBars (capthick=3)
points = [(0.1*i, np.sin(0.1*i)) for i in range(100)]
errors = [(0.1*i, np.sin(0.1*i), np.random.rand()/2, np.random.rand()/4) for i in np.linspace(0, 100, 11)]
hv.Curve(points) * hv.ErrorBars(errors, vdims=['y', 'yerrneg', 'yerrpos'])
Explanation: ErrorBars is a set of x-/y-coordinates with associated error values. Error values may be either symmetric or asymmetric, and thus can be supplied as an Nx3 or Nx4 array (or any of the alternative constructors Chart Elements allow).
End of explanation
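As a minimal sketch of the array constructor mentioned above, the same symmetric error bars can be supplied directly as an Nx3 NumPy array whose columns are x, y, and the error magnitude:
xs = np.linspace(0, 2*np.pi, 11)
err_array = np.column_stack([xs, np.sin(xs), np.full(len(xs), 0.2)])   # Nx3 array: x, y, symmetric error
hv.Curve((xs, np.sin(xs))) * hv.ErrorBars(err_array)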
np.random.seed(42)
xs = np.linspace(0, np.pi*2, 20)
err = 0.2+np.random.rand(len(xs))
hv.Spread((xs, np.sin(xs), err))
Explanation: Spread <a id='Spread'></a>
Spread elements have the same data format as the ErrorBars element, namely x- and y-values with associated symmetric or asymmetric errors, but are interpreted as samples from a continuous distribution (just as Curve is the continuous version of Scatter). These are often paired with an overlaid Curve to show both the mean (as a curve) and the spread of values; see the Columnar Data tutorial for examples.
Symmetric
End of explanation
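Since a Spread is typically paired with an overlaid Curve showing the mean, here is a minimal sketch of that combination using the same data shape as above:
xs = np.linspace(0, np.pi*2, 20)
err = 0.2 + np.random.rand(len(xs))
hv.Spread((xs, np.sin(xs), err)) * hv.Curve((xs, np.sin(xs)))   # spread of values with the mean curve on top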
%%opts Spread (facecolor='indianred' alpha=1)
xs = np.linspace(0, np.pi*2, 20)
hv.Spread((xs, np.sin(xs), 0.1+np.random.rand(len(xs)), 0.1+np.random.rand(len(xs))),
vdims=['y', 'yerrneg', 'yerrpos'])
Explanation: Asymmetric
End of explanation
xs = np.linspace(0, np.pi*4, 40)
hv.Area((xs, np.sin(xs)))
Explanation: Area <a id='Area'></a>
Area under the curve
By default the Area Element draws just the area under the curve, i.e. the region between the curve and the origin.
End of explanation
X = np.linspace(0,3,200)
Y = X**2 + 3
Y2 = np.exp(X) + 2
Y3 = np.cos(X)
hv.Area((X, Y, Y2), vdims=['y', 'y2']) * hv.Area((X, Y, Y3), vdims=['y', 'y3'])
Explanation: * Area between curves *
When supplied a second value dimension the area is defined as the area between two curves.
End of explanation
data = [('one',8),('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
bars = hv.Bars(data, kdims=[hv.Dimension('Car occupants', values='initial')], vdims=['Count'])
bars + bars[['one', 'two', 'three']]
Explanation: Bars <a id='Bars'></a>
End of explanation
%%opts Bars [color_by=['category', 'stack'] legend_position='top']
from itertools import product
np.random.seed(1)
groups, categories, stacks = ['A', 'B'], ['a', 'b'], ['I', 'II']
keys = product(groups, categories, stacks)
hv.Bars([(k, np.random.rand()*100) for k in keys],
kdims=['Group', 'Category', 'Stack'], vdims=['Count'])
Explanation: Bars is an NdElement type, so by default it is sorted. To preserve the initial ordering specify the Dimension with values set to 'initial', or you can supply an explicit list of valid dimension keys.
Bars support up to three key dimensions which can be laid out by 'group', 'category', and 'stack' dimensions. By default the key dimensions are mapped onto the first, second, and third Dimension of the Bars object, but this behavior can be overridden via the group_index, category_index, and stack_index options. You can also style each bar the way you want by creating style groups for any combination of the three dimensions. Here we color_by 'category' and 'stack', so that a given color represents some combination of those two values (according to the key shown).
End of explanation
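A small sketch of the alternative ordering mentioned above: supplying an explicit list of dimension values instead of values='initial' (the reversed ordering here is just an illustrative choice):
data = [('one', 8), ('two', 10), ('three', 16), ('four', 8), ('five', 4), ('six', 1)]
order = ['six', 'five', 'four', 'three', 'two', 'one']          # explicit key ordering
hv.Bars(data, kdims=[hv.Dimension('Car occupants', values=order)], vdims=['Count'])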
hv.BoxWhisker(np.random.randn(200), kdims=[], vdims=['Value'])
Explanation: BoxWhisker <a id='BoxWhisker'></a>
The BoxWhisker Element allows representing distributions of data varying by 0-N key dimensions. To represent the distribution of a single variable, we can create a BoxWhisker Element with no key dimensions and a single value dimension:
End of explanation
%%opts BoxWhisker [fig_size=200 invert_axes=True]
style = dict(boxprops=dict(color='gray', linewidth=1), whiskerprops=dict(color='indianred', linewidth=1))
groups = [chr(65+g) for g in np.random.randint(0, 3, 200)]
hv.BoxWhisker((groups, np.random.randint(0, 5, 200), np.random.randn(200)),
kdims=['Group', 'Category'], vdims=['Value'])(style=style).sort()
Explanation: BoxWhisker Elements support any number of dimensions and may also be rotated. To style the boxes and whiskers, supply boxprops, whiskerprops, and flierprops.
End of explanation
points = hv.Points(np.random.randn(500, 2))
points << hv.BoxWhisker(points['y']) << hv.BoxWhisker(points['x'])
Explanation: BoxWhisker Elements may also be used to represent a distribution as a marginal plot by adjoining it using <<.
End of explanation
np.random.seed(1)
data = [np.random.normal() for i in range(10000)]
frequencies, edges = np.histogram(data, 20)
hv.Histogram(frequencies, edges)
Explanation: Histogram <a id='Histogram'></a>
End of explanation
%%opts Histogram [projection='polar' show_grid=True]
data = [np.random.rand()*np.pi*2 for i in range(100)]
frequencies, edges = np.histogram(data, 20)
hv.Histogram(frequencies, edges, kdims=['Angle'])
Explanation: Histograms partition the x axis into discrete (but not necessarily regular) bins, showing counts in each as a bar.
Almost all Element types, including Histogram, may be projected onto a polar axis by supplying projection='polar' as a plot option.
End of explanation
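Because the bins need not be regular, a short sketch passing explicit, unevenly spaced edges through np.histogram and on to Histogram:
data = np.random.randn(10000)
edges = [-4, -2, -1, -0.5, 0, 0.5, 1, 2, 4]                     # irregular bin edges
frequencies, edges = np.histogram(data, bins=edges)
hv.Histogram(frequencies, edges, kdims=['Value'])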
%%opts Scatter (color='k', marker='s', s=50)
np.random.seed(42)
points = [(i, np.random.random()) for i in range(20)]
hv.Scatter(points) + hv.Scatter(points)[12:20]
Explanation: Scatter <a id='Scatter'></a>
End of explanation
np.random.seed(12)
points = np.random.rand(50,2)
hv.Points(points) + hv.Points(points)[0.6:0.8,0.2:0.5]
Explanation: Scatter is the discrete equivalent of Curve, showing y values for discrete x values selected. See Points for more information.
The marker shape specified above can be any supported by matplotlib, e.g. s, d, or o; the other options select the color and size of the marker. For convenience with the bokeh backend, the matplotlib marker options are supported using a compatibility function in HoloViews.
Points <a id='Points'></a>
End of explanation
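The same marker options can also be set from plain Python, without the %%opts magic, by calling the element with a style dictionary; a minimal sketch:
points = [(i, np.random.random()) for i in range(20)]
hv.Scatter(points)(style=dict(color='r', marker='^', s=80))     # matplotlib-style color, marker shape, and size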
for o in [hv.Points(points,name="Points "), hv.Scatter(points,name="Scatter")]:
for d in ['key','value']:
print("%s %s_dimensions: %s " % (o.name, d, o.dimensions(d,label=True)))
Explanation: As you can see, Points is very similar to Scatter, and can produce some plots that look identical. However, the two Elements are very different semantically. For Scatter, the dots each show a dependent variable y for some x, such as in the Scatter example above where we selected regularly spaced values of x and then created a random number as the corresponding y. I.e., for Scatter, the y values are the data; the xs are just where the data values are located. For Points, both x and y are independent variables, known as key_dimensions in HoloViews:
End of explanation
%%opts Points [color_index=2 size_index=3 scaling_method="width" scaling_factor=10]
np.random.seed(10)
data = np.random.rand(100,4)
points = hv.Points(data, vdims=['z', 'alpha'])
points + points[0.3:0.7, 0.3:0.7].hist()
Explanation: The Scatter object expresses a dependent relationship between x and y, making it useful for combining with other similar Chart types, while the Points object expresses the relationship of two independent keys x and y with optional vdims (zero in this case), which makes Points objects meaningful to combine with the Raster types below.
Of course, the vdims need not be empty for Points; here is an example with two additional quantities for each point, as value_dimensions z and α visualized as the color and size of the dots, respectively. The point sizes can be tweaked using the option scaling_factor, which determines the amount by which each point width or area is scaled, depending on the value of scaling_method.
End of explanation
%%opts Spikes (alpha=0.4)
xs = np.random.rand(50)
ys = np.random.rand(50)
hv.Points((xs, ys)) * hv.Spikes(xs)
Explanation: Such a plot wouldn't be meaningful for Scatter, but is a valid use for Points, where the x and y locations are independent variables representing coordinates, and the "data" is conveyed by the size and color of the dots.
Spikes <a id='Spikes'></a>
Spikes represent any number of horizontal or vertical line segments with fixed or variable heights. There are a number of disparate uses for this type. First of all, they may be used as a rugplot to give an overview of a one-dimensional distribution. They may also be useful in more domain-specific cases, such as visualizing spike trains for neurophysiology or spectrograms in physics and chemistry applications.
In the simplest case, a Spikes object represents coordinates in a 1D distribution:
End of explanation
%%opts Spikes (cmap='Reds')
hv.Spikes(np.random.rand(20, 2), kdims=['Mass'], vdims=['Intensity'])
Explanation: When supplying two dimensions to the Spikes object, the second dimension will be mapped onto the line height. Optionally, you may also supply a cmap and color_index to map color onto one of the dimensions. This way we can, for example, plot a mass spectrogram:
End of explanation
%%opts Spikes NdOverlay [show_legend=False]
hv.NdOverlay({i: hv.Spikes(np.random.randint(0, 100, 10), kdims=['Time'])(plot=dict(position=0.1*i))
for i in range(10)})(plot=dict(yticks=[((i+1)*0.1-0.05, i) for i in range(10)]))
Explanation: Another possibility is to draw a number of spike trains as you would encounter in neuroscience. Here we generate 10 separate random spike trains and distribute them evenly across the space by setting their position. By also declaring some yticks, each spike train can be labeled individually:
End of explanation
%%opts Spikes (alpha=0.05) [spike_length=1]
points = hv.Points(np.random.randn(500, 2))
points << hv.Spikes(points['y']) << hv.Spikes(points['x'])
Explanation: Finally, we may use Spikes to visualize marginal distributions as adjoined plots using the << adjoin operator:
End of explanation
y,x = np.mgrid[-10:10,-10:10] * 0.25
sine_rings = np.sin(x**2+y**2)*np.pi+np.pi
exp_falloff = 1/np.exp((x**2+y**2)/8)
vector_data = [x,y,sine_rings, exp_falloff]
hv.VectorField(vector_data)
Explanation: VectorField <a id='VectorField'></a>
End of explanation
%%opts VectorField.A [color_dim='angle'] VectorField.M [color_dim='magnitude']
hv.VectorField(vector_data, group='A') + hv.VectorField(vector_data, group='M')
Explanation: As you can see above, here the x and y positions are chosen to make a regular grid. The arrow angles follow a sinusoidal ring pattern, and the arrow lengths fall off exponentially from the center, so this plot has four dimensions of data (direction and length for each x,y position).
Using the IPython %%opts cell-magic (described in the Options tutorial, along with the Python equivalent), we can also use color as a redundant indicator to the direction or magnitude:
End of explanation
n=20
x=np.linspace(1,3,n)
y=np.sin(np.linspace(0,2*np.pi,n))/4
hv.VectorField([x,y,x*5,np.ones(n)]) * hv.VectorField([x,-y,x*5,np.ones(n)])
Explanation: The vector fields above were sampled on a regular grid, but any collection of x,y values is allowed:
End of explanation
import numpy as np
np.random.seed(42)
points = [(i, np.random.normal()) for i in range(800)]
hv.Scatter(points).hist()
Explanation: SideHistogram <a id='SideHistogram'></a>
The .hist method conveniently adjoins a histogram to the side of any Chart, Surface, or Raster component, as well as many of the container types (though it would be reporting data from one of these underlying Element types). For a Raster using color or grayscale to show values (see Raster section below), the side histogram doubles as a color bar or key.
End of explanation
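The .hist method also accepts a dimension argument selecting which dimension is binned, along with num_bins; a small sketch on a Points object (the 'x' name is simply its first key dimension):
points = hv.Points(np.random.randn(500, 2))
points.hist(dimension='x', num_bins=30)                         # adjoin a histogram of the x values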
%%opts Surface (cmap='jet' rstride=20 cstride=2)
hv.Surface(np.sin(np.linspace(0,100*np.pi*2,10000)).reshape(100,100))
Explanation: Chart3D Elements <a id='Chart3D Elements'></a>
Surface <a id='Surface'></a>
End of explanation
%%opts Scatter3D [azimuth=40 elevation=20]
y,x = np.mgrid[-5:5, -5:5] * 0.1
heights = np.sin(x**2+y**2)
hv.Scatter3D(list(zip(x.flat, y.flat, heights.flat)))
Explanation: Surface is used for a set of gridded points whose associated value dimension represents samples from a continuous surface; it is the equivalent of a Curve but with two key dimensions instead of just one.
Scatter3D <a id='Scatter3D'></a>
End of explanation
%%opts Trisurface [fig_size=200] (cmap='hot_r')
hv.Trisurface((x.flat,y.flat,heights.flat))
Explanation: Scatter3D is the equivalent of Scatter but for two key dimensions, rather than just one.
Trisurface <a id='Trisurface'></a>
The Trisurface Element renders any collection of 3D points as a Surface by applying Delaunay triangulation. It thus supports arbitrary, non-gridded data, but it does not support indexing to find data values, since finding the closest ones would require a search.
End of explanation
x,y = np.meshgrid(np.linspace(-5,5,101), np.linspace(5,-5,101))
Explanation: Raster Elements <a id='Raster Elements'></a>
A collection of raster image types
The second large class of Elements is the raster elements. Like Points and unlike the other Chart elements, Raster Elements live in a 2D key-dimensions space. For the Image, RGB, and HSV elements, the coordinates of this two-dimensional key space are defined in a continuously indexable coordinate system. We can use np.meshgrid to define the appropriate sampling along the x and y dimensions:
End of explanation
hv.Raster(np.sin(x**2+y**2))
Explanation: Raster <a id='Raster'></a>
A Raster is the base class for image-like Elements, but may be used directly to visualize 2D arrays using a color map. The coordinate system of a Raster is the raw indexes of the underlying array, with integer values always starting from (0,0) in the top left, with default extents corresponding to the shape of the array. The Image subclass visualizes similarly, but using a continuous Cartesian coordinate system suitable for an array that represents some underlying continuous region.
End of explanation
n = 21
xs = np.logspace(1, 3, n)
ys = np.linspace(1, 10, n)
hv.QuadMesh((xs, ys, np.random.rand(n-1, n-1)))
Explanation: QuadMesh <a id='QuadMesh'></a>
The basic QuadMesh is a 2D grid of bins specified by x-/y-values (either a regular sampling or arbitrarily spaced bin edges) and an associated 2D array containing the bin values. The coordinate system of a QuadMesh is defined by the bin edges; therefore any index falling into a binned region will return the appropriate value. Unlike Image objects, slices must be inclusive of the bin edges.
End of explanation
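A short sketch of the edge-inclusive slicing described above, reusing the regular-sampling QuadMesh construction from the previous cell (the slice bounds are arbitrary illustrative values):
n = 21
xs = np.logspace(1, 3, n)
ys = np.linspace(1, 10, n)
qmesh = hv.QuadMesh((xs, ys, np.random.rand(n-1, n-1)))
qmesh[20:400, 2:6]                                              # selects whole bins, inclusive of the bin edges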
coords = np.linspace(-1.5,1.5,n)
X,Y = np.meshgrid(coords, coords);
Qx = np.cos(Y) - np.cos(X)
Qz = np.sin(Y) + np.sin(X)
Z = np.sqrt(X**2 + Y**2)
hv.QuadMesh((Qx, Qz, Z))
Explanation: QuadMesh may also be used to represent an arbitrary mesh of quadrilaterals by supplying three separate 2D arrays representing the coordinates of each quadrilateral in a 2D space. Note that when using QuadMesh in this mode, slicing and indexing semantics and most operations will currently not work.
End of explanation
data = {(chr(65+i),chr(97+j)): i*j for i in range(5) for j in range(5) if i!=j}
hv.HeatMap(data).sort()
Explanation: HeatMap <a id='HeatMap'></a>
A HeatMap displays like a typical raster image, but the input is a dictionary indexed with two-dimensional keys, not a Numpy array or Pandas dataframe. As many rows and columns as required will be created to display the values in an appropriate grid format. Values unspecified are left blank, and the keys can be any Python datatype (not necessarily numeric). One typical usage is to show values from a set of experiments, such as a parameter space exploration, and many other such visualizations are shown in the Containers and Exploring Data tutorials. Each value in a HeatMap is labeled explicitly by default, and so this component is not meant for very large numbers of samples. With the default color map, high values (in the upper half of the range present) are colored orange and red, while low values (in the lower half of the range present) are colored shades of blue.
End of explanation
bounds=(-2,-3,5,2)   # Coordinate system: (left, bottom, right, top)
xs,ys = np.meshgrid(np.linspace(-2,5,50), np.linspace(2,-3, 30))
(hv.Image(np.sin(xs)+ys, bounds=bounds)
+ hv.Image(np.sin(xs)+ys, bounds=bounds)[0:3, -2.5:2])
Explanation: Image <a id='Image'></a>
Like Raster, a HoloViews Image allows you to view 2D arrays using an arbitrary color map. Unlike Raster, an Image is associated with a 2D coordinate system in continuous space, which is appropriate for values sampled from some underlying continuous distribution (as in a photograph or other measurements from locations in real space). Slicing, sampling, etc. on an Image all use this continuous space, whereas the corresponding operations on a Raster work on the raw array coordinates.
To make the coordinate system clear, we'll define two arrays called xs and ys with a non-square aspect and map them through a simple function that illustrate how these inputs relate to the coordinate system:
End of explanation
r = 0.5*np.sin(np.pi +3*x**2+y**2)+0.5
g = 0.5*np.sin(x**2+2*y**2)+0.5
b = 0.5*np.sin(np.pi/2+x**2+y**2)+0.5
hv.RGB(np.dstack([r,g,b]))
Explanation: Notice how, because our declared coordinate system is continuous, we can slice with any floating-point value we choose. The appropriate range of the samples in the input numpy array will always be displayed, whether or not there are samples at those specific floating-point values.
It is also worth noting that the name Image can clash with other common libraries, which is one reason to avoid unqualified imports like from holoviews import *. For instance, the Python Imaging Library provides an Image module, and IPython itself supplies an Image class in IPython.display. Python namespaces allow you to avoid such problems, e.g. using from PIL import Image as PILImage or using import holoviews as hv and then hv.Image(), as we do in these tutorials.
RGB <a id='RGB'></a>
The RGB element is an Image that supports red, green, blue channels:
End of explanation
%%opts Image (cmap='gray')
hv.Image(r,label="R") + hv.Image(g,label="G") + hv.Image(b,label="B")
Explanation: You can see how the RGB object is created from the original channels:
End of explanation
%%opts Image (cmap='gray')
mask = 0.5*np.sin(0.2*(x**2+y**2))+0.5
rgba = hv.RGB(np.dstack([r,g,b,mask]))
bg = hv.Image(0.5*np.cos(x*3)+0.5, label="Background") * hv.VLine(x=0,label="Background")
overlay = bg*rgba
overlay.label="RGBA Overlay"
bg + hv.Image(mask,label="Mask") + overlay
Explanation: RGB also supports an optional alpha channel, which will be used as a mask revealing or hiding any Elements it is overlaid on top of:
End of explanation
h = 0.5 + np.sin(0.2*(x**2+y**2)) / 2.0
s = 0.5*np.cos(x*3)+0.5
v = 0.5*np.cos(y*3)+0.5
hv.HSV(np.dstack([h, s, v]))
Explanation: HSV <a id='HSV'></a>
HoloViews makes it trivial to work in any color space that can be converted to RGB by making a simple subclass of RGB as appropriate. For instance, we also provide the HSV (hue, saturation, value) color space, which is useful for plotting cyclic data (as the Hue) along with two additional dimensions (controlling the saturation and value of the color, respectively):
End of explanation
%%opts Image (cmap='gray')
hv.Image(h, label="H") + hv.Image(s, label="S") + hv.Image(v, label="V")
Explanation: You can see how this is created from the original channels:
End of explanation
hv.ItemTable([('Age', 10), ('Weight',15), ('Height','0.8 meters')])
Explanation: Tabular Elements <a id='Tabular Elements'></a>
General data structures for holding arbitrary information
ItemTable <a id='ItemTable'></a>
An ItemTable is an ordered collection of key, value pairs. It can be used to directly visualize items in a tabular format where the items may be supplied as an OrderedDict or a list of (key,value) pairs. A standard Python dictionary can be easily visualized using a call to the .items() method, though the entries in such a dictionary are not kept in any particular order, and so you may wish to sort them before display. One typical usage for an ItemTable is to list parameter values or measurements associated with an adjacent Element.
End of explanation
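A tiny sketch of the dictionary usage mentioned above, sorting the items before display (the parameter names and values are just placeholders):
params = {'kernel': 'rbf', 'C': 1.0, 'gamma': 0.1}              # an ordinary, unordered Python dict
hv.ItemTable(sorted(params.items()))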
keys = [('M',10), ('M',16), ('F',12)]
values = [(15, 0.8), (18, 0.6), (10, 0.8)]
table = hv.Table(zip(keys,values),
kdims = ['Gender', 'Age'],
vdims=['Weight', 'Height'])
table
Explanation: Table <a id='Table'></a>
A table is more general than an ItemTable, as it allows multi-dimensional keys and multidimensional values.
End of explanation
table.select(Gender='M') + table.select(Gender='M', Age=10)
Explanation: Note that you can use select with tables, and once you select using a full, multidimensional key, you get an ItemTable (shown on the right):
End of explanation
table.select(Gender='M').to.curve(kdims=["Age"], vdims=["Weight"])
Explanation: The Table is used as a common data structure that may be converted to any other HoloViews data structure using the TableConversion class.
The functionality of the TableConversion class may be conveniently accessed using the .to property. For more extended usage of table conversion see the Columnar Data and Pandas Conversion Tutorials.
End of explanation
scene = hv.RGB.load_image('../assets/penguins.png')
Explanation: Annotation Elements <a id='Annotation Elements'></a>
Useful information that can be overlaid onto other components
Annotations are components designed to be overlaid on top of other Element objects. To demonstrate annotation and paths, we will be drawing many of our elements on top of an RGB Image:
End of explanation
scene * hv.VLine(-0.05) + scene * hv.HLine(-0.05)
Explanation: VLine and HLine <a id='VLine'></a><a id='HLine'></a>
End of explanation
points = [(-0.3, -0.3), (0,0), (0.25, -0.25), (0.3, 0.3)]
codes = [1,4,4,4]
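# matplotlib path codes: 1 = MOVETO, 4 = CURVE4 (cubic Bezier control and end points)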
scene * hv.Spline((points,codes)) * hv.Curve(points)
Explanation: Spline <a id='Spline'></a>
The Spline annotation is used to draw Bezier splines using the same semantics as matplotlib splines. In the overlay below, the spline is in dark blue and the control points are in light blue.
End of explanation
scene * hv.Text(0, 0.2, 'Adult\npenguins') + scene * hv.Arrow(0,-0.1, 'Baby penguin', 'v')
Explanation: Text and Arrow <a id='Text'></a><a id='Arrow'></a>
End of explanation
angle = np.linspace(0, 2*np.pi, 100)
baby = list(zip(0.15*np.sin(angle), 0.2*np.cos(angle)-0.2))
adultR = [(0.25, 0.45), (0.35,0.35), (0.25, 0.25), (0.15, 0.35), (0.25, 0.45)]
adultL = [(-0.3, 0.4), (-0.3, 0.3), (-0.2, 0.3), (-0.2, 0.4),(-0.3, 0.4)]
scene * hv.Path([adultL, adultR, baby]) * hv.Path([baby])
Explanation: Paths <a id='Path Elements'></a>
Line-based components that can be overlaid onto other components
Paths are a subclass of annotations that involve drawing line-based components on top of other elements. Internally, Path Element types hold a list of Nx2 arrays, specifying the x/y-coordinates along each path. The data may be supplied in a number of ways, including:
A list of Nx2 numpy arrays.
A list of lists containing x/y coordinate tuples.
A tuple containing an array of length N with the x-values and a second array of shape NxP, where P is the number of paths.
A list of tuples each containing separate x and y values.
Path <a id='Path'></a>
A Path object is actually a collection of paths which can be arbitrarily specified. Although there may be multiple unconnected paths in a single Path object, they will all share the same style. Only by overlaying multiple Path objects do you iterate through the defined color cycle (or any other style options that have been defined).
End of explanation
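As a brief illustration of the input formats listed above, the same pair of diagonal lines can be supplied either as Nx2 arrays or as separate x/y value tuples; this is only a sketch:
line1 = np.array([(0, 0), (1, 1)])
line2 = np.array([(0, 1), (1, 0)])
path_from_arrays = hv.Path([line1, line2])                      # a list of Nx2 arrays
path_from_tuples = hv.Path([(line1[:, 0], line1[:, 1]),
                            (line2[:, 0], line2[:, 1])])        # a list of (x-values, y-values) tuples
path_from_arrays + path_from_tuples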
def circle(radius, x=0, y=0):
angles = np.linspace(0, 2*np.pi, 100)
return np.array( list(zip(x+radius*np.sin(angles), y+radius*np.cos(angles))))
hv.Image(np.sin(x**2+y**2)) * hv.Contours([circle(0.22)], level=0) * hv.Contours([circle(0.33)], level=1)
Explanation: Contours <a id='Contours'></a>
A Contours object is similar to Path object except each of the path elements is associated with a numeric value, called the level. Sadly, our penguins are too complicated to give a simple example so instead we will simply mark the first couple of rings of our earlier ring pattern:
End of explanation
%%opts Polygons (cmap='hot' edgecolor='k' linewidth=2)
np.random.seed(35)
hv.Polygons([np.random.rand(4,2)], level=0.5) *\
hv.Polygons([np.random.rand(4,2)], level=1.0) *\
hv.Polygons([np.random.rand(4,2)], level=1.5) *\
hv.Polygons([np.random.rand(4,2)], level=2.0)
Explanation: Polygons <a id='Polygons'></a>
A Polygons object is similar to a Contours object except that each supplied path is closed and filled. Just like Contours, optionally a level may be supplied; the Polygons will then be colored according to the supplied cmap. Non-finite values such as np.NaN or np.inf will default to the supplied facecolor.
Polygons with values can be used to build heatmaps with arbitrary shapes.
End of explanation
def rectangle(x=0, y=0, width=1, height=1):
return np.array([(x,y), (x+width, y), (x+width, y+height), (x, y+height)])
(hv.Polygons([rectangle(width=2), rectangle(x=6, width=2)])(style={'facecolor': '#a50d0d'})
* hv.Polygons([rectangle(x=2, height=2), rectangle(x=5, height=2)])(style={'facecolor': '#ffcc00'})
* hv.Polygons([rectangle(x=3, height=2, width=2)])(style={'facecolor': 'c', 'hatch':'x'}))
Explanation: Polygons without a value are useful as annotation, but also allow us to draw arbitrary shapes.
End of explanation
scene * hv.Bounds(0.2) * hv.Bounds((0.45, 0.45, 0.2, 0.2))
Explanation: Bounds <a id='Bounds'></a>
A bounds is a rectangular area specified as a tuple in (left, bottom, right, top) format. It is useful for denoting a region of interest defined by some bounds, whereas Box (below) is useful for drawing a box at a specific location.
End of explanation
scene * hv.Box( -0.25, 0.3, 0.3, aspect=0.5) * hv.Box( 0, -0.2, 0.1) + \
scene * hv.Ellipse(-0.25, 0.3, 0.3, aspect=0.5) * hv.Ellipse(0, -0.2, 0.1)
Explanation: Box <a id='Box'></a> and Ellipse <a id='Ellipse'></a>
A Box is similar to a Bounds except you specify the box position, width, and aspect ratio instead of the coordinates of the box corners. An Ellipse is specified just as for Box, but has a rounded shape.
End of explanation |
4,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A workflow for classifying a point cloud using point features
The following example will run through the functions to classify a point cloud based on the point neighborhood attributes. This is a very simple example but this of course could be extended to extract very useful information using different classes and subsequent querying of the constituent segments.
The point cloud in question can be downloaded here
Step1: Firstly we will calculate the features required to characterise the pointcloud.
These are calculated on 3 scales which by default are k=10, 20 & 30 nearest-neighbours.
If wish to alter this go ahead!
Step2: Next we can get training as a numpy array for creating our model
Step3: Next we create a model, this will be a keras-based dense net in this instance but does not have to be.
The nnt structure is 32 > 16 > 8 > 32.
This is not necessarily a good example of a dense nnt structure and is used merely for demo purposes.
Step4: Finally we classify the point cloud | Python Code:
from geospatial_learn import learning as ln
incloud = "/path/to/Llandinam.ply"
Explanation: A workflow for classifying a point cloud using point features
The following example will run through the functions to classify a point cloud based on the point neighborhood attributes. This is a very simple example, but it could of course be extended to extract very useful information using different classes and subsequent querying of the constituent segments.
The point cloud in question can be downloaded here:
https://drive.google.com/file/d/1DP7wkTqemfux2UkAD_8gZUnzm5GUfShZ/view?usp=sharing
It is derived from UAV imagery via structure from motion. Unzip it and have a look at it in CloudCompare, setting the scalar field to 'training'.
The task will classify the scene into roofs, building facades, trees/vegetation and ground classes, which are represented by the training samples seen in the screenshot.
<img src="figures/llanlabel.png" style="height:300px">
Import the learning module which contains all we need
End of explanation
ln.ply_features(incloud)
Explanation: Firstly we will calculate the features required to characterise the pointcloud.
These are calculated on 3 scales which by default are k=10, 20 & 30 nearest-neighbours.
If you wish to alter this, go ahead!
End of explanation
training = ln.get_training_ply(incloud)
Explanation: Next we can get training as a numpy array for creating our model
End of explanation
model = 'path/to/model.h5'
ln.create_model(training, model, clf='keras', cv=5)
Explanation: Next we create a model, this will be a keras-based dense net in this instance but does not have to be.
The net structure is 32 > 16 > 8 > 32.
This is not necessarily a good example of a dense net structure and is used merely for demo purposes.
End of explanation
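For readers unfamiliar with what a 32 > 16 > 8 > 32 dense net looks like, the following is a purely illustrative Keras sketch; it is an assumption made for clarity and is not the code that geospatial_learn's create_model actually runs internally:
from tensorflow.keras import layers, models

def sketch_dense_net(n_features, n_classes):
    # hypothetical stand-alone illustration of the quoted layer widths
    net = models.Sequential([
        layers.Dense(32, activation='relu', input_shape=(n_features,)),
        layers.Dense(16, activation='relu'),
        layers.Dense(8, activation='relu'),
        layers.Dense(32, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),
    ])
    net.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return net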
ln.classify_ply(incloud, model, train_field="training", class_field='label',
                rgb=True)
Explanation: Finally we classify the point cloud
End of explanation |
4,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Route Computation and Analysis using Batfish
Network engineers routinely need to validate routing and forwarding in the network. They often do that by connecting to multiple network devices and executing a series of show route commands. This distributed debugging is highly complex even in a moderately-sized network. Batfish makes this task extremely simple by providing an easy-to-query, centralized view of routing tables in the network.
In this notebook, we will look at how you can extract routing information from Batfish.
Check out a video demo of this notebook here.
Step1: Initializing the Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br>
More example networks are available in the networks folder of the Batfish repository.
Step2: The network snapshot that we initialized above is illustrated below. You can download/view devices' configuration files here.
All of the information we will show you in this notebook is dynamically computed by Batfish based on the configuration files for the network devices.
View Routing Tables for ALL devices and ALL VRFs
Batfish makes all routing tables in the network easily accessible. Let's take a look at how you can retrieve the specific information you want.
Step3: We are not going to print this table as it has a large number of entries.
View Routing Tables for default VRF on AS1 border routers
There are 2 ways that we can get the desired subset of data
Step4: View BGP learnt routes for default VRF on AS1 border routers
Step5: View BGP learnt routes for ALL VRFs on ALL routers with Metric >=50
We cannot pass in metric as a parameter to Batfish, so this task is best handled with the Pandas API.
Step6: View the routing entries for network 1.0.2.0/24 on ALL routers in ALL VRFs
Step7: Using Panda's filtering it is easy to retrieve the list of nodes which have the network in the routing table for at least 1 VRF. This type of processing should always be done using the Pandas APIs.
Step8: Now we will retrieve the list of nodes that do NOT have this prefix in their routing table. This is easy to do with the Pandas groupby and filter functions.
Step9: The only devices that do not have a route to 1.0.2.0/24 are the 2 hosts in the snapshot. This is expected, as they should just have a default route. Let's verify that. | Python Code:
# Import packages
%run startup.py
bf = Session(host="localhost")
Explanation: Introduction to Route Computation and Analysis using Batfish
Network engineers routinely need to validate routing and forwarding in the network. They often do that by connecting to multiple network devices and executing a series of show route commands. This distributed debugging is highly complex even in a moderately-sized network. Batfish makes this task extremely simple by providing an easy-to-query, centralized view of routing tables in the network.
In this notebook, we will look at how you can extract routing information from Batfish.
Check out a video demo of this notebook here.
End of explanation
# Initialize a network and snapshot
NETWORK_NAME = "example_network"
SNAPSHOT_NAME = "example_snapshot"
SNAPSHOT_PATH = "networks/example"
bf.set_network(NETWORK_NAME)
bf.init_snapshot(SNAPSHOT_PATH, name=SNAPSHOT_NAME, overwrite=True)
Explanation: Initializing the Network and Snapshot
SNAPSHOT_PATH below can be updated to point to a custom snapshot directory, see the Batfish instructions for how to package data for analysis.<br>
More example networks are available in the networks folder of the Batfish repository.
End of explanation
# Get routing tables for all nodes and VRFs
routes_all = bf.q.routes().answer().frame()
Explanation: The network snapshot that we initialized above is illustrated below. You can download/view devices' configuration files here.
All of the information we will show you in this notebook is dynamically computed by Batfish based on the configuration files for the network devices.
View Routing Tables for ALL devices and ALL VRFs
Batfish makes all routing tables in the network easily accessible. Let's take a look at how you can retrieve the specific information you want.
End of explanation
?bf.q.routes
# Get the routing table for the 'default' VRF on border routers of as1
# using BF parameters
routes_as1border = bf.q.routes(nodes="/as1border/", vrfs="default").answer().frame()
# Get the routing table for the 'default' VRF on border routers of as1
# using Pandas filtering
routes_as1border = routes_all[(routes_all['Node'].str.contains('as1border')) & (routes_all['VRF'] == 'default')]
routes_as1border
Explanation: We are not going to print this table as it has a large number of entries.
View Routing Tables for default VRF on AS1 border routers
There are 2 ways that we can get the desired subset of data:
Option 1) Only request that information from Batfish by passing in parameters into the routes() question. This is useful to do when you need to reduce the amount of data being returned, but is limited to regex filtering based on VRF, Node, Protocol and Network.
Option 2) Filter the output of the routes() question using the Pandas APIs.
End of explanation
# Getting BGP routes in the routing table for the 'default' VRF on border routers of as1
# using BF parameters
routes_as1border_bgp = bf.q.routes(nodes="/as1border/", vrfs="default", protocols="bgp").answer().frame()
# Geting BGP routes in the routing table for the 'default' VRF on border routers of as1
# using Pandas filtering
routes_as1border_bgp = routes_all[(routes_all['Node'].str.contains('as1border')) & (routes_all['VRF'] == 'default') & (routes_all['Protocol'] == 'bgp')]
routes_as1border_bgp
Explanation: View BGP learnt routes for default VRF on AS1 border routers
End of explanation
routes_filtered = routes_all[(routes_all['Protocol'] == 'bgp') & (routes_all['Metric'] >= 50)]
routes_filtered
Explanation: View BGP learnt routes for ALL VRFs on ALL routers with Metric >=50
We cannot pass in metric as a parameter to Batfish, so this task is best handled with the Pandas API.
End of explanation
# grab the route table entry for network 1.0.2.0/24 from all routers in all VRFs
# using BF parameters
routes_filtered = bf.q.routes(network="1.0.2.0/24").answer().frame()
# grab the route table entry for network 1.0.2.0/24 from all routers in all VRFs
# using Pandas filtering
routes_filtered = routes_all[routes_all['Network'] == "1.0.2.0/24"]
routes_filtered
Explanation: View the routing entries for network 1.0.2.0/24 on ALL routers in ALL VRFs
End of explanation
# Get the list of nodes that have the network 1.0.2.0/24 in at least 1 VRF
# the .unique function removes duplicate entries that would have been returned if the network was in multiple VRFs on a node or there were
# multiple route entries for the network (ECMP)
print(sorted(routes_filtered["Node"].unique()))
Explanation: Using Pandas filtering it is easy to retrieve the list of nodes which have the network in the routing table for at least 1 VRF. This type of processing should always be done using the Pandas APIs.
End of explanation
# Group all routes by Node and filter for those that don't have '1.0.2.0/24'
routes_filtered = routes_all.groupby('Node').filter(lambda x: all(x['Network'] != '1.0.2.0/24'))
# Get the unique node names and sort the list
print(sorted(routes_filtered["Node"].unique()))
Explanation: Now we will retrieve the list of nodes that do NOT have this prefix in their routing table. This is easy to do with the Pandas groupby and filter functions.
End of explanation
routes_all[routes_all['Node'].str.contains('host')]
Explanation: The only devices that do not have a route to 1.0.2.0/24 are the 2 hosts in the snapshot. This is expected, as they should just have a default route. Let's verify that.
End of explanation |
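As a small follow-up (an editorial sketch, not part of the original notebook), the same Pandas pattern used earlier can confirm that the hosts only carry a default route; representing the default route as the 0.0.0.0/0 prefix is an assumption about this snapshot.
host_routes = routes_all[routes_all['Node'].str.contains('host')]
print(sorted(host_routes["Network"].unique()))  # expected to show only the default prefix, e.g. '0.0.0.0/0'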
4,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Comparing Spots in PHOEBE 2 vs PHOEBE Legacy
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Adding Spots and Compute Options
Step3: Let's use the external atmospheres available for both phoebe1 and phoebe2
Step4: Plotting | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: Comparing Spots in PHOEBE 2 vs PHOEBE Legacy
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_spot(component='primary', relteff=0.8, radius=20, colat=45, colon=90, feature='spot01')
b.add_dataset('lc', times=np.linspace(0,1,101))
b.add_compute('phoebe', irrad_method='none', compute='phoebe2')
b.add_compute('legacy', refl_num=0, compute='phoebe1')
Explanation: Adding Spots and Compute Options
End of explanation
b.set_value_all('atm', 'extern_planckint')
b.set_value_all('ld_func', 'logarithmic')
b.set_value_all('ld_coeffs', [0.0, 0.0])
b.run_compute('phoebe2', model='phoebe2model')
b.run_compute('phoebe1', model='phoebe1model')
Explanation: Let's use the external atmospheres available for both phoebe1 and phoebe2
End of explanation
axs, artists = b.plot()
legend = plt.legend()
ylims = plt.ylim(1.94, 2.02)
Explanation: Plotting
End of explanation |
4,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Jupyter Notebooks
Step1: Next we will use a code cell in which we store a markdown expression in a Python variable for later display.
Step2: The display method renders HTML in the browser; that HTML is generated by another method, in this case the Markdown method, which converts markdown to HTML and stores it in an IPython object for that purpose.
In the same way, more complex document elements, such as tables, can be generated automatically.
Step3: In this example the variables were assigned statically, but you can write code that builds the content of each cell, joining its string representation with the strings needed to form the markdown pattern of a table.
Equations can likewise be rendered systematically (a feature exploited by algebra packages such as Sympy).
Step4: Other supported display options are | Python Code:
from IPython.display import display, Latex, Markdown
Explanation: Jupyter Notebooks: Intermediate
Jupyter can integrate its HTML display capabilities with the IPython.display module. In this way, text, images, video, and equations can be displayed systematically by combining it with patterns generated from code.
End of explanation
mark_text = "_Ejemplo_ de **markdown** \nHola mundo"
display(Markdown(mark_text))
Explanation: Next we will use a code cell in which we store a markdown expression in a Python variable for later display.
End of explanation
fila1 = "|columna 1|columna 2|"
filaalineacion = "|---:|:---:|"
fila2 = "|der|cen|"
display(Markdown(fila1+"\n"+filaalineacion+"\n"+fila2))
Explanation: The display method renders HTML in the browser; that HTML is generated by another method, in this case the Markdown method, which converts markdown to HTML and stores it in an IPython object for that purpose.
In the same way, more complex document elements, such as tables, can be generated automatically.
End of explanation
latexexp = "$$\\frac{mc^2}{2}$$"
display(Latex(latexexp))
Explanation: In this example the variables were assigned statically, but you can write code that builds the content of each cell, joining its string representation with the strings needed to form the markdown pattern of a table.
Equations can likewise be rendered systematically (a feature exploited by algebra packages such as Sympy).
End of explanation
import IPython.display as Display
dir(Display)
Explanation: Other supported display options are:
End of explanation |
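For illustration (an editorial addition), two of the other display classes listed by dir(Display) can be used like this; the image path is hypothetical.
from IPython.display import HTML, Image
display(HTML("<b>Bold text rendered as HTML</b>"))
display(Image(filename="figure.png"))  # renders a local image file (hypothetical path)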
4,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Assignment 3a
Step2: Working with external modules
Exercise 2
NLTK offers a way of using WordNet in Python. Do some research (using google, because quite frankly, that's what we do very often) and see if you can find out how to import it. WordNet is a computational lexicon which organizes words according to their senses (collected in synsets). See if you can print all the synset definitions of the lemma dog.
Make sure you have run the following cell to make sure you have installed WordNet
Step3: Working with python scripts
Exercise 3
a.) Define a function called count, which determines how often each word occurs in a string. Do not use NLTK just yet. Find a way to test it.
Write a helper-function called preprocess, which removes the punctuation specified by the user, and returns the same string without the unwanted characters. You call the function preprocess inside the count function.
Remember that there are string methods that you can use to get rid of unwanted characters. Test the preprocess function using the following string 'this is a (tricky) test'.
Remember how we used dictionaries to count words? If not, have a look at Chapter 10 - Dictionaries.
make sure you split the string on a space character ' '. You loop over the list to count the words.
Test your function using an example string, which will tell you whether it fulfills the requirements (remove punctuation, split, count). You will get a point for good testing.
b.) Create a python script
Use your editor to create a Python script called count_words.py. Place the function definition of the count function in count_words.py. Also put a function call of the count function in this file to test it. Place your helper function definition, i.e., preprocess, in a separate script called utils_3a.py. Import your helper function preprocess into count_words.py. Test whether everything works as expected by calling the script count_words.py from the terminal.
The function preprocess preprocesses the text by removing characters that are unwanted by the user. preprocess is called within the count function and hence builds upon the output from the preprocess function and creates a dictionary in which the key is a word and the value is the frequency of the word.
Please submit these scripts together with the other notebooks.
Don't forget to add docstrings to your functions.
Step4: Dealing with text files
Exercise 4
Playing with lyrics
a.) Write a function called load_text, which opens and reads a file and returns the text in the file. It should have the file path as a parameter. Test it by loading this file
Step6: Analyzing text with nltk
Exercise 5
Building a simple NLP pipeline
For this exercise, you will need NLTK. Don't forget to import it.
Write a function called tag_text, which takes raw text as input and returns the tagged text. To do this, make sure you follow the steps below | Python Code:
%%capture
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Data.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/images.zip
!wget https://github.com/cltl/python-for-text-analysis/raw/master/zips/Extra_Material.zip
!unzip Data.zip -d ../
!unzip images.zip -d ./
!unzip Extra_Material.zip -d ../
!rm Data.zip
!rm Extra_Material.zip
!rm images.zip
Explanation: <a href="https://colab.research.google.com/github/cltl/python-for-text-analysis/blob/colab/Assignments-colab/ASSIGNMENT_3a.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
# your code here
Explanation: Assignment 3a: Revision of block 3
Due: Friday the 1st of October 2021 14:30.
Please submit your assignment (notebooks of parts 3a and 3b + Python modules) as a single .zip file using Canvas (Assignments --> Assignment 3). Please put the notebooks for Assignment 3a and 3b as well as the Python modules (files ending with .py) in one folder, which you call ASSIGNMENT_3_FIRSTNAME_LASTNAME. Please zip this folder and upload it as your submission.
Please name your zip file with the following naming convention: ASSIGNMENT_3_FIRSTNAME_LASTNAME.zip
IMPORTANT NOTE:
* The students who follow the Bachelor version of this course, i.e., the course Introduction to Python for Humanities and Social Sciences (L_AABAALG075) as part of the minor Digital Humanities, do not have to do Exercises 3 and 4 of Assignment 3b
* The other students, i.e., who follow the Master version of course, which is Programming in Python for Text Analysis (L_AAMPLIN021), are required to do Exercises 3 and 4 of Assignment 3b
If you have questions about this topic, please contact us ([email protected]). Questions and answers will be collected on Piazza, so please check if your question has already been answered first.
In this block, we covered a lot of ground:
Chapter 12 - Importing external modules
Chapter 13 - Working with Python scripts
Chapter 14 - Reading and writing text files
Chapter 15 - Off to analyzing text
In this assignment, you will first complete a number of small exercises about each chapter to make sure you are familiar with the most important concepts. In the second part of the assignment, you will apply your newly acquired skills to write your very own text processing program (ASSIGNMENT-3b) :-). But don't worry, there will be instructions and hints along the way.
Can I use external modules other than the ones treated so far?
For now, please try to avoid it. All the exercises can be solved with what we have covered in block I, II, and III.
Functions & scope
Exercise 1:
Define a function called split_sort_text which takes one positional parameter called text (a string).
The function:
* splits the string on a space character, i.e., ' '
* returns all the unique words in alphabetical order as a list.
Hint 1: There is a specific python container which does not allow for duplicates and simply removes them. Use this one.
Hint 2: There is a function which sorts items in an iterable called 'sorted'. Look at the documentation to see how it is used.
Hint 3: Don't forget to write a docstring. Please make sure that the docstring generally explains with the input is, what the function does, and what the function returns. If you want, but this is not needed to receive full points, you can use reStructuredText.
End of explanation
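For reference, one possible sketch that satisfies the exercise description (an editorial example, not the official solution):
def split_sort_text(text):
    """Split text on spaces and return the unique words in alphabetical order."""
    return sorted(set(text.split(' ')))

print(split_sort_text('the quick brown fox jumps over the lazy dog'))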
import nltk
# uncomment the following line to download material including WordNet
# nltk.download('book')
# your code here
Explanation: Working with external modules
Exercise 2
NLTK offers a way of using WordNet in Python. Do some research (using google, because quite frankly, that's what we do very often) and see if you can find out how to import it. WordNet is a computational lexicon which organizes words according to their senses (collected in synsets). See if you can print all the synset definitions of the lemma dog.
Make sure you have run the following cell to make sure you have installed WordNet:
End of explanation
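A minimal illustrative sketch (editorial example, not the official solution) using NLTK's WordNet interface:
from nltk.corpus import wordnet as wn
for synset in wn.synsets('dog'):
    print(synset.name(), '-', synset.definition())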
# Feel free to use this cell to try out your code.
Explanation: Working with python scripts
Exercise 3
a.) Define a function called count, which determines how often each word occurs in a string. Do not use NLTK just yet. Find a way to test it.
Write a helper-function called preprocess, which removes the punctuation specified by the user, and returns the same string without the unwanted characters. You call the function preprocess inside the count function.
Remember that there are string methods that you can use to get rid of unwanted characters. Test the preprocess function using the following string 'this is a (tricky) test'.
Remember how we used dictionaries to count words? If not, have a look at Chapter 10 - Dictionaries.
make sure you split the string on a space character ' '. You loop over the list to count the words.
Test your function using an example string, which will tell you whether it fulfills the requirements (remove punctuation, split, count). You will get a point for good testing.
b.) Create a python script
Use your editor to create a Python script called count_words.py. Place the function definition of the count function in count_words.py. Also put a function call of the count function in this file to test it. Place your helper function definition, i.e., preprocess, in a separate script called utils_3a.py. Import your helper function preprocess into count_words.py. Test whether everything works as expected by calling the script count_words.py from the terminal.
The function preprocess preprocesses the text by removing characters that are unwanted by the user. preprocess is called within the count function and hence builds upon the output from the preprocess function and creates a dictionary in which the key is a word and the value is the frequency of the word.
Please submit these scripts together with the other notebooks.
Don't forget to add docstrings to your functions.
End of explanation
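One possible sketch of the two functions described above (editorial example only; the default punctuation set is an assumption, and splitting the code into count_words.py and utils_3a.py is left to the reader):
def preprocess(text, chars_to_remove='()!.,?'):
    """Remove the characters in chars_to_remove from text."""
    for char in chars_to_remove:
        text = text.replace(char, '')
    return text

def count(text, chars_to_remove='()!.,?'):
    """Count how often each word occurs in text after preprocessing."""
    word2freq = {}
    for word in preprocess(text, chars_to_remove).split(' '):
        word2freq[word] = word2freq.get(word, 0) + 1
    return word2freq

print(count('this is a (tricky) test'))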
# your code here
Explanation: Dealing with text files
Exercise 4
Playing with lyrics
a.) Write a function called load_text, which opens and reads a file and returns the text in the file. It should have the file path as a parameter. Test it by loading this file: ../Data/lyrics/walrus.txt
Hint: remember it is best practice to use a context manager
Hint: FileNotFoundError: This means that the path you provide does not lead to an existing file on your computer. Please carefully study Chapter 14. Please determine where the notebook or Python module that you are working with is located on your computer. Try to determine where Python is looking if you provide a path such as “../Data/lyrics/walrus.txt”. Try to go from your notebook to the location on your computer where Python is trying to find the file. One tip: if you did not store the Assignments notebooks 3a and 3b in the folder “Assignments”, you would get this error.
b.) Write a function called replace_walrus, which takes lyrics as input and replaces every instance of 'walrus' by 'hippo' (make sure to account for upper and lower case - it is fine to transform everything to lower case). The function should write the new version of the song to a file called 'walrus_hippo.txt and stored in ../Data/lyrics.
Don't forget to add docstrings to your functions.
End of explanation
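A possible sketch (editorial example; the file paths follow the assignment's data layout):
def load_text(filepath):
    """Open filepath and return its contents as a single string."""
    with open(filepath) as infile:
        return infile.read()

def replace_walrus(lyrics):
    """Write a lowercased copy of lyrics with 'walrus' replaced by 'hippo'."""
    new_lyrics = lyrics.lower().replace('walrus', 'hippo')
    with open('../Data/lyrics/walrus_hippo.txt', 'w') as outfile:
        outfile.write(new_lyrics)

replace_walrus(load_text('../Data/lyrics/walrus.txt'))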
test_text = """Shall I compare thee to a summer's day?
Thou art more lovely and more temperate:
Rough winds do shake the darling buds of May,
And summer's lease hath all too short a date:"""
# your code here
Explanation: Analyzing text with nltk
Exercise 5
Building a simple NLP pipeline
For this exercise, you will need NLTK. Don't forget to import it.
Write a function called tag_text, which takes raw text as input and returns the tagged text. To do this, make sure you follow the steps below:
Tokenize the text.
Perform part-of-speech tagging on the list of tokens.
Return the tagged text
Then test your function using the text snippet below (test_text) as input.
Please note that the tags may not be correct and that this is not a mistake on your end, but simply NLP tools not being perfect.
End of explanation |
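A minimal sketch of the pipeline (editorial example; it assumes the required NLTK tokenizer and tagger models have been downloaded):
import nltk

def tag_text(raw_text):
    """Tokenize raw_text and return its part-of-speech tagged tokens."""
    tokens = nltk.word_tokenize(raw_text)
    return nltk.pos_tag(tokens)

print(tag_text(test_text))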
4,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Select seeds for search networks
I select small (1000-1500) sized bot network and pick 4 random members from it
Step1: Now search for friends of seed users
Step2: Show common users in total and per seed user
Step3: Now search and populate neo4j database
Step5: Get all users from neo4j and build graph
Step6: Let's cluster graph and search for communities
Step7: Let's make clusters dataframe
Step8: Let's look to all clusters closely
Step9: We have only two clusters with significant user count.
Let's check first
Step10: Join edges to users
Step11: Let's look to all groups
Step12: Looks like most bot accounts has followers/follows count from 1200 to 1900
Let's filter it
Step13: Now collect all information from these accounts and search for corellations
Step14: Ok, we have two significant values. Moscow and New York. Let's split dataset
Step15: Now check NY users
Step16: Conclusion
We have one twitter bot network on two languages
Step17: Now export moscow and ny users to csv | Python Code:
seeds = ['volya_belousova', 'egor4rgurev', 'kirillfrolovdw', 'ilyazhuchhj']
auth = tweepy.OAuthHandler(OAUTH_KEY, OAUTH_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
graph = Graph(user=NEO4J_USER, password=NEO4J_SECRET)
def get_follwers_by_id(account_id):
ids = []
for page in tweepy.Cursor(api.followers_ids, user_id=account_id).pages():
print("FOLLOWERS: Next page for %s" % account_id)
ids.extend(page)
return ids
def get_friends_by_id(account_id):
ids = []
for page in tweepy.Cursor(api.friends_ids, user_id=account_id).pages():
print("FRIENDS: Next page for %s" % account_id)
ids.extend(page)
return ids
def get_friends(account):
ids = []
for page in tweepy.Cursor(api.friends_ids, screen_name=account).pages():
print("Next page for %s" % account)
ids.extend(page)
return ids
def chunks(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
Explanation: Select seeds for search networks
I select a small (1000-1500 member) bot network and pick 4 random members from it
End of explanation
friend_ids = {}
for account in seeds:
friend_ids[account] = get_friends(account)
commons = {}
for first in seeds:
for second in seeds:
if first != second:
commons[(first, second)] = list(set(friend_ids[first]) & set(friend_ids[second]))
all_users = friend_ids[seeds[0]]
for name in seeds:
all_users = list(set(all_users) | set(friend_ids[name]))
Explanation: Now search for friends of seed users
End of explanation
display("Common users: {0}".format(len(all_users)))
html = ["<table width=100%>"]
html.append('<tr><td></td>')
for name in seeds:
html.append('<td>{0}</td>'.format(name))
html.append('</tr>')
for first in seeds:
html.append('<tr><td>{0}</td>'.format(first))
for second in seeds:
if first != second:
html.append('<td>{0}</td>'.format(len(commons[(first,second)])))
else:
html.append('<td>x</td>')
html.append("</tr>")
html.append('</table>')
HTML(''.join(html))
Explanation: Show common users in total and per seed user
End of explanation
graph.run("CREATE CONSTRAINT ON (u:UserRes) ASSERT u.id IS UNIQUE")
processed_users = []
for user_id in all_users:
if user_id not in processed_users:
user = Node("UserRes", id=user_id)
graph.merge(user)
try:
for friend_id in get_follwers_by_id(user_id):
if friend_id in all_users:
friend = Node("UserRes", id=friend_id)
graph.merge(friend)
graph.merge(Relationship(friend, "FRIEND_OF", user))
for friend_id in get_friends_by_id(user_id):
if friend_id in all_users:
friend = Node("UserRes", id=friend_id)
graph.merge(friend)
graph.merge(Relationship(user, "FRIEND_OF", friend))
except tweepy.TweepError:
print("User {0} has protected followers/friends".format(user_id))
processed_users.append(user_id)
print(float(len(processed_users)) / float(len(all_users)) * 100.0)
Explanation: Now search and populate neo4j database
End of explanation
query = """
MATCH (user1:UserRes)-[:FRIEND_OF]->(user2:UserRes),
(user2:UserRes)-[:FRIEND_OF]->(user1)
RETURN user1.id, user2.id
"""
data = graph.run(query)
ig = IGraph.TupleList(data, weights=False)
ig.es["width"] = 1
ig.simplify(combine_edges={ "width": "sum" })
Explanation: Get all users from neo4j and build graph
End of explanation
clusters = IGraph.community_fastgreedy(ig)
clusters = clusters.as_clustering()
print("Found %d clusters" % len(clusters))
Explanation: Let's cluster graph and search for communities
End of explanation
nodes = [{"id": node.index, "name": node["name"]} for node in ig.vs]
for node in nodes:
node["cluster"] = clusters.membership[node["id"]]
nodes_df = pd.DataFrame(nodes)
edges = [{"source": x[0], "target": x[1]} for x in ig.get_edgelist()]
edges_df = pd.DataFrame(edges)
edges_counts = edges_df.groupby('source').count().reset_index().rename(columns = {'target': 'count'})
Explanation: Let's make clusters dataframe
End of explanation
nodes_df.groupby('cluster').count()
Explanation: Let's look at all the clusters more closely
End of explanation
first_cluster = nodes_df[nodes_df["cluster"] == 0][["id", "name"]]
Explanation: We have only two clusters with significant user count.
Let's check the first one.
End of explanation
first_cluster_counts = first_cluster.set_index('id').join(edges_counts.set_index('source')).reset_index()
first_cluster_counts["count"].hist()
Explanation: Join edges to users
End of explanation
for group in range(20):
start = group * 100
stop = (group + 1) * 100
users_slice = first_cluster_counts[(first_cluster_counts["count"] > start) & (first_cluster_counts["count"] < stop)]
print("Users from %d to %d has %d" %(start, stop, users_slice.count()[0]))
display(users_slice[:10])
Explanation: Let's look at all the groups
End of explanation
filtered_bots = first_cluster_counts[(first_cluster_counts["count"] > 1200) & (first_cluster_counts["count"] < 1900)]
print("We found %s bots in first approximation" % filtered_bots.count()[0])
Explanation: It looks like most bot accounts have a followers/friends count between 1200 and 1900
Let's filter it
End of explanation
first_cluster_bots = []
for group in chunks(filtered_bots["name"].values, 100):
for user in api.lookup_users(user_ids=list(group)):
first_cluster_bots.append(user)
locations = [user.location for user in first_cluster_bots]
first_cluster_bots[0].favourites_count
possible_bot_users = pd.DataFrame([{'name': user.name, 'id': user.id, 'location': user.location, 'screen_name': user.screen_name, 'followers': user.followers_count, 'friends': user.friends_count, 'created_at': user.created_at, 'favorites': user.favourites_count} for user in first_cluster_bots])
possible_bot_users.hist()
possible_bot_users[["id", "location"]].groupby('location').count().plot(kind='bar')
Explanation: Now collect all information from these accounts and search for correlations
End of explanation
moscow_users = possible_bot_users[possible_bot_users["location"] == u'Москва']
moscow_users.hist()
moscow_users[:10]
Explanation: OK, we have two significant values: Moscow and New York. Let's split the dataset.
End of explanation
ny_users = possible_bot_users[possible_bot_users["location"] == u'New York, USA']
ny_users.hist()
ny_users[:10]
Explanation: Now check NY users
End of explanation
print("Moscow bots: %d, NY bots: %d, Total: %d" % (moscow_users.count()[0], ny_users.count()[0], moscow_users.count()[0] + ny_users.count()[0]))
Explanation: Conclusion
We have one Twitter bot network operating in two languages: Russian and English.
All bots use deep linking and post random sentences every hour.
End of explanation
ny_users.append(moscow_users).to_csv("./moscow_ny_bots.csv", encoding='utf8')
Explanation: Now export the Moscow and NY users to CSV
End of explanation |
4,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
Requirements
Step8: 1. Make a request
Only the mapping *id -> name *
Step9: Full response
Step10: 2. Bulk request
If you need to retrieve more than 100 mappings, you will need to make several calls.
The bulk_requests function can do that for you.
Step11: You may also want to retrieve all the elements, without specifying an id.
Because the default get function will only return up to 100 elements, you can use the bulk_request_get_all
to avoid hitting the limit. | Python Code:
from pynexus import AppNexusAPI
APPNEXUS_ACCOUNT = {
"username": "",
"password": ""
}
api = AppNexusAPI(**APPNEXUS_ACCOUNT)
Explanation: Summary
Requirements:
python 3
This notebook shows how to make calls to the AppNexus API.
Implemented functions:
```python
def get_campaign(self, ids=None, one_id=None, advertiser_id=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Campaign+Service
def get_pixel(self, ids=None, one_id=None, advertiser_id=None,
advertiser_code=None, pixel_code=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Conversion+Pixel+Service
def get_device(self, one_id=None, device_type=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Device+Model+Service
def get_advertiser(self, ids=None, one_id=None, search_term=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Advertiser+Service
def get_line_item(self, ids=None, one_id=None, advertiser_id=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Line+Item+Service
def get_insertion_order(self, ids=None, one_id=None, advertiser_id=None, search_term=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Insertion+Order+Service
def get_segment(self, ids=None, one_id=None, search_term=None, only_names=True):
cf https://wiki.appnexus.com/display/api/Segment+Service
```
End of explanation
api.get_device(one_id=80)
Explanation: 1. Make a request
Only the mapping *id -> name *
End of explanation
api.get_device(one_id=80, only_names=False)
Explanation: Full response
End of explanation
pixels = AppNexusAPI.bulk_requests(api.get_pixel, range(0, 300))
Explanation: 2. Bulk request
If you need to retrieve more than 100 mappings, you will need to make several calls.
The bulk_requests function can do that for you.
End of explanation
# We limit here the number of calls to 10
names = AppNexusAPI.bulk_request_get_all(api.get_pixel, limit=10, only_names=True)
print('%d elements downloaded' % len(names))
Explanation: You may also want to retrieve all the elements, without specifying an id.
Because the default get function will only return up to 100 elements, you can use the bulk_request_get_all
to avoid hitting the limit.
End of explanation |
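A further illustrative call using the signatures listed above (editorial sketch; the advertiser id is hypothetical and the exact shape of the returned mapping depends on the wrapper):
campaigns = api.get_campaign(advertiser_id=123456, only_names=True)
print(campaigns)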
4,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generator expressions
Step1: Tuples as Records
Step2: Tuple Unpacking
Step3: Named tuples
Step5: Slicing
Step6: Assigning to Slices
Step7: Using + and * with Sequences
Step8: Building Lists of Lists
Step9: Augmented Assignment with Sequences
Step10: A += Assignment Puzzler
Step11: • Putting mutable items in tuples is not a good idea.
• Augmented assignment is not an atomic operation—we just saw it throwing an exception after doing part of its job.
• Inspecting Python bytecode is not too difficult, and is often helpful to see what is going on under the hood.
list.sort and the sorted Built-In Function
sorted() makes a new list, doesn't touch the original.
sort() changes list in place.
Step12: Next
Step13: Inserting with bisect.insort
Step14: Arrays
Step15: To sort an array, use a = array.array(a.typecode, sorted(a)). To keep it sorted while adding to it, use bisect.insort.
Memory Views
The built-in memoryview class is a shared-memory sequence type that lets you handle slices of arrays without copying bytes.
Step16: NumPy and SciPy
Step17: Loading, saving, and operating
Step18: a hidden cost | Python Code:
symbols = '$#%^&'
[ord(s) for s in symbols]
tuple(ord(s) for s in symbols)
(ord(s) for s in symbols)
for x in (ord(s) for s in symbols):
print(x)
import array
array.array('I', (ord(s) for s in symbols))
colors = ['black', 'white']
sizes = ['S', 'M', 'L']
for tshirt in ((c, s) for c in colors for s in sizes):
print(tshirt)
for tshirt in ('%s %s' % (c, s) for c in colors for s in sizes):
print(tshirt)
Explanation: Generator expressions
End of explanation
lax_coordinates = (33.9425, -118.408056)
city, year, pop, chg, area = ('Tokyo', 2003, 32450, 0.66, 8014)
traveler_ids = [('USA', '31195855'), ('BRA', 'CE342567'), ('ESP', 'XDA205856')]
for passport in sorted(traveler_ids):
print('%s/%s' % passport)
for country, _ in traveler_ids:
print(country)
Explanation: Tuples as Records
End of explanation
import os
_, filename = os.path.split('/home/kyle/afile.txt')
print(filename)
a, b, *rest = range(5)
a, b, rest
a, b, *rest = range(3)
a, b, rest
a, b, *rest = range(2)
a, b, rest
a, *body, c, d = range(5)
a, body, c, d
*head, b, c, d = range(5)
head, b, c, d
metro_areas = [('Tokyo','JP',36.933,(35.689722,139.691667)),
('Delhi NCR', 'IN', 21.935, (28.613889, 77.208889)),
('Mexico City', 'MX', 20.142, (19.433333, -99.133333)),
('New York-Newark', 'US', 20.104, (40.808611, -74.020386)),
('Sao Paulo', 'BR', 19.649, (-23.547778, -46.635833)),
]
print('{:15} | {:^9} | {:^9}'.format('', 'lat.', 'long.'))
fmt = '{:15} | {:9.4f} | {:9.4f}'
fmt
for name, cc, pop, (latitude, longitude) in metro_areas:
if longitude <= 0:
print(fmt.format(name, latitude, longitude))
Explanation: Tuple Unpacking
End of explanation
from collections import namedtuple
City = namedtuple('City', 'name country population coordinates')
tokyo = City('Tokyo', 'JP', 36.933, (35.689722, 139.691667))
tokyo
tokyo.population
tokyo.name
tokyo.coordinates
tokyo[1]
# a few useful methods on namedtuple
City._fields
LatLong = namedtuple('LatLong', 'lat long')
delhi_data = ('Delhi NCR', 'IN', 21.935, LatLong(28.613889, 77.208889))
delhi = City._make(delhi_data) # instantiate a named tuple from an iterable
delhi._asdict()
for key, value in delhi._asdict().items():
print(key + ':', value)
Explanation: Named tuples
End of explanation
# why slices and range exclude the last item
l = [10,20,30,40,50,60]
l[:2]
l[2:]
# slice objects
s = 'bicycle'
s[::3]
s[::-1]
s[::-2]
invoice = """
0.....6.................................40........52...55........
1909  Pimoroni PiBrella                 $17.50      3  $52.50
1489  6mm Tactile Switch x20            $4.95       2  $9.90
1510  Panavise Jr. - PV-201             $28.00      1  $28.00
1601  PiTFT Mini Kit 320x240            $34.95      1  $34.95
"""
SKU = slice(0,6)
DESCRIPTION = slice(6, 40)
UNIT_PRICE = slice(40, 52)
QUANTITY = slice(52, 55)
ITEM_TOTAL = slice(55, None)
line_items = invoice.split('\n')[2:]
for item in line_items:
print(item[UNIT_PRICE], item[DESCRIPTION])
Explanation: Slicing
End of explanation
l = list(range(10))
l
l[2:5] = [20, 30]
l
del l[5:7]
l
l[3::2] = [11, 22]
l
l[2:5] = 100
l
l[2:5] = [100]
l
Explanation: Assigning to Slices
End of explanation
l = [1, 2, 3]
l * 5
5 * 'abcd'
Explanation: Using + and * with Sequences
End of explanation
board = [['_'] *3 for i in range(3)]
board
board[1][2] = 'X'
board
Explanation: Building Lists of Lists
End of explanation
l = [1, 2, 3]
id(l)
l *= 2
id(l) # same list
t=(1,2,3)
id(t)
t *= 2
id(t) # new tuple was created
Explanation: Augmented Assignment with Sequences
End of explanation
import dis
dis.dis('s[a] += b')
Explanation: A += Assignment Puzzler
End of explanation
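For context, the statement dissected above comes from the classic puzzler with a list inside a tuple (a sketch; running it really does both raise and mutate):
t = (1, 2, [30, 40])
try:
    t[2] += [50, 60]   # raises TypeError because tuples don't support item assignment
except TypeError as error:
    print(error)
print(t)  # the inner list was still extended: (1, 2, [30, 40, 50, 60])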
fruits = ['grape', 'raspberry', 'apple', 'banana']
sorted(fruits)
fruits
sorted(fruits, reverse=True)
sorted(fruits, key=len)
sorted(fruits, key=len, reverse=True)
fruits
fruits.sort() # note that sort() returns None
fruits
Explanation: • Putting mutable items in tuples is not a good idea.
• Augmented assignment is not an atomic operation—we just saw it throwing an exception after doing part of its job.
• Inspecting Python bytecode is not too difficult, and is often helpful to see what is going on under the hood.
list.sort and the sorted Built-In Function
sorted() makes a new list, doesn't touch the original.
sort() changes list in place.
End of explanation
import bisect
breakpoints=[60, 70, 80, 90]
grades='FDCBA'
bisect.bisect(breakpoints, 99)
bisect.bisect(breakpoints, 59)
bisect.bisect(breakpoints, 75)
def grade(score, breakpoints=[60, 70, 80, 90], grades='FDCBA'):
i = bisect.bisect(breakpoints, score)
return grades[i]
[grade(score) for score in [33, 99, 77, 70, 89, 90, 100]]
grade(4)
grade(93)
Explanation: Next: use bisect module to better search sorted lists.
Managing Ordered Sequences with bisect
End of explanation
import bisect
import random
SIZE = 7
random.seed(1729)
my_list = []
for i in range(SIZE):
new_item = random.randrange(SIZE*2)
bisect.insort(my_list, new_item)
print('%2d ->' % new_item, my_list)
Explanation: Inserting with bisect.insort
End of explanation
from array import array
from random import random
floats = array('d', (random() for i in range(10**7)))
floats[-1]
fp = open('floats.bin', 'wb')
floats.tofile(fp)
fp.close()
floats2 = array('d')
fp = open('floats.bin', 'rb')
floats2.fromfile(fp, 10**7)
fp.close()
floats2[-1]
floats2 == floats
Explanation: Arrays
End of explanation
# Changing the value of an array item by poking one of its bytes
import array
numbers = array.array('h', [-2, -1, 0, 1, 2])
memv = memoryview(numbers)
len(memv)
memv[0]
memv_oct = memv.cast('B') # ch type of array to unsigned char
memv_oct.tolist()
memv_oct[5] = 4
numbers
Explanation: To sort an array, use a = array.array(a.typecode, sorted(a)). To keep it sorted while adding to it, use bisect.insort.
Memory Views
The built-in memoryview class is a shared-memory sequence type that lets you handle slices of arrays without copying bytes.
End of explanation
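A short sketch of the two array idioms mentioned above (illustrative only):
import array
import bisect
a = array.array('d', [3.5, 0.1, 2.2])
a = array.array(a.typecode, sorted(a))  # rebuild the array in sorted order
bisect.insort(a, 1.7)                   # insert while keeping it sorted
print(a)  # array('d', [0.1, 1.7, 2.2, 3.5])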
import numpy
a = numpy.arange(12)
a
type(a)
a.shape
a.shape = 3, 4 # turn a into three units of 4
a
a[2]
a[2, 1]
a[:, 1]
a.transpose()
Explanation: NumPy and SciPy
End of explanation
from collections import deque
dq = deque(range(10), maxlen=10)
dq
dq.rotate(3)
dq
dq.rotate(-4)
dq
dq.appendleft(-1)
dq
dq.extend([11, 22, 33])
dq
dq.extendleft([10, 20, 30, 40])
dq
Explanation: Loading, saving, and operating:
Use numpy.loadtxt()
Deques and Other Queues
Inserting and removing from the left of a list (the 0-index end) is costly. collections.deque is a thread-safe double-ended queue designed for fast inserting and removing from both ends.
End of explanation
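A quick sketch of the save/load round trip referenced above (illustrative; the filename is hypothetical):
import numpy
a = numpy.arange(12).reshape(3, 4)
numpy.savetxt('array.txt', a)      # write the array as plain text
b = numpy.loadtxt('array.txt')     # read it back (as floats)
print((a == b).all())              # True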
# but a workaround with `key`
l = [28, 14, '28', 5, '9', '1', 0, 6, '23', 19]
sorted(l, key=int)
sorted(l, key=str)
Explanation: a hidden cost: removing items from the middle of a deque is not as fast
On using single type in list: "we put items in a list to process them later, which implies that all items should support at least some operation in common".
End of explanation |
4,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MPB mode-solver
MPB is a free open source software to compute
Step1: As you can see the refractive index goes from 1.44 SiO2 Silicon dioxide to 3.47 Silicon.
Step2: As you can see the first order mode has most power in y-direction Ey. This type of mode is called TE (transverse-electric)
On the other hand, the second order mode has most of the light in Ex. This mode is called TM (transverse-magnetic)
Step3: Sidewall angle
You can also specify the sidewall angle.
Step4: Rib waveguides
Rib waveguides have a slab (not fully etched)
Step5: Symmetries
You can exploit symmetries to reduce computation time as well as finding only (TE or TM) modes
MPB assumes propagation in the X direction
TE
Step6: ODD_Y (TE)
Step7: Sweep waveguide width
Strip
Step8: Rib
Step9: Nitride
Step10: Dispersion
To get the effective index we only need to compute the mode propagation constant at a single frequency.
However, to compute the dispersion (group delay) we need to compute the effective index for at least 3 wavelengths.
The effective index neff relates to the speed of the phase evolution of the light, while the group index ng relates to the group velocity of the light.
To compute the resonances in MZI interferometers or ring resonators you need to use ng
Step11: Convergence tests
Before launching a set of simulations you need to make sure you have the correct simulation settings
Step12: Find modes coupler
When two waveguides are close to each other, they support modes that travel with different index (speed). One of the modes is an even mode, while the other one is an odd mode.
Light will couple from one waveguide to another because the even and odd modes travel at different speeds and interfere with each other, creating a periodic back-and-forth coupling between the two waveguides.
Depending on the length of the coupling region and the gap there will be a different percentage of the light coupled from one to another
```bash
_____________________________________________________
|
|
| widths[0] widths[1]
| <----------> gaps[0] <---------->
| ___________ <-------------> ___________ _
| | | | | |
sz|_____| ncore |_______________| |_____|
| | wg_thickness
|slab_thickness nslab |
|___________________________________________________|
|
|<---> <--->
|ymargin nclad ymargin
|____________________________________________________
<--------------------------------------------------->
sy
```
Step14: Find coupling vs gap
Step15: Heater efficiency
You can simulate the index change effect from a heater in MPB
Let's assume the temperature increases by 10 C (the actual increase does not matter)
Question
What is the optimal waveguide width for maximum index change? | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import meep as mp
import gdsfactory.simulation.modes as gm
modes = gm.find_modes_waveguide(
parity=mp.NO_PARITY,
wg_width=0.4,
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=40,
sy=3,
sz=3,
nmodes=4,
)
m1 = modes[1]
m2 = modes[2]
m3 = modes[3]
Explanation: MPB mode-solver
MPB is a free open source software to compute:
electro-magnetic modes
band structures
supported by a waveguide with periodic boundaries.
Find modes waveguide
Let's find the modes supported by a waveguide for a particular waveguide geometry and wavelength.
A waveguide is like a pipe to guide the light and is made of a higher refractive index core ncore surrounded by a lower refractive index cladding nclad
bash
__________________________
|
|
| width
| <---------->
| ___________ _ _ _
| | | |
sz|_____| |_______|
| | wg_thickness
|slab_thickness |
|_________________________|
|
|
|__________________________
<------------------------>
sy
Silicon is yellow and opaque at visible wavelengths (380 to 700nm). This is the reason why CMOS cameras can be made of Silicon.
At Infra-red wavelengths used for communications (1300 or 1550nm) Silicon is transparent and has a high refractive index 3.47. So making a Silicon waveguide is quite easy, where the Silicon is the guiding material, and Silicon oxide n=1.45 makes a great low index material for the cladding of the waveguide.
This video explains how Silicon Photonic waveguides guide light in Photonic integrated circuits.
Strip waveguides
Strip waveguides are fully etch and don't have a slab. slab_thickness = 0
bash
__________________________
|
|
| width
| <---------->
| ___________ _ _ _
| | | |
sz| | | |
| | ncore | | wg_thickness
| | | |
| |___________| _ _ _|
|
| nclad
|__________________________
<------------------------>
sy
End of explanation
m1.plot_eps()
m1.neff
m1.plot_ey()
m2.plot_e_all()
m1.plot_e()
Explanation: As you can see the refractive index goes from 1.44 SiO2 Silicon dioxide to 3.47 Silicon.
End of explanation
m2.plot_e_all()
m3.plot_e() # not guided
m1.neff
m2.neff
# the third mode does not propagate and its neff is below the cladding index
m3.neff
Explanation: As you can see the first order mode has most power in y-direction Ey. This type of mode is called TE (transverse-electric)
On the other hand, the second order mode has most of the light in Ex. This mode is called TM (transverse-magnetic)
End of explanation
modes = gm.find_modes_waveguide(
parity=mp.NO_PARITY,
wg_width=0.4,
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=40,
sidewall_angle=10,
)
m1 = modes[1]
m2 = modes[2]
m3 = modes[3]
m1.plot_eps()
modes = gm.find_modes_waveguide(
parity=mp.NO_PARITY,
wg_width=0.4,
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=60,
sidewall_angle=10,
slab_thickness=90e-3,
)
m1 = modes[1]
m2 = modes[2]
m3 = modes[3]
m1.plot_eps()
Explanation: Sidewall angle
You can also specify the sidewall angle.
End of explanation
import gdsfactory.simulation.modes as gm
import meep as mp
modes = gm.find_modes_waveguide(
mode_number=1, nmodes=2, slab_thickness=90e-3, resolution=40
)
m1 = modes[1]
m2 = modes[2]
m1.plot_eps()
m1.plot_e_all()
Explanation: Rib waveguides
Rib waveguides have a slab (not fully etched)
End of explanation
modes = gm.find_modes_waveguide(
mode_number=1,
parity=mp.EVEN_Y + mp.ODD_Z,
nmodes=2,
wg_width=1.0,
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=32,
sy=6,
sz=6,
)
m1 = modes[1]
m2 = modes[2]
m1.plot_e()
Explanation: Symmetries
You can exploit symmetries to reduce computation time as well as finding only (TE or TM) modes
MPB assumes propagation in the X direction
TE: mp.ODD_Y + mp.EVEN_Z
TM: mp.EVEN+Y + mp.ODD_Z, all energy in z component
TM: mp.ODD_Y + mp.EVEN_Z
You can define an even Y parity to find only the TM modes
End of explanation
modes = gm.find_modes_waveguide(
mode_number=1,
parity=mp.ODD_Y,
nmodes=2,
wg_width=0.20,
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=20,
sy=5,
sz=5,
)
m1 = modes[1]
m2 = modes[2]
m1.plot_e()
Explanation: ODD_Y (TE)
End of explanation
df = gm.find_neff_vs_width(filepath="neff_vs_width.csv")
df
gm.plot_neff_vs_width(df)
Explanation: Sweep waveguide width
Strip
End of explanation
import matplotlib.pyplot as plt
import gdsfactory.simulation.modes as gm
import meep as mp
modes = gm.find_modes_waveguide(
wg_width=0.4,
ncore=3.47,
nclad=1.44,
wg_thickness=220e-3,
resolution=20,
sz=6,
sy=6,
nmodes=4,
slab_thickness=90e-3,
)
m1 = modes[1]
m2 = modes[2]
m3 = modes[3]
m1.plot_eps()
m1.neff
m1.plot_e()
m1.neff
m2.plot_e()
m2.neff
df = gm.find_neff_vs_width(slab_thickness=90e-3, filepath="neff_vs_width_rib.csv")
gm.plot_neff_vs_width(df)
Explanation: Rib
End of explanation
modes = gm.find_modes_waveguide(
wg_width=1.0,
ncore=2.0,
nclad=1.44,
wg_thickness=400e-3,
sz=6,
sy=10,
nmodes=4,
resolution=10,
)
m1 = modes[1]
m2 = modes[2]
m3 = modes[3]
m1.plot_eps()
m1.plot_ey()
m1.plot_e_all()
m2.plot_ex()
m3.plot_ey()
df = gm.find_neff_vs_width(
width1=0.5,
width2=1.2,
wg_thickness=0.4,
ncore=2.0,
sy=10.0,
resolution=15,
filepath="neff_vs_width_nitride.csv",
)
gm.plot_neff_vs_width(df)
Explanation: Nitride
End of explanation
gm.find_mode_dispersion?
m = gm.find_mode_dispersion()
m.ng
Explanation: Dispersion
To get the effective index we only need to compute the mode propagation constant at a single frequency.
However, to compute the dispersion (group delay) we need to compute the effective index for at least 3 wavelengths.
The effective index neff relates to the speed of the phase evolution of the light, while the group index ng relates to the group velocity of the light.
To compute the resonances in MZI interferometers or ring resonators you need to use ng
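For reference, the group index reported here follows the standard textbook relation between phase and group velocity (not specific to this library):
$$ n_g = n_{\mathrm{eff}} - \lambda \frac{d n_{\mathrm{eff}}}{d \lambda} $$
which is why at least three wavelength points are needed to estimate the derivative numerically.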
End of explanation
import gdsfactory.simulation.modes as gm
import numpy as np
import matplotlib.pyplot as plt
resolutions = np.linspace(10, 50, 5)
neffs = []
for resolution in resolutions:
modes = gm.find_modes_waveguide(
wg_width=0.5, ncore=3.5, nclad=1.44, wg_thickness=0.22, resolution=resolution
)
mode = modes[1]
neffs.append(mode.neff)
plt.plot(resolutions, neffs, "o-")
plt.ylabel("neff")
plt.xlabel("resolution (pixels/um)")
szs = np.linspace(4, 6, 6)
neffs = []
for sz in szs:
modes = gm.find_modes_waveguide(
wg_width=0.5, ncore=3.5, nclad=1.44, wg_thickness=0.22, resolution=20, sz=sz
)
mode = modes[1]
neffs.append(mode.neff)
plt.plot(szs, neffs, "o-")
plt.ylabel("neff")
plt.xlabel("simulation size in z(um)")
sys = np.linspace(2, 6, 6)
neffs = []
for sy in sys:
modes = gm.find_modes_waveguide(
wg_width=0.5, ncore=3.5, nclad=1.44, wg_thickness=0.22, resolution=20, sy=sy
)
mode = modes[1]
neffs.append(mode.neff)
plt.plot(sys, neffs, "o-")
plt.ylabel("neff")
plt.xlabel("simulation size in y (um)")
Explanation: Convergence tests
Before launching a set of simulations you need to make sure you have the correct simulation settings:
resolution: grid resolution (pixels/um)
sx: Size of the simulation region in the x-direction (default=4.0)
sy: Size of the simulation region in the y-direction (default=4.0)
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import meep as mp
import gdsfactory.simulation.modes as gm
modes = gm.find_modes_coupler(
wg_widths=(0.5, 0.5),
gaps=(0.2,),
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=20,
sz=6,
nmodes=4,
)
m1 = modes[1]
m2 = modes[2]
m3 = modes[3]
m1.plot_eps()
m1.plot_ey() # even mode
m2.plot_ey() # odd mode
Explanation: Find modes coupler
When two waveguides are close to each other, they support modes that travel with different index (speed). One of the modes is an even mode, while the other one is an odd mode.
Light will couple from one waveguide to another because the even and odd modes travel at different speeds and interfere with each other, creating a periodic back-and-forth coupling between the two waveguides.
Depending on the length of the coupling region and the gap there will be a different percentage of the light coupled from one to another
```bash
_____________________________________________________
|
|
| widths[0] widths[1]
| <----------> gaps[0] <---------->
| ___________ <-------------> ___________ _
| | | | | |
sz|_____| ncore |_______________| |_____|
| | wg_thickness
|slab_thickness nslab |
|___________________________________________________|
|
|<---> <--->
|ymargin nclad ymargin
|____________________________________________________
<--------------------------------------------------->
sy
```
End of explanation
gm.find_coupling_vs_gap?
df = gm.coupler.find_coupling_vs_gap(
gap1=0.2,
gap2=0.4,
steps=12,
nmodes=4,
wavelength=1.55,
filepath="find_coupling_vs_gap_strip.csv",
)
gm.plot_coupling_vs_gap(df)
plt.title("strip 500x200 coupling")
df = gm.coupler.find_coupling_vs_gap_nitride(
filepath="find_coupling_vs_gap_nitride.csv"
)
gm.plot_coupling_vs_gap(df)
plt.title("nitride 1000x400 nitride")
ne = []
no = []
gaps = [0.2, 0.25, 0.3]
for gap in gaps:
modes = gm.find_modes_coupler(
wg_widths=(0.5, 0.5),
gaps=(gap,),
ncore=3.47,
nclad=1.44,
wg_thickness=0.22,
resolution=20,
sz=6,
nmodes=4,
)
ne.append(modes[1].neff)
no.append(modes[2].neff)
import numpy as np
def coupling_length(
neff1: float,
neff2: float,
power_ratio: float = 1.0,
wavelength: float = 1.55,
) -> float:
    """Returns the coupling length (um) of the directional coupler
to achieve power_ratio
Args:
neff1: even supermode of the directional coupler.
neff2: odd supermode of the directional coupler.
power_ratio: p2/p1, where 1 means 100% power transfer
        wavelength: in um.
    """
dneff = (neff1 - neff2).real
return wavelength / (np.pi * dneff) * np.arcsin(np.sqrt(power_ratio))
lc = [
coupling_length(neff1=neff1, neff2=neff2) for gap, neff1, neff2 in zip(gaps, ne, no)
]
plt.plot(gaps, lc, ".-")
plt.ylabel("100% coupling length (um)")
plt.xlabel("gap (um)")
Explanation: Find coupling vs gap
End of explanation
from tqdm import tqdm
import pathlib
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import gdsfactory.simulation.modes as gm
dn_dt_si = 1.87e-4
dn_dt_sio2 = 8.5e-6
wg_widths = np.arange(0.4, 1.3, 0.2)
wg_widths
filepath = pathlib.Path("neff_vs_temperature.csv")
if filepath.exists:
df = pd.read_csv(filepath)
dt = 10
else:
dneffs = []
for wg_width in tqdm(wg_widths):
dt = 0
modes_t0 = gm.find_modes_waveguide(
wg_width=wg_width,
ncore=3.47 + dn_dt_si * dt,
nclad=1.44 + dn_dt_sio2 * dt,
wg_thickness=0.22,
resolution=20,
sy=6,
sz=6,
nmodes=4,
)
m1 = modes_t0[1]
neff_t0 = m1.neff
dt = 10
modes_t1 = gm.find_modes_waveguide(
wg_width=wg_width,
ncore=3.47 + dn_dt_si * dt,
nclad=1.44 + dn_dt_sio2 * dt,
wg_thickness=0.22,
resolution=20,
sy=6,
sz=6,
nmodes=4,
)
m1 = modes_t1[1]
neff_t1 = m1.neff
dneff = neff_t1 - neff_t0
dneffs.append(dneff)
df = pd.DataFrame(dict(wg_width=wg_widths, dneff=dneffs))
df.to_csv(filepath)
wg_widths = df.wg_width
dneffs = df.dneff
plt.plot(wg_widths, np.array(dneffs) / dt, ".-")
plt.xlabel("waveguide width (um)")
plt.ylabel("dneff / dT")
dndt = np.array(dneffs) / dt
plt.plot(wg_widths, dndt / max(dndt) * 100, ".-")
plt.title("waveguide dn/dT")
plt.xlabel("waveguide width (um)")
plt.ylabel("dn/dT (%)")
Explanation: Heater efficiency
You can simulate the index change effect from a heater in MPB
Let's assume the temperature increases by 10 C (the actual increase does not matter)
Question
What is the optimal waveguide width for maximum index change?
End of explanation |
4,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally used your count_letters function to solve the original question. | Python Code:
import numpy as np
def number_to_words(n):
    """Given a number n between 1-1000 inclusive, return the words for the number."""
# YOUR CODE HERE
#raise NotImplementedError()
ones=['one','two','three','four','five','six','seven','eight','nine','ten']
teens=['eleven','twelve','thirteen','fourteen','fifteen','sixteen','seventeen','eighteen','nineteen']
tens=['twenty','thirty','forty','fifty','sixty','seventy','eighty','ninety']
    # obvious what the if statements do, so not going to comment everything
if n<=10:
x=ones[n-1]
if 10<n<20:
x=teens[n-11]
if n!=10 and n<100 and n%10==0:
x=tens[int(n/10)-2]
if 20<n<30:
x=tens[0]+'-'+ones[n%20-1]
if 30<n<40:
x=tens[1]+'-'+ones[n%30-1]
if 40<n<50:
x=tens[2]+'-'+ones[n%40-1]
if 50<n<60:
x=tens[3]+'-'+ones[n%50-1]
if 60<n<70:
x=tens[4]+'-'+ones[n%60-1]
if 70<n<80:
x=tens[5]+'-'+ones[n%70-1]
if 80<n<90:
x=tens[6]+'-'+ones[n%80-1]
if 90<n<100:
x=tens[7]+'-'+ones[n%90-1]
if 100<=n<1000:
a = str(n)
b = ones[int(a[0])-1]+' hundred and'
if n%100==0:
x = ones[int(a[0])-1]+' hundred'
else:
if int(a[1::])<=10:
x=b+' '+ones[int(a[1::])-1]
if 10<int(a[1::])<20:
x=b+' '+teens[int(a[1::])-11]
if int(a[1::])!=10 and int(a[1::])<100 and int(a[1::])%10==0:
x=b+' '+tens[int(int(a[1::])/10)-2]
if 20<int(a[1::])<30:
x=b+' '+tens[0]+'-'+ones[int(a[1::])%20-1]
if 30<int(a[1::])<40:
x=b+' '+tens[1]+'-'+ones[int(a[1::])%30-1]
if 40<int(a[1::])<50:
x=b+' '+tens[2]+'-'+ones[int(a[1::])%40-1]
if 50<int(a[1::])<60:
x=b+' '+tens[3]+'-'+ones[int(a[1::])%50-1]
if 60<int(a[1::])<70:
x=b+' '+tens[4]+'-'+ones[int(a[1::])%60-1]
if 70<int(a[1::])<80:
x=b+' '+tens[5]+'-'+ones[int(a[1::])%70-1]
if 80<int(a[1::])<90:
x=b+' '+tens[6]+'-'+ones[int(a[1::])%80-1]
if 90<int(a[1::])<100:
x=b+' '+tens[7]+'-'+ones[int(a[1::])%90-1]
if n==1000:
x="one thousand"
return str(x)
#number_to_words(999)
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
assert number_to_words(5) == 'five'
assert number_to_words(100) == 'one hundred'
assert number_to_words(435) == 'four hundred and thirty-five'
assert True # use this for grading the number_to_words tests.
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
    """Count the number of letters used to write out the words for 1-n inclusive."""
# YOUR CODE HERE
#raise NotImplementedError()
l = []
lit = np.ones([len(range(n))])
#puts all the written out numbers in a list
for i in range(n):
l.append(number_to_words(i+1))
    # removes hyphens
y = [k.replace('-','') for k in l]
#removes spaces
z=[m.replace(' ','') for m in y]
#puts the length of each word w/out spaces/hyphens in array
for j in range(n):
lit[j]=len(z[j])
#returns sum of all lengths in array lit
return sum(lit)
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
assert count_letters(1) == 3
assert count_letters(2) == 6
assert count_letters(5) == 19
assert True # use this for grading the count_letters tests.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
count_letters(1000)
assert True # use this for gradig the answer to the original question.
Explanation: Finally used your count_letters function to solve the original question.
End of explanation |
4,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quickstart
This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts.
This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook here.
There is also other useful information in Apache Spark documentation site, see the latest version of Spark SQL and DataFrames, RDD Programming Guide, Structured Streaming Programming Guide, Spark Streaming Programming Guide and Machine Learning Library (MLlib) Guide.
PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via the pyspark executable, the shell automatically creates the session in the variable spark for users.
Step1: DataFrame Creation
A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list.
pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.
Firstly, you can create a PySpark DataFrame from a list of rows
Step2: Create a PySpark DataFrame with an explicit schema.
Step3: Create a PySpark DataFrame from a pandas DataFrame
Step4: Create a PySpark DataFrame from an RDD consisting of a list of tuples.
Step5: The DataFrames created above all have the same results and schema.
Step6: Viewing Data
The top rows of a DataFrame can be displayed using DataFrame.show().
Step7: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration.
Step8: The rows can also be shown vertically. This is useful when rows are too long to show horizontally.
Step9: You can see the DataFrame's schema and column names as follows
Step10: Show the summary of the DataFrame
Step11: DataFrame.collect() collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side.
Step12: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail().
Step13: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage pandas API. Note that toPandas also collects all data into the driver side that can easily cause an out-of-memory-error when the data is too large to fit into the driver side.
Step14: Selecting and Accessing Data
PySpark DataFrame is lazily evaluated and simply selecting a column does not trigger the computation but it returns a Column instance.
Step15: In fact, most of column-wise operations return Columns.
Step16: These Columns can be used to select the columns from a DataFrame. For example, DataFrame.select() takes the Column instances that returns another DataFrame.
Step17: Assign new Column instance.
Step18: To select a subset of rows, use DataFrame.filter().
Step19: Applying a Function
PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within Python native function.
Step20: Another example is DataFrame.mapInPandas which allows users directly use the APIs in a pandas DataFrame without any restrictions such as the result length.
Step21: Grouping Data
PySpark DataFrame also provides a way of handling grouped data by using the common approach, split-apply-combine strategy.
It groups the data by a certain condition applies a function to each group and then combines them back to the DataFrame.
Step22: Grouping and then applying the avg() function to the resulting groups.
Step23: You can also apply a Python native function against each group by using pandas API.
Step24: Co-grouping and applying a function.
Step25: Getting Data in/out
CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats to read and write faster.
There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in Apache Spark documentation.
CSV
Step26: Parquet
Step27: ORC
Step28: Working with SQL
DataFrame and Spark SQL share the same execution engine so they can be interchangeably used seamlessly. For example, you can register the DataFrame as a table and run a SQL easily as below
Step29: In addition, UDFs can be registered and invoked in SQL out of the box
Step30: These SQL expressions can directly be mixed and used as PySpark columns. | Python Code:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
Explanation: Quickstart
This is a short introduction and quickstart for the PySpark DataFrame API. PySpark DataFrames are lazily evaluated. They are implemented on top of RDDs. When Spark transforms data, it does not immediately compute the transformation but plans how to compute later. When actions such as collect() are explicitly called, the computation starts.
This notebook shows the basic usages of the DataFrame, geared mainly for new users. You can run the latest version of these examples by yourself on a live notebook here.
There is also other useful information in Apache Spark documentation site, see the latest version of Spark SQL and DataFrames, RDD Programming Guide, Structured Streaming Programming Guide, Spark Streaming Programming Guide and Machine Learning Library (MLlib) Guide.
PySpark applications start with initializing SparkSession which is the entry point of PySpark as below. In case of running it in PySpark shell via <code>pyspark</code> executable, the shell automatically creates the session in the variable <code>spark</code> for users.
End of explanation
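As a quick illustrative sketch (not part of the original quickstart) of the lazy evaluation described above, reusing the spark session just created: a transformation only builds a plan, and the computation runs once an action such as count() is called.
lazy_df = spark.range(10).filter("id % 2 = 0")  # transformation: only planned, no job runs yet
print(lazy_df.count())                          # action: triggers the actual computation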
from datetime import datetime, date
import pandas as pd
from pyspark.sql import Row
df = spark.createDataFrame([
Row(a=1, b=2., c='string1', d=date(2000, 1, 1), e=datetime(2000, 1, 1, 12, 0)),
Row(a=2, b=3., c='string2', d=date(2000, 2, 1), e=datetime(2000, 1, 2, 12, 0)),
Row(a=4, b=5., c='string3', d=date(2000, 3, 1), e=datetime(2000, 1, 3, 12, 0))
])
df
Explanation: DataFrame Creation
A PySpark DataFrame can be created via pyspark.sql.SparkSession.createDataFrame typically by passing a list of lists, tuples, dictionaries and pyspark.sql.Rows, a pandas DataFrame and an RDD consisting of such a list.
pyspark.sql.SparkSession.createDataFrame takes the schema argument to specify the schema of the DataFrame. When it is omitted, PySpark infers the corresponding schema by taking a sample from the data.
Firstly, you can create a PySpark DataFrame from a list of rows
End of explanation
df = spark.createDataFrame([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
], schema='a long, b double, c string, d date, e timestamp')
df
Explanation: Create a PySpark DataFrame with an explicit schema.
End of explanation
pandas_df = pd.DataFrame({
'a': [1, 2, 3],
'b': [2., 3., 4.],
'c': ['string1', 'string2', 'string3'],
'd': [date(2000, 1, 1), date(2000, 2, 1), date(2000, 3, 1)],
'e': [datetime(2000, 1, 1, 12, 0), datetime(2000, 1, 2, 12, 0), datetime(2000, 1, 3, 12, 0)]
})
df = spark.createDataFrame(pandas_df)
df
Explanation: Create a PySpark DataFrame from a pandas DataFrame
End of explanation
rdd = spark.sparkContext.parallelize([
(1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
(2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
(3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
])
df = spark.createDataFrame(rdd, schema=['a', 'b', 'c', 'd', 'e'])
df
Explanation: Create a PySpark DataFrame from an RDD consisting of a list of tuples.
End of explanation
# All DataFrames above yield the same result.
df.show()
df.printSchema()
Explanation: The DataFrames created above all have the same results and schema.
End of explanation
df.show(1)
Explanation: Viewing Data
The top rows of a DataFrame can be displayed using DataFrame.show().
End of explanation
spark.conf.set('spark.sql.repl.eagerEval.enabled', True)
df
Explanation: Alternatively, you can enable spark.sql.repl.eagerEval.enabled configuration for the eager evaluation of PySpark DataFrame in notebooks such as Jupyter. The number of rows to show can be controlled via spark.sql.repl.eagerEval.maxNumRows configuration.
End of explanation
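As an optional companion to the setting above, the number of rows rendered by the eager evaluation can be capped with the maxNumRows configuration mentioned in the text:
spark.conf.set('spark.sql.repl.eagerEval.maxNumRows', 5)  # render at most 5 rows in notebook output
df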
df.show(1, vertical=True)
Explanation: The rows can also be shown vertically. This is useful when rows are too long to show horizontally.
End of explanation
df.columns
df.printSchema()
Explanation: You can see the DataFrame's schema and column names as follows:
End of explanation
df.select("a", "b", "c").describe().show()
Explanation: Show the summary of the DataFrame
End of explanation
df.collect()
Explanation: DataFrame.collect() collects the distributed data to the driver side as the local data in Python. Note that this can throw an out-of-memory error when the dataset is too large to fit in the driver side because it collects all the data from executors to the driver side.
End of explanation
df.take(1)
Explanation: In order to avoid throwing an out-of-memory exception, use DataFrame.take() or DataFrame.tail().
End of explanation
df.toPandas()
Explanation: PySpark DataFrame also provides the conversion back to a pandas DataFrame to leverage pandas API. Note that toPandas also collects all data into the driver side that can easily cause an out-of-memory-error when the data is too large to fit into the driver side.
End of explanation
df.a
Explanation: Selecting and Accessing Data
PySpark DataFrame is lazily evaluated and simply selecting a column does not trigger the computation but it returns a Column instance.
End of explanation
from pyspark.sql import Column
from pyspark.sql.functions import upper
type(df.c) == type(upper(df.c)) == type(df.c.isNull())
Explanation: In fact, most of column-wise operations return Columns.
End of explanation
df.select(df.c).show()
Explanation: These Columns can be used to select the columns from a DataFrame. For example, DataFrame.select() takes the Column instances that returns another DataFrame.
End of explanation
df.withColumn('upper_c', upper(df.c)).show()
Explanation: Assign new Column instance.
End of explanation
df.filter(df.a == 1).show()
Explanation: To select a subset of rows, use DataFrame.filter().
End of explanation
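A short optional sketch: because column expressions return Column objects, filter conditions can also be combined with boolean operators.
df.filter((df.a > 1) & (df.c == 'string3')).show()  # combine conditions with & / | on Columns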
import pandas
from pyspark.sql.functions import pandas_udf
@pandas_udf('long')
def pandas_plus_one(series: pd.Series) -> pd.Series:
# Simply plus one by using pandas Series.
return series + 1
df.select(pandas_plus_one(df.a)).show()
Explanation: Applying a Function
PySpark supports various UDFs and APIs to allow users to execute Python native functions. See also the latest Pandas UDFs and Pandas Function APIs. For instance, the example below allows users to directly use the APIs in a pandas Series within Python native function.
End of explanation
def pandas_filter_func(iterator):
for pandas_df in iterator:
yield pandas_df[pandas_df.a == 1]
df.mapInPandas(pandas_filter_func, schema=df.schema).show()
Explanation: Another example is DataFrame.mapInPandas which allows users to directly use the APIs in a pandas DataFrame without any restrictions such as the result length.
End of explanation
df = spark.createDataFrame([
['red', 'banana', 1, 10], ['blue', 'banana', 2, 20], ['red', 'carrot', 3, 30],
['blue', 'grape', 4, 40], ['red', 'carrot', 5, 50], ['black', 'carrot', 6, 60],
['red', 'banana', 7, 70], ['red', 'grape', 8, 80]], schema=['color', 'fruit', 'v1', 'v2'])
df.show()
Explanation: Grouping Data
PySpark DataFrame also provides a way of handling grouped data by using the common approach, split-apply-combine strategy.
It groups the data by a certain condition applies a function to each group and then combines them back to the DataFrame.
End of explanation
df.groupby('color').avg().show()
Explanation: Grouping and then applying the avg() function to the resulting groups.
End of explanation
def plus_mean(pandas_df):
return pandas_df.assign(v1=pandas_df.v1 - pandas_df.v1.mean())
df.groupby('color').applyInPandas(plus_mean, schema=df.schema).show()
Explanation: You can also apply a Python native function against each group by using pandas API.
End of explanation
df1 = spark.createDataFrame(
[(20000101, 1, 1.0), (20000101, 2, 2.0), (20000102, 1, 3.0), (20000102, 2, 4.0)],
('time', 'id', 'v1'))
df2 = spark.createDataFrame(
[(20000101, 1, 'x'), (20000101, 2, 'y')],
('time', 'id', 'v2'))
def asof_join(l, r):
return pd.merge_asof(l, r, on='time', by='id')
df1.groupby('id').cogroup(df2.groupby('id')).applyInPandas(
asof_join, schema='time int, id int, v1 double, v2 string').show()
Explanation: Co-grouping and applying a function.
End of explanation
df.write.csv('foo.csv', header=True)
spark.read.csv('foo.csv', header=True).show()
Explanation: Getting Data in/out
CSV is straightforward and easy to use. Parquet and ORC are efficient and compact file formats to read and write faster.
There are many other data sources available in PySpark such as JDBC, text, binaryFile, Avro, etc. See also the latest Spark SQL, DataFrames and Datasets Guide in Apache Spark documentation.
CSV
End of explanation
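As a small illustration of the other data sources mentioned above, JSON works the same way as CSV; the path here is made up.
df.write.json('foo.json')                 # hypothetical output path
spark.read.json('foo.json').show()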
df.write.parquet('bar.parquet')
spark.read.parquet('bar.parquet').show()
Explanation: Parquet
End of explanation
df.write.orc('zoo.orc')
spark.read.orc('zoo.orc').show()
Explanation: ORC
End of explanation
df.createOrReplaceTempView("tableA")
spark.sql("SELECT count(*) from tableA").show()
Explanation: Working with SQL
DataFrame and Spark SQL share the same execution engine so they can be used interchangeably and seamlessly. For example, you can register the DataFrame as a table and run a SQL query easily as below:
End of explanation
@pandas_udf("integer")
def add_one(s: pd.Series) -> pd.Series:
return s + 1
spark.udf.register("add_one", add_one)
spark.sql("SELECT add_one(v1) FROM tableA").show()
Explanation: In addition, UDFs can be registered and invoked in SQL out of the box:
End of explanation
from pyspark.sql.functions import expr
df.selectExpr('add_one(v1)').show()
df.select(expr('count(*)') > 0).show()
Explanation: These SQL expressions can directly be mixed and used as PySpark columns.
End of explanation |
4,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
hot-CNO and breakout
Step1: This collection of rates has the main CNO rates plus a breakout rate into the hot CNO cycle
Step2: To evaluate the rates, we need a composition. This is defined using a list of Nuclei objects.
Step3: Interactive exploration is enabled through the Explorer class, which takes a RateCollection and a Composition | Python Code:
import pynucastro as pyrl
Explanation: hot-CNO and breakout
End of explanation
files = ["c12-pg-n13-ls09",
"c13-pg-n14-nacr",
"n13--c13-wc12",
"n13-pg-o14-lg06",
"n14-pg-o15-im05",
"n15-pa-c12-nacr",
"o14--n14-wc12",
"o15--n15-wc12",
"o14-ap-f17-Ha96c",
"f17-pg-ne18-cb09",
"ne18--f18-wc12",
"f18-pa-o15-il10"]
rc = pyrl.RateCollection(files)
Explanation: This collection of rates has the main CNO rates plus a breakout rate into the hot CNO cycle
End of explanation
comp = pyrl.Composition(rc.get_nuclei())
comp.set_solar_like()
Explanation: To evaluate the rates, we need a composition. This is defined using a list of Nuclei objects.
End of explanation
re = pyrl.Explorer(rc, comp, size=(1000,1000),
ydot_cutoff_value=1.e-25)
re.explore()
Explanation: Interactive exploration is enabled through the Explorer class, which takes a RateCollection and a Composition
End of explanation |
4,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unsupervised Analysis of Days of Week
Treating crossings each day as features to learn about the relationships between days of the week
Step1: Get Data
Step2: Principal Component Analysis
Step3: Unsupervised Clustering
Step4: Comparing with Day of the week
Step5: Analyzing Outliers
The following points are weekdays with a holiday-like pattern | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
Explanation: Unsupervised Analysis of Days of Week
Treating crossings each day as features to learn about the relationships between days of the week
End of explanation
from jupyterworkflow.data import get_fremont_data
data = get_fremont_data()
pivoted = data.pivot_table('Total', index=data.index.time, columns=data.index.date)
pivoted.plot(legend=False, alpha=0.01);
Explanation: Get Data
End of explanation
X = pivoted.fillna(0).T.values
X.shape
X2 = PCA(2, svd_solver='full').fit_transform(X)
X2.shape
plt.scatter(X2[:, 0], X2[:, 1]);
Explanation: Principal Component Analysis
End of explanation
gmm = GaussianMixture(2)
gmm.fit(X)
labels = gmm.predict(X)
plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow');
plt.colorbar();
fig, ax = plt.subplots(1, 2, figsize=(14, 6))
pivoted.T[labels == 0].T.plot(legend=False, alpha=0.01, ax=ax[0]);
pivoted.T[labels == 1].T.plot(legend=False, alpha=0.01, ax=ax[1]);
ax[0].set_title('Purple Cluster');
ax[1].set_title('Red Cluster');
Explanation: Unsupervised Clustering
End of explanation
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek, cmap='rainbow');
plt.colorbar();
Explanation: Comparing with Day of the week
End of explanation
weekday_label = 0
weekend_label = 1
if pivoted.T[labels == 1].max().max() > pivoted.T[labels == 0].max().max() :
weekday_label = 1
weekend_label = 0
dates = pd.DatetimeIndex(pivoted.columns)
dates [(labels == weekend_label) & (dayofweek < 5)]
Explanation: Analyzing Outliers
The following points are weekdays with a holiday-like pattern
End of explanation |
4,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
## <p style="text-align: center; font-size: 4em;"> Python tutorial </p>
Step1: Control flow commands
Step2: numpy
Step3: matplotlib | Python Code:
# Basic python
print("Hello world!")
print(type("Hello world!"))
print(type("Hello world!")==str)
# dir()
# Built-in data types
# int
print(type(1))
# float
print(type(1.0))
# str
print(type("Hello World"))
# bool
print(type(False))
# Built-in data types: list
alist = [1, 2.0, "3"]
print(alist)
print(type(alist))
print(len(alist))
alist.append(4)
print(alist)
# how to access elements in a list
print(alist)
print(alist[1])
print(alist[::3])
print(alist[1:])
print(alist[:-1])
# Built-in data types: tuple
atuple = (1, 2.0, "3")
print(atuple)
print(type(atuple))
# how to access elements in a tuple
print(atuple[1])
print(atuple[::2])
print(atuple[1:])
print(atuple[:-1])
# Set & dictionary
aset = {1, 2.0, '3'}
print(aset)
adict = {1:"1.000", 2.0:"2.000", "3":"3.000"}
print(adict)
print(adict[1])
print(adict.keys(), adict.values())
Explanation: ## <p style="text-align: center; font-size: 4em;"> Python tutorial </p>
built-in functions
print
type
dir
built-in data types
int
float
str
bool
list
tuple
set
dict
End of explanation
# for
for i in range(10):
print(i)
# for
print(alist)
for _ in alist:
print(_)
# for
print(adict)
for k, v in adict.items():
print(k, v)
# if elif else
data = [-2, 0, 2]
for _ in data:
if _ < 0:
print(_)
elif _ == 0:
print("exactly 0")
else:
print(_)
# while
a = 10
while(a>0):
print(a)
a-=3
Explanation: Control flow commands
End of explanation
# how to import packages/modules
# 1
import numpy
print(numpy.sqrt(2))
# 2
import numpy as np
print(np.sqrt(2))
# 3
from numpy import sqrt
print(sqrt(2))
# numpy.ndarray VS list
x_list = [1,2,3]
import numpy as np
x_array = np.array(x_list)
print(x_list, x_list*2)
print(x_array, x_array*2)
# numpy
import numpy as np
a = np.arange(15).reshape(3, 5)
print("a = ", a)
print("the shape of a is ", a.shape)
print(a.ndim)
print(a.dtype.name)
print(a.itemsize)
print(a.size)
print(type(a))
Explanation: numpy
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams.update({'font.size':20})
fig = plt.figure(figsize=(10,10))
x = np.linspace(0, 6*np.pi, 100)
plt.plot(x, np.cos(x), 'r');
plt.plot(x, np.sin(x), 'b-');
# legend?
%pylab inline
rcParams.update({'font.size':20})
n_mc = 10000
x = np.random.rand(n_mc,)
y = np.random.rand(n_mc,)
theta = np.linspace(0., 0.5*np.pi, 100)
cx = np.cos(theta)
cy = np.sin(theta)
pi_est = 4.0*np.sum((x**2+y**2)<1.)/n_mc
fig = figure(figsize=(8,8))
ax = fig.add_subplot(111)
line, = plot(x, y, '.')
c, = plot(cx, cy, 'r-')
ax.set_xlabel("X")
ax.set_ylabel("Y")
ax.set_title("estimated pi = {:.10f}".format(pi_est));
print(1)
Explanation: matplotlib
End of explanation |
4,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graph format
The EDeN library allows the vectorization of graphs, i.e. the transformation of graphs into sparse vectors.
The graphs that can be processed by the EDeN library have the following restrictions
Step1: Build graphs and then display them
Step2: Create a vector representation
Step3: embed the high dimensional vector in 2D using PCA and plot the instances
Step4: Compute pairwise similarity matrix | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
from eden.util import configure_logging
import logging
logger = logging.getLogger()
configure_logging(logger,verbosity=1)
import pylab as plt
import networkx as nx
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
from eden.display import serialize_graph
print serialize_graph(G)
from eden.display import draw_graph
draw_graph(G, size=15, vertex_size=1500, font_size=14, vertex_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label='A', vec=[0,0,.1])
G.add_node(1, label='B', vec=[0,.1,0])
G.add_node(2, label='C', vec=[.1,0,0])
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
draw_graph(G, secondary_vertex_label='vec', size=15, vertex_size=1500, font_size=14, vertex_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label='A', svec={'A':1, 'B':2, 'C':3})
G.add_node(1, label='B', svec={'A':1, 'B':2, 'D':3})
G.add_node(2, label='C', svec={'A':1, 'D':2, 'E':3})
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='y')
G.add_edge(2,0, label='z')
draw_graph(G, secondary_vertex_label='svec', size=15, vertex_size=1500, font_size=14, vertex_border=True, size_x_to_y_ratio=3)
G=nx.Graph()
G.add_node(0, label='A', weight=.5)
G.add_node(1, label='B', weight=1)
G.add_node(3, label='D', weight=1)
G.add_node(4, label='E', weight=1)
G.add_node(5, label='F', weight=.01)
G.add_edge(0,1, label='x', weight=.75)
G.add_edge(1,3, label='z', nesting=True, weight=.5)
G.add_edge(0,3, label='z', nesting=True, weight=.1)
G.add_edge(3,4, label='k')
G.add_edge(3,5, label='j')
draw_graph(G, secondary_vertex_label='weight', secondary_edge_label='weight', size=15, vertex_size=1500, font_size=14, vertex_border=True, size_x_to_y_ratio=3)
Explanation: Graph format
The EDeN library allows the vectorization of graphs, i.e. the transformation of graphs into sparse vectors.
The graphs that can be processed by the EDeN library have the following restrictions:
- the graphs are implemented as networkx graphs
- nodes and edges have identifiers: the following identifiers are used as reserved words
1. label
2. weight
3. vec or svec
4. nesting
nodes and edges must have the 'label' attribute
the 'label' attribute can be one of the following Python types:
string
list
dictionary
strings are used to represent categorical values;
lists are used to represent dense vectors;
dictionaries are used to represent sparse vectors: keys are of string type and values are of type float;
nodes and edges can have a 'weight' attribute of type float
nodes can have a 'vec' attribute of type list of reals
nodes can have a 'svec' attribute of type dictionary of reals
nesting edges must have a 'nesting' attribute of type boolean set to True
End of explanation
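A minimal sketch (not from the original notebook) combining the reserved attributes listed above in one toy graph; the graph name and feature keys are made up for illustration.
H = nx.Graph()
H.add_node(0, label='A', vec=[1, 0, 0])                 # dense vector attribute
H.add_node(1, label='B', svec={'f1': 1.0, 'f2': 0.3})   # sparse vector attribute
H.add_node(2, label='C', weight=0.5)                    # weighted node
H.add_edge(0, 1, label='x', weight=0.75)                # weighted edge
H.add_edge(0, 2, label='z', nesting=True)               # nesting edge flagged with nesting=True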
import networkx as nx
graph_list = []
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='C')
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='X')
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A')
G.add_node(1, label='B')
G.add_node(2, label='X')
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='x')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='X')
G.add_node(1, label='X')
G.add_node(2, label='X')
G.add_edge(0,1, label='x')
G.add_edge(1,2, label='x')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', vec=[1,0,0])
G.add_node(1, label='B', vec=[0,1,0])
G.add_node(2, label='C', vec=[0,0,1])
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', vec=[1,1,0])
G.add_node(1, label='B', vec=[0,1,1])
G.add_node(2, label='C', vec=[0,0,1])
G.add_edge(0,1, label='a', entity='CATEG_EDGE')
G.add_edge(1,2, label='b', entity='CATEG_EDGE')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', vec=[1,0.1,0.2])
G.add_node(1, label='B', vec=[0.3,1,0.4])
G.add_node(2, label='C', vec=[0.5,0.6,1])
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', vec=[0.1,0.2,0.3])
G.add_node(1, label='B', vec=[0.4,0.5,0.6])
G.add_node(2, label='C', vec=[0.7,0.8,0.9])
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', svec={'A':1, 'B':1, 'C':1})
G.add_node(1, label='B', svec={'a':1, 'B':1, 'C':1})
G.add_node(2, label='C', svec={'a':1, 'b':1, 'C':1})
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', svec={'A':1, 'C':1, 'D':1})
G.add_node(1, label='B', svec={'a':1, 'C':1, 'D':1})
G.add_node(2, label='C', svec={'a':1, 'C':1, 'D':1})
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', svec={'A':1, 'D':1, 'E':1})
G.add_node(1, label='B', svec={'a':1, 'D':1, 'E':1})
G.add_node(2, label='C', svec={'a':1, 'D':1, 'E':1})
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
G=nx.Graph()
G.add_node(0, label='A', svec={'A':1, 'B':1, 'C':1, 'D':1, 'E':1})
G.add_node(1, label='B', svec={'a':1, 'B':1, 'C':1, 'D':1, 'E':1})
G.add_node(2, label='C', svec={'a':1, 'b':1, 'C':1, 'D':1, 'E':1})
G.add_edge(0,1, label='a')
G.add_edge(1,2, label='b')
graph_list += [G.copy()]
Explanation: Build graphs and then display them
End of explanation
%%time
from eden.graph import vectorize
X = vectorize(graph_list, complexity=2, nbits=16, discrete=False)
y=[1]*4+[2]*4+[3]*4
print 'Instances: %d \nFeatures: %d with an avg of %d features per instance' % (X.shape[0], X.shape[1], X.getnnz()/X.shape[0])
Explanation: Create a vector representation
End of explanation
import pylab as plt
def plot(X,y):
size=8
cmap = 'rainbow'
plt.figure(figsize=(size, size))
plt.xticks([])
plt.yticks([])
plt.axis('off')
plt.scatter(X[:, 0], X[:, 1], alpha=0.7, c=y, cmap=cmap, s=50, edgecolors='k')
for i in range(X.shape[0]):
plt.annotate(str(i), (X[i, 0], X[i, 1]), xytext=(-3, 8), textcoords='offset points')
plt.show()
from sklearn.decomposition import TruncatedSVD
Xd = TruncatedSVD(n_components=2).fit_transform(X)
plot(Xd,y)
Explanation: embed the high dimensional vector in 2D using PCA and plot the instances
End of explanation
from sklearn import metrics
K=metrics.pairwise.pairwise_kernels(X, metric='linear')
from ipy_table import *
def prep_table(K):
header = [' ']
header += [i for i in range(K.shape[0])]
mat = [header]
for id, row in enumerate(K):
new_row = [id]
new_row += list(row)
mat.append(new_row)
return mat
mat=prep_table(K)
make_table(mat)
apply_theme('basic')
set_global_style(float_format = '%0.2f')
%matplotlib inline
import pylab as plt
plt.figure( figsize=(6,6) )
img = plt.imshow( K, interpolation='none', cmap=plt.get_cmap( 'YlOrRd' ) )
plt.show()
Explanation: Compute pairwise similarity matrix
End of explanation |
4,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load a sample of the raw JSON data into pandas.
Step1: Transform the full JSON file into a CSV, removing any stuff that we won't need
Step3: Creates CSVs of text from comments made by users who have posted about anorexia or obesity. | Python Code:
import pandas as pd
json_file = 'sample_data'
list(pd.read_json(json_file, lines=True))
Explanation: Load a sample of the raw JSON data into pandas.
End of explanation
import csv
import json
from nltk.tokenize import TweetTokenizer
from tqdm import tqdm
MIN_NUM_WORD_TOKENS = 10
TOTAL_NUM_LINES = 53851542 # $ wc -l data_full.json
PBAR_UPDATE_SIZE = 10000
tokenizer = TweetTokenizer()
def _ok_to_write(entries):
if entries['author'] == '[deleted]':
return False
if entries['body'] == '[deleted]' or len(tokenizer.tokenize(entries['body'])) < MIN_NUM_WORD_TOKENS:
return False
return True
out_columns = [
'author',
'body',
'subreddit',
'subreddit_id',
'score',
]
in_filename = 'data_full.json'
out_filename = 'data_full_preprocessed.csv'
count = 0
pbar = tqdm(total=TOTAL_NUM_LINES)
with open(out_filename, 'w') as o:
writer = csv.DictWriter(o, fieldnames=out_columns, extrasaction='ignore',
delimiter=',', quoting=csv.QUOTE_MINIMAL)
writer.writeheader()
with open(in_filename, 'r') as f:
for line in f:
count += 1
if count % PBAR_UPDATE_SIZE == 0:
pbar.update(PBAR_UPDATE_SIZE)
entries = json.loads(line)
if _ok_to_write(entries):
writer.writerow(entries)
print('Done. Processed {} lines total.'.format(count))
Explanation: Transform the full JSON file into a CSV, removing any stuff that we won't need:
[deleted] users or comments
comments with <10 tokens
(WARNING: this takes ~2.5 hours)
End of explanation
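A quick, optional sanity check of the _ok_to_write filter on made-up records (not part of the original pipeline):
print(_ok_to_write({'author': '[deleted]', 'body': 'whatever text'}))   # False: deleted author
print(_ok_to_write({'author': 'someone', 'body': 'too short'}))         # False: fewer than 10 tokens
print(_ok_to_write({'author': 'someone', 'body': 'one two three four five six seven eight nine ten'}))  # True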
import pandas as pd
from tqdm import tqdm
from nltk.corpus import wordnet
from nltk.stem.porter import *
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import TweetTokenizer
tokenizer = TweetTokenizer()
wordnet_lemmatizer = WordNetLemmatizer()
# Create synonym sets for obesity and anorexia
def syn_set(word_list):
syns = set()
for word in word_list:
for synset in wordnet.synsets(word):
for lemma in synset.lemmas():
syns.add(lemma.name())
return syns
OBESITY_SYNS = syn_set(['obesity'])
ANOREXIA_SYNS = syn_set(['anorexia'])
def row_filter_fn(df, syns):
    """Returns True if the row should be included, False otherwise."""
# Check if any synonyms can be found.
if set([wordnet_lemmatizer.lemmatize(token.lower()) for token in tokenizer.tokenize(df)]) & syns:
return True
return False
csv_filename = 'data_full_preprocessed.csv'
chunksize = 10000
count = 0
obesity_data_frames = []
anorexia_data_frames = []
for chunk in tqdm(pd.read_csv(csv_filename, chunksize=chunksize)):
obesity_df = chunk[chunk['body'].apply(row_filter_fn, syns=OBESITY_SYNS)]
if not obesity_df.empty:
obesity_data_frames.append(obesity_df)
anorexia_df = chunk[chunk['body'].apply(row_filter_fn, syns=ANOREXIA_SYNS)]
if not anorexia_df.empty:
anorexia_data_frames.append(anorexia_df)
count += 1
#if count == 100: break
print('Total # chunks processed: {}.'.format(count))
# Write out to CSVs.
pd.concat(obesity_data_frames).to_csv('obesity.csv', index=False)
pd.concat(anorexia_data_frames).to_csv('anorexia.csv', index=False)
Explanation: Creates CSVs of text from comments made by users who have posted about anorexia or obesity.
End of explanation |
4,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Negative Binomial Regression (Students absence example)
Negative binomial distribution review
I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I've first learned, and the one I like the most, says as follows
Step1: In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes.
Step2: For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.
Step3: Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values.
Step4: Negative binomial in GLM
The negative binomial distribution belongs to the exponential family, and the canonical link function is
$$
g(\mu_i) = \log\left(\frac{\mu_i}{k + \mu_i}\right) = \log\left(\frac{k}{\mu_i} + 1\right)
$$
but it is difficult to interpret. The log link is usually preferred because of the analogy with Poisson model, and it also tends to give better results.
Load and explore Students data
This example is based on this UCLA example.
School administrators study the attendance behavior of high school juniors at two schools. Predictors of the number of days of absence include the type of program in which the student is enrolled and a standardized test in math. We have attendance data on 314 high school juniors.
The variables of interest in the dataset are
daysabs
Step5: We assign categories to the values 1, 2, and 3 of our "prog" variable.
Step6: The Academic program is the most popular program (167/314) and General is the least popular one (40/314)
Step7: Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values.
Step8: The first impression we have is that the distribution of math scores is not the same across the programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly those in the Vocational program have the highest mean for the math score.
On the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program present the highest absence mean while the Vocational group is the one who misses fewer classes on average.
Models
We are interested in measuring the association between the type of the program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program.
In order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. The second model also includes the interaction between these two variables. The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling math and comparing how long it took to fit.
We are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think why we use this likelihood. Earlier, we said that the negative binomial distribution arises when our variable represents the number of trials until we get $k$ successes. However, the number of trials is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we could think of two alternative views for this problem
Each of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something alike. A problem here is that we have the sum of $y$ for a student, but not the $n$.
The whole school year represents the space where events occur and we count how many absences we see in that space for each student. This gives us a Poisson regression setting (count of an event in a given space or time).
We also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\lambda = n * p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results.
But then, why negative binomial? Can't we just use a Poisson likelihood?
Yes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit. Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean, there's more flexibility to handle a given dataset, and consequently the fit tends to be better.
Model 1
$$
\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i
$$
Model 2
$$
\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i
+ \beta_5 \text{General}_i \cdot \text{Math_std}_i + \beta_6 \text{Vocational}_i \cdot \text{Math_std}_i
$$
In both cases we have the following dummy variables
$$\text{Academic}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under Academic program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
$$\text{General}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under General program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
$$\text{Vocational}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under Vocational program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
and $Y$ represents the days of absence.
So, for example, the first model for a student under the Vocational program reduces to
$$
\log{Y_i} = \beta_3 + \beta_4 \text{Math_std}_i
$$
And one last thing to note is we've decided not to include an intercept term, that's why you don't see any $\beta_0$ above. This choice allows us to represent the effect of each program directly with $\beta_1$, $\beta_2$, and $\beta_3$.
Model fit
It's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The 0 on the right hand side of ~ simply means we don't want to have the intercept term that is added by default. scale(math) tells Bambi we want to use standardize math before being included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here.
Model 1
Step9: Model 2
For this second model we just add prog:scale(math) to indicate the interaction.
Step10: Explore models
The first thing we do is call az.summary(). Here we pass the InferenceData object that .fit() returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics.
Step11: The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with plot_forest(). There we simply pass a list containing the InferenceData objects of the models we want to compare.
Step12: One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of scale(math) is slightly lower in the model that considers the interaction, but the difference is not significant.
We can also make conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes.
In addition, the marginal posterior for math shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs).
Step13: Plot predicted mean response
We finish this example showing how we can get predictions for new data and plot the mean response for each program together with confidence intervals.
Step14: As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0.
If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use az.compare() to compare the fit of the two models. What do you expect before seeing the plot? Why? Is there anything else you could do to improve the fit of the model?
Also, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace family="negativebinomial" with family="poisson" and then you're ready to compare results! | Python Code:
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import nbinom
az.style.use("arviz-darkgrid")
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
Explanation: Negative Binomial Regression (Students absence example)
Negative binomial distribution review
I always experience some kind of confusion when looking at the negative binomial distribution after a while of not working with it. There are so many different definitions that I usually need to read everything more than once. The definition I've first learned, and the one I like the most, says as follows: The negative binomial distribution is the distribution of a random variable that is defined as the number of independent Bernoulli trials until the k-th "success". In short, we repeat a Bernoulli experiment until we observe k successes and record the number of trials it required.
$$
Y \sim \text{NB}(k, p)
$$
where $0 \le p \le 1$ is the probability of success in each Bernoulli trial, $k > 0$, usually integer, and $y \in \{k, k + 1, \cdots\}$
The probability mass function (pmf) is
$$
p(y | k, p)= \binom{y - 1}{y-k}(1 -p)^{y - k}p^k
$$
If you, like me, find it hard to remember whether $y$ starts at $0$, $1$, or $k$, try to think twice about the definition of the variable. But how? First, recall we aim to have $k$ successes. And success is one of the two possible outcomes of a trial, so the number of trials can never be smaller than the number of successes. Thus, we can be confident to say that $y \ge k$.
But this is not the only way of defining the negative binomial distribution, there are plenty of options! One of the most interesting, and the one you see in PyMC3, the library we use in Bambi for the backend, is as a continuous mixture. The negative binomial distribution describes a Poisson random variable whose rate is also a random variable (not a fixed constant!) following a gamma distribution. Or in other words, conditional on a gamma-distributed variable $\mu$, the variable $Y$ has a Poisson distribution with mean $\mu$.
Under this alternative definition, the pmf is
$$
\displaystyle p(y | \mu, \alpha) = \binom{y + \alpha - 1}{y} \left(\frac{\alpha}{\mu + \alpha}\right)^\alpha\left(\frac{\mu}{\mu + \alpha}\right)^y
$$
where $\mu$ is the parameter of the Poisson distribution (the mean, and variance too!) and $\alpha$ is the shape parameter of the gamma.
End of explanation
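A small simulation sketch (not in the original notebook) of the gamma–Poisson mixture view described above, using the illustrative values mu = 5 and alpha = 2; the sample mean and variance should come out close to mu and mu + mu**2 / alpha.
rng = np.random.default_rng(1234)
mu, alpha = 5.0, 2.0                                           # illustrative values only
lam = rng.gamma(shape=alpha, scale=mu / alpha, size=200_000)   # gamma-distributed rates with mean mu
samples = rng.poisson(lam)                                     # Poisson draws conditional on those rates
print(samples.mean(), samples.var())                           # approx. mu and mu + mu**2 / alpha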
y = np.arange(0, 30)
k = 3
p1 = 0.5
p2 = 0.3
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
ax[0].bar(y, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(0, 30, num=11))
ax[0].set_title(f"k = {k}, p = {p1}")
ax[1].bar(y, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(0, 30, num=11))
ax[1].set_title(f"k = {k}, p = {p2}")
fig.suptitle("Y = Number of failures until k successes", fontsize=16);
Explanation: In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes.
End of explanation
print(nbinom.pmf(y, k, p1)[0])
print(nbinom.pmf(y, k, p1)[3])
Explanation: For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
ax[0].bar(y + k, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(3, 30, num=10))
ax[0].set_title(f"k = {k}, p = {p1}")
ax[1].bar(y + k, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(3, 30, num=10))
ax[1].set_title(f"k = {k}, p = {p2}")
fig.suptitle("Y = Number of trials until k successes", fontsize=16);
Explanation: Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values.
End of explanation
data = pd.read_stata("https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta")
data.head()
Explanation: Negative binomial in GLM
The negative binomial distribution belongs to the exponential family, and the canonical link function is
$$
g(\mu_i) = \log\left(\frac{\mu_i}{k + \mu_i}\right) = \log\left(\frac{k}{\mu_i} + 1\right)
$$
but it is difficult to interpret. The log link is usually preferred because of the analogy with Poisson model, and it also tends to give better results.
Load and explore Students data
This example is based on this UCLA example.
School administrators study the attendance behavior of high school juniors at two schools. Predictors of the number of days of absence include the type of program in which the student is enrolled and a standardized test in math. We have attendance data on 314 high school juniors.
The variables of interest in the dataset are
daysabs: The number of days of absence. It is our response variable.
prog: The type of program. Can be one of 'General', 'Academic', or 'Vocational'.
math: Score in a standardized math test.
End of explanation
data["prog"] = data["prog"].map({1: "General", 2: "Academic", 3: "Vocational"})
data.head()
Explanation: We assign categories to the values 1, 2, and 3 of our "prog" variable.
End of explanation
data["prog"].value_counts()
Explanation: The Academic program is the most popular program (167/314) and General is the least popular one (40/314)
End of explanation
fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex="col")
programs = list(data["prog"].unique())
programs.sort()
for idx, program in enumerate(programs):
# Histogram
ax[idx, 0].hist(data[data["prog"] == program]["math"], edgecolor='black', alpha=0.9)
ax[idx, 0].axvline(data[data["prog"] == program]["math"].mean(), color="C1")
# Barplot
days = data[data["prog"] == program]["daysabs"]
days_mean = days.mean()
days_counts = days.value_counts()
values = list(days_counts.index)
count = days_counts.values
ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9)
ax[idx, 1].axvline(days_mean, color="C1")
# Titles
ax[idx, 0].set_title(program)
ax[idx, 1].set_title(program)
plt.setp(ax[-1, 0], xlabel="Math score")
plt.setp(ax[-1, 1], xlabel="Days of absence");
Explanation: Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values.
End of explanation
model_additive = bmb.Model("daysabs ~ 0 + prog + scale(math)", data, family="negativebinomial")
idata_additive = model_additive.fit()
Explanation: The first impression we have is that the distribution of math scores is not the same across the programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly those in the Vocational program have the highest mean for the math score.
On the other hand, the distribution of the days of absence is right-skewed in all cases. Students in the General program present the highest absence mean while the Vocational group is the one who misses fewer classes on average.
Models
We are interested in measuring the association between the type of the program and the math score with the days of absence. It's also of interest to see if the association between math score and days of absence is different in each type of program.
In order to answer our questions, we are going to fit and compare two models. The first model uses the type of the program and the math score as predictors. The second model also includes the interaction between these two variables. The score in the math test is going to be standardized in both cases to make things easier for the sampler and save some seconds. A good idea to follow along is to run these models without scaling math and comparing how long it took to fit.
We are going to use a negative binomial likelihood to model the days of absence. But let's stop here and think why we use this likelihood. Earlier, we said that the negative binomial distribution arises when our variable represents the number of trials until we get $k$ successes. However, the number of trials is fixed, i.e. the number of school days in a given year is not a random variable. So if we stick to the definition, we could think of two alternative views for this problem
Each of the $n$ days is a trial, and we record whether the student is absent ($y=1$) or not ($y=0$). This corresponds to a binary regression setting, where we could think of logistic regression or something alike. A problem here is that we have the sum of $y$ for a student, but not the $n$.
The whole school year represents the space where events occur and we count how many absences we see in that space for each student. This gives us a Poisson regression setting (count of an event in a given space or time).
We also know that when $n$ is large and $p$ is small, the Binomial distribution can be approximated with a Poisson distribution with $\lambda = n * p$. We don't know exactly $n$ in this scenario, but we know it is around 180, and we do know that $p$ is small because you can't skip classes all the time. So both modeling approaches should give similar results.
But then, why negative binomial? Can't we just use a Poisson likelihood?
Yes, we can. However, using a Poisson likelihood implies that the mean is equal to the variance, and that is usually an unrealistic assumption. If it turns out the variance is either substantially smaller or greater than the mean, the Poisson regression model results in a poor fit. Alternatively, if we use a negative binomial likelihood, the variance is not forced to be equal to the mean, there's more flexibility to handle a given dataset, and consequently the fit tends to be better.
Model 1
$$
\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i
$$
Model 2
$$
\log{Y_i} = \beta_1 \text{Academic}_i + \beta_2 \text{General}_i + \beta_3 \text{Vocational}_i + \beta_4 \text{Math_std}_i
+ \beta_5 \text{General}_i \cdot \text{Math_std}_i + \beta_6 \text{Vocational}_i \cdot \text{Math_std}_i
$$
In both cases we have the following dummy variables
$$\text{Academic}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under Academic program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
$$\text{General}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under General program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
$$\text{Vocational}_i =
\left\{
\begin{array}{ll}
1 & \textrm{if student is under Vocational program} \\
0 & \textrm{other case}
\end{array}
\right.
$$
and $Y$ represents the days of absence.
So, for example, the first model for a student under the Vocational program reduces to
$$
\log{Y_i} = \beta_3 + \beta_4 \text{Math_std}_i
$$
And one last thing to note is we've decided not to include an intercept term, that's why you don't see any $\beta_0$ above. This choice allows us to represent the effect of each program directly with $\beta_1$, $\beta_2$, and $\beta_3$.
Model fit
It's very easy to fit these models with Bambi. We just pass a formula describing the terms in the model and Bambi will know how to handle each of them correctly. The 0 on the right hand side of ~ simply means we don't want to have the intercept term that is added by default. scale(math) tells Bambi we want to use standardize math before being included in the model. By default, Bambi uses a log link for negative binomial GLMs. We'll stick to this default here.
Model 1
End of explanation
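Before fitting the second model, a quick optional check of the overdispersion argument made above: the sample variance of the response is much larger than its mean, which is what motivates the negative binomial likelihood over a Poisson.
# Mean vs. variance of days of absence (variance >> mean indicates overdispersion)
print(data["daysabs"].mean(), data["daysabs"].var())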
model_interaction = bmb.Model("daysabs ~ 0 + prog + scale(math) + prog:scale(math)", data, family="negativebinomial")
idata_interaction = model_interaction.fit()
Explanation: Model 2
For this second model we just add prog:scale(math) to indicate the interaction. A shorthand would be to use y ~ 0 + prog*scale(math), which uses the full interaction operator. In other words, it just means we want to include the interaction between prog and scale(math) as well as their main effects.
End of explanation
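For completeness, a sketch of the shorthand mentioned above (built but not fitted here; the name model_interaction_alt is made up): the * operator expands into the main effects plus their interaction.
model_interaction_alt = bmb.Model(
    "daysabs ~ 0 + prog*scale(math)", data, family="negativebinomial"
)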
az.summary(idata_additive)
az.summary(idata_interaction)
Explanation: Explore models
The first thing we do is call az.summary(). Here we pass the InferenceData object that .fit() returned. This prints information about the marginal posteriors for each parameter in the model as well as convergence diagnostics.
End of explanation
az.plot_forest(
[idata_additive, idata_interaction],
model_names=["Additive", "Interaction"],
var_names=["prog", "scale(math)"],
combined=True,
figsize=(8, 4)
);
Explanation: The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with plot_forest(). There we simply pass a list containing the InferenceData objects of the models we want to compare.
End of explanation
az.plot_forest(idata_interaction, var_names=["prog:scale(math)"], combined=True, figsize=(8, 4))
plt.axvline(0);
Explanation: One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of scale(math) is slightly lower in the model that considers the interaction, but the difference is not significant.
We can also make conclusions about the association between the program and the math score with the days of absence. First, we see the posterior for the Vocational group is to the left of the posterior for the two other programs, meaning it is associated with fewer absences (as we have seen when first exploring our data). There also seems to be a difference between General and Academic, where we may conclude the students in the General group tend to miss more classes.
In addition, the marginal posterior for math shows negative values in both cases. This means that students with higher math scores tend to miss fewer classes. Below, we see a forest plot with the posteriors for the coefficients of the interaction effects. Both of them overlap with 0, which means the data does not give much evidence to support there is an interaction effect between program and math score (i.e., the association between math and days of absence is similar for all the programs).
End of explanation
math_score = np.arange(1, 100)
# This function takes a model and an InferenceData object.
# It returns a list of length 3 with predictions, one for each type of program.
def predict(model, idata):
predictions = []
for program in programs:
new_data = pd.DataFrame({"math": math_score, "prog": [program] * len(math_score)})
new_idata = model.predict(
idata,
data=new_data,
inplace=False
)
prediction = new_idata.posterior.stack(sample=["chain", "draw"])["daysabs_mean"].values
predictions.append(prediction)
return predictions
prediction_additive = predict(model_additive, idata_additive)
prediction_interaction = predict(model_interaction, idata_interaction)
mu_additive = [prediction.mean(1) for prediction in prediction_additive]
mu_interaction = [prediction.mean(1) for prediction in prediction_interaction]
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4))
for idx, program in enumerate(programs):
ax[0].plot(math_score, mu_additive[idx], label=f"{program}", color=f"C{idx}", lw=2)
az.plot_hdi(math_score, prediction_additive[idx].T, color=f"C{idx}", ax=ax[0])
ax[1].plot(math_score, mu_interaction[idx], label=f"{program}", color=f"C{idx}", lw=2)
az.plot_hdi(math_score, prediction_interaction[idx].T, color=f"C{idx}", ax=ax[1])
ax[0].set_title("Additive");
ax[1].set_title("Interaction");
ax[0].set_xlabel("Math score")
ax[1].set_xlabel("Math score")
ax[0].set_ylim(0, 25)
ax[0].legend(loc="upper right");
Explanation: Plot predicted mean response
We finish this example by showing how we can get predictions for new data and plot the mean response for each program together with credible (HDI) intervals.
End of explanation
%load_ext watermark
%watermark -n -u -v -iv -w
Explanation: As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0.
If you've made it to the end of this notebook and you're still curious about what else you can do with these two models, you're invited to use az.compare() to compare the fit of the two models. What do you expect before seeing the plot? Why? Is there anything else you could do to improve the fit of the model?
Also, if you're still curious about what this model would have looked like with the Poisson likelihood, you just need to replace family="negativebinomial" with family="poisson" and then you're ready to compare results!
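A sketch of the suggested comparison (assuming the pointwise log-likelihood was stored when fitting, e.g. with idata_kwargs={"log_likelihood": True}):
df_compare = az.compare({"additive": idata_additive, "interaction": idata_interaction})
df_compare
az.plot_compare(df_compare);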
End of explanation |
4,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Fourier Transform
Let's download an audio file
Step1: Listen to the audio file
Step2: Fourier Transform
The Fourier Transform (Wikipedia) is one of the most fundamental operations in applied mathematics and signal processing.
It transforms our time-domain signal into the frequency domain. Whereas the time domain expresses our signal as a sequence of samples, the frequency domain expresses our signal as a superposition of sinusoids of varying magnitudes, frequencies, and phase offsets.
To compute a Fourier transform in NumPy or SciPy, use scipy.fft
Step3: Plot the spectrum
Step4: Zoom in | Python Code:
import urllib  # Python 2 style; on Python 3 use urllib.request.urlretrieve instead
import numpy, scipy, matplotlib.pyplot as plt
import librosa, IPython.display as ipd  # needed below for loading, analyzing, and playing the audio
filename = 'c_strum.wav'
urllib.urlretrieve('http://audio.musicinformationretrieval.com/c_strum.wav', filename=filename)
x, sr = librosa.load(filename)
print(x.shape)
print(sr)
Explanation: ← Back to Index
Fourier Transform
Let's download an audio file:
End of explanation
ipd.Audio(x, rate=sr)
Explanation: Listen to the audio file:
End of explanation
X = scipy.fft(x)
X_mag = numpy.absolute(X)
f = numpy.linspace(0, sr, len(X_mag)) # frequency variable
Explanation: Fourier Transform
The Fourier Transform (Wikipedia) is one of the most fundamental operations in applied mathematics and signal processing.
It transforms our time-domain signal into the frequency domain. Whereas the time domain expresses our signal as a sequence of samples, the frequency domain expresses our signal as a superposition of sinusoids of varying magnitudes, frequencies, and phase offsets.
To compute a Fourier transform in NumPy or SciPy, use scipy.fft:
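(Version note: in older SciPy releases scipy.fft is a callable function, as used above; in newer releases scipy.fft is a module, so the equivalent calls would be:)
# X = scipy.fft.fft(x)      # scipy.fft is a module in recent SciPy versions
# X = numpy.fft.fft(x)      # NumPy's FFT gives the same result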
End of explanation
plt.figure(figsize=(13, 5))
plt.plot(f, X_mag) # magnitude spectrum
plt.xlabel('Frequency (Hz)')
Explanation: Plot the spectrum:
End of explanation
plt.figure(figsize=(13, 5))
plt.plot(f[:5000], X_mag[:5000])
plt.xlabel('Frequency (Hz)')
Explanation: Zoom in:
End of explanation |
4,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Digits Dataset
Step2: Split Into Training And Test Sets
Step3: Fit Standardizer To Training Set
Step4: Apply Standardizer To Training And Test Sets | Python Code:
# Load libraries
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
Explanation: Title: Split Data Into Training And Test Sets
Slug: split_data_into_training_and_test_sets
Summary: How to split data into training and test sets for machine learning in Python.
Date: 2017-09-15 12:00
Category: Machine Learning
Tags: Model Evaluation
Authors: Chris Albon
Preliminaries
End of explanation
# Load the digits dataset
digits = datasets.load_digits()
# Create the features matrix
X = digits.data
# Create the target vector
y = digits.target
Explanation: Load Digits Dataset
End of explanation
# Create training and test sets
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.1,
random_state=1)
Explanation: Split Into Training And Test Sets
End of explanation
# Create standardizer
standardizer = StandardScaler()
# Fit standardizer to training set
standardizer.fit(X_train)
Explanation: Fit Standardizer To Training Set
End of explanation
# Apply to both training and test sets
X_train_std = standardizer.transform(X_train)
X_test_std = standardizer.transform(X_test)
Explanation: Apply Standardizer To Training And Test Sets
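As an aside (not part of the original recipe), the same fit-on-train / transform-both pattern can be wrapped in a scikit-learn Pipeline so the scaler can never accidentally be fit on test data; the classifier here is just an arbitrary example:
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

# The pipeline fits the scaler on training data only, then only transforms the test data
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))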
End of explanation |
4,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step12: Module is an abstract class which defines fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
Step19: Sequential container
Define the forward and backward pass procedures.
Step21: Layers
input
Step22: This one is probably the hardest, but like the others it only takes about 5 lines of code in total.
- input
Step23: Implement dropout. The idea and implementation are really simple
Step24: Activation functions
Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU)
Step25: Implement the Leaky Rectified Linear Unit. Experiment with the slope.
Step31: Criterions
Criterions are used to score the model's answers.
Step32: The MSECriterion, which is the basic L2 (squared-error) loss usually used for regression, is implemented here for you.
Step33: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Although there is a sum over y (target) in that formula,
remember that targets are one-hot encoded; this fact simplifies the computations a lot. Note that criterions are the only place where you divide by the batch size. | Python Code:
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
Basically, you can think of a module as of a something (black box)
which can process `input` data and produce `ouput` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
More, it should be able to differentiate it if is a part of chain (chain rule).
The latter implies there is a gradient from previous step of a chain rule.
gradInput = module.backward(input, gradOutput)
def forward(self, input):
Takes an input object, and computes the corresponding output of the module.
return self.updateOutput(input)
def backward(self,input, gradOutput):
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
pass
def zeroGradParameters(self):
Zeroes `gradParams` variable if the module has params.
pass
def getParameters(self):
Returns a list with its parameters.
If the module does not have parameters return empty list.
return []
def getGradParameters(self):
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
return []
def training(self):
Sets training mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = True
def evaluate(self):
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = False
def __repr__(self):
Pretty printing. Should be overridden in every module if you want
to have readable description.
return "Module"
Explanation: Module is an abstract class which defines fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
End of explanation
class Sequential(Module):
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
Adds a module to the container.
self.modules.append(module)
def updateOutput(self, input):
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
# Your code goes here. ################################################
# module = self.modules[0]
# y_curr = module.forward(input)
# for i in range(1, len(self.modules)):
# y_curr = self.modules[i].forward(y_curr)
# self.output = y_curr
# return self.output
#
# self.modules[0].output = self.modules[0].forward(input)
# for i in range(1, len(self.modules)):
# self.modules[i].output = self.modules[i].forward(self.modules[i-1].output)
# self.output = self.modules[-1].output
self.y = []
self.y.append(self.modules[0].forward(input))
for i in range(1, len(self.modules)):
self.y.append(self.modules[i].forward(self.y[-1]))
self.output = self.y[-1]
return self.output
def backward(self, input, gradOutput):
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
To each module you need to provide the input it saw during the forward pass;
it is used while computing gradients.
Make sure that the input for the `i-th` layer is the output of `module[i-1]` (just the same input as in the forward pass)
and NOT `input` to this Sequential module.
!!!
# Your code goes here. ################################################
# self.modules[-1].gradInput = self.modules[-1].backward(self.modules[-2].output, gradOutput)
# for i in range(len(self.modules) - 2, 0, -1):
# self.modules[i].gradInput = self.modules[i].backward(self.modules[i-1].output, self.modules[i+1].gradInput)
# i = 0
# self.modules[0].gradInput = self.modules[0].backward(input, self.modules[i+1].gradInput)
# self.gradInput = self.modules[0].gradInput
self.gradInput = self.modules[-1].backward(self.y[-2], gradOutput)
for i in range(len(self.modules) - 2, 0, -1):
self.gradInput = self.modules[i].backward(self.y[i-1], self.gradInput)
self.gradInput = self.modules[0].backward(input, self.gradInput)
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
Should gather all parameters in a list.
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
Should gather all gradients w.r.t parameters in a list.
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
Explanation: Sequential container
Define the forward and backward pass procedures.
End of explanation
class Linear(Module):
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
# Your code goes here. ################################################
# N = input.shape[0]
# newx = input.reshape((N,-1))
self.output = input.dot(self.W.T) + self.b
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
# x, dout = input, gradOutput
# N = x.shape[0]
# D = np.prod(x.shape[1:])
# x2 = np.reshape(x, (N, D))
# dx2 = np.dot(dout, w.T) # N x D
# dw = np.dot(x2.T, dout) # D x M
# db = np.dot(dout.T, np.ones(N)) # M x 1
# dx = np.reshape(dx2, x.shape)
# self.gradInput = dx, dw, db #FIXME ?
# self.gradb = np.sum(gradOutput,axis = 0)
self.gradInput = gradOutput.dot(self.W)#.reshape(*input.shape)
# self.gradW = input.reshape((input.shape[0],-1)).T.dot(gradOutput)
return self.gradInput
def accGradParameters(self, input, gradOutput):
# Your code goes here. ################################################
self.gradb = np.sum(gradOutput,axis = 0)
self.gradW = gradOutput.T.dot(input)
# self.gradW = input.reshape((input.shape[0],-1)).T.dot(gradOutput)
# pass
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
input_dim = 3
output_dim = 2
x = np.random.randn(5, input_dim)
w = np.random.randn(output_dim, input_dim)
b = np.random.randn(output_dim)
dout = np.random.randn(5, output_dim)
linear = Linear(input_dim, output_dim)
def update_W_matrix(new_W):
linear.W = new_W
return linear.forward(x)
def update_bias(new_b):
linear.b = new_b
return linear.forward(x)
dx = linear.backward(x, dout)
dx_num = eval_numerical_gradient_array(lambda x: linear.forward(x), x, dout)
dw_num = eval_numerical_gradient_array(update_W_matrix, w, dout)
db_num = eval_numerical_gradient_array(update_bias, b, dout)
print 'Testing Linear_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, linear.gradW)
print 'db error: ', rel_error(db_num, linear.gradb)
Explanation: Layers
input: batch_size x n_feats1
output: batch_size x n_feats2
End of explanation
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
# Your code goes here. ################################################
self.output = np.exp(self.output)
# out_sum = self.output.sum(axis=1, keepdims=True)
self.output = np.divide(self.output, self.output.sum(axis=1, keepdims=True))
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
# N = self.output.shape[0]
# self.gradInput = self.output.copy()
# self.gradInput[np.arange(N).astype(np.int), gradOutput.astype(np.int)] -= 1
# self.gradInput /= N
batch_size, n_feats = self.output.shape
a = self.output.reshape(batch_size, n_feats, -1)
b = self.output.reshape(batch_size, -1, n_feats)
self.gradInput = np.multiply(gradOutput.reshape(batch_size, -1, n_feats),
np.subtract(np.multiply(np.eye(n_feats), a),
np.multiply(a, b))).sum(axis=2)
return self.gradInput
def __repr__(self):
return "SoftMax"
soft_max = SoftMax()
x = np.random.randn(5, 3)
dout = np.random.randn(5, 3)
dx_numeric = eval_numerical_gradient_array(lambda x: soft_max.forward(x), x, dout)
dx = soft_max.backward(x, dout)
# The error should be around 1e-10
print 'Testing SoftMax grad:'
print 'dx error: ', rel_error(dx_numeric, dx)
Explanation: This one is probably the hardest, but like the others it only takes about 5 lines of code in total.
- input: batch_size x n_feats
- output: batch_size x n_feats
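A hint for the backward pass (a sketch of the math, not required code): for a single sample with s = softmax(x), the Jacobian is ds_i/dx_j = s_i * (delta_ij - s_j), so an incoming gradient g maps to dL/dx_i = s_i * (g_i - sum_j g_j * s_j). Per sample in NumPy that is roughly:
# grad_x = s * (g - np.dot(g, s))   # s: softmax output, g: incoming gradient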
End of explanation
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
# Your code goes here. ################################################
self.mask = np.random.binomial(1, self.p, input.shape) if self.training else np.ones(input.shape)
self.output = input*self.mask
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = gradOutput*self.mask
return self.gradInput
def __repr__(self):
return "Dropout"
Explanation: Implement dropout. The idea and implementation are really simple: just multiply the input by a $Bernoulli(p)$ mask.
This is a very cool regularizer. In fact, when you see your net is overfitting try to add more dropout.
While training (self.training == True) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. self.output = input.
input: batch_size x n_feats
output: batch_size x n_feats
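Side note (not required by this assignment): many frameworks use "inverted" dropout, which also rescales the kept units by 1/p at train time so that no change is needed at test time. A minimal sketch of that variant:
def inverted_dropout_forward(input, p, training=True):
    # keep each unit with probability p and rescale so the expected activation is unchanged
    if not training:
        return input
    mask = np.random.binomial(1, p, size=input.shape) / p
    return input * mask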
End of explanation
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
Explanation: Activation functions
Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU):
End of explanation
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
# Your code goes here. ################################################
# self.output = np.maximum(input, input*self.slope)
self.output = input.copy()
self.output[self.output < 0] *= self.slope
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
# self.gradInput = np.multiply(gradOutput, input > 0) #FIXME
self.gradInput = gradOutput.copy()
self.gradInput[input < 0] *= self.slope
return self.gradInput
def __repr__(self):
return "LeakyReLU"
Explanation: Implement the Leaky Rectified Linear Unit. Experiment with the slope.
End of explanation
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overridden,
all the code goes in `updateOutput`.
return self.updateOutput(input, target)
def backward(self, input, target):
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overridden,
all the code goes in `updateGradInput`.
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
Function to override.
return self.output
def updateGradInput(self, input, target):
Function to override.
return self.gradInput
def __repr__(self):
Pretty printing. Should be overridden in every module if you want
to have readable description.
return "Criterion"
Explanation: Criterions
Criterions are used to score the model's answers.
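For orientation, a typical training step wires a network and a criterion together roughly like this (a sketch using the interfaces defined above; net, criterion, x_batch, and one-hot y_batch are assumed to exist):
predictions = net.forward(x_batch)
loss = criterion.forward(predictions, y_batch)

net.zeroGradParameters()
grad_predictions = criterion.backward(predictions, y_batch)
net.backward(x_batch, grad_predictions)
# net.getParameters() and net.getGradParameters() then feed the optimizer update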
End of explanation
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
self.output = np.sum(np.power(input - target,2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = (input - target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
Explanation: The MSECriterion, which is the basic L2 (squared-error) loss usually used for regression, is implemented here for you.
End of explanation
class ClassNLLCriterion(Criterion):
def __init__(self):
a = super(ClassNLLCriterion, self)
super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
# Your code goes here. ################################################
# N = input_clamp.shape[0]
# self.output = -np.sum(np.log(input_clamp[np.arange(N).astype(np.int), target.astype(np.int)]+1e-8)) / N
self.output = -np.sum(np.multiply(target, np.log(input_clamp))) / len(input)
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
# Your code goes here. ################################################
self.gradInput = np.subtract(input_clamp, target) / len(input)
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
Explanation: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Although there is a sum over y (target) in that formula,
remember that targets are one-hot encoded; this fact simplifies the computations a lot. Note that criterions are the only place where you divide by the batch size.
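Concretely, with one-hot targets the double sum collapses to the log-probability of the true class (illustrative, not the required implementation):
# loss = -1/N * sum_i sum_k target[i, k] * log(prediction[i, k])
#      = -1/N * sum_i log(prediction[i, true_class_i])
# e.g. loss = -np.mean(np.log(predictions[np.arange(len(targets)), targets.argmax(axis=1)]))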
End of explanation |
4,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gradient reversal pytorch
Inspired by the following tweets
Step1: Tensorflow implementation
Step2: Modify the gradients
Keep forward pass the same.
The trick is to add $g(x)$, such that $g'(x)$ is the gradient modifier, during the forward pass and subtract it as well. But stop gradients from flowing through the subtraction part.
$f(x) + g(x) - g(x)$ (with the last term wrapped in stop_gradient) will lead to gradients $f'(x) + g'(x) - g'(x)$. Since gradients don't flow through $-g'(x)$, we are left with the new gradients $f'(x) + g'(x)$.
Step3: Gradient reversal
Here the modifying function $g(x)$ is simply the $-2*f(x)$, this will make the gradients $-f'(x)$.
Step4: Pytorch case
Step5: Modify gradients
Step6: Gradient reversal
Step7: Pytorch backward hooks | Python Code:
import torch
import tensorflow as tf
from torch.autograd import Variable
import numpy as np
def f(X):
return X*X
def g(X):
return X**3
X = np.random.randn(10)
X
Explanation: Gradient reversal pytorch
Inspired by the following tweets:
https://twitter.com/mat_kelcey/status/932149793765261313
https://twitter.com/ericjang11/status/932073259721359363
Basic idea:
```python
Add something to gradient
f(x) + g(x) - tf.stop_gradient(g(x))
Reverse gradient
tf.stop_gradient(f(x)*2) - f(x)
```
End of explanation
sess = tf.InteractiveSession()
tf_X = tf.Variable(X)
init_op = tf.global_variables_initializer()
sess.run(init_op)
sess.run(tf_X)
forward_op = f(tf_X)
sess.run(forward_op)
gradient_op = tf.gradients(forward_op, tf_X)
sess.run(gradient_op)
X*2 # This should match the gradient above
Explanation: Tensorflow implementation
End of explanation
gradient_modifier_op = g(tf_X)
sess.run(gradient_modifier_op)
modified_forward_op = (f(tf_X) + g(tf_X) - tf.stop_gradient(g(tf_X)))
modified_backward_op = tf.gradients(modified_forward_op, tf_X)
sess.run(modified_forward_op)
sess.run(modified_backward_op)
2*X + 3*(X**2) # This should match the gradients above
Explanation: Modify the gradients
Keep forward pass the same.
The trick is to add $g(x)$, such that $g'(x)$ is the gradient modifier, during the forward pass and subtract it as well. But stop gradients from flowing through the subtraction part.
$f(x) + g(x) - g(x)$ (with the last term wrapped in stop_gradient) will lead to gradients $f'(x) + g'(x) - g'(x)$. Since gradients don't flow through $-g'(x)$, we are left with the new gradients $f'(x) + g'(x)$.
End of explanation
gradient_reversal_op = (tf.stop_gradient(2*f(tf_X)) - f(tf_X))
gradient_reversal_grad_op = tf.gradients(gradient_reversal_op, tf_X)
sess.run(gradient_reversal_op)
sess.run(gradient_reversal_grad_op)
sess.run((gradient_op[0] + gradient_reversal_grad_op[0])) # This should be zero. Signifying grad is reversed.
Explanation: Gradient reversal
Here the modifying function $g(x)$ is simply the $-2*f(x)$, this will make the gradients $-f'(x)$.
End of explanation
def zero_grad(X):
if X.grad is not None:
X.grad.data.zero_()
torch_X = Variable(torch.FloatTensor(X), requires_grad=True)
torch_X.data.numpy()
f(torch_X).data.numpy()
g(torch_X).data.numpy()
zero_grad(torch_X)
f_X = f(torch_X)
f_X.backward(torch.ones(f_X.size()))
torch_X.grad.data.numpy()
2*X
Explanation: Pytorch case
End of explanation
modified_gradients_forward = lambda x: f(x) + g(x) - g(x).detach()
zero_grad(torch_X)
modified_grad = modified_gradients_forward(torch_X)
modified_grad.backward(torch.ones(modified_grad.size()))
torch_X.grad.data.numpy()
2*X + 3*(X*X) # It should be same as above
Explanation: Modify gradients
End of explanation
gradient_reversal = lambda x: (2*f(x)).detach() - f(x)
zero_grad(torch_X)
grad_reverse = gradient_reversal(torch_X)
grad_reverse.backward(torch.ones(grad_reverse.size()))
torch_X.grad.data.numpy()
-2*X # It should be same as above
Explanation: Gradient reversal
End of explanation
# Gradient reversal
zero_grad(torch_X)
f_X = f(torch_X)
f_X.register_hook(lambda grad: -grad)
f_X.backward(torch.ones(f_X.size()))
torch_X.grad.data.numpy()
-2*X
# Modified grad example
zero_grad(torch_X)
h = torch_X.register_hook(lambda grad: grad + 3*(torch_X*torch_X))
f_X = f(torch_X)
f_X.backward(torch.ones(f_X.size()))
h.remove()
torch_X.grad.data.numpy()
2*X + 3*(X*X) # It should be same as above
Explanation: Pytorch backward hooks
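For completeness (not from the original notebook), the same sign flip is often packaged as a custom autograd Function, the usual "gradient reversal layer" in domain-adaptation models; a sketch for newer PyTorch versions:
class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)      # identity in the forward pass
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output      # reversed gradient in the backward pass

reversed_out = GradReverse.apply(f(torch_X))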
End of explanation |
4,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
=============================================================================
E E||---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|
B ||-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|
G G||---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|
D D||---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|
A A||---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|
E E||---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|
=============================================================================
3 5 7 9 12 15 17 19
The fretboard python module
The fretboard module is a relatively simple python module for displaying notes and scales relative to a guitar's
fretboard. It currently supports ASCII output.
Notes
Before getting to scales and displays, it is useful to know how the module handles representations of musical notes.
The fretboard module doesn't deal with the acoustic pitch of notes; rather it represents notes in terms of their number of semitones relative to a C reference.
Creating notes by name
Notes can be created either by naming the note and optionally appending "#" or "b" to indicate a sharp or flat note.
Step1: If you like, you can use multiple sharps or flats.
Step2: Creating relative notes
Notes can also be created by specifying an interval (in number of semitones) relative to another note. For example, F is 5 semitones above C.
Step3: Adding 12 semitones gets us back to the same note.
Step4: Intervals
We can also take the difference of two notes, which yields the number of semitones between the notes.
Step5: Note that the interval printed above is the total number of semitones (which can span more than one octave) and can be positive or negative. If we want to know the simple (within-octave) interval from a note up to the next instance of another note, the interval method (or function) can be used.
Step6: Scales
The main purpose of this module is to display scales over a fretboard diagram. Let's take a look at the Major scale in the key of C.
Step7: We can also create different modes of the diatonic scale. For example, the minor scale is just the sixth mode of the diatonic scale.
Step8: There is also a Minor scale class so you can save some typing (and don't have to remember which mode of the major scale it is). In addition to the Major and Minor, there are also classes for HarmonicMinor, Pentatonic, and Blues scales.
Step9: The Blues scale adds a flat fifth (the blue note) to the pentatonic minor.
Step10: To make the scales useful, we have the ability to test for membership.
Step11: We can also get the interval of a given note with respect to the scale.
Step12: Displaying Notes and Scales
The fretboard module supports printing fretboard diagrams to the terminal output (stdout) via the console submodule. The quickest way to display simple scale diagrams is via the show_scale function.
Step13: There are various optional show_scale arguments that can be used to customize the display. The fmt
argument can be any of the following
Step14: Let's display scale intervals for Box 2 of the A Minor Pentatonic scale.
Step15: The show_scale function is a convenient wrapper around the Console class. To have greater control
over the display, we can work with a Console object directly. We'll create one and change the fill characters to be just empty space.
Step16: The Console class also has a display_fret method that allows us to control display of individual frets. Let's use that to display a C chord.
Step17: Finally, the Console.show method prints the display to stdout. To get the display string directly, use the get_display method instead. | Python Code:
from fretboard import *
c = Note('C')
print (c)
Note('C#')
Note('Db')
Note('B#')
Explanation: =============================================================================
E E||---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|
B ||-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|
G G||---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|
D D||---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|
A A||---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|
E E||---|---|-G-|---|-A-|---|---|-C-|---|-D-|---|-E-|---|---|-G-|---|-A-|---|---|
=============================================================================
3 5 7 9 12 15 17 19
The fretboard python module
The fretboard module is a relatively simple python module for displaying notes and scales relative to a guitar's
fretboard. It currently supports ASCII output.
Notes
Before getting to scales and displays, it is useful to know how the module handles representations of musical notes.
The fretboard module doesn't deal with the acoustic pitch of notes; rather it represents notes in terms of their number of semitones relative to a C reference.
Creating notes by name
Notes can be created either by naming the note and optionally appending "#" or "b" to indicate a sharp or flat note.
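Purely as an illustration of the name-to-semitone idea (not the module's actual internals), parsing a note name boils down to pitch-class arithmetic modulo 12; the names below are hypothetical:
NATURALS = {'C': 0, 'D': 2, 'E': 4, 'F': 5, 'G': 7, 'A': 9, 'B': 11}

def semitones_from_c(name):
    # each '#' raises by one semitone, each 'b' lowers by one
    value = NATURALS[name[0]] + name.count('#') - name.count('b')
    return value % 12   # e.g. 'Db' -> 1, 'B#' -> 0 (same pitch class as C)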
End of explanation
Note('B##'), Note('B###'), Note('B##b')
Explanation: If you like, you can use multiple sharps or flats.
End of explanation
f = c + 5
print (f)
Explanation: Creating relative notes
Notes can also be created by specifying an interval (in number of semitones) relative to another note. For example, F is 5 semitones above C.
End of explanation
ff = f + 12
print (ff)
Explanation: Adding 12 semitones gets us back to the same note.
End of explanation
print (ff - c)
Explanation: Intervals
We can also take the difference of two notes, which yields the number of semitones between the notes.
End of explanation
print (c.interval(f))
print (f.interval(c))
Explanation: Note that the interval printed above is the total number of semitones (which can span more than one octave) and can be positive or negative. If we want to know the simple (within-octave) interval from a note up to the next instance of another note, the interval method (or function) can be used.
End of explanation
c_maj = Major('C')
print (c_maj)
Explanation: Scales
The main purpose of this module is to display scales over a fretboard diagram. Let's take a look at the Major scale in the key of C.
End of explanation
c_min = Diatonic('C', mode=6)
print (c_min)
Explanation: We can also create different modes of the diatonic scale. For example, the minor scale is just the sixth mode of the diatonic scale.
End of explanation
a_pmin = MinorPentatonic('A')
print (a_pmin)
Explanation: There is also a Minor scale class so you can save some typing (and don't have to remember which mode of the major scale it is). In addition to the Major and Minor, there are also classes for HarmonicMinor, Pentatonic, and Blues scales.
End of explanation
a_blues = Blues('A')
print (a_blues)
Explanation: The Blues scale adds a flat fifth (the blue note) to the pentatonic minor.
End of explanation
'C' in a_blues
'B' in a_blues
Explanation: To make the scales useful, we have the ability to test for membership.
End of explanation
print (a_blues.get_interval_name('G'))
Explanation: We can also get the interval of a given note with respect to the scale.
End of explanation
console.show_scale(Minor('A'))
console.show_scale(Major('C'), tuning='D A D G A D', fmt='interval')
Explanation: Displaying Notes and Scales
The fretboard module supports printing fretboard diagrams to the terminal output (stdout) via the console submodule. The quickest way to display simple scale diagrams is via the show_scale function.
End of explanation
scale = MinorPentatonic('A')
console.show_scale(scale, fmt=lambda f: 'R' if f.note == scale.root else '*')
Explanation: There are various optional show_scale arguments that can be used to customize the display. The fmt
argument can be any of the following:
"note" (default) - display the name of each note
"interval" - display name of the interval of each note with respect to the scale key
<text char> - display the specified character (e.g., '*') for each note
<callable> - display the return value of the callable object applied to each Fret object of the display
If a callable object is given for fmt, it is passed a Fret object that has the following attributes:
note - the Note object associated with the fret
string - the number of the string associated with the fret
number - the number of the fret (zero indicates open string)
For example, let's display A Minor Pentatonic with "R" displayed for the root note and "*" for all others.
End of explanation
console.show_scale(scale, fmt=lambda f: scale.get_interval_name(f.note) if f.number in range(7, 11) else None)
Explanation: Let's display scale intervals for Box 2 of the A Minor Pentatonic scale.
End of explanation
c = Console()
c.fret_fill_char = ' '
c.fret_empty_fill_char = ' '
c.display_scale(scale)
c.show()
Explanation: The show_scale function is a convenient wrapper around the Console class. To have greater control
over the display, we can work with a Console object directly. We'll create one and change the fill characters to be just empty space.
End of explanation
c = Console()
for (s, f) in [(1, 0), (2, 1), (3, 0), (4, 2), (5, 3)]:
c.display_fret(s, f)
c.display_fret(6, 0, 'x')
c.show(fmax=5)
Explanation: The Console class also has a display_fret method that allows us to control display of individual frets. Let's use that to display a C chord.
End of explanation
text = c.get_display(fmax=5)
print (text)
Explanation: Finally, the Console.show method prints the display to stdout. To get the display string directly, use the get_display method instead.
End of explanation |
4,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Euplotid says "hello world"
Markdown
hello world
hello world
hello world
Step1: Quick Jupyter Tips
select how the cell is interpreted in the toolbar at the top, this one says "Markdown"
view a notebook as a powerpoint style presentation by clicking the "play" button at the top right, with button next to it allowing for editing
edit text at multiple locations by holding down the "command" key
highlight and edit blocks of text using the "alt" key
download the notebook in a variety of ways, including html and presentation html (which preserve interactive elements) and pdf which is static
Step2: Switching kernels
Jupyter (Julia, Python, R) is essentially a wrapper around kernels. A kernel takes text, compiles it according to its own rules, and outputs machine code which is then executed on the CPU/GPU. Jupyter allows you to switch between different kernels by going to Kernel-->Change Kernel at the top.
Step3: Images
<img src="https | Python Code:
#comments which are not run are denoted with #
#hello world
#try running this cell by clicking inside it and pressing shift+enter
#python3
print("hello world")
Explanation: Euplotid says "hello world"
Markdown
hello world
hello world
hello world
End of explanation
%lsmagic
#shows you all available magics, these allow you to run many different
#commands on a single line, such as "load_ext" to load functions or "time" to time your code
#cell magics can interpret many other languages
%%perl
#can use magic to run whole cell as perl
print "hello world"
%%bash
#or make cell bash
echo "hello world"
#bash commands inside python kernel
!echo "hello world"
#can even store output of bash commands and use in python
string = !echo "hello world"
print(string)
Explanation: Quick Jupyter Tips
select how the cell is interpreted in the toolbar at the top, this one says "Markdown"
view a notebook as a powerpoint style presentation by clicking the "play" button at the top right, with button next to it allowing for editing
edit text at multiple locations by holding down the "command" key
highlight and edit blocks of text using the "alt" key
download the notebook in a variety of ways, including html and presentation html (which preserve interactive elements) and pdf which is static
End of explanation
#switch to Bash kernel
echo "hello world"
#switch to R kernel
print("hello world")
#switch to C kernel
printf("hello world");
#switch to python2 kernel
print "hello world"
Explanation: Switching kernels
Jupyter (Julia, Python, R) is essentially a wrapper around kernels. A kernel takes text, compiles it according to its own rules, and outputs machine code which is then executed on the CPU/GPU. Jupyter allows you to switch between different kernels by going to Kernel-->Change Kernel at the top.
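For reference, the kernels Jupyter knows about can also be listed from a code cell (assumes the jupyter command is on your PATH):
!jupyter kernelspec list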
End of explanation
from datetime import timedelta
from IPython.display import YouTubeVideo
start=int(timedelta(hours=0, minutes=0, seconds=0).total_seconds())
YouTubeVideo("rOU4YiuaxAM", start=start, autoplay=1, theme="light", color="red")
Explanation: Images
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/2/28/HelloWorld.svg/1200px-HelloWorld.svg.png" style="width: 500px; height: 500px">
Youtube videos
End of explanation |