4,500 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Decomposition Example
First step, set the paths to where the motif dictionary and associated code can be found. Note that these are available at http://github.com/sdrogers/motifdb
Step1: Load some motifs from the motif database; I'm just loading the massbank ones
Step2: spectra is a dictionary keyed by the motif filename, whose values are dictionaries of feature/probability pairs
Step3: make an index of the unique features -- we can only decompose onto features that are loaded from the database
Step4: Put the words and motifs into a matrix ($\beta$)
Step5: Renormalise beta: when we save motifs we threshold on the probabilities, so they end up not summing to 1
Addition
Step6: Load some data. I'm loading massbank data, but other datasets are available. This also assumes the motifs were made using binned features
Step7: New code to load from the MGF
Step8: Do the decomposition
Some points to note here
Step9: This is the actual decomposition.
The main loop runs up to 100 times, which is probably overkill, but it breaks out early if the total absolute change in gamma is less than 1e-6.
This code doesn't compute alpha, but I can add that if necessary
I get a warning about a log(0). This is due to some entries in beta being 0; it doesn't cause any problems, but it could be removed by setting all the zero values to something very small and then re-normalising the beta matrix.
I do some plotting at the end to show how stuff can be exposed
If you wanted to see the decomposition at the individual feature level then you would need to keep hold of phi_matrix, which gives, for each word, the probabilities over each motif. | Python Code:
motifdbcodepath = '/Users/simon/git/motifdb/code/utilities/'
motifdbpath = '/Users/simon/git/motifdb/motifs/'
Explanation: Decomposition Example
First step, set the paths to where the motif dictionary and associated code can be found. Note that these are available at http://github.com/sdrogers/motifdb
End of explanation
import sys
sys.path.append(motifdbcodepath)
from motifdb_loader import load_db
db_list = ['gnps_binned_005']
spectra,motif_metadata = load_db(db_list,motifdbpath)
Explanation: Load some motifs from the motif database; I'm just loading the massbank ones
End of explanation
import numpy as np
Explanation: spectra is a dictionary keyed by the motif filename, whose values are dictionaries of feature/probability pairs
End of explanation
word_index = {}
word_pos = 0
for motif,word_probs in spectra.items():
for word,probability in word_probs.items():
if not word in word_index:
word_index[word] = word_pos
word_pos += 1
Explanation: make an index of the unique features -- we can only decompose onto features that are loaded from the database
End of explanation
# Create a beta matrix
motif_index = {}
motif_pos = 0
n_motifs = len(spectra)
n_words = len(word_index)
beta = np.zeros((n_motifs,n_words),np.double)
for motif,word_probs in spectra.items():
motif_index[motif] = motif_pos
for word,probability in word_probs.items():
beta[motif_pos,word_index[word]] = probability
motif_pos += 1
import pylab as plt
%matplotlib inline
plt.imshow(beta,aspect='auto')
Explanation: Put the words and motifs into a matrix ($\beta$)
End of explanation
# find the minimum value of beta that isn't zero
pos = np.where(beta > 0)
min_val = beta[pos].min()
zpos = np.where(beta == 0)
beta[zpos] = min_val/100
beta /= beta.sum(axis=1)[:,None]
Explanation: Renormalise beta: when we save motifs we threshold on the probabilities, so they end up not summing to 1
Addition: made sure there were no zeros
End of explanation
ldacodepath = '/Users/simon/git/lda/code/'
sys.path.append(ldacodepath)
from ms2lda_feature_extraction import LoadGNPS,MakeBinnedFeatures
import glob
massbank_data = glob.glob('/Users/simon/Dropbox/BioResearch/Meta_clustering/MS2LDA/fingerid-104-traindata/spectra_massbank/*.ms')
sub_data = massbank_data[:100]
l = LoadGNPS()
ms1,ms2,spectral_metadata = l.load_spectra(sub_data)
m = MakeBinnedFeatures()
corpus,word_mz_range = m.make_features(ms2)
corpus = corpus[list(corpus.keys())[0]]
Explanation: Load some data. I'm loading massbank data, but other datasets are available. This also assumes the motifs were made using binned features
End of explanation
ldacodepath = '/Users/simon/git/lda/code/'
sys.path.append(ldacodepath)
from ms2lda_feature_extraction import LoadMGF,MakeBinnedFeatures
mgf_file = '/Users/simon/git/lda/notebooks/clusters.mgf'
l = LoadMGF()
ms1,ms2,spectral_metadata = l.load_spectra([mgf_file])
m = MakeBinnedFeatures()
corpus,word_mz_range = m.make_features(ms2)
corpus = corpus[list(corpus.keys())[0]]
Explanation: New code to load from the MGF
End of explanation
def compute_overlap(phi_matrix,motif_pos,beta_row,word_index):
overlap_score = 0.0
for word in phi_matrix:
word_pos = word_index[word]
overlap_score += phi_matrix[word][motif_pos]*beta_row[word_pos]
return overlap_score
Explanation: Do the decomposition
Some points to note here:
- Features in the spectra that are not in the motifs don't add anything, so they get skipped. I compute the proportion of intensity that is usable (proportion_in).
- This should be taken into account when interpreting the spectra-motif probabilities (theta). These values can be interpreted as the proportion of the usable part of the spectra, and not the total.
- I also compute the overlap score, as it's useful to see how much of the motif is in the spectrum
This is a handy function to compute the overlap score
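For reference, the quantity this function computes can be written as (my notation, not from the original notebook)
$$
\text{overlap}(d,k) = \sum_{w \in d} \phi_{d,w,k}\,\beta_{k,w}
$$
i.e. the responsibility of motif $k$ for each word in spectrum $d$, weighted by that motif's own probability for the word.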
End of explanation
from scipy.special import psi as psi
theta = {}
K = n_motifs
alpha = 1 # will have some effect, but there's no way to set it. Making it low means we'll get sparse solutions
overlap_scores = {}
doc_scans = []
for doc in corpus:
doc_scans.append((doc,int(spectral_metadata[doc]['scans'])))
doc_scans = sorted(doc_scans,key = lambda x: x[1])
# doc_scans = filter(lambda x: x[1]<=130,doc_scans)
doc_list,scan_list = zip(*doc_scans)
for doc in doc_list:
# print doc,spectral_metadata[doc]['scans']
# Compute the proportion of this docs intensity that is represented in the motifs
total_in = 0.0
total = 0.0
doc_dict = corpus[doc]
for word,intensity in doc_dict.items():
total += intensity
if word in word_index:
total_in += intensity
proportion_in = (1.0*total_in)/total
# print '\t',proportion_in
if proportion_in > 0: # Be careful: if there is no overlap between the features in the spectrum and those across the motifs, bad things might happen
phi_matrix = {}
for word in doc_dict:
if word in word_index:
phi_matrix[word] = None
gamma = np.ones(K)
for i in range(100):
temp_gamma = np.zeros(K) + alpha
for word,intensity in doc_dict.items():
if word in word_index:
word_pos = word_index[word]
if beta[:,word_pos].sum()>0:
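# variational E-step (standard LDA-style update): phi for this word is proportional to beta[:, word_pos] * exp(psi(gamma)); gamma then becomes alpha plus the intensity-weighted sum of phi over words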
log_phi_matrix = np.log(beta[:,word_pos]) + psi(gamma)
log_phi_matrix = np.exp(log_phi_matrix - log_phi_matrix.max())
phi_matrix[word] = log_phi_matrix/log_phi_matrix.sum()
temp_gamma += phi_matrix[word]*intensity
gamma_change = np.sum(np.abs(gamma - temp_gamma))
gamma = temp_gamma.copy()
if gamma_change < 1e-6:
break
temp_theta = (gamma/gamma.sum()).flatten()
theta[doc] = {}
overlap_scores[doc] = {}
for motif,motif_pos in motif_index.items():
theta[doc][motif] = temp_theta[motif_pos]
overlap_scores[doc][motif] = compute_overlap(phi_matrix,motif_pos,beta[motif_pos,:],word_index)
# print some things!
tm = zip(theta[doc].keys(),theta[doc].values())
tm = sorted(tm,key = lambda x: x[1],reverse = True)
for mo,th in tm[:3]:
if overlap_scores[doc][mo]>=0.3:
print(doc, spectral_metadata[doc]['scans'], mo, th, overlap_scores[doc][mo], motif_metadata[mo]['ANNOTATION'][:40])
print(scan_list)
Explanation: This is the actual decomposition.
The main loop runs up to 100 times, which is probably overkill, but it breaks out early if the total absolute change in gamma is less than 1e-6.
This code doesn't compute alpha, but I can add that if necessary
I get a warning about a log(0). This is due to some entries in beta being 0; it doesn't cause any problems, but it could be removed by setting all the zero values to something very small and then re-normalising the beta matrix.
I do some plotting at the end to show how stuff can be exposed
If you wanted to see the decomposition at the individual feature level then you would need to keep hold of phi_matrix, which gives, for each word, the probabilities over each motif.
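A minimal sketch of how that could look, assuming it is added inside the loop over doc_list once the gamma loop has converged (phi_store is my name, not part of the original code):
phi_store = {}  # create once, before the loop over doc_list
# ... then, per document, after the gamma loop has finished:
phi_store[doc] = {word: {motif: probs[pos] for motif, pos in motif_index.items()}
                  for word, probs in phi_matrix.items() if probs is not None}
# phi_store[doc][word][motif] is then the probability that this feature of this
# spectrum was generated by that motif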
End of explanation |
4,501 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Homework Set 6</h1>
Matt Buchovecky
Astro 283 / Fitz
Step1: <h2> Problem 1
Step7: To estimate the values of $(\alpha,\beta)$, we maximize the posterior function $p(\alpha,\beta\mid{D})$ with respect to $\alpha$ and $\beta$. From Bayes' rule, and assuming the prior $p(\alpha,\beta)$ is uniform, this is equivalent to maximizing the likelihood function.
The first step is to find the values of $\alpha$ and $\beta$ that maximize the posterior function $p(\alpha,\beta\mid{D_x})$ - applying Bayes' rule gives
$$
p(\alpha,\beta\mid{D}) = \frac{p\left({D_x}\mid\alpha,\beta\right)p\left(\alpha,\beta\right)}{p\left({D_x}\right)}
$$
assuming uniform priors, we can simplify the dependence of the posterior on the parameters to be just the product of the likelihood functions for each x-value
$$
p(\alpha,\beta\mid{D}) \propto \prod_i p(x_i\mid\alpha,\beta)
$$
where the likelihood function in this case is just the Rice distribution
$$
p(x_i\mid \alpha,\beta) = \left\{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x_i+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x_i\beta}}{\alpha}\right) & \quad x_i\geq 0\\
0 & \quad\text{otherwise}
\end{array}
\right.
$$
We can use a minimizer to minimize the opposite of the posterior, which is the same as maximizing it. Note we can apply the same logic to find the parameter that maximizes the likelihood of a Poisson-like Gaussian, whose single parameter is both the mean and the variance.
Below I define the functions, then run the minimizer.
Step8: The optimal values for the Rice fitting are
Step9: <h4> posterior density function for Rice distribution </h4>
Step10: To compare the distribution models to see which is a better fit, we compute the ratio of the probabilities of the models
Step11: We can see the major factor after calculating is
Step16: Integrating the model along the 0th (slow) dimension. For a discrete array, this amounts to summing across each value in the 0th dimension for each coordinate in the other two dimensions. In most cases there would be a multiplicative factor of the bin width to represent $\delta x$, but here we are doing it in units of pixels with value $1$
Step17: <h3> Performing the convolution </h3>
The convolution will just be the product
First the data arrays are padded with zeros, large enough to include the whole PSF at the edge of the model array
Then, both arrays are transformed into Fourier space using our defined FFT in 2-dimensions.
In this space, the convolution is just the element-wise product of each array
The inverse FFT must be applied to view it in real space
The result comes out circularly shifted because the PSF is centred in its padded array rather than at the origin; we use a numpy function (np.fft.fftshift) to shift it back
Step18: The result looks good, we can see the small points become wider blurs, but the overall picture looks the same! | Python Code:
import numpy as np
from scipy import optimize, special
from matplotlib import pyplot as plt
from astropy.io import fits
%matplotlib inline
Explanation: <h1> Homework Set 6</h1>
Matt Buchovecky
Astro 283 / Fitz
End of explanation
# open the data file and load data into a list of points
infile = open("./samplevals_PA.txt", 'r')
v_arr = [ ]
for line in iter(infile):
line = line.split()
try:
float(line[0])
v_arr.append(float(line[0]))
except ValueError:
continue
infile.close()
# get a first look at the distribution to make guesses
plt.hist(v_arr)
Explanation: <h2> Problem 1
End of explanation
# define the pdfs and likelihood functions
def Rice_dist(x, alpha, beta):
the pdf of the Rice distribution for a single value
return (1/alpha)*np.exp((x+beta)/(-alpha))*special.iv(0, 2*np.sqrt(x*beta)/alpha)
def Rice_dist_n(x, alpha, beta):
the pdf of the Rice distribution for an array
condlist = [ x>0 ]
choicelist = [ Rice_dist(x, alpha, beta) ]
return np.select(condlist, choicelist, default=0.0)
def Rice_dist_gen(x, alpha, beta):
pdf of Rice distribution that works for single values and array types
print('hi')
def gaussian_1param(x, mu):
gaussian pdf with variance equal to the mean
return np.exp(-(x-mu)**2/mu)/np.sqrt(2*np.pi*mu)
def neg_likelihood(params, value_array, function):
the opposite of the likelihood function for a set of independent values for a given \\
function
l = -1
for x in value_array:
l *= function(x, *params)
return l
# perform the optimization on both functions
guess = (2, 3)
opt_rice = optimize.fmin(neg_likelihood, guess, args=(v_arr, Rice_dist))
print(opt_rice)
guess = (np.mean(v_arr),)
opt_gauss = optimize.fmin(neg_likelihood, guess, args=(v_arr, gaussian_1param))
print(opt_gauss)
Explanation: To estimate the values of $(\alpha,\beta)$, we maximize the posterior function $p(\alpha,\beta\mid{D})$ with respect to $\alpha$ and $\beta$. From Bayes' rule, and assuming the prior $p(\alpha,\beta)$ is uniform, this is equivalent to maximizing the likelihood function.
The first step is to find the values of $\alpha$ and $\beta$ that maximize the posterior function $p(\alpha,\beta\mid{D_x})$ - applying Bayes' rule gives
$$
p(\alpha,\beta\mid{D}) = \frac{p\left({D_x}\mid\alpha,\beta\right)p\left(\alpha,\beta\right)}{p\left({D_x}\right)}
$$
assuming uniform priors, we can simplify the dependence of the posterior on the parameters to be just the product of the likelihood functions for each x-value
$$
p(\alpha,\beta\mid{D}) \propto \prod_i p(x_i\mid\alpha,\beta)
$$
where the likelihood function in this case is just the Rice distribution
$$
p(x_i\mid \alpha,\beta) = \left\{
\begin{array}{ll}
\alpha^{-1}\exp\left(-\frac{x_i+\beta}{\alpha}\right)I_0\left(\frac{2\sqrt{x_i\beta}}{\alpha}\right) & \quad x_i\geq 0\\
0 & \quad\text{otherwise}
\end{array}
\right.
$$
We can use a minimizer to minimize the opposite of the posterior, which is the same as maximizing it. Note we can apply the same logic to find the parameter that maximizes the likelihood of a Poisson-like Gaussian, whose single parameter is both the mean and the variance.
Below I define the functions, then run the minimizer.
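As an aside, a numerically safer variant (my sketch, not part of the original homework) is to minimise the negative log-likelihood instead of the raw product above, since multiplying many small densities can underflow:
def neg_log_likelihood(params, value_array, function):
    # assumes the density is strictly positive at every data point
    return -np.sum(np.log([function(x, *params) for x in value_array]))
# e.g. optimize.fmin(neg_log_likelihood, (2, 3), args=(v_arr, Rice_dist))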
End of explanation
# plot the Rice distribution with optimal values against normed histogram
r = np.arange(0., 20., 0.1)
plt.plot(r, Rice_dist(r, *opt_rice), label='rice')
plt.plot(r, gaussian_1param(r, *opt_gauss), label='gauss')
plt.hist(v_arr, density=True, label='data')
plt.legend(loc='center left', bbox_to_anchor=(1., 0.5))
plt.title("comparison of fits to data")
Explanation: The optimal values for the Rice fitting are:
\begin{eqnarray}
\alpha &\approx& 1.13\\
\beta &\approx& 4.50
\end{eqnarray}
and for the Gaussian:
$\mu \approx 6.33$
To visualize these results, I will plot these distributions with the optimal parameters against the histogram of the given data.
I then plot the posterior function in parameter space to give the parameter density function
End of explanation
# define a mesh grid for parameter space plot
alpha_range = np.linspace(0.5, 2.5, 100)
beta_range = np.linspace(2.5, 6.5, 100)
alpha_arr, beta_arr = np.meshgrid(alpha_range, beta_range)
# positive likelihood values for Rice distribution!
Rice_arr = -neg_likelihood((alpha_arr, beta_arr), v_arr, Rice_dist_n)
# plot the posterior density function
ext = [alpha_range.min(), alpha_range.max(), beta_range.min(), beta_range.max()]
plt.imshow(Rice_arr, extent=ext, origin='lower')
plt.title("posterior density function for Rice distribution")
plt.xlabel('alpha')
plt.ylabel('beta')
Explanation: <h4> posterior density function for Rice distribution </h4>
End of explanation
# find the ratio of likelihood functions
ratio = neg_likelihood(opt_rice, v_arr, Rice_dist_n) / neg_likelihood(opt_gauss, v_arr, gaussian_1param)
print(ratio)
Explanation: To compare the distribution models to see which is a better fit, we compute the ratio of the probabilities of the models:
$$
\frac{P\left(R\mid{D_x}\right)}{P\left(G\mid{D_x}\right)} = \frac{\int p\left(R(\alpha,\beta)\mid{D_x}\right) d\alpha d\beta}{\int p\left(G(\mu)\mid{D_x}\right)d\mu}
$$
Expanding again using Bayes' rule, and assuming the $P\left({D_x}\right)$ terms will cancel on top and bottom, the ratio becomes the ratio of the integrals of the likelihoods times the priors
$$
\frac{\int p\left(R(\alpha,\beta)\mid{D_x}\right) d\alpha d\beta}{\int p\left(G(\mu)\mid{D_x}\right)d\mu} =
\frac{\int p\left({D_x}\mid\alpha,\beta,R\right)p(\alpha,\beta) d\alpha d\beta}{\int p\left({D}\mid\mu,G\right)p(\mu) d\mu}
$$
This integral can be difficult, so we make two assumptions. First, if we assume roughly constant priors over some range, then they can be pulled out of the integral, and give the ratio:
$$\frac{p(\alpha,\beta)}{p(\mu)} = \frac{\mu_{\text{max}}-\mu_{\text{min}}}{(\alpha_\text{max}-\alpha_\text{min})(\beta_\text{max}-\beta_\text{min})}$$
Next, we approximate that the likelihood functions are Gauss distributed around the optimal values
\begin{eqnarray}
\int{p\left({D_x}\mid R(\alpha,\beta)\right) d\alpha d\beta} &=& p\left({D_x}\mid R(\alpha_0,\beta_0)\right)\int{g(\alpha,\beta)d\alpha d\beta} \\
&=& 2\pi\left|\Sigma_{\alpha,\beta}\right|p\left({D_x}\mid R(\alpha_0,\beta_0)\right)
\end{eqnarray}
\begin{eqnarray}
\int{p\left({D_x}\mid G(\mu)\right) d\mu} &=& p\left({D_x}\mid G(\mu_0)\right)\int{g(\mu)d\mu} \\
&=& \sqrt{2\pi\sigma_{\mu}}p\left({D_x}\mid G(\mu_0)\right)
\end{eqnarray}
and similarly for the Poisson-like Gaussian, but in only one dimension. The integral of the Gaussian factor is just proportional to the errors on the parameters (the determinant of the covariance matrix in the 2D Rice case)
There are multiple factors in this ratio now; let's take a look at the optimal posterior ratio first.
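If you did want to evaluate the Gaussian-width factors explicitly, one possible sketch (mine, not part of the original homework; neg_log_like and numerical_hessian are hypothetical helpers) is to take the covariance as the inverse Hessian of the negative log-likelihood at the optimum:
def neg_log_like(params, value_array, function):
    # negative log-likelihood; assumes the density is positive at every data point
    return -np.sum(np.log(function(np.asarray(value_array), *params)))
def numerical_hessian(f, p0, eps=1e-3):
    # central finite-difference estimate of the Hessian of f at p0
    p0 = np.asarray(p0, dtype=float)
    n = len(p0)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            pp = p0.copy(); pp[i] += eps; pp[j] += eps
            pm = p0.copy(); pm[i] += eps; pm[j] -= eps
            mp = p0.copy(); mp[i] -= eps; mp[j] += eps
            mm = p0.copy(); mm[i] -= eps; mm[j] -= eps
            H[i, j] = (f(pp) - f(pm) - f(mp) + f(mm)) / (4 * eps**2)
    return H
H_rice = numerical_hessian(lambda p: neg_log_like(p, v_arr, Rice_dist_n), opt_rice)
H_gauss = numerical_hessian(lambda p: neg_log_like(p, v_arr, gaussian_1param), opt_gauss)
det_sigma_rice = 1.0 / np.linalg.det(H_rice)  # ~ determinant of the (alpha, beta) covariance
sigma_mu = np.sqrt(1.0 / H_gauss[0, 0])       # ~ standard deviation on mu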
End of explanation
# read in the data and close files
model_fits = fits.open("./data/hw6prob2_model.fits")
psf_fits = fits.open("hw6prob2_psf.fits")
print(model_fits.info())
print(psf_fits.info())
model_data = model_fits[0].data
psf_data = psf_fits[0].data
model_fits.close()
psf_fits.close()
plt.imshow(psf_data)
cbar = plt.colorbar()
cbar.solids.set_edgecolors('face')
Explanation: We can see the major factor after calculating is:
$$
\frac{p({D_x}\mid R(\alpha,\beta))}{p({D_x}\mid G(\mu_0))} \sim 2\cdot 10^{26}
$$
Putting everything together, the ratio becomes
$$
\frac{P(R\mid{D_x})}{P(G\mid{D_x})} \approx \sqrt{2\pi}
\frac{p({D_x}\mid R(\alpha_0,\beta_0))}{p({D_x}\mid G(\mu_0))}
\frac{\det{\Sigma_{\alpha,\beta}}}{\sigma_{\mu}}
\frac{\mu_\text{max}-\mu_\text{min}}{(\alpha_\text{max}-\alpha_\text{min})(\beta_\text{max}-\beta_\text{min})}
$$
The other factors could be large, but will be nowhere near this magnitude
$$
\frac{P(R\mid{D_x})}{P(G\mid{D_x})} \gg 1
$$
Thus, the Rice distribution is a much better fit. We expect this result, as the numbers were indeed generated from a Rice distribution, and the Gaussian fit appeared visually to be extremely poor
<h2> Problem 2 </h2>
End of explanation
model_data_intgrl = np.sum(model_data, axis=0)
f = plt.figure()
plt.imshow(model_data_intgrl)
cbar = plt.colorbar()
cbar.solids.set_edgecolors('face')
# define FFT functions
def cool_turkey_fft(arr, N=0, s=1, **kwargs): # inverse=False
performs a 1-dimensional fast Fourier transform
on arr using the Cooley–Tukey algorithm.
return: transformed array, ndarray
keyword arguments: inverse=False
performs inverse FFT
if N == 0:
N = len(arr)
sign = 1 # sign that goes into exponential, + implies not doing inverse transform
# iter(kwargs)
for key, value in kwargs.items():
if key == 'inverse' and value:
sign = -1
s = int(s)
ARR = np.zeros(N, dtype=complex)
if N == 1:
ARR[0] = arr[0]
else:
N2 = int(N/2)
ARR[0:N2] = cool_turkey_fft(arr[0::2*s], N2, s, **kwargs)
ARR[N2:] = cool_turkey_fft(arr[1::2*s], N2, s, **kwargs)
for k in range(0, N2):
orig = ARR[k]
ARR[k] = orig + np.exp(-sign*2*np.pi*(1j)*k/N)*ARR[k+N2]
ARR[k+N2] = orig - np.exp(-sign*2*np.pi*(1j)*k/N)*ARR[k+N2]
return ARR
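# note: this radix-2 recursion assumes the input length is a power of two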
def ifft(arr, fft_method, *args, **kwargs): # =cool_turkey_fft
performs inverse of 1d fast Fourier transform
kwargs['inverse'] = True
ARR = fft_method(arr, *args, **kwargs)
return ARR / len(ARR)
def fft_2d(arr_2d, fft_1d, *args, **kwargs): # =cool_turkey_fft
performs a fast Fourier transform in 2 dimensions
# check type of array
# check dimensions
nx, ny = arr_2d.shape
N = nx
ARR_2d = np.zeros((N,N), dtype=np.complex64)
for i in range(0, N):
ARR_2d[i,:] = fft_1d(arr_2d[i,:], *args, **kwargs)
for j in range(0, N):
ARR_2d[:,j] = fft_1d(ARR_2d[:,j], *args, **kwargs)
return ARR_2d
def zero_pad_symm2d(arr, shape):
pads array with 0s, placing original values in the center symmetrically
returns ndarray of given shape
# check new shape big enough to include old shape
sh0 = arr.shape
ARR = np.zeros(shape)
ARR[int((shape[0]-sh0[0])/2):int((shape[0]+sh0[0])/2), int((shape[1]-sh0[1])/2):int((shape[1]+sh0[1])/2)] = arr
return ARR
Explanation: Integrating the model along the 0th (slow) dimension. For a discrete array, this amounts to summing across each value in the 0th dimension for each coordinate in the other two dimensions. In most cases there would be a multiplicative factor of the bin width to represent $\delta x$, but here we are doing it in units of pixels with value $1$
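In equation form (my notation): $$\int f(x,y,z)\,dx \;\approx\; \sum_i f(x_i,y,z)\,\delta x, \qquad \delta x = 1\ \text{pixel}$$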
End of explanation
# do the padding
size_full = model_data_intgrl.shape[0] + 2*psf_data.shape[0]
psf_data_padded = zero_pad_symm2d(psf_data, (size_full,size_full))
model_data_padded = zero_pad_symm2d(model_data_intgrl, (size_full,size_full))
# FFT the 2D data
psf_fft = fft_2d(psf_data_padded, cool_turkey_fft)
model_fft = fft_2d(model_data_padded, cool_turkey_fft)
# convolve model with PSF
convoluted_data_fft = psf_fft * model_fft
# inverse FFT to get back to real space
convoluted_data_space = fft_2d(convoluted_data_fft, cool_turkey_fft, inverse=True)
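# note: calling fft_2d with inverse=True applies no 1/N**2 normalisation (unlike the ifft helper above), so the absolute scale of the result is arbitrary; that is fine for the relative plot below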
# shift back
convoluted_data_space = np.fft.fftshift(convoluted_data_space)
# plot the result, looks good!
f = plt.figure()
plt.imshow(np.real(convoluted_data_space[64:192, 64:192]))
plt.title("integrated model data convolved with PSF")
cbar = plt.colorbar()
cbar.solids.set_edgecolors('face')
Explanation: <h3> Performing the convolution </h3>
The convolution will just be the product
First the data arrays are padded with zeros, large enough to include the whole PSF at the edge of the model array
Then, both arrays are transformed into Fourier space using our defined FFT in 2-dimensions.
In this space, the convolution is just the element-wise product of each array
The inverse FFT must be applied to view it in real space
The result comes out circularly shifted because the PSF is centred in its padded array rather than at the origin; we use a numpy function (np.fft.fftshift) to shift it back
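As an optional sanity check (my addition; it assumes the padded size is a power of two, which the radix-2 transform above requires anyway), the custom 2-D transform can be compared against numpy's built-in FFT:
ref = np.fft.fft2(psf_data_padded)
# loose tolerances because fft_2d accumulates in complex64
print(np.allclose(fft_2d(psf_data_padded, cool_turkey_fft), ref, rtol=1e-3, atol=1e-5))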
End of explanation
Explanation: The result looks good, we can see the small points become wider blurs, but the overall picture looks the same!
End of explanation |
4,502 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NCC
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
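For illustration only (the real species list is model-specific and stays a TODO here), a 0.N ENUM such as 13.2 would be filled by uncommenting one DOC.set_value call per selected choice, for example:
# Illustrative values only - not the documented model's actual species list
# DOC.set_value("HOx")
# DOC.set_value("NOy")
# DOC.set_value("Ox")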
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
4,503 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Counting Stars with NumPy
This example introduces some of the image processing capabilities available with NumPy and the SciPy ndimage package. More extensive documentation and tutorials can be found through the SciPy Lectures series.
Image I/O
Here is a list of beautiful star field images taken by the Hubble Space Telescope
Step1: Image Visualization
We can plot the image using matplotlib
Step2: Image Inspection
We can examine the image properties
Step3: Pixel coordinates are (column,row).
Colors are represented by RGB triples. Black is (0,0,0), White is (255, 255, 255) or (0xFF, 0xFF, 0xFF) in hexadecimal. Think of it as a color cube with the three axes representing the different possible colors. The furthest away from the origin (black) is white.
Step4: We could write code to find the brightest pixel in the image, where "brightest" means highest value of R+G+B. For the 256 color scale, the greatest possible value is 3 * 255, or 765. One way to do this would be to write a set of nested loops over the pixel dimensions, calculating the sum R+G+B for each pixel, but that would be rather tedious and slow.
We could process the information faster if we take advantage of the speedy NumPy slicing, aggregates, and ufuncs. Remember that any time we can eliminate interpreted Python loops we save a lot of processing time.
Step6: Image Feature Extraction
Now that we know how to read in the image as a NumPy array, let's count the stars above some threshold brightness. Start by converting the image to B/W, so that which pixels belong to stars and which don't is unambiguous. We'll use black for stars and white for background, since it's easier to see black-on-white than the reverse.
Step7: The way to count the features (stars) in the image is to identify "blobs" of connected or adjacent black pixels.
A traditional implementation of this algorithm using plain Python loops is presented in the Multimedia Programming lesson from Software Carpentry. This was covered in the notebook Counting Stars.
Let's see how to implement such an algorithm much more efficiently using numpy and scipy.ndimage.
The scipy.ndimage.label function will use a structuring element (cross-shaped by default) to search for features. As an example, consider the simple array
Step8: There are four unique features here, if we only count those that have neighbors along a cross-shaped structuring element.
Step9: If we wish to consider elements connected on the diagonal, as well as the cross structure, we define a new structuring element
Step10: Label the image using the new structuring element
Step11: Note that features 1, 3, and 4 from above are now considered a single feature
Step12: Let's use ndi.label to count up the stars in our B/W starfield image.
Step13: Label returns an array the same shape as the input where each "unique feature has a unique value", so if you want the indices of the features you use a list comprehension to extract the exact feature indices. Something like
Step14: Let's change the color of the largest star in the field to red. To find the largest star, look at the lengths of the arrays stored in label_indices. | Python Code:
import scipy.ndimage as ndi
import requests
from StringIO import StringIO
#Pick an image from the list above and fetch it with requests.get
#The default picture here is of M45 - the Pleiades Star Cluster.
response = requests.get("http://imgsrc.hubblesite.org/hu/db/images/hs-2004-20-a-large_web.jpg")
pic = ndi.imread(StringIO(response.content))
Explanation: Counting Stars with NumPy
This example introduces some of the image processing capabilities available with NumPy and the SciPy ndimage package. More extensive documentation and tutorials can be found through the SciPy Lectures series.
Image I/O
Here is a list of beautiful star field images taken by the Hubble Space Telescope:
http://imgsrc.hubblesite.org/hu/db/images/hs-2004-20-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1993-13-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1995-32-c-full_jpg.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1993-13-a-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-2002-10-c-large_web.jpg
http://imgsrc.hubblesite.org/hu/db/images/hs-1999-30-b-full_jpg.jpg
We can use the SciPy ndimage library to read image data into NumPy arrays. If we want to fetch a file off the web, we also need some help from the requests and StringIO libraries:
End of explanation
%pylab inline
import matplotlib.pyplot as plt
plt.imshow(pic);
Explanation: Image Visualization
We can plot the image using matplotlib:
End of explanation
print pic.shape
Explanation: Image Inspection
We can examine the image properties:
End of explanation
#Color array [R,G,B] of very first pixel
print pic[0,0]
Explanation: Pixel coordinates are (column,row).
Colors are represented by RGB triples. Black is (0,0,0), White is (255, 255, 255) or (0xFF, 0xFF, 0xFF) in hexadecimal. Think of it as a color cube with the three axes representing the different possible colors. The furthest away from the origin (black) is white.
End of explanation
#find value of max pixel with aggregates
print pic.sum(axis=2).max() #numbering from 0, axis 2 is the color depth
Explanation: We could write code to find the brightest pixel in the image, where "brightest" means highest value of R+G+B. For the 256 color scale, the greatest possible value is 3 * 255, or 765. One way to do this would be to write a set of nested loops over the pixel dimensions, calculating the sum R+G+B for each pixel, but that would be rather tedious and slow.
We could process the information faster if we take advantage of the speedy NumPy slicing, aggregates, and ufuncs. Remember that any time we can eliminate interpreted Python loops we save a lot of processing time.
End of explanation
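As a small illustrative addition (not in the original notebook), the location of that brightest pixel can be recovered with argmax plus unravel_index on the summed luminance array:
#Locate the brightest pixel (row, column) - illustrative addition
luminance = pic.sum(axis=2)
row, col = np.unravel_index(luminance.argmax(), luminance.shape)
print row, col, luminance[row, col]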
def monochrome(pic_array, threshold):
    """
    Replace the RGB values in the loaded image with either
    black or white depending on whether its total
    luminance is above or below some threshold
    passed in by the user.
    """
mask = (pic_array.sum(axis=2) >= threshold) #could also be done in one step
pic_array[mask] = 0 #BLACK - broadcasting at work here
pic_array[~mask] = 255 #WHITE - broadcasting at work here
return
#Get another copy to convert to B/W
bwpic = ndi.imread(StringIO(response.content))
#This threshold is a scalar, not an RGB triple
#We're looking for pixels whose total color value is 600 or greater
monochrome(bwpic,200+200+200)
plt.imshow(bwpic);
Explanation: Image Feature Extraction
Now that we know how to read in the image as a NumPy array, let's count the stars above some threshold brightness. Start by converting the image to B/W, so that it is unambiguous which pixels belong to stars and which don't. We'll use black for stars and white for background, since it's easier to see black-on-white than the reverse.
End of explanation
a = np.array([[0,0,1,1,0,0],
[0,0,0,1,0,0],
[1,1,0,0,1,0],
[0,0,0,1,0,0]])
Explanation: The way to count the features (stars) in the image is to identify "blobs" of connected or adjacent black pixels.
A traditional implementation of this algorithm using plain Python loops is presented in the Multimedia Programming lesson from Software Carpentry. This was covered in the notebook Counting Stars.
Let's see how to implement such an algorithm much more efficiently using numpy and scipy.ndimage.
The scipy.ndimage.label function will use a structuring element (cross-shaped by default) to search for features. As an example, consider the simple array:
End of explanation
labeled_array, num_features = ndi.label(a)
print(num_features)
print(labeled_array)
Explanation: There are four unique features here, if we only count those that have neighbors along a cross-shaped structuring element.
End of explanation
s = [[1,1,1],
[1,1,1],
[1,1,1]]
#Note, that scipy.ndimage.generate_binary_structure(2,2) would also do the same thing.
print s
Explanation: If we wish to consider elements connected on the diagonal, as well as the cross structure, we define a new structuring element:
End of explanation
labeled_array, num_features = ndi.label(a, structure=s)
print(num_features)
Explanation: Label the image using the new structuring element:
End of explanation
print(labeled_array)
Explanation: Note that features 1, 3, and 4 from above are now considered a single feature
End of explanation
labeled_array, num_stars = ndi.label(~bwpic) #Count and label the complement
print num_stars
plt.imshow(labeled_array);
Explanation: Let's use ndi.label to count up the stars in our B/W starfield image.
End of explanation
locations = ndi.find_objects(labeled_array)
print locations[9]
label_indices = [(labeled_array[:,:,0] == i).nonzero() for i in xrange(1, num_stars+1)]
print label_indices[9]
Explanation: Label returns an array the same shape as the input where each "unique feature has a unique value", so if you want the indices of the features you use a list comprehension to extract the exact feature indices. Something like:
label_indices = [(labeled_array[:,:,0] == i).nonzero() for i in xrange(1, num_stars+1)]
or use the ndi.find_objects method to obtain a tuple of feature locations as slices to obtain the general location of the star but not necessarily the correct shape.
End of explanation
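An alternative sketch (same result as the list comprehension above): because every labelled pixel carries its feature number, np.bincount on the flattened 2-D label slice gives the size of every feature in a single call.
#Feature sizes via bincount - illustrative alternative; index 0 is the background, so it is dropped
feature_sizes = np.bincount(labeled_array[:,:,0].ravel())[1:]
print feature_sizes.argmax() + 1, feature_sizes.max()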
star_sizes = [(label_indices[i-1][0]).size for i in xrange(1, num_stars+1)]
print len(star_sizes)
biggest_star = np.where(star_sizes == np.max(star_sizes))[0]
print biggest_star
print star_sizes[biggest_star]
bwpic[label_indices[biggest_star][0],label_indices[biggest_star][1],:] = (255,0,0)
plt.imshow(bwpic);
Explanation: Let's change the color of the largest star in the field to red. To find the largest star, look at the lengths of the arrays stored in label_indices.
End of explanation |
4,504 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
It is assumed that this notebook is in the same folder as the python-files randomGraphsNumerical.py, transitionMatrix.py, transitionMatrixDirected.py, AmplifierQ.py and Plot.py. Also there has to be a subfolder called output.
Generate random graphs and calculate fixation probability
Specify some parameters first.
Step1: Now let's create the graphs, calculate the fixation probability and store it in one file per graph in the folder output.
Step2: Classify graphs into amplifiers and suppressors
Step3: Plotting | Python Code:
popSize = 4 # This is the number of nodes in the network
update = 'BD' # Either 'BD' or 'DB' for Birth-death or death-Birth updating respectively
direction = 'undirected' # Either 'directed' or 'undirected' graphs are used
stepSize = 0.05 # Step size for the probability for each link in the network to be present independently. We used 0.05.
numberOfGraphs = 500 # We used 500
Explanation: It is assumed that this notebook is in the same folder as the python-files randomGraphsNumerical.py, transitionMatrix.py, transitionMatrixDirected.py, AmplifierQ.py and Plot.py. Also there has to be a subfolder called output.
Generate random graphs and calculate fixation probability
Specify some parameters first.
End of explanation
import numpy as np

for probLinkConnect in np.arange(0.0,1.0+stepSize,stepSize):
for graph in range(0, numberOfGraphs):
!python randomGraphsNumerical.py $popSize $probLinkConnect $graph $update $direction
Explanation: Now let's create the graphs, calculate the fixation probability and store it in one file per graph in the folder output.
End of explanation
!python AmplifierQ.py $popSize $numberOfGraphs $stepSize $update $direction
Explanation: Classify graphs into amplifiers and suppressors
End of explanation
from Plot import *
Plot(popSize, numberOfGraphs, stepSize, update, direction)
Explanation: Plotting
End of explanation |
4,505 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Space Projection using Autoencoders
In this example we are going to autoencode the faces of the olivetti dataset and try to reconstruct them back.
Step1: http
Step3: We now need some code to read pgm files.
Thanks to StackOverflow we have some code to leverage
Step4: Let's import it to H2O
Step5: Reconstructing the hidden space
Now that we have our model trained, we would like to understand better what is the internal representation of this model? What makes a face a .. face?
We will provide to the model some gaussian noise and see what is the results.
We star by creating some gaussian noise
Step6: Then we import this data inside H2O. We have to first map the columns to the gaussian data. | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import pandas as pd
import scipy.io
import matplotlib.pyplot as plt
from IPython.display import Image, display
import h2o
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
h2o.init()
Explanation: Image Space Projection using Autoencoders
In this example we are going to autoencode the faces of the Olivetti (AT&T/ORL) faces dataset and try to reconstruct them.
End of explanation
!wget -c http://www.cl.cam.ac.uk/Research/DTG/attarchive/pub/data/att_faces.tar.Z
!tar xzvf att_faces.tar.Z;rm att_faces.tar.Z;
Explanation: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html
End of explanation
import re
def read_pgm(filename, byteorder='>'):
Return image data from a raw PGM file as numpy array.
Format specification: http://netpbm.sourceforge.net/doc/pgm.html
with open(filename, 'rb') as f:
buffer = f.read()
try:
header, width, height, maxval = re.search(
b"(^P5\s(?:\s*#.*[\r\n])*"
b"(\d+)\s(?:\s*#.*[\r\n])*"
b"(\d+)\s(?:\s*#.*[\r\n])*"
b"(\d+)\s(?:\s*#.*[\r\n]\s)*)", buffer).groups()
except AttributeError:
raise ValueError("Not a raw PGM file: '%s'" % filename)
return np.frombuffer(buffer,
dtype='u1' if int(maxval) < 256 else byteorder+'u2',
count=int(width)*int(height),
offset=len(header)
).reshape((int(height), int(width)))
image = read_pgm("orl_faces/s12/6.pgm", byteorder='<')
image.shape
plt.imshow(image, plt.cm.gray)
plt.show()
import glob
import os
from collections import defaultdict
images = glob.glob("orl_faces/**/*.pgm")
data = defaultdict(list)
image_data = []
for img in images:
_,label,_ = img.split(os.path.sep)
imgdata = read_pgm(img, byteorder='<').flatten().tolist()
data[label].append(imgdata)
image_data.append(imgdata)
Explanation: We now need some code to read pgm files.
Thanks to StackOverflow we have some code to leverage:
End of explanation
faces = h2o.H2OFrame(image_data)
faces.shape
from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
model = H2OAutoEncoderEstimator(
activation="Tanh",
hidden=[50],
l1=1e-4,
epochs=10
)
model.train(x=faces.names, training_frame=faces)
model
Explanation: Let's import it to H2O
End of explanation
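As an optional side check (not part of the original notebook), the h2o Python API also exposes a per-row reconstruction error for autoencoders through the estimator's anomaly() method; the exact output column name may vary by h2o version.
# Optional check: per-image reconstruction error (MSE) of the trained autoencoder
recon_error = model.anomaly(faces)
recon_error.head()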
import pandas as pd
gaussian_noise = np.random.randn(10304)
plt.imshow(gaussian_noise.reshape(112, 92), plt.cm.gray);
Explanation: Reconstructing the hidden space
Now that we have our model trained, we would like to understand its internal representation better: what makes a face a face?
We will feed the model some Gaussian noise and see what the result looks like.
We start by creating some Gaussian noise:
End of explanation
gaussian_noise_pre = dict(zip(faces.names,gaussian_noise))
gaussian_noise_hf = h2o.H2OFrame.from_python(gaussian_noise_pre)
result = model.predict(gaussian_noise_hf)
result.shape
img = result.as_data_frame()
img_data = img.T.values.reshape(112, 92)
plt.imshow(img_data, plt.cm.gray);
Explanation: Then we import this data inside H2O. We have to first map the columns to the gaussian data.
End of explanation |
4,506 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from Derivative Approximation by Finite Differences by David Eberly, additional text and SymPy examples by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: Generalization of Taylor FD operators
In the last lesson, we learned how to derive a high order FD approximation for the second derivative using Taylor series expansion. In the next step we derive a general equation to compute FD operators, where I use a detailed derivation based on "Derivative Approximation by Finite Differences" by David Eberly
Estimation of arbitrary FD operators by Taylor series expansion
We can approximate the $d-th$ order derivative of a function $f(x)$ with an order of error $p>0$ by a general finite-difference approximation | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 parts of this notebook are from Derivative Approximation by Finite Differences by David Eberly, additional text and SymPy examples by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# import SymPy libraries
from sympy import symbols, differentiate_finite, Function
# Define symbols
x, h = symbols('x h')
f = Function('f')
# 1st order forward operator for 1st derivative
forward_1st_fx = differentiate_finite(f(x), x, points=[x+h, x]).simplify()
print("1st order forward operator 1st derivative:")
print(forward_1st_fx)
print(" ")
# 1st order backward operator for 1st derivative
backward_1st_fx = differentiate_finite(f(x), x, points=[x, x-h]).simplify()
print("1st order backward operator 1st derivative:")
print(backward_1st_fx)
print(" ")
# 2nd order centered operator for 1st derivative
center_1st_fx = differentiate_finite(f(x), x, points=[x+h, x-h]).simplify()
print("2nd order center operator 1st derivative:")
print(center_1st_fx)
print(" ")
# 2nd order FD operator for 2nd derivative
center_2nd_fxx = differentiate_finite(f(x), x, 2, points=[x+h, x, x-h]).simplify()
print("2nd order center operator 2nd derivative:")
print(center_2nd_fxx)
print(" ")
# 4th order FD operator for 2nd derivative
center_4th_fxx = differentiate_finite(f(x), x, 2, points=[x+2*h, x+h, x, x-h, x-2*h]).simplify()
print("4th order center operator 2nd derivative:")
print(center_4th_fxx)
print(" ")
Explanation: Generalization of Taylor FD operators
In the last lesson, we learned how to derive a high order FD approximation for the second derivative using Taylor series expansion. In the next step we derive a general equation to compute FD operators, where I use a detailed derivation based on "Derivative Approximation by Finite Differences" by David Eberly
Estimation of arbitrary FD operators by Taylor series expansion
We can approximate the $d-th$ order derivative of a function $f(x)$ with an order of error $p>0$ by a general finite-difference approximation:
\begin{equation}
\frac{h^d}{d!}f^{(d)}(x) = \sum_{i=i_{min}}^{i_{max}} C_i f(x+ih) + \cal{O}(h^{d+p})
\end{equation}
where h is an equidistant grid point distance. By choosing the extreme indices $i_{min}$ and $i_{max}$, you can define forward, backward or central operators. The accuracy of the FD operator is defined by it's length and therefore also the number of
weighting coefficients $C_i$ incorporated in the approximation. $\mathcal{O}(h^{d+p})$ terms are neglected.
Formally, we can approximate $f(x+ih)$ by a Taylor series expansion:
\begin{equation}
f(x+ih) = \sum_{n=0}^{\infty} i^n \frac{h^n}{n!}f^{(n)}(x)\nonumber
\end{equation}
Inserting into eq.(1) yields
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{i=i_{min}}^{i_{max}} C_i \sum_{n=0}^{\infty} i^n \frac{h^n}{n!}f^{(n)}(x) + \cal{O}(h^{d+p})\nonumber\
\end{align}
We can move the second sum on the RHS to the front
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{n=0}^{\infty} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x) + \cal{O}(h^{d+p})\nonumber\
\end{align}
In the FD approximation we only expand the Taylor series up to the term $n=(d+p)-1$
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{n=0}^{(d+p)-1} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x) + \cal{O}(h^{d+p})\nonumber\
\end{align}
and neglect the $\cal{O}(h^{d+p})$ terms
\begin{align}
\frac{h^d}{d!}f^{(d)}(x) &= \sum_{n=0}^{(d+p)-1} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x)\
\end{align}
Multiplying by $\frac{d!}{h^d}$ leads to the desired approximation for the $d-th$ derivative of the function f(x):
\begin{align}
f^{(d)}(x) &= \frac{d!}{h^d}\sum_{n=0}^{(d+p)-1} \left(\sum_{i=i_{min}}^{i_{max}} i^n C_i\right) \frac{h^n}{n!}f^{(n)}(x)\
\end{align}
Treating the approximation in eq.(2) as an equality, the only term in the sum on the right-hand side of the approximation that contains $\frac{h^d}{d!}f^{d}(x)$ occurs when $n = d$, so the coefficient of that term must be 1. The other terms must vanish for there to be equality, so the coefficients of those terms must be 0; therefore, it is necessary that
\begin{equation}
\sum_{i=i_{min}}^{i_{max}} i^n C_i=
\begin{cases}
0, ~~ 0 \le n \le (d+p)-1 ~ \text{and} ~ n \ne d\
1, ~~ n = d
\end{cases}\nonumber\
\end{equation}
This is a set of $d + p$ linear equations in $i_{max} − i_{min} + 1$ unknowns. If we constrain the number of unknowns to be $d+p$, the linear system has a unique solution.
A forward difference approximation occurs if we set $i_{min} = 0$
and $i_{max} = d + p − 1$.
A backward difference approximation can be implemented by setting $i_{max} = 0$ and $i_{min} = −(d + p − 1)$.
A centered difference approximation occurs if we set $i_{max} = −i_{min} = (d + p − 1)/2$ where it appears that $d + p$ is necessarily an odd number. As it turns out, $p$ can be chosen to be even regardless of the parity of $d$ and $i_{max} = (d + p − 1)/2$.
We could either implement the resulting linear system as matrix equation as in the previous lesson, or simply use a SymPy function which gives us the FD operators right away.
End of explanation |
4,507 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Template for test
Step1: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
Included is N Phosphorylation however no benchmarks are available, yet.
Training data is from phospho.elm and benchmarks are from dbptm.
Step2: Y Phosphorylation
Step3: T Phosphorylation | Python Code:
from pred import Predictor
from pred import sequence_vector
from pred import chemical_vector
Explanation: Template for test
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_s.csv")
y.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=0)
y.supervised_training("svc")
y.benchmark("Data/Benchmarks/phos.csv", "S")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_s.csv")
x.process_data(vector_function="sequence", amino_acid="S", imbalance_function=i, random_data=1)
x.supervised_training("svc")
x.benchmark("Data/Benchmarks/phos.csv", "S")
del x
Explanation: Controlling for Random Negative vs Sans Random in Imbalanced Techniques using S, T, and Y Phosphorylation.
N Phosphorylation is also included; however, no benchmarks are available for it yet.
Training data are from phospho.elm and benchmarks are from dbPTM.
End of explanation
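The same load/process/train/benchmark cycle is repeated below for Y and T phosphorylation; purely as a sketch using only the Predictor calls already shown in this notebook, the duplication could be folded into a small helper:
def run_benchmark(training_file, amino_acid, imbalance_function, random_data):
    # One load/process/train/benchmark cycle, mirroring the loops in this notebook
    clf = Predictor()
    clf.load_data(file=training_file)
    clf.process_data(vector_function="sequence", amino_acid=amino_acid,
                     imbalance_function=imbalance_function, random_data=random_data)
    clf.supervised_training("svc")
    clf.benchmark("Data/Benchmarks/phos.csv", amino_acid)
    del clf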
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_Y.csv")
y.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=0)
y.supervised_training("svc")
y.benchmark("Data/Benchmarks/phos.csv", "Y")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_Y.csv")
x.process_data(vector_function="sequence", amino_acid="Y", imbalance_function=i, random_data=1)
x.supervised_training("svc")
x.benchmark("Data/Benchmarks/phos.csv", "Y")
del x
Explanation: Y Phosphorylation
End of explanation
par = ["pass", "ADASYN", "SMOTEENN", "random_under_sample", "ncl", "near_miss"]
for i in par:
print("y", i)
y = Predictor()
y.load_data(file="Data/Training/clean_t.csv")
y.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=0)
y.supervised_training("svc")
y.benchmark("Data/Benchmarks/phos.csv", "T")
del y
print("x", i)
x = Predictor()
x.load_data(file="Data/Training/clean_t.csv")
x.process_data(vector_function="sequence", amino_acid="T", imbalance_function=i, random_data=1)
x.supervised_training("svc")
x.benchmark("Data/Benchmarks/phos.csv", "T")
del x
Explanation: T Phosphorylation
End of explanation |
4,508 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
df_infl_ctry.rename(columns = dic)
tt = df_infl_ctry.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
melted_df = pd.melt(tt,id_vars=['month','year'])
melted_df.head()
Step1: df_infl_ctry['month'] = df_infl_ctry.index.month
df_infl_ctry['year'] = df_infl_ctry.index.year
Step2: Generate a bunch of histograms of the data to make sure that all of the data
is in an expected range.
with plt.style.context('https | Python Code:
df_infl_ctry['min'] = df_infl_ctry.apply(min,axis=1)
df_infl_ctry['max'] = df_infl_ctry.apply(max,axis=1)
df_infl_ctry['mean'] = df_infl_ctry.apply(np.mean,axis=1)
df_infl_ctry['mode'] = df_infl_ctry.quantile(q=0.5, axis=1)  # the 0.5 quantile is the median, stored here under the name 'mode'
df_infl_ctry['10th'] = df_infl_ctry.quantile(q=0.10, axis=1)
df_infl_ctry['90th'] = df_infl_ctry.quantile(q=0.90, axis=1)
df_infl_ctry['25th'] = df_infl_ctry.quantile(q=0.25, axis=1)
df_infl_ctry['75th'] = df_infl_ctry.quantile(q=0.75, axis=1)
df_infl_ctry.head()
Explanation: df_infl_ctry.rename(columns = dic)
tt = df_infl_ctry.copy()
tt['month'] = tt.index.month
tt['year'] = tt.index.year
melted_df = pd.melt(tt,id_vars=['month','year'])
melted_df.head()
End of explanation
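As an aside (a sketch, not part of the original analysis): the row-wise apply(min/max/mean, axis=1) calls above can be done with pandas' built-in reductions, and taking a snapshot of the country columns first keeps the freshly added 'min'/'max' columns from leaking into the later statistics, which the original cell does not guard against.
# Illustrative alternative, meant to replace the apply() cell above rather than follow it
country_cols = df_infl_ctry.columns.tolist()   # snapshot of the raw country columns
df_infl_ctry['min'] = df_infl_ctry[country_cols].min(axis=1)
df_infl_ctry['max'] = df_infl_ctry[country_cols].max(axis=1)
df_infl_ctry['mean'] = df_infl_ctry[country_cols].mean(axis=1)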
df_infl_ctry.tail()
print(df_infl_ctry.describe())
Explanation: df_infl_ctry['month'] = df_infl_ctry.index.month
df_infl_ctry['year'] = df_infl_ctry.index.year
End of explanation
len(df_infl_ctry)
df_infl_ctry.columns
df_infl_ctry['month_order'] = range(len(df_infl_ctry))
month_order = df_infl_ctry['month_order']
max_infl = df_infl_ctry['max'].values
min_infl = df_infl_ctry['min'].values
mean_infl = df_infl_ctry['mean'].values
mode_infl = df_infl_ctry['mode'].values
p25th = df_infl_ctry['25th'].values
p75th = df_infl_ctry['75th'].values
p10th = df_infl_ctry['10th'].values
p90th = df_infl_ctry['90th'].values
inflEA = df_infl_ctry['76451'].values
year_begin_df = df_infl_ctry[df_infl_ctry.index.month == 1]
year_begin_df;
year_beginning_indeces = list(year_begin_df['month_order'].values)
year_beginning_indeces
year_beginning_names = list(year_begin_df.index.year)
year_beginning_names
month_order
#import seaborn as sns
fig, ax1 = plt.subplots(figsize=(15, 7))
# Create the bars showing highs and lows
#plt.bar(month_order, max_infl - min_infl, bottom=min_infl,
# edgecolor='none', color='#C3BBA4', width=1)
plt.bar(month_order, p90th - p10th, bottom=p10th,
edgecolor='none', color='#C3BBA4', width=1)
# Create the bars showing average highs and lows
plt.bar(month_order, p75th - p25th, bottom=p25th,
edgecolor='none', color='#9A9180', width=1);
#annotations={month_order[50]:'Dividends'}
plt.plot(month_order, inflEA, color='#5A3B49',linewidth=2 );
plt.plot(month_order, mode_infl, color='wheat',linewidth=2,alpha=.3);
plt.xticks(year_beginning_indeces,
year_beginning_names,
fontsize=10)
#ax2 = ax1.twiny()
plt.xticks(year_beginning_indeces,
year_beginning_names,
fontsize=10);
plt.xlim(-5,200)
plt.grid(False)
##ax2 = ax1.twiny()
plt.ylim(-5, 14)
#ax3 = ax1.twinx()
plt.yticks(range(-4, 15, 2), [r'{}'.format(x)
for x in range(-4, 15, 2)], fontsize=10);
plt.grid(axis='both', color='wheat', linewidth=1.5, alpha = .5)
plt.title('HICP innflation, annual rate of change, Jan 2000 - March 2016\n\n', fontsize=20);
Explanation: Generate a bunch of histograms of the data to make sure that all of the data
is in an expected range.
with plt.style.context('https://gist.githubusercontent.com/rhiever/d0a7332fe0beebfdc3d5/raw/223d70799b48131d5ce2723cd5784f39d7a3a653/tableau10.mplstyle'):
for column in df_infl_ctry.columns[:-2]:
#if column in ['date']:
# continue
plt.figure()
plt.hist(df_infl_ctry[column].values)
plt.title(column)
#plt.savefig('{}.png'.format(column))
End of explanation |
4,509 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regular Expressions
Regular expressions are text matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, fro finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).
If you're familiar with Perl, you'll notice that the syntax for regular expressions are very similar in Python. We will be using the re module with Python for this lecture.
Let's get started!
Searching for Patterns in Text
One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text
Step1: Now we've seen that re.search() will take the pattern, scan the text, and then returns a Match object. If no pattern is found, a None is returned. To give a clearer picture of this match object, check out the cell below
Step2: This Match object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object
Step3: Split with regular expressions
Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
Step4: Note how re.split() returns a list with the term to spit on removed and the terms in the list are a split up version of the string. Create a couple of more examples for yourself to make sure you understand!
Finding all instances of a pattern
You can use re.findall() to find all the instances of a pattern in a string. For example
Step5: Pattern re Syntax
This will be the bulk of this lecture on using re with Python. Regular expressions supports a huge variety of patterns the just simply finding where a single string occurred.
We can use metacharacters along with re to find specific types of patterns.
Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse
Step6: Repetition Syntax
There are five ways to express repetition in a pattern
Step7: Character Sets
Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example
Step8: It makes sense that the first [sd] returns every instance. Also the second input will just return any thing starting with an s in this particular case of the test phrase input.
Exclusion
We can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example
Step9: Use [^!.? ] to check for matches that are not a !,.,?, or space. Add the + to check that the match appears at least once, this basically translate into finding the words.
Step10: Character Ranges
As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end].
Common use cases are to search for a specific range of letters in the alphabet, such [a-f] would return matches with any instance of letters between a and f.
Let's walk through some examples
Step11: Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits,whitespace, and more. For example | Python Code:
import re
# List of patterns to search for
patterns = [ 'term1', 'term2' ]
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
for pattern in patterns:
print 'Searching for "%s" in: \n"%s"' % (pattern, text),
#Check for match
if re.search(pattern, text):
print '\n'
print 'Match was found. \n'
else:
print '\n'
print 'No Match was found.\n'
Explanation: Regular Expressions
Regular expressions are text matching patterns described with a formal syntax. You'll often hear regular expressions referred to as 'regex' or 'regexp' in conversation. Regular expressions can include a variety of rules, fro finding repetition, to text-matching, and much more. As you advance in Python you'll see that a lot of your parsing problems can be solved with regular expressions (they're also a common interview question!).
If you're familiar with Perl, you'll notice that the syntax for regular expressions are very similar in Python. We will be using the re module with Python for this lecture.
Let's get started!
Searching for Patterns in Text
One of the most common uses for the re module is for finding patterns in text. Let's do a quick example of using the search method in the re module to find some text:
End of explanation
# List of patterns to search for
pattern = 'term1'
# Text to parse
text = 'This is a string with term1, but it does not have the other term.'
match = re.search(pattern, text)
type(match)
Explanation: Now we've seen that re.search() will take the pattern, scan the text, and then returns a Match object. If no pattern is found, a None is returned. To give a clearer picture of this match object, check out the cell below:
End of explanation
# Show start of match
match.start()
# Show end
match.end()
Explanation: This Match object returned by the search() method is more than just a Boolean or None, it contains information about the match, including the original input string, the regular expression that was used, and the location of the match. Let's see the methods we can use on the match object:
End of explanation
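Besides start() and end(), the standard match object also exposes the text that matched; as a small aside not in the original lecture:
# The text that actually matched the pattern
match.group()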
# Term to split on
split_term = '@'
phrase = 'What is the domain name of someone with the email: [email protected]'
# Split the phrase
re.split(split_term,phrase)
Explanation: Split with regular expressions
Let's see how we can split with the re syntax. This should look similar to how you used the split() method with strings.
End of explanation
# Returns a list of all matches
re.findall('match','test phrase match is in middle')
Explanation: Note how re.split() returns a list with the term to split on removed, and the items in the list are the split-up pieces of the string. Create a couple more examples for yourself to make sure you understand!
Finding all instances of a pattern
You can use re.findall() to find all the instances of a pattern in a string. For example:
End of explanation
def multi_re_find(patterns,phrase):
'''
Takes in a list of regex patterns
Prints a list of all matches
'''
for pattern in patterns:
print 'Searching the phrase using the re check: %r' %pattern
print re.findall(pattern,phrase)
print '\n'
Explanation: Pattern re Syntax
This will be the bulk of this lecture on using re with Python. Regular expressions support a huge variety of patterns beyond just finding where a single string occurs.
We can use metacharacters along with re to find specific types of patterns.
Since we will be testing multiple re syntax forms, let's create a function that will print out results given a list of various regular expressions and a phrase to parse:
End of explanation
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ 'sd*', # s followed by zero or more d's
'sd+', # s followed by one or more d's
'sd?', # s followed by zero or one d's
'sd{3}', # s followed by three d's
'sd{2,3}', # s followed by two to three d's
]
multi_re_find(test_patterns,test_phrase)
Explanation: Repetition Syntax
There are five ways to express repetition in a pattern:
1.) A pattern followed by the meta-character * is repeated zero or more times.
2.) Replace the * with + and the pattern must appear at least once.
3.) Using ? means the pattern appears zero or one time.
4.) For a specific number of occurrences, use {m} after the pattern, where m is replaced with the number of times the pattern should repeat.
5.) Use {m,n} where m is the minimum number of repetitions and n is the maximum. Leaving out n ({m,}) means the value appears at least m times, with no maximum.
Now we will see an example of each of these using our multi_re_find function:
End of explanation
test_phrase = 'sdsd..sssddd...sdddsddd...dsds...dsssss...sdddd'
test_patterns = [ '[sd]', # either s or d
's[sd]+'] # s followed by one or more s or d
multi_re_find(test_patterns,test_phrase)
Explanation: Character Sets
Character sets are used when you wish to match any one of a group of characters at a point in the input. Brackets are used to construct character set inputs. For example: the input [ab] searches for occurrences of either a or b.
Let's see some examples:
End of explanation
test_phrase = 'This is a string! But it has punctuation. How can we remove it?'
Explanation: It makes sense that the first [sd] returns every instance. Also the second input will just return any thing starting with an s in this particular case of the test phrase input.
Exclusion
We can use ^ to exclude terms by incorporating it into the bracket syntax notation. For example: [^...] will match any single character not in the brackets. Let's see some examples:
End of explanation
re.findall('[^!.? ]+',test_phrase)
Explanation: Use [^!.? ] to check for matches that are not a !,.,?, or space. Add the + to check that the match appears at least once, this basically translate into finding the words.
End of explanation
test_phrase = 'This is an example sentence. Lets see if we can find some letters.'
test_patterns=[ '[a-z]+', # sequences of lower case letters
'[A-Z]+', # sequences of upper case letters
'[a-zA-Z]+', # sequences of lower or upper case letters
'[A-Z][a-z]+'] # one upper case letter followed by lower case letters
multi_re_find(test_patterns,test_phrase)
Explanation: Character Ranges
As character sets grow larger, typing every character that should (or should not) match could become very tedious. A more compact format using character ranges lets you define a character set to include all of the contiguous characters between a start and stop point. The format used is [start-end].
Common use cases are to search for a specific range of letters in the alphabet, such [a-f] would return matches with any instance of letters between a and f.
Let's walk through some examples:
End of explanation
test_phrase = 'This is a string with some numbers 1233 and a symbol #hashtag'
test_patterns=[ r'\d+', # sequence of digits
r'\D+', # sequence of non-digits
r'\s+', # sequence of whitespace
r'\S+', # sequence of non-whitespace
r'\w+', # alphanumeric characters
r'\W+', # non-alphanumeric
]
multi_re_find(test_patterns,test_phrase)
Explanation: Escape Codes
You can use special escape codes to find specific types of patterns in your data, such as digits, non-digits,whitespace, and more. For example:
<table border="1" class="docutils">
<colgroup>
<col width="14%" />
<col width="86%" />
</colgroup>
<thead valign="bottom">
<tr class="row-odd"><th class="head">Code</th>
<th class="head">Meaning</th>
</tr>
</thead>
<tbody valign="top">
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\d</span></tt></td>
<td>a digit</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\D</span></tt></td>
<td>a non-digit</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\s</span></tt></td>
<td>whitespace (tab, space, newline, etc.)</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\S</span></tt></td>
<td>non-whitespace</td>
</tr>
<tr class="row-even"><td><tt class="docutils literal"><span class="pre">\w</span></tt></td>
<td>alphanumeric</td>
</tr>
<tr class="row-odd"><td><tt class="docutils literal"><span class="pre">\W</span></tt></td>
<td>non-alphanumeric</td>
</tr>
</tbody>
</table>
Escapes are indicated by prefixing the character with a backslash (). Unfortunately, a backslash must itself be escaped in normal Python strings, and that results in expressions that are difficult to read. Using raw strings, created by prefixing the literal value with r, for creating regular expressions eliminates this problem and maintains readability.
Personally, I think this use of r to escape a backslash is probably one of the things that block someone who is not familiar with regex in Python from being able to read regex code at first. Hopefully after seeing these examples this syntax will become clear.
End of explanation |
4,510 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<span style="color
Step1: Here the line (<span style="color
Step2: Replacing the first line with <span style="color
Step3: <img src="Tutorial7/MatplotlibQt.png" width="500" >
This GUI allows users to interactively inspect the plot .e.g. enlarging specific plot region for details, moving the plot on the canvas, changing the axis range and colors etc.
However, we will only use (<span style="color
Step4: In the above example, the same x and y data are used (both of which have been declared beforehand), instead of just plotting, users can add title of the graph, names for x and y axes, insert legend and save the file with specific resolution (in this case as .png file). It is not necessary to declare <span style="color
Step5: 7.2 Multiple Plots
Multiple plots can be created in many ways. It can be done by specifying the position of these plots on the whole canvas.
Step6: The input cell above shows the user how to embed a plot within a plot by specifying the initial coordinates and sizes of the plots. A slightly different syntax is used here with the sub plot properties assigned to variables axes1 and axes2. From here on it is easy to add features to each plot by operating on the each plot variable.
It is possibly much better to use sub plotting functions (depends on users taste) like <span style="color
Step7: Other subplot functions like <span style="color
Step8: The axes numerics are quite messy becuase of overlapping. User can use the <span style="color
Step9: We can now add some data inside these plots using data in x_data (and also increase the canvas size).
Step10: Relative size of row and column can be specified using <span style="color
Step11: <span style="color
Step12: Similarly, let add some data
Step13: 7.3 Setting the features of the plot
Features of a plot like label font size, legend position, ticks number etc. can be specified. There are many ways to do all of these and one of the way (possibly with the easiest syntax to understand) is shown in the codes below. It is good practice to assigned the plotting function to a variable before setting these features. This is very useful when doing multiple plotting.
Step14: The <span style="color
Step15: I suppose most aspects of the line plot have been covered but please remember that there are many ways in which a line plot can be created.
Histograms and bar charts can be plotted with similar syntax but will be explored further when we learn <span style="color
Step16: It is possible to extract the data from histogram/bar chart.
Step17: The histogram plots below is an example from one of the <span style="color
Step18: 7.4 3D Plotting
A 3D plot can be created by importing the Axes3D class and passing a projection='3d' argument.
Step19: Using the data X, Y and Z in the computer memory from the above cell, we create a wireframe plot and a surface plot with contour projection.
Step20: 7.5 Animated Plotting
<span style="color
Step21: Creating the video in a correct video format that can be run by your browser may require the installation of these tools (in my case with Ubuntu 14.04LTS) | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: <span style="color: #B40486">BASIC PYTHON FOR RESEARCHERS</span>
by Megat Harun Al Rashid bin Megat Ahmad
last updated: April 14, 2016
<span style="color: #29088A">7. Data Visualization and Plotting</span>
The <span style="color: #0000FF">$Matplotlib$</span> library can be considered the default data visualization and plotting tool for Python, even though there are other libraries that can be used, e.g. <span style="color: #0000FF">$Chaco$</span>, <span style="color: #0000FF">$PyX$</span>, <span style="color: #0000FF">$Bokeh$</span> and <span style="color: #0000FF">$Lightning$</span>. In this tutorial, we will focus exclusively on <span style="color: #0000FF">$Matplotlib$</span> and explore the basics and a few of its many advanced features.
7.1 Basic Plotting
Users can start plotting with a minimal amount of code to create a simple line plot. In a 2-dimensional plot, the x and y coordinate data can be represented by lists or <span style="color: #0000FF">$NumPy$</span> arrays. Users can import the plotting function from the <span style="color: #0000FF">$Matplotlib$</span> library as well as specify how to display the plot.
End of explanation
# Data in lists
x = [1,2,3,4,5]
y = [21,34,78,12,9]
# Plot x against y
plt.plot(x, y)
Explanation: Here the line (<span style="color: #B404AE">%</span>matplotlib inline) means that the plot when created will be displayed on the working document. The second line imports the plotting function of the <span style="color: #0000FF">$Matplotlib$</span> library.
End of explanation
%matplotlib qt
plt.plot(x, y)
Explanation: Replacing the first line with <span style="color: #B404AE">%</span>matplotlib qt, an interactive plotting graphical user interface (GUI) will be displayed instead in a separate window (similar to MATLAB).
End of explanation
%matplotlib inline
plt.plot(x, y)
plt.title('Plot Title')
plt.xlabel("label x") # naming the x-axis label
plt.ylabel("label y") # naming the y-axis label
plt.legend(['data'], loc='upper right') # displaying legend and its position
# saving the image plot with specific resolution
plt.savefig("Tutorial7/First_Image.png", dpi = 300)
Explanation: <img src="Tutorial7/MatplotlibQt.png" width="500" >
This GUI allows users to interactively inspect the plot, e.g. enlarging a specific plot region for details, moving the plot on the canvas, changing the axis range and colors etc.
However, we will only use (<span style="color: #B404AE">%</span>matplotlib inline) for the remainder of this tutorial. Let us see some other basic codes that are needed to prepare more than a simple plot.
End of explanation
import numpy as np
x = [1.0,2.0,3.0,4.0,5.0]
y1 = [21,34,78,12,9]
y2 = [10,25,63,26,15]
plt.figure(figsize=(12, 6)) # set the figure canvas size before plotting
# Two plots here in the same graph, each on top the other going down the list
plt.plot(x,y2,'b--s', ms = 8, mfc = 'r', label='data 1')
plt.plot(x,y1,'g-o', linewidth = 4, ms = 12, mfc = 'magenta', alpha = 0.5, label='data 2')
plt.xlim(0.5,5.5) # setting x-axis range
plt.ylim(0,100) # setting y-axis range
plt.xticks(np.linspace(0.5,5.5,11)) # creating x ticks using numpy array
plt.yticks(np.linspace(0,100,11)) # creating y ticks using numpy array
plt.grid() # showing grid (according to ticks)
plt.xlabel("label x") # naming the x-axis label
plt.ylabel("label y") # naming the y-axis label
plt.legend(loc='upper right') # displaying legend and its position
# saving the image plot with specific resolution
plt.savefig("Tutorial7/Second_Image.eps", dpi = 100)
Explanation: In the above example, the same x and y data are used (both of which have been declared beforehand), instead of just plotting, users can add title of the graph, names for x and y axes, insert legend and save the file with specific resolution (in this case as .png file). It is not necessary to declare <span style="color: #B404AE">%</span>matplotlib inline in the cell when it is already declared in previous cell (but if <span style="color: #B404AE">%</span>matplotlib qt was declared at some cells previously, then <span style="color: #B404AE">%</span>matplotlib inline needs to be re-declared again). Much more features can be added (like below) and we will see these one by one.
End of explanation
x_data = np.linspace(0.5,2.5,20) # Creating data
x_data
# y data are calculated inside plot() function
fig = plt.figure(figsize=(8, 6)) # set the figure canvas size before plotting
axes1 = fig.add_axes([0.1, 0.1, 0.9, 0.9]) # creating first plot and its positions on canvas
axes2 = fig.add_axes([0.2, 0.55, 0.35, 0.35]) # creating second plot and its position on canvas
# main figure
axes1.plot(x_data,np.exp(x_data),'go-') # green line with circle markers
axes1.set_xlabel('x main')
axes1.set_ylabel('y main')
axes1.set_title('main title')
# embedded figure
axes2.plot(x_data,x_data**(-1),'r--') # just a red dashed line
axes2.set_xlabel('x embedded')
axes2.set_ylabel('y embedded')
axes2.set_title('embedded title')
Explanation: 7.2 Multiple Plots
Multiple plots can be created in many ways. It can be done by specifying the position of these plots on the whole canvas.
End of explanation
# y data are calculated inside plot() function
plt.subplot(1,2,1) # graph with one row, two columns at position 1
plt.plot(x_data,np.exp(x_data),'go-') # green line with circle markers
plt.subplot(1,2,2) # graph with one row, two columns at position 2
plt.plot(x_data,x_data**(-1),'r--') # just a red dashed line
plt.subplot(2,2,1)
plt.plot(x_data,np.exp(x_data),'go-')
plt.subplot(2,2,2)
plt.plot(x_data,x_data**(-1),'r--')
plt.subplot(2,2,4)
plt.plot(x_data,x_data,'bs-')
Explanation: The input cell above shows the user how to embed a plot within a plot by specifying the initial coordinates and sizes of the plots. A slightly different syntax is used here with the subplot properties assigned to variables axes1 and axes2. From here on it is easy to add features to each plot by operating on each plot variable.
It is possibly much better to use sub plotting functions (depends on users taste) like <span style="color: #0000FF">$subplot(i,j,k)$</span> function where <span style="color: #0000FF">$i$</span> represents number of rows, <span style="color: #0000FF">$j$</span> represents number of columns and <span style="color: #0000FF">$k$</span> is the position of the plot (moving horizontally to the left from above to below) as shown for <span style="color: #0000FF">$i$</span> = 2 and <span style="color: #0000FF">$j$</span> = 2 below.
<img src="Tutorial7/Grid1.png" width="200" >
End of explanation
# A canvas of 2 x 3 graph
plot1 = plt.subplot2grid((2,3), (0,0), rowspan=2) # a(i,j) = starting at (0,0) and extend to (1,0)
plot2 = plt.subplot2grid((2,3), (0,1), colspan=2) # a(i,j) = starting at (0,1) and extend to (0,2)
plot3 = plt.subplot2grid((2,3), (1,1)) # a(i,j) = starting at (1,1)
plot4 = plt.subplot2grid((2,3), (1,2)) # a(i,j) = starting at (1,2)
Explanation: Other subplot functions like <span style="color: #0000FF">$subplot2grid()$</span> and <span style="color: #0000FF">$gridspec()$</span> allow more control on the produced multi-plots. In the case of <span style="color: #0000FF">$subplot2grid()$</span>, the location of a plot in a 2-dimensional graphical grid can be specified using coordinate (i,j) with both i and j starting at $0$.
End of explanation
# A canvas of 2 x 3 graph
p1 = plt.subplot2grid((2,3), (0,0), rowspan=2) # a(i,j) = starting at (0,0) and extend to (1,0)
p2 = plt.subplot2grid((2,3), (0,1), colspan=2) # a(i,j) = starting at (0,1) and extend to (0,2)
p3 = plt.subplot2grid((2,3), (1,1)) # a(i,j) = starting at (1,1)
p4 = plt.subplot2grid((2,3), (1,2)) # a(i,j) = starting at (1,2)
plt.tight_layout()
Explanation: The axes numerics are quite messy because of overlapping. Users can use the <span style="color: #0000FF">$tight{_}layout()$</span> function to automatically tidy up the whole graph.
End of explanation
# A canvas of 2 x 3 graph
plt.figure(figsize=(8,4))
p1 = plt.subplot2grid((2,3), (0,0), rowspan=2) # a(i,j) = starting at (0,0) and extend to (1,0)
p2 = plt.subplot2grid((2,3), (0,1), colspan=2) # a(i,j) = starting at (0,1) and extend to (0,2)
p3 = plt.subplot2grid((2,3), (1,1)) # a(i,j) = starting at (1,1)
p4 = plt.subplot2grid((2,3), (1,2)) # a(i,j) = starting at (1,2)
plt.tight_layout()
p1.plot(x_data,np.exp(x_data),'go-')
p2.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-') # Gaussian function for y value
p3.plot(x_data,x_data,'bs-')
p4.plot(x_data,x_data**(-1),'r--')
Explanation: We can now add some data inside these plots using data in x_data (and also increase the canvas size).
End of explanation
# a 2 x 2 grid plot with specified relative width and height
plt.figure(figsize=(8,4))
gs = plt.GridSpec(2, 2, width_ratios=[1,2],height_ratios=[2.5,1.5]) # a 2 x 2 grid plot
gp1 = plt.subplot(gs[0])
gp2 = plt.subplot(gs[1])
gp3 = plt.subplot(gs[2])
gp4 = plt.subplot(gs[3])
plt.tight_layout()
gp1.plot(x_data,np.exp(x_data),'go-')
gp2.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-')
gp3.plot(x_data,x_data,'bs-')
gp4.plot(x_data,x_data**(-1),'r--')
gp4.set_yticks(np.linspace(0.4,2.0,5)) # setting the y ticks for plot 4
Explanation: Relative size of row and column can be specified using <span style="color: #0000FF">$GridSpec()$</span>:
End of explanation
plt.figure(figsize=(12,4))
gs1 = plt.GridSpec(3, 3, width_ratios=[1.5,1,1.5],height_ratios=[1.5,1,1.5])
gs1.update(left=0.05, right=0.48, wspace=0.3, hspace=0.4) # size on canvas
fp1 = plt.subplot(gs1[:2, :2])
fp2 = plt.subplot(gs1[0, 2])
fp3 = plt.subplot(gs1[2, :2])
fp4 = plt.subplot(gs1[1:, 2])
gs2 = plt.GridSpec(3, 3, width_ratios=[1.6,1,1.4],height_ratios=[1.3,1,1.7])
gs2.update(left=0.55, right=0.98, wspace=0.3, hspace=0.4) # size on canvas
afp1 = plt.subplot(gs2[:2,0])
afp2 = plt.subplot(gs2[:2, 1:])
afp3 = plt.subplot(gs2[2, :-1])
afp4 = plt.subplot(gs2[2, -1])
Explanation: <span style="color: #0000FF">$GridSpec()$</span> also allows the users to mix relative lengths of rows and columns with plots that extend these rows and columns (much like <span style="color: #0000FF">$subplot2grid()$</span>).
End of explanation
plt.figure(figsize=(12,4))
gs1 = plt.GridSpec(3, 3, width_ratios=[1.5,1,1.5],height_ratios=[1.5,1,1.5])
gs1.update(left=0.05, right=0.48, wspace=0.3, hspace=0.4) # size on canvas
fp1 = plt.subplot(gs1[:2, :2])
fp2 = plt.subplot(gs1[0, 2])
fp3 = plt.subplot(gs1[2, :2])
fp4 = plt.subplot(gs1[1:, 2])
gs2 = plt.GridSpec(3, 3, width_ratios=[1.6,1,1.4],height_ratios=[1.3,1,1.7])
gs2.update(left=0.55, right=0.98, wspace=0.3, hspace=0.4) # size on canvas
afp1 = plt.subplot(gs2[:2,0])
afp2 = plt.subplot(gs2[:2, 1:])
afp3 = plt.subplot(gs2[2, :-1])
afp4 = plt.subplot(gs2[2, -1])
fp1.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-')
fp2.plot(x_data,np.exp(x_data),'go-')
fp2.set_yticks(np.linspace(0,14,5))
fp3.plot(x_data,x_data,'bs-')
fp4.plot(x_data,x_data**(-1),'r--')
afp1.plot(x_data,x_data**(-1),'r--')
afp2.plot(x_data,2.0*np.exp(-((x_data-1.5)**2/(2*0.05))),'ms-')
afp3.plot(x_data,np.exp(x_data),'go-')
afp4.plot(x_data,x_data,'bs-')
afp4.set_yticks(np.linspace(0,3,6))
Explanation: Similarly, let's add some data:
End of explanation
xl = np.linspace(4,6,31)
Lf = 5*(0.02/((xl-5)**2 + 0.02)) # Cauchy or Lorentz distribution
# Setting the canvas size and also resolution
plt.figure(figsize=(14,5), dpi=100)
# Assigning plotting function to variable, here subplot() is used
# even though there is only one plot
# and no ticks created on the frame (ticks are specified later)
Lp = plt.subplot(xticks=[], yticks=[])
# Plot certain borders with width specified
Lp.spines['bottom'].set_linewidth(2)
Lp.spines['left'].set_linewidth(2)
plt.subplots_adjust(left=0.1, right=0.9, top=0.75, bottom=0.25) # size on canvas
# Plotting
Lp.plot(xl, Lf, color="r", linewidth=2.0, linestyle="-")
# Title and Axes and Legend
Lp.set_title('Cauchy \nDistribution', fontsize=18, color='blue',\
horizontalalignment='left', fontweight='bold', x=0.05, y=1.05)
Lp.set_xlabel(r'$x$', fontsize=20, fontweight='bold', color='g')
Lp.set_ylabel(r'$L(x) = A\left[\frac{\gamma}{(x - x_{o})^{2} + \gamma}\right]$', \
fontsize=20, fontweight='bold', color='#DF0101')
# Legend
## Anchoring legend on the canvas and made it translucent
Lp.legend([r'$Lorentz$'], fontsize=18, bbox_to_anchor=[0.2, 0.98]).get_frame().set_alpha(0.5)
# Axes ticks and ranges
ytick_list = np.linspace(0,6,7)
Lp.set_xticks(np.linspace(0,10,11))
Lp.set_yticks(ytick_list)
Lp.set_xticklabels(["$%.1f$" % xt for xt in (np.linspace(0,10,11))], fontsize = 14)
Lp.set_yticklabels(["$%d$" % yt for yt in ytick_list], fontsize = 14)
## Major ticks
Lp.tick_params(axis='x', which = 'major', direction='out', top = 'off', width=2, length=10, pad=15)
Lp.tick_params(axis='y', which = 'major', direction='in', right = 'off', width=2, length=10, pad=5)
## Minor ticks
Lp.set_xticks(np.linspace(-0.2,10.2,53), minor=True)
Lp.tick_params(axis='x', which = 'minor', direction='out', top = 'off', width=1, length=5)
# Grid
Lp.grid(which='major', color='#0B3B0B', alpha=0.75, linestyle='dashed', linewidth=1.2)
Lp.grid(which='minor', color='#0B3B0B', linewidth=1.2)
# Save the plot in many formats (compare the differences)
plt.savefig("Tutorial7/Third_Image.eps", dpi = 500)
plt.savefig("Tutorial7/Third_Image.jpeg", dpi = 75)
Explanation: 7.3 Setting the features of the plot
Features of a plot like label font size, legend position, number of ticks etc. can be specified. There are many ways to do all of these and one of the ways (possibly with the easiest syntax to understand) is shown in the code below. It is good practice to assign the plotting function to a variable before setting these features. This is very useful when doing multiple plotting.
End of explanation
xv = np.linspace(0,10,11)
yv = 5*xv
# Setting the canvas size and also resolution
plt.figure(figsize=(14,4), dpi=100)
plt.xlim(0,50)
plt.ylim(0,50)
Lp = plt.subplot()
# Plot without top and right borders
Lp.spines['top'].set_visible(False)
Lp.spines['right'].set_visible(False)
# Show ticks only on left and bottom spines
Lp.yaxis.set_ticks_position('left')
Lp.xaxis.set_ticks_position('bottom')
# Title and axes labels
Lp.set_title('Variety of Plot')
Lp.set_xlabel(r'$x$')
Lp.set_ylabel(r'$y$')
# Plotting
## Line widths and colors
Lp.plot(xv+2, yv, color="blue", linewidth=0.25)
Lp.plot(xv+4, yv, color="red", linewidth=0.50)
Lp.plot(xv+6, yv, color="m", linewidth=1.00)
Lp.plot(xv+8, yv, color="blue", linewidth=2.00)
## Linestyle options: '-', '--', '-.', ':', 'steps'
Lp.plot(xv+12, yv, color="red", lw=2, linestyle='-')
Lp.plot(xv+14, yv, color="#08088A", lw=2, ls='-.')
Lp.plot(xv+16, yv, color="red", lw=2, ls=':')
## Dash line can be cusotomized
line, = Lp.plot(xv+20, yv, color="blue", lw=2)
line.set_dashes([5, 10, 15, 10]) # format: line length, space length, ...
## Possible marker symbols: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ...
Lp.plot(xv+24, yv, color="#0B3B0B", lw=2, ls='', marker='+', markersize = 12)
Lp.errorbar(xv+26, yv, color="green", lw=2, ls='', marker='o', yerr=5)
Lp.plot(xv+28, yv, color="green", lw=2, ls='', marker='s')
Lp.plot(xv+30, yv, color="#0B3B0B", lw=2, ls='', marker='1', ms = 12)
# Marker sizes and colors
Lp.plot(xv+34, yv, color="r", lw=1, ls='-', marker='o', markersize=3)
Lp.plot(xv+36, yv, color="g", lw=1, ls='-', marker='o', markersize=5)
Lp.plot(xv+38, yv, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red")
Lp.plot(xv+40, yv, color="purple", lw=1, ls='-', marker='s', markersize=8,
markerfacecolor="yellow", markeredgewidth=2, markeredgecolor="blue");
Explanation: The <span style="color: #0000FF">$r\${text}\$$</span> notation allows the writing of mathematical equations using <span style="color: #0000FF">$LaTeX$</span> syntax.
Let us now see some variety of plot lines that can be created.
End of explanation
people = ['Rechtschaffen', 'Tadashi', 'Vertueux', 'Justo', 'Salleh']
num_of_publ = [20,34,51,18,46]
y_pos = np.arange(len(people))
plt.title('Academic Output')
plt.barh(y_pos, num_of_publ, align='center', color = '#DF0101', alpha=0.5)
plt.yticks(y_pos, people)
plt.xlabel('Number of Publications')
plt.ylabel('Academicians')
Explanation: I suppose most aspects of the line plot have been covered but please remember that there are many ways in which a line plot can be created.
Histograms and bar charts can be plotted with similar syntax but will be explored further when we learn <span style="color: #0000FF">$Pandas$</span> in the next tutorial. <span style="color: #0000FF">$Pandas$</span> has a rich approach to producing histogram/bar charts with <span style="color: #0000FF">$Matplotlib$</span>.
Some examples of histogram/bar charts from <span style="color: #0000FF">$Matplotlib$</span>:
End of explanation
n = np.random.randn(100000) # a 1D normalized random array with 100,000 elements
fig, Hp = plt.subplots(1,2,figsize=(12,4))
# Each bin represents number of element (y-axis values) with
# certain values in the bin range (x-axis)
Hp[0].hist(n, bins = 50, color = 'red', alpha = 0.25)
Hp[0].set_title("Normal Distribution with bins = 50")
Hp[0].set_xlim((min(n), max(n)))
Hp[1].hist(n, bins = 25, color = 'g')
ni = plt.hist(n, cumulative=True, bins=25, visible=False) # Extract the data
Hp[1].set_title("Normal Distribution with bins = 25")
Hp[1].set_xlim((min(n), max(n)))
Hp[1].set_ylim(0, 15000)
ni
Explanation: It is possible to extract the data from histogram/bar chart.
End of explanation
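If only the numbers are needed, numpy.histogram gives the same counts and bin edges directly without drawing anything; a minimal sketch using the array n from above:
counts, bin_edges = np.histogram(n, bins = 25)   # counts per bin and the bin boundaries
print(counts.sum())      # total number of samples, here 100000
print(len(bin_edges))    # one more edge than there are bins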
'''Plot histogram with multiple sample sets and demonstrate:
* Use of legend with multiple sample sets
* Stacked bars
* Step curve with a color fill
* Data sets of different sample sizes
'''
n_bins = 10 # Number of bins to be displayed
x = np.random.randn(1000, 3) # A 2D normalized random array with 3 x 1,000 elements
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(10,8))
ax0, ax1, ax2, ax3 = axes.flat # Assigning the axes list to different variables
colors = ['red', 'tan', 'lime']
# {normed = 1} means that y-axis values are normalized
ax0.hist(x, n_bins, normed=1, histtype='bar', color=colors, label=colors)
ax0.legend(prop={'size': 10})
ax0.set_title('bars with legend')
ax1.hist(x, n_bins, normed=1, histtype='bar', stacked=True)
ax1.set_title('stacked bar')
ax2.hist(x, n_bins, histtype='step', stacked=True, fill=True)
ax2.set_title('stepfilled')
# Make a multiple-histogram of data-sets with different length
# or inhomogeneous 2D normalized random array
x_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]
ax3.hist(x_multi, n_bins, histtype='bar')
ax3.set_title('different sample sizes')
plt.tight_layout()
Explanation: The histogram plots below are an example from one of the <span style="color: #0000FF">$Matplotlib$</span> gallery examples with some slight modifications and added comments.
End of explanation
from mpl_toolkits.mplot3d.axes3d import Axes3D
# 2D Gaussian Plot (3D image)
import matplotlib.cm as cm
import matplotlib.ticker as ticker
a = 12.0
b = 5.0
c = 0.3
fig = plt.figure(figsize=(6,4))
ax = fig.gca(projection='3d') # Passing a projection='3d' argument
X = np.linspace(4, 6, 100)
Y = np.linspace(4, 6, 100)
X, Y = np.meshgrid(X, Y)
R = (X-b)**2/(2*c**2) + (Y-b)**2/(2*c**2) # Use the declared a,b,c values in 1D-Gaussian
Z = a*np.exp(-R)
# Applying the plot features to variable surf
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.hsv,
linewidth=0, antialiased=True, alpha=0.3)
ax.set_zlim(-1.01, 14)
ax.zaxis.set_major_locator(ticker.LinearLocator(8))
ax.zaxis.set_major_formatter(ticker.FormatStrFormatter('%.1f'))
fig.colorbar(surf, shrink=0.4, aspect=10)
ax.view_init(30, 40) # Angles of viewing
plt.tight_layout()
Explanation: 7.4 3D Plotting
A 3D plot can be created by importing the Axes3D class and passing a projection='3d' argument.
End of explanation
fig_wire = plt.figure(figsize=(12,6))
# Wireframe plot
ax1 = fig_wire.add_subplot(1,2,1, projection='3d')
ax1.plot_wireframe(X, Y, Z, rstride=4, cstride=4, alpha=0.75)
ax1.view_init(55, 30)
ax1.set_zlim(-2, 14)
ax1.set_title('Wireframe')
# Surface plot
ax2 = fig_wire.add_subplot(1,2,2, projection='3d')
ax2.plot_surface(X, Y, Z, rstride=6, cstride=6, alpha=0.45)
ax2.view_init(30, 30)
ax2.set_title('Surface and Contour')
# Different color maps used for each contours
# the offset argument refers to the position of contour
ax2.contour(X, Y, Z, zdir='x', offset=3, cmap=cm.hsv)
ax2.contour(X, Y, Z, zdir='y', offset=3, cmap=cm.prism)
ax2.contour(X, Y, Z, zdir='z', offset=-8, cmap=cm.coolwarm)
# Axes range for contours
ax2.set_xlim3d(3, 7)
ax2.set_ylim3d(3, 7)
ax2.set_zlim3d(-8, 20)
fig_wire.tight_layout()
Explanation: Using the data X, Y and Z in the computer memory from the above cell, we create a wireframe plot and a surface plot with contour projection.
End of explanation
# Brownian motion of particle suspended in liquid
nu = 0.7978E-3 # Pa s
kB = 1.3806488E-23 # m^2 kg s^-2 K^-1
d = 0.001 # 1 micron
T = 30+273.15 # Kelvin
D = kB*T/(3*np.pi*nu*d)
dt = 0.00001
dl = np.sqrt(2*D*dt)
xp = np.random.uniform(0,0.0001,20)
yp = np.random.uniform(-0.00000005,0.000000005,20)
for value in range(0,xp.size,1):
angle1 = np.random.normal(0,np.pi,500)
xb = xp[value]+np.cumsum(dl*np.cos(angle1))
yb = yp[value]+np.cumsum(dl*np.sin(angle1))
plt.figure(figsize=(6,4))
plt.plot(xb,yb)
from matplotlib import animation # Importing the animation module
fig, ax = plt.subplots(figsize=(6,4))
xr = np.sqrt((xb.max()-xb.min())**2)/20.0
yr = np.sqrt((yb.max()-yb.min())**2)/20.0
ax.set_xlim([(xb.min()-xr), (xb.max()+xr)])
ax.set_ylim([(yb.min()-yr), (yb.max()+yr)])
line1, = ax.plot([], [], 'ro-')
line2, = ax.plot([], [], color="blue", lw=1)
x2 = np.array([])
y2 = np.array([])
def init():
line1.set_data([], [])
line2.set_data([], [])
def update(n):
# n = frame counter
global x2, y2
x2 = np.hstack((x2,xb[n]))
y2 = np.hstack((y2,yb[n]))
line1.set_data([xb[n],xb[n+1]],[yb[n],yb[n+1]])
line2.set_data(x2, y2)
# Creating the animation
# anim = animation.FuncAnimation(fig, update, init_func=init, frames=len(xb)-1, blit=True)
anim = animation.FuncAnimation(fig, update, init_func=init, frames=len(xb)-1)
# Saving the animation film in .mp4 format
anim.save('Tutorial7/Brownian.mp4', fps=10, writer="avconv", codec="libx264")
plt.close(fig)
Explanation: 7.5 Animated Plotting
<span style="color: #0000FF">$Matplotlib$</span> has the ability to generate a movie file from sequences of figures. Let see an example of the syntax needed to capture a Brownian motion in a video (it is necessary to explicitly import the animation module from <span style="color: #0000FF">$Matplotlib$</span>).
End of explanation
from IPython.display import HTML
video = open("Tutorial7/Brownian.mp4", "rb").read()
video_encoded = video.encode("base64")
video_tag = '<video controls alt="test" src="data:video/x-m4v;base64,{0}">'.format(video_encoded)
HTML(video_tag)
Explanation: Creating the video in a correct video format that can be run by your browser may require the installation of these tools (in my case with Ubuntu 14.04LTS):
sudo apt-get install ffmpeg libav-tools
sudo apt-get install libavcodec-extra-*
Different linux variants and other operating systems may require further tuning.
The video can then be displayed:
End of explanation |
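Note that str.encode("base64") only exists in Python 2; on Python 3 a rough equivalent of the embedding step, assuming the same file path, is the base64 module:
import base64
from IPython.display import HTML
video = open("Tutorial7/Brownian.mp4", "rb").read()
video_encoded = base64.b64encode(video).decode("ascii")   # bytes -> base64 text
HTML('<video controls src="data:video/mp4;base64,{0}">'.format(video_encoded))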
4,511 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WEB_API Scrapper
Step1: I would like to get the Best Seller list for the Month of October 2015. First I signed up to the New York Times API, and afterwards received a key in seconds.
Step2: Above I've used the urllib module and its urlopen method to request the data using the NY Times Books API. What is returned is a json file, that must be loaded into python using the json module and load method.
Step3: After viweing the information in the data variable which gives access to the json file. I've decided to create a dictionary to save the information in as we loop through the data json life.
Step4: After parsing through the data i chose to collect the book ranks, title, authors, and both 10 digit and 13 digit ISBN along with a description of the books.
Step5: What i am left with is a DataFrame from the clean_data dictionary as the data and manual titled columns, indexed by the Ranking of the books. | Python Code:
import urllib2
import json
import pandas as pd
Explanation: WEB_API Scrapper
End of explanation
url = urllib2.urlopen('http://api.nytimes.com/svc/books/v3/lists/2015-10-01/hardcover-fiction.json?callback=books&sort-by=rank&sort-order=DESC&api-key=efb1f6ff386ce33c0b913d44bce40fd8%3A10%3A73015082')
Explanation: I would like to get the Best Seller list for the Month of October 2015. First I signed up to the New York Times API, and afterwards received a key in seconds.
End of explanation
data = json.load(url)
Explanation: Above I've used the urllib module and its urlopen method to request the data using the NY Times Books API. What is returned is a json file, that must be loaded into python using the json module and load method.
End of explanation
clean_data = {}
for item in data['results']['books']:
clean_data[item['rank']] = [item['title'], item['author'], item['primary_isbn10'], item['primary_isbn13'], item['description']]
Explanation: After viweing the information in the data variable which gives access to the json file. I've decided to create a dictionary to save the information in as we loop through the data json life.
End of explanation
clean_data
best_seller = pd.DataFrame(clean_data.values(), columns = ['Title', 'Author', 'ISBN:10', 'ISBN:13', 'Description'],
index = clean_data.keys())
Explanation: After parsing through the data i chose to collect the book ranks, title, authors, and both 10 digit and 13 digit ISBN along with a description of the books.
End of explanation
best_seller
Explanation: What i am left with is a DataFrame from the clean_data dictionary as the data and manual titled columns, indexed by the Ranking of the books.
End of explanation |
4,512 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h2 align="center">点击下列图标在线运行HanLP</h2>
<div align="center">
<a href="https
Step1: 加载模型
HanLP的工作流程是先加载模型,模型的标示符存储在hanlp.pretrained这个包中,按照NLP任务归类。
Step2: 调用hanlp.load进行加载,模型会自动下载到本地缓存:
Step3: 语义角色分析
为已分词的句子执行语义角色分析:
Step4: 语义角色标注结果中每个四元组的格式为[论元或谓词, 语义角色标签, 起始下标, 终止下标]。其中,谓词的语义角色标签为PRED,起止下标对应单词数组。
遍历谓词论元结构: | Python Code:
!pip install hanlp -U
Explanation: <h2 align="center">Click the icons below to run HanLP online</h2>
<div align="center">
<a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/srl_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fsrl_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a>
</div>
Installation
Whether it is Windows, Linux or macOS, installing HanLP takes just one line:
End of explanation
import hanlp
hanlp.pretrained.srl.ALL # the language is indicated by the last field of the identifier or by the corresponding corpus
Explanation: Loading the model
HanLP's workflow is to first load a model; model identifiers are stored in the hanlp.pretrained package, organized by NLP task.
End of explanation
srl = hanlp.load('CPB3_SRL_ELECTRA_SMALL')
Explanation: Call hanlp.load to load it; the model will be downloaded to the local cache automatically:
End of explanation
srl(['2021年', 'HanLPv2.1', '为', '生产', '环境', '带来', '次', '世代', '最', '先进', '的', '多', '语种', 'NLP', '技术', '。'])
Explanation: Semantic role labeling
Run semantic role labeling on an already tokenized sentence:
End of explanation
for i, pas in enumerate(srl(['2021年', 'HanLPv2.1', '为', '生产', '环境', '带来', '次', '世代', '最', '先进', '的', '多', '语种', 'NLP', '技术', '。'])):
print(f'第{i+1}个谓词论元结构:')
for form, role, begin, end in pas:
print(f'{form} = {role} at [{begin}, {end}]')
Explanation: In the SRL result, each 4-tuple has the form [argument or predicate, semantic role label, start index, end index]. The predicate itself is labeled PRED, and the start/end indices refer to positions in the token array.
Iterate over the predicate-argument structures:
End of explanation |
4,513 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>This notebook analyses senders, repliers and interactions.</b>
What it does
Step1: Let's compute and plot the top senders
Step2: Let's compute and plot the top repliers
Step3: Let's compute and plot the top-dyads | Python Code:
%matplotlib inline
import bigbang.mailman as mailman
from bigbang.archive import load as load_archive
import bigbang.graph as graph
import bigbang.process as process
from bigbang.parse import get_date
from bigbang.archive import Archive
import bigbang.twopeople as twoppl
reload(process)
import pandas as pd
import datetime
import matplotlib.pyplot as plt
import numpy as np
import math
import pytz
import pickle
import os
pd.options.display.mpl_style = 'default' # pandas has a set of preferred graph formatting options
#insert one or more urls of the mailing lists you want to include in the analysis
#(if more mailing lists are included, the data are aggregated and treated as a single object of analysis)
urls = ["http://mm.icann.org/pipermail/cc-humanrights/",
"http://mm.icann.org/pipermail/wp4/",
"http://mm.icann.org/pipermail/ge/"]
try:
arch_paths =[]
for url in urls:
arch_paths.append('../archives/'+url[:-1].replace('://','_/')+'.csv')
archives = [load_archive(arch_path).data for arch_path in arch_paths]
except:
arch_paths =[]
for url in urls:
arch_paths.append('../archives/'+url[:-1].replace('//','/')+'.csv')
archives = [load_archive(arch_path).data for arch_path in arch_paths]
archives = pd.concat(archives)
Explanation: <b>This notebook analyses senders, repliers and interactions.</b>
What it does:
-it computes and plots the top-senders (= people sending mails), top-repliers (= people replying to mails), top-dyads (= interaction between repliers and receivers)
Parameters to set options:
-set how many top senders / repliers / dyads to print and plot, by setting the variables 'n_top_senders', 'n_top_repliers', 'n_top_dyads'
End of explanation
#compute and plot top senders (people sending out emails)
#set the number of top senders to be displayed
n_top_senders = 5
activity = Archive.get_activity(Archive(archives))
tot_activity = activity.sum(0)
tot_activity.sort()
print tot_activity[-n_top_senders:]
tot_activity[-n_top_senders:].plot(kind = 'barh', width = 1)
#compute replies list (sender+replier)
arc_data = Archive(archives).data
from_users = arc_data[['From']]
to_users = arc_data[arc_data['In-Reply-To'] > 0][['From','Date','In-Reply-To']]
replies = pd.merge(from_users, to_users, how='inner',
right_on='In-Reply-To',left_index=True,
suffixes=['_original','_response'])
Explanation: Let's compute and plot the top senders
End of explanation
#compute and plot top repliers (people responding to mails)
#set the number of top repliers to be displayed
n_top_repliers = 10
from collections import defaultdict
repliers_count = defaultdict(int)
for reply in replies['From_response']:
repliers_count[reply] += 1
repliers_count = sorted(repliers_count.iteritems(), key = lambda (k,v):(v,k))
for replier_count in repliers_count[-n_top_repliers:]:
print replier_count[0]+' '+str(replier_count[1])
repliers_count = pd.DataFrame.from_records(repliers_count, index = 0)
repliers_count[-n_top_repliers:].plot(kind = 'barh', width = 1)
Explanation: Let's compute and plot the top repliers
End of explanation
#compute and plot top dyads (pairs of replier-receiver)
#select the number of top dyads to be desplayed
n_top_dyads = 10
dyads = twoppl.panda_allpairs(replies, twoppl.unique_pairs(replies))
dyads = dyads.sort("num_replies", ascending = False)
print dyads[:n_top_dyads]["A"]+' '+dyads[:n_top_dyads]["B"]+' '+str(dyads[:n_top_dyads]["num_replies"])
dyads['dyad'] = dyads['A']+dyads['B']
dyads[:n_top_dyads].plot(kind = 'barh', width = 1, x = 'dyad', y = 'num_replies')
Explanation: Let's compute and plot the top-dyads
End of explanation |
4,514 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create Matrix
Step2: Invert Matrix | Python Code:
# Load library
import numpy as np
Explanation: Title: Invert A Matrix
Slug: invert_a_matrix
Summary: How to invert a matrix in Python.
Date: 2017-09-03 12:00
Category: Machine Learning
Tags: Vectors Matrices Arrays
Authors: Chris Albon
Preliminaries
End of explanation
# Create matrix
matrix = np.array([[1, 4],
[2, 5]])
Explanation: Create Matrix
End of explanation
# Calculate inverse of matrix
np.linalg.inv(matrix)
Explanation: Invert Matrix
End of explanation |
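As a quick sanity check, multiplying the matrix by its inverse should give, up to floating-point error, the identity matrix:
# Verify the inversion: matrix times its inverse is numerically the identity
np.allclose(matrix.dot(np.linalg.inv(matrix)), np.eye(2))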
4,515 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Group Galaxy Catalog for the DR5 Gallery
The purpose of this notebook is to build a group catalog (using a simple friends-of-friends algorithm) from a diameter-limited (D25>5 arcsec) parent sample of galaxies defined and documented as part of the Legacy Survey Large Galaxy Atlas.
Preliminaries
Import the libraries we need, define the I/O path, and specify the desired linking length (in arcminutes) and the minimum D(25) of the galaxy sample.
Step1: Read the parent HyperLeda catalog.
We immediately throw out objects with objtype='g' in Hyperleda, which are "probably extended" and many (most? all?) have incorrect D(25) diameters. We also toss out objects with D(25)>2.5 arcmin and B>16, which are also probably incorrect.
Step2: Run FoF with spheregroup
Identify groups using a simple angular linking length. Then construct a catalog of group properties.
Step3: Populate the output group catalog
Also add GROUPID to parent catalog to make it easier to cross-reference the two tables. D25MAX and D25MIN are the maximum and minimum D(25) diameters of the galaxies in the group.
Step4: Groups with one member--
Step5: Groups with more than one member-- | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.table import Table, Column
from astropy.coordinates import SkyCoord
import fitsio
from pydl.pydlutils.spheregroup import spheregroup
%matplotlib inline
LSLGAdir = os.getenv('LSLGA_DIR')
mindiameter = 0.25 # [arcmin]
linking_length = 2.5 # [arcmin]
Explanation: Group Galaxy Catalog for the DR5 Gallery
The purpose of this notebook is to build a group catalog (using a simple friends-of-friends algorithm) from a diameter-limited (D25>5 arcsec) parent sample of galaxies defined and documented as part of the Legacy Survey Large Galaxy Atlas.
Preliminaries
Import the libraries we need, define the I/O path, and specify the desired linking length (in arcminutes) and the minimum D(25) of the galaxy sample.
End of explanation
suffix = '0.05'
ledafile = os.path.join(LSLGAdir, 'sample', 'leda-logd25-{}.fits'.format(suffix))
leda = Table.read(ledafile)
keep = (np.char.strip(leda['OBJTYPE']) != 'g') * (leda['D25'] / 60 > mindiameter)
leda = leda[keep]
keep = ['SDSS' not in gg and '2MAS' not in gg for gg in leda['GALAXY']]
#keep = np.logical_and( (np.char.strip(leda['OBJTYPE']) != 'g'), ~((leda['D25'] / 60 > 2.5) * (leda['BMAG'] > 16)) )
leda = leda[keep]
leda
fig, ax = plt.subplots()
ax.scatter(leda['RA'], leda['DEC'], s=1, alpha=0.5)
fig, ax = plt.subplots()
ax.hexbin(leda['BMAG'], leda['D25'] / 60, extent=(5, 20, 0, 20),
mincnt=1)
ax.set_xlabel('B mag')
ax.set_ylabel('D(25) (arcmin)')
if False:
these = (leda['RA'] > 200) * (leda['RA'] < 210) * (leda['DEC'] > 5) * (leda['DEC'] < 10.0)
leda = leda[these]
print(np.sum(these))
Explanation: Read the parent HyperLeda catalog.
We immediately throw out objects with objtype='g' in Hyperleda, which are "probably extended" and many (most? all?) have incorrect D(25) diameters. We also toss out objects with D(25)>2.5 arcmin and B>16, which are also probably incorrect.
End of explanation
%time grp, mult, frst, nxt = spheregroup(leda['RA'], leda['DEC'], linking_length / 60.0)
npergrp, _ = np.histogram(grp, bins=len(grp), range=(0, len(grp)))
nbiggrp = np.sum(npergrp > 1).astype('int')
nsmallgrp = np.sum(npergrp == 1).astype('int')
ngrp = nbiggrp + nsmallgrp
print('Found {} total groups, including:'.format(ngrp))
print(' {} groups with 1 member'.format(nsmallgrp))
print(' {} groups with 2-5 members'.format(np.sum( (npergrp > 1)*(npergrp <= 5) ).astype('int')))
print(' {} groups with 5-10 members'.format(np.sum( (npergrp > 5)*(npergrp <= 10) ).astype('int')))
print(' {} groups with >10 members'.format(np.sum( (npergrp > 10) ).astype('int')))
Explanation: Run FoF with spheregroup
Identify groups using a simple angular linking length. Then construct a catalog of group properties.
End of explanation
groupcat = Table()
groupcat.add_column(Column(name='GROUPID', dtype='i4', length=ngrp, data=np.arange(ngrp))) # unique ID number
groupcat.add_column(Column(name='GALAXY', dtype='S1000', length=ngrp))
groupcat.add_column(Column(name='NMEMBERS', dtype='i4', length=ngrp))
groupcat.add_column(Column(name='RA', dtype='f8', length=ngrp)) # average RA
groupcat.add_column(Column(name='DEC', dtype='f8', length=ngrp)) # average Dec
groupcat.add_column(Column(name='DIAMETER', dtype='f4', length=ngrp))
groupcat.add_column(Column(name='D25MAX', dtype='f4', length=ngrp))
groupcat.add_column(Column(name='D25MIN', dtype='f4', length=ngrp))
leda_groupid = leda.copy()
leda_groupid.add_column(Column(name='GROUPID', dtype='i4', length=len(leda)))
leda_groupid
Explanation: Populate the output group catalog
Also add GROUPID to parent catalog to make it easier to cross-reference the two tables. D25MAX and D25MIN are the maximum and minimum D(25) diameters of the galaxies in the group.
End of explanation
smallindx = np.arange(nsmallgrp)
ledaindx = np.where(npergrp == 1)[0]
groupcat['RA'][smallindx] = leda['RA'][ledaindx]
groupcat['DEC'][smallindx] = leda['DEC'][ledaindx]
groupcat['NMEMBERS'][smallindx] = 1
groupcat['GALAXY'][smallindx] = np.char.strip(leda['GALAXY'][ledaindx])
groupcat['DIAMETER'][smallindx] = leda['D25'][ledaindx] # [arcsec]
groupcat['D25MAX'][smallindx] = leda['D25'][ledaindx] # [arcsec]
groupcat['D25MIN'][smallindx] = leda['D25'][ledaindx] # [arcsec]
leda_groupid['GROUPID'][ledaindx] = groupcat['GROUPID'][smallindx]
Explanation: Groups with one member--
End of explanation
bigindx = np.arange(nbiggrp) + nsmallgrp
coord = SkyCoord(ra=leda['RA']*u.degree, dec=leda['DEC']*u.degree)
def biggroups():
for grpindx, indx in zip(bigindx, np.where(npergrp > 1)[0]):
ledaindx = np.where(grp == indx)[0]
_ra, _dec = np.mean(leda['RA'][ledaindx]), np.mean(leda['DEC'][ledaindx])
d25min, d25max = np.min(leda['D25'][ledaindx]), np.max(leda['D25'][ledaindx])
groupcat['RA'][grpindx] = _ra
groupcat['DEC'][grpindx] = _dec
groupcat['D25MAX'][grpindx] = d25max
groupcat['D25MIN'][grpindx] = d25min
groupcat['NMEMBERS'][grpindx] = len(ledaindx)
groupcat['GALAXY'][grpindx] = ','.join(np.char.strip(leda['GALAXY'][ledaindx]))
leda_groupid['GROUPID'][ledaindx] = groupcat['GROUPID'][grpindx]
# Get the distance of each object from the group center.
cc = SkyCoord(ra=_ra*u.degree, dec=_dec*u.degree)
diameter = 2 * coord[ledaindx].separation(cc).arcsec.max()
groupcat['DIAMETER'][grpindx] = np.max( (diameter*1.02, d25max) )
%time biggroups()
leda_groupid
groupcat
ww = np.where(groupcat['NMEMBERS'] >= 2)[0]
fig, ax = plt.subplots()
ax.scatter(groupcat['RA'][ww], groupcat['DEC'][ww], s=1, alpha=0.5)
groupfile = os.path.join(LSLGAdir, 'sample', 'leda-logd25-{}-groupcat.fits'.format(suffix))
print('Writing {}'.format(groupfile))
groupcat.write(groupfile, overwrite=True)
ledafile_groupid = os.path.join(LSLGAdir, 'sample', 'leda-logd25-{}-groupid.fits'.format(suffix))
print('Writing {}'.format(ledafile_groupid))
leda_groupid.write(ledafile_groupid, overwrite=True)
Explanation: Groups with more than one member--
End of explanation |
4,516 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Receiver Operating Characteristics (ROC)
Step1: 1. Binary classification
Step2: Accuracy, precision, recall
Step3: Note
Step4: The problem here is that we're not passing the correct second argument.
y_score has to be
Step5: For each record in the training dataset predict_proba outputs a probabilty per class. In our example this takes the form of
Step6: Make sure that all rows sum to 1.
Step7: Quiz
Step8: Passing the wrong column (negative class) result in 1-AUC.
Step9: Does this work with decision trees?
Step10: Accuracy and AUC score of 1???
Quiz
Step11: Let's try the following models
Step12: No AUC for SVC?
Suffice it to say right now that SVC by default does not generate probabilities. We'll come back to this later. The point is that not all classifiers output probabilities and therefore we can't always calculate AUC.
ROC curve
Let's say that our "cancer" classifier is predicting the following probabilities for patients A, B, C, D, and E
Step13: roc_curve generates the coordinates of the ROC curve (fpr and tpr) that are needed for plotting it. It also generates the
Step14: Note that the thresholds are not equi-distance. This has to do with removing redundant coordinates.
Step15: The actual plot
Now we have everything we need.
Quiz | Python Code:
%matplotlib inline
from IPython.display import Image
import numpy as np
import matplotlib.pyplot as plt
# some classification metrics
# more here:
# http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
from sklearn.metrics import (auc, roc_curve, roc_auc_score,
accuracy_score, precision_score,
recall_score, f1_score, )
from sklearn.cross_validation import train_test_split
# a few classifiers
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
Explanation: Receiver Operating Characteristics (ROC)
End of explanation
# breast cancer dataset, a binary classification task
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(cancer.data,
cancer.target,
test_size=0.4,
random_state=0)
x_train.shape
# only two class, 0 and 1
set(y_train)
clf = LogisticRegression()
clf.fit(x_train, y_train)
Explanation: 1. Binary classification
End of explanation
# score by default calculate the accuracy for a classifier
print '%30s: %s' % ('Default score (accuracy)', clf.score(x_train, y_train))
# predict outputs the predicted class label given the dataset features
# this might seem odd given that the model has already seen all of x_train
# however most of the time models do not fit the training data perfectly
# therefore you didn't see 100% accuracy above
# as you increase the model complexity (e.g. random forest, neural network, etc.)
# there's a higher likelihood that your model will fit the training data perfectly
# but as you'll learn this is most likely not a good thing (i.e. overfit)
predicted_labels = clf.predict(x_train)
predicted_labels[:5]
# notice that this is the same as what we computed earlier
print '%30s: %s' % ('Accuracy', accuracy_score(y_train, predicted_labels))
# precision is calculated as the ratio of true positives
# over the sum of true positives and false positives
# we'll come back to this later
print '%30s: %s' % ('Precision', precision_score(y_train, predicted_labels))
# recall or sensitivity is the ratio of true positives
# over the sum of true positives and false negatives
# we'll come back to this later
print '%30s: %s' % ('Recall', recall_score(y_train, predicted_labels))
Explanation: Accuracy, precision, recall
End of explanation
print '%30s: %s' % ('AUC (not correct)', roc_auc_score(y_train, predicted_labels))
Explanation: Note: These are considered VERY good quality metrics in most cases which should raise suspicion. In this case the problem is that we're training and calculating scores on the same dataset.
Quiz: Why is training and testing on the same dataset a bad idea?
AUC score
Looking at the roc_auc_score signature from the sklearn <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.roc_auc_score.html">documentation</a>:
roc_auc_score(y_true, y_score, average='macro')
One might be tempted to try the following:
End of explanation
predicted_probabilities = clf.predict_proba(x_train)
predicted_probabilities[:5]
Explanation: The problem here is that we're not passing the correct second argument.
y_score has to be:
Target scores, can either be probability estimates of the positive class, confidence values, or non-thresholded measure of decisions (as returned by “decision_function” on some classifiers).
Let's calculate the probability estimates. With most classifiers you can get this using predict_proba.
End of explanation
predicted_probabilities.shape
Explanation: For each record in the training dataset predict_proba outputs a probability per class. In our example this takes the form of:
[probability of class 1, probability of class 2]
These two probabilities sum to one. For example the first row (4.34552482e-03 + 9.95654475e-01 ~=1).
End of explanation
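It may help to see what the axis argument does on a tiny throwaway array: axis=1 aggregates across the columns of each row, axis=0 aggregates down each column.
demo = np.array([[0.2, 0.8],
                 [0.9, 0.1]])
print(demo.sum(axis=1))   # one value per row -> [ 1.  1.]
print(demo.sum(axis=0))   # one value per column -> [ 1.1  0.9]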
assert all(predicted_probabilities.sum(axis=1) == 1), "At least one row is not summing to one"
Explanation: Make sure that all rows sum to 1.
End of explanation
print '%30s: %s' % ('AUC', roc_auc_score(y_train, predicted_probabilities[:, 1]))
Explanation: Quiz: what is assert? what is all? what is axis=1 doing? why is there no output?
Let's calculate the AUC score by passing the correct metric. We have to pass the probability for the positive class which corresponds to the 2nd column.
End of explanation
print '%30s: %s' % ('1 - AUC', roc_auc_score(y_train, predicted_probabilities[:, 0]))
Explanation: Passing the wrong column (negative class) results in 1-AUC.
End of explanation
clf = DecisionTreeClassifier()
clf.fit(x_train, y_train)
predicted_labels = clf.predict(x_train)
print '%30s: %s' % ('Accuracy', accuracy_score(y_train, predicted_labels))
predicted_probabilities = clf.predict_proba(x_train)
print '%30s: %s' % ('AUC', roc_auc_score(y_train, predicted_probabilities[:, 1]))
Explanation: Does this work with decision trees?
End of explanation
def classifier_metrics(model):
clf = model()
clf.fit(x_train, y_train)
print '%30s: %s' % ('Default score (accuracy)', clf.score(x_train, y_train))
predicted_labels = clf.predict(x_train)
print '%30s: %s' % ('Accuracy', accuracy_score(y_train, predicted_labels))
    print '%30s: %s' % ('Precision', precision_score(y_train, predicted_labels))
    print '%30s: %s' % ('Recall', recall_score(y_train, predicted_labels))
print '%30s: %s' % ('F1', f1_score(y_train, predicted_labels))
try:
predicted_probabilities = clf.predict_proba(x_train)
print '%30s: %s' % ('AUC', roc_auc_score(y_train, predicted_probabilities[:, 1]))
except:
print '*** predict_proba failed for %s' % model.__name__
Explanation: Accuracy and AUC score of 1???
Quiz: why are we getting perfect accuracy and AUC?
In fact we can calcualte all these metrics for any classifer (except for ROC/AUC). So let's refactor the code and make it more generic.
End of explanation
for model in [LogisticRegression, DecisionTreeClassifier, RandomForestClassifier, SVC]:
print 'Metrics for %s' % model.__name__
print '=' * 50
classifier_metrics(model)
print '\n'
Explanation: Let's try the following models:
LogisticRegression
DecisionTreeClassifier
RandomForestClassifier
SVC (Support Vector Classification)
End of explanation
# same as before
clf = LogisticRegression()
clf.fit(x_train, y_train)
predicted_probabilities = clf.predict_proba(x_train)
roc_auc = roc_auc_score(y_train, predicted_probabilities[:, 1])
Explanation: No AUC for SVC?
Suffice it to say right now that SVC by default does not generate probabilities. We'll come back to this later. The point is that not all classifiers output probabilities and therefore we can't always calculate AUC.
ROC curve
Let's say that our "cancer" classifier is predicting the following probabilities for patients A, B, C, D, and E:
- A, 50% chance of having cancer
- B, 99% chance of having cancer
- C, 80% chance of having cancer
- D, 40% chance of having cancer
- E, 2% chance of having cancer
Which patients should we call in for further screening?
Before we proceed here's some terminology:
- The positive class (since we want to predict it) is "having cancer".
- The negative class is not having cancer.
- A false positive means predicting someone has cancer who does not.
- A false negative means predicting someone doesn't have cancer when they actually do.
Here are a few different ways to go about it:
- Let's call in everyone, even 2% means there's a chance and we don't want to risk missing someone. If we decide to proceed this way we're going to have zero false negatives, but probably a lot of false positives. Do we really want to put everyone through all the screening and incur the cost?
- Let's call in patients with probability > 95%. This way we're going to have very little false positives, assuming the model probabilities map to reality (they're calibrated). But we're going to miss many actual patients. So low false positives at the expense of high false negatives.
- Let's call people with probability > 50%. Since this is really a trade-off a simple answer is to pick the most intuitive probability (50%).
This is the essence of the ROC curve. The number you pick as the threshold gives you one point on the ROC curve. Plotting the ROC curve involves changing the threshold from 1 to 0 in small increments and plotting the corresponding points (more details to follow).
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/ROC_space-2.png/640px-ROC_space-2.png">
Plotting the ROC curve
End of explanation
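To make the threshold trade-off concrete, here is a small hand-worked sketch for the five patients above, using made-up true labels (purely hypothetical, just for illustration):
probs = [0.50, 0.99, 0.80, 0.40, 0.02]   # A, B, C, D, E
truth = [1, 1, 0, 1, 0]                  # hypothetical ground truth, 1 = has cancer
threshold = 0.5
calls = [int(p >= threshold) for p in probs]
tp = sum(1 for c, t in zip(calls, truth) if c == 1 and t == 1)
fp = sum(1 for c, t in zip(calls, truth) if c == 1 and t == 0)
fn = sum(1 for c, t in zip(calls, truth) if c == 0 and t == 1)
tn = sum(1 for c, t in zip(calls, truth) if c == 0 and t == 0)
# one (FPR, TPR) point of the ROC curve; moving the threshold traces out the rest
print('TPR = %.2f, FPR = %.2f' % (float(tp) / (tp + fn), float(fp) / (fp + tn)))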
fpr, tpr, thresholds = roc_curve(y_train,
predicted_probabilities[:, 1])
Explanation: roc_curve generates the coordinates of the ROC curve (fpr and tpr) that are needed for plotting it. It also generates the thresholds at which those (fpr, tpr) points were computed.
End of explanation
# distance between consecutive thresholds
# x-axis is diff index, y axis is difference
plt.plot(np.abs(np.diff(thresholds)));
Explanation: Note that the thresholds are not equidistant. This has to do with removing redundant coordinates.
End of explanation
plt.title('Receiver Operating Characteristic (ROC)')
plt.plot(fpr, tpr, 'b',
label='AUC = %0.2f'% roc_auc)
plt.plot([0,1],[0,1],'k--')
plt.xlim([-0.1,1.2])
plt.ylim([-0.1,1.2])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.legend(loc='lower right')
plt.show()
Explanation: The actual plot
Now we have everything we need.
Quiz: why is thresholds not used for plotting?
End of explanation |
4,517 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TD 1
Step1: Partie 1
Un langage de programmation permet de décrire avec précision des opérations très simples sur des données. Comme tout langage, il a une grammaire et des mot-clés. La complexité d'un programme vient de ce qu'il faut beaucoup d'opérations simples pour arriver à ses fins. Voyons cela quelques usages simples. Il vous suffit d'exécuter chaque petit extrait en appuyant sur le triangle pointant vers la droite ci-dessus. N'hésitez pas à modifier les extraits pour mieux comprendre ce que le programme fait.
La calculatrice
Step2: On programme sert souvent à automatiser un calcul comme le calcul mensuel du taux de chômage, le taux d'inflation, le temps qu'il fera demain... Pour pouvoir répéter ce même calcul sur des valeurs différentes, il faut pouvoir décrire ce calcul sans savoir ce que sont ces valeurs. Un moyen simple est de les nommer
Step3: Lorsqu'on programme, on passe son temps à écrire des calculs à partir de variables pour les stocker dans d'autres variables voire dans les mêmes variables. Lorsqu'on écrit y=x+5, cela veut dire qu'on doit ajouter 5 à x et qu'on stocke le résultat dans y. Lorsqu'on écrit x += 5, cela veut dire qu'on doit ajouter 5 à x et qu'on n'a plus besoin de la valeur que x contenait avant l'opération.
La répétition ou les boucles
Step4: Le mot-clé print n'a pas d'incidence sur le programme. En revanche, il permet d'afficher l'état d'une variable au moment où on exécute l'instruction print.
L'aiguillage ou les tests
Step5: Les chaînes de caractères
Step6: Toute valeur a un type et cela détermine les opérations qu'on peut faire dessus. 2 + 2 fait 4 pour tout le monde. 2 + "2" fait quatre pour un humain, mais est incompréhensible pour l'ordinateur car on ajoute deux choses différentes (torchon + serviette).
Step7: Partie 2
Dans cette seconde série, partie, il s'agit d'interpréter pourquoi un programme ne fait pas ce qu'il est censé faire ou pourquoi il provoque une erreur, et si possible, de corriger cette erreur.
Un oubli
Step8: Une erreur de syntaxe
Step9: Une autre erreur de syntaxe
Step10: Une opération interdite
Step11: Un nombre impair de...
Step12: Partie 3
Il faut maintenant écrire trois programmes qui
Step13: Tutor Magic
Cet outil permet de visualiser le déroulement des programmes (pas trop grand, site original pythontutor.com). | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: TD 1 : Premiers pas en Python
End of explanation
x = 5
y = 10
z = x + y
print (z) # affiche z
Explanation: Partie 1
Un langage de programmation permet de décrire avec précision des opérations très simples sur des données. Comme tout langage, il a une grammaire et des mot-clés. La complexité d'un programme vient de ce qu'il faut beaucoup d'opérations simples pour arriver à ses fins. Voyons cela quelques usages simples. Il vous suffit d'exécuter chaque petit extrait en appuyant sur le triangle pointant vers la droite ci-dessus. N'hésitez pas à modifier les extraits pour mieux comprendre ce que le programme fait.
La calculatrice
End of explanation
x = 2
y = x + 1
print (y)
x += 5
print (x)
Explanation: On programme sert souvent à automatiser un calcul comme le calcul mensuel du taux de chômage, le taux d'inflation, le temps qu'il fera demain... Pour pouvoir répéter ce même calcul sur des valeurs différentes, il faut pouvoir décrire ce calcul sans savoir ce que sont ces valeurs. Un moyen simple est de les nommer : on utilise des variables. Une variable désigne des données. x=5 signifie que la variable xcontient 5. x+3 signifie qu'on ajoute 3 à x sans avoir besoin de savoir ce que x désigne.
L'addition, l'incrémentation
End of explanation
a = 0
for i in range (0, 10) :
a = a + i # répète dix fois cette ligne
print (a)
Explanation: Lorsqu'on programme, on passe son temps à écrire des calculs à partir de variables pour les stocker dans d'autres variables voire dans les mêmes variables. Lorsqu'on écrit y=x+5, cela veut dire qu'on doit ajouter 5 à x et qu'on stocke le résultat dans y. Lorsqu'on écrit x += 5, cela veut dire qu'on doit ajouter 5 à x et qu'on n'a plus besoin de la valeur que x contenait avant l'opération.
La répétition ou les boucles
End of explanation
a = 10
if a > 0 :
print(a) # un seul des deux blocs est pris en considération
else :
a -= 1
print (a)
Explanation: Le mot-clé print n'a pas d'incidence sur le programme. En revanche, il permet d'afficher l'état d'une variable au moment où on exécute l'instruction print.
L'aiguillage ou les tests
End of explanation
a = 10
print (a) # quelle est la différence
print ("a") # entre les deux lignes
s = "texte"
s += "c"
print (s)
Explanation: Les chaînes de caractères
End of explanation
print("2" + "3")
print(2+3)
Explanation: Toute valeur a un type et cela détermine les opérations qu'on peut faire dessus. 2 + 2 fait 4 pour tout le monde. 2 + "2" fait quatre pour un humain, mais est incompréhensible pour l'ordinateur car on ajoute deux choses différentes (torchon + serviette).
End of explanation
a = 5
a + 4
print (a) # ou voudrait voir 9 mais c'est 5 qui apparaît
Explanation: Partie 2
Dans cette seconde série, partie, il s'agit d'interpréter pourquoi un programme ne fait pas ce qu'il est censé faire ou pourquoi il provoque une erreur, et si possible, de corriger cette erreur.
Un oubli
End of explanation
a = 0
for i in range (0, 10)
a = a + i
print (a)
Explanation: Une erreur de syntaxe
End of explanation
a = 0
for i in range (0, 10):
a = a + i
print (a)
Explanation: Une autre erreur de syntaxe
End of explanation
a = 0
s = "e"
print (a + s)
Explanation: Une opération interdite
End of explanation
a = 0
for i in range (0, 10) :
a = (a + (i+2)*3
print (a)
Explanation: Un nombre impair de...
End of explanation
14%2, 233%2
Explanation: Part 3
You now have to write three programs:
Write a program that computes the sum of the squares of the first 10 integers.
Write a program that computes the sum of the squares of the first 5 odd integers.
Write a program that computes the sum of the first 10 factorials: $\sum_{i=1}^{10} i!$.
About parity:
End of explanation
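To give an idea of the expected shape of such a program, here is one possible solution sketch for the first exercise:
# sum of the squares of the first 10 integers
total = 0
for i in range (1, 11) :
    total += i ** 2
print (total)   # 385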
%load_ext tutormagic
%%tutor --lang python3
a = 0
for i in range (0, 10):
a = a + i
Explanation: Tutor Magic
This tool makes it possible to visualize the step-by-step execution of (not too large) programs (original site: pythontutor.com).
End of explanation |
4,518 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Как помочь нам разобрать протокол?
Вот пример как это делается в простом случае.
Есть устройство на чипе HS1527 и к нему даже нашелся datasheet
Step1: Посмотрим на гистограммы длин импульсов и пауз
Step2: Немножко удобной автоматики для группирования сигналов по длинам
Step3: Получили группы сигналов
Step4: Заменим каждый сигнал буквой, обозначающей его группу. Так зачастую удобнее
Step5: Это уже почти человеко-читаемо. Сейчас смотрим в даташит и видим, что 'Ac' - импульс и длинная пауза - это преамбула. А 'Ab' и 'Ba' задают 0 и 1.
Step6: Здесь уже хорошо видно периодичные 'P'-шки. Поделим всю строку на пакеты.
Step7: Видим, что подряд идет помногу одинаковых пакетов. В снятых мною дампах это сигналы от датчика движения при двух разных конфигурациях джамперов | Python Code:
# takes filename
# open file, read binary data
# returns numpy.array of impulses (positive integer)
# and pauses (negative integer)
def file_to_data(filename):
pic = open(filename, "rb")
data = []
while True:
buf = pic.read(4)
if not buf or len(buf) != 4:
break
sign = (1 if buf[3] == 1 else -1)
#print(len(buf))
buf = bytes(buf[:3] + bytes([0]))
#print(len(buf))
data.append(sign * struct.unpack('i', buf)[0])
return np.array(data)
# takes files' mask
# returns numpy.array of data
def files_to_data(mask):
# откуда брать дампы
filenames = glob.glob(mask)
print("%d files found" % len(filenames))
datas = []
# посмотрим файлики с дампами, преобразуем в импульсы
for name in filenames:
datas.append(file_to_data(name))
return np.concatenate(datas)
# читаем информацию
data = files_to_data("./*.rcf")
Explanation: Как помочь нам разобрать протокол?
Вот пример как это делается в простом случае.
Есть устройство на чипе HS1527 и к нему даже нашелся datasheet: http://sc-tech.cn/en/hs1527.pdf
Есть Wiren Board c rfsniffer. Первое что мы делаем, записываем дамп.
{some path}/wb-homa-rfsniffer -W
Когда запись произойдет, останавливаем программу и начинаем изучать данные, сохраненные в .rcf файле.
End of explanation
# show histogramm of lengthes
# ignore lengthes that is greater than max_len
# (0.9-fractile) * 1.3 by default
def show_hist(data, title, threshold=0.02, max_len=None):
if max_len == None:
k = int(len(data) * (1 - threshold))
data = np.partition(data, k)
max_len = int(data[k] * 1.3)
data = data[data <= max_len]
hist, bins = np.histogram(data)
#width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
#plt.bar(center, hist, align='center', width=width)
plt.hist(data, bins = 100)
#plt.yscale("log")
plt.title(title)
plt.xlim([0, max_len])
plt.show()
show_hist(data[data > 0], "Pulses", threshold=0.1)
show_hist(-data[data < 0], "Pauses", threshold=0.02) # обращаем внимание на непримечательный пик справа
Explanation: Посмотрим на гистограммы длин импульсов и пауз
End of explanation
# data - list of signals
# threshold - length of minimal distance between clusters
# output format list of such structures
# (letter, lower, upper, count) giving information about group
def clusterize_signals(data, threshold = 100, threshold_count = 10):
groups = []
for signals in (data[data > 0], data[data < 0]):
signals_color = hac.fclusterdata(X = np.matrix([signals]).T,
criterion='distance', t = threshold)
for i in range(1, 10000):
group = signals[signals_color == i]
if (len(group) == 0):
break
#print(group)
bounds = (abs(int(group.mean())), group.min(), group.max(), len(group))
if len(group) > threshold_count:
groups.append(bounds)
groups = sorted(groups)
cur_impulse_code = ord('A')
cur_pause_code = ord('a')
for i in range(len(groups)):
mean, lower, upper, count = groups[i]
code = 0
if (lower > 0):
code = cur_impulse_code
cur_impulse_code += 1
groups[i] = (chr(code),
max(1, int(lower - threshold / 3)),
int(upper + threshold / 3),
count)
else:
code = cur_pause_code
cur_pause_code += 1
groups[i] = (chr(code),
int(lower - threshold / 3),
min(int(upper + threshold / 3), -1),
count)
return groups
# делаем группы
groups = clusterize_signals(data, threshold = 100, threshold_count = 15)
print("(letter, lower, upper, count)")
print("\n".join(map(str, groups)))
Explanation: Немножко удобной автоматики для группирования сигналов по длинам
End of explanation
groups = list(filter(lambda x: x[0] in {'A', 'B', 'a', 'b', 'c'}, groups))
groups
# All the same but in a table
data_frame = pandas.DataFrame([(lower, upper, count) for letter, lower, upper, count in groups],
index=[letter for letter, lower, upper, count in groups],
columns=['Lower length', 'Upper length', 'count of signals'])
data_frame.insert(1, "Type", ["impulse" if lower > 0 else "pause" for c, lower, upper, count in groups])
data_frame = data_frame.sort_index()
#data_frame = data_frame.pivot_table(index='Letter')
data_frame
Explanation: Получили группы сигналов
End of explanation
# finds signal in groups
# returns a corresponding letter
def decode_signal(x, groups):
for c, lower, upper, group in groups:
if lower <= x <= upper:
return c
return "?"
# decode list of signals
# each signal is decoded separately
def decode_signals(data, groups):
return [decode_signal(signal, groups) for signal in data]
# decoded signals
data_letters = decode_signals(data, groups)
print("Decoded (characters): ", "".join(data_letters))
Explanation: Заменим каждый сигнал буквой, обозначающей его группу. Так зачастую удобнее
End of explanation
# заменим пары символов на их смысл
data_1 = "".join(data_letters).replace('Ac', 'P').replace('Ab', '0').replace('Ba', '1')
print(data_1[:300])
Explanation: Это уже почти человеко-читаемо. Сейчас смотрим в даташит и видим, что 'Ac' - импульс и длинная пауза - это преамбула. А 'Ab' и 'Ba' задают 0 и 1.
End of explanation
data = list(filter(lambda x: len(x) == 24, data_1.split('P')))
print(data[:5])
data = set(data)
print(data)
Explanation: Здесь уже хорошо видно периодичные 'P'-шки. Поделим всю строку на пакеты.
End of explanation
# именнованый кортеж, для более понятного вывода
from collections import namedtuple
Message = namedtuple('HSMessage', 'data channel')
# разделяем на непосредственно сообщение и флаги D0-D3
data_pairs = list(map(lambda x: Message(int(x[:20][::-1], 2), int(x[20:], 2)), data))
print(data_pairs)
# или в 16-ричной, если вам так больше нравится
data_pairs16 = list(map(lambda x: Message(hex(x[0]), hex(x[1])), data_pairs))
print(data_pairs16)
Explanation: Видим, что подряд идет помногу одинаковых пакетов. В снятых мною дампах это сигналы от датчика движения при двух разных конфигурациях джамперов
End of explanation |
4,519 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview
Step1: Extract NN Features
Step2: Predicting Own Labels from Selected Images
within a folder (find class 1, class 0).
(split into test train)
get matrix of img X features X class
fit logistic regression (or other classifier)
assess test set-fit.
html (sample images used to define class; top and bottom predictions from test-set.
Step3: Horizontal Striped Data
Step4: neither the svm or the logistic reg is doing well
Step5: the accuracy achieved is above chance (as determined by permutation testing)
Red / Pink Data
Step6: classification performance is mucher better on this dataset | Python Code:
import sys
import os
sys.path.append(os.getcwd()+'/../')
# our lib
from lib.resnet50 import ResNet50
from lib.imagenet_utils import preprocess_input, decode_predictions
#keras
from keras.preprocessing import image
from keras.models import Model
# sklearn
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score
# other
import numpy as np
import glob
import pandas as pd
import ntpath
# plotting
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
def preprocess_img(img_path):
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return(x,img)
def perf_measure(y_actual, y_hat):
TP = 0
FP = 0
TN = 0
FN = 0
for i in range(len(y_hat)):
if y_actual[i]==y_hat[i]==1:
TP += 1
for i in range(len(y_hat)):
if (y_hat[i]==1) and (y_actual[i]!=y_hat[i]):
FP += 1
for i in range(len(y_hat)):
if y_actual[i]==y_hat[i]==0:
TN += 1
for i in range(len(y_hat)):
if (y_hat[i]==0) and (y_actual[i]!=y_hat[i]):
FN += 1
return(TP, FP, TN, FN)
Explanation: Overview: Training Network for Useful Features.
we provide:
- set of images that match along some interpretable feature. (e.g. striped dress)
- a whole bunch of images that don't match
Code:
- estimates neural network features from trained resnet 50.
- estimates weights for those neural network features to predict the interpreable feature class
- do so with cross-validation.
- regularized logisitic regression.
- other classifiers.
Evaluation:
- save out weights to use as new features (new features = w*original features)
End of explanation
# instantiate the model
base_model = ResNet50(include_top=False, weights='imagenet') #this will pull the weights from the folder
# cut the model to lower levels only
model = Model(input=base_model.input, output=base_model.get_layer('avg_pool').output)
#img_paths = glob.glob('../img/baiyi/*')
#
img_paths = glob.glob('../original_img/*')
img_paths[0:3]
# create dataframe with all image features
img_feature_df = pd.DataFrame()
for i,img_path in enumerate(img_paths):
x,img = preprocess_img(img_path) # preprocess
model_output = model.predict(x)[0,0,0,:]
img_feature_df.loc[i,'img_path']=img_path
img_feature_df.loc[i,'nn_features']=str(list(model_output))
img_feature_df['img_name'] = img_feature_df['img_path'].apply(lambda x: ntpath.basename(x))
img_feature_df.head()
img_feature_df.to_csv('../data_nn_features/img_features_all.csv')
Explanation: Extract NN Features
End of explanation
#data_folder ='processed_data/classifer_exp_1/'
#os.mkdir(data_folder)
# get target and non-target lists
def create_image_class_dataframe(target_img_folder):
# all the image folders
non_target_img_folders = ['../original_img/']
target_img_paths=glob.glob(target_img_folder+'*')
target_img_paths_stemless = [ntpath.basename(t) for t in target_img_paths]
non_target_img_paths =[]
for non_target_folder in non_target_img_folders:
for img_path in glob.glob(non_target_folder+'*'):
if ntpath.basename(img_path) not in target_img_paths_stemless: # remove targets from non-target list
non_target_img_paths.append(img_path)
# create data frame with image name and label
img_paths = np.append(target_img_paths,non_target_img_paths)
labels = np.append(np.ones(len(target_img_paths)),np.zeros(len(non_target_img_paths)))
df = pd.DataFrame(data=np.vstack((img_paths,labels)).T,columns=['img_path','label'])
df['img_name'] = df['img_path'].apply(lambda x: ntpath.basename(x)) # add image name
df['label'] = df['label'].apply(lambda x: float(x)) # add label
# load up features per image
img_feature_df = pd.read_csv('../data_nn_features/img_features_all.csv',index_col=0)
img_feature_df.head()
# create feature matrix out of loaded up features.
for i,row in df.iterrows():
features = img_feature_df.loc[img_feature_df.img_name==row['img_name'],'nn_features'].as_matrix()[0].replace(']','').replace('[','').split(',')
features = [np.float(f) for f in features]
lab = row['img_name']
if i==0:
X = features
labs = lab
else:
X = np.vstack((X,features))
labs = np.append(labs,lab)
xcolumns = ['x'+str(i) for i in np.arange(X.shape[1])]
X_df = pd.DataFrame(np.hstack((labs[:,np.newaxis],X)),columns=['img_name']+xcolumns)
# merge together
df = df.merge(X_df,on='img_name')
# make sure there is only one instance per image in dataframe
lens = np.array([])
for img_name in df.img_name.unique():
lens = np.append(lens,len(df.loc[df.img_name==img_name]))
assert len(np.unique(lens)[:])==1
return(df)
# remove some non-targets to make dataset smaller #
# i_class0 = np.where(df.label==0.0)[0]
# i_class0_remove = np.random.choice(i_class0,int(np.round(len(i_class0)/1.1)))
# df_smaller = df.drop(i_class0_remove)
#df_smaller.to_csv('test.csv')
Explanation: Predicting Own Labels from Selected Images
within a folder (find class 1, class 0).
(split into test train)
get matrix of img X features X class
fit logistic regression (or other classifier)
assess test set-fit.
html (sample images used to define class; top and bottom predictions from test-set.
End of explanation
# image folder
target_img_folder ='../data_img_classes/class_horiztonal_striped/'
df = create_image_class_dataframe(target_img_folder)
df.head()
print('target class')
plt.figure(figsize=(12,3))
for i in range(5):
img_path= df['img_path'][i]
img = image.load_img(img_path, target_size=(224, 224))
plt.subplot(1,5,i+1)
plt.imshow(img)
plt.grid(b=False)
xcolumns=['x'+str(i) for i in np.arange(2024)]
X = df.loc[:,xcolumns].as_matrix().astype('float')
y= df.loc[:,'label'].as_matrix().astype('float')
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X,y,stratify=y,test_size=.33)
print(' training shape {0} \n testing shape {1}').format(X_train.shape,X_test.shape)
print('\n target/non-target \n (train) {0}\{1} \n (test) {2}\{3}').format(y_train.sum(),(1-y_train).sum(),y_test.sum(),(1-y_test).sum())
# classifiers
C = 1.0
clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
clf_svm = sklearn.svm.SVC(C=C,kernel='linear')
clf_LR.fit(X_train, y_train)
clf_svm.fit(X_train, y_train)
coef = clf_LR.coef_[0,:]
plt.figure(figsize=(12,3))
sns.set_style('white')
plt.scatter(np.arange(len(coef)),coef)
plt.xlabel('nnet feature')
plt.ylabel('LogReg coefficient')
sns.despine()
y_pred = clf_LR.predict(X_test)
(TP,FP,TN,FN) =perf_measure(y_test,y_pred)
print('TruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
y_pred = clf_svm.predict(X_test)
(TP,FP,TN,FN) =perf_measure(y_test,y_pred)
print('TruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
Explanation: Horizontal Striped Data
End of explanation
# from sklearn.model_selection import StratifiedKFold
# skf = StratifiedKFold(n_splits=5,shuffle=True)
# for train, test in skf.split(X, y):
# #print("%s %s" % (train, test))
# C=1.0
# clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
# clf_LR.fit(X[train], y[train])
# y_pred = clf_LR.predict(X[test])
# (TP,FP,TN,FN) =perf_measure(y[test],y_pred)
# print('\nTruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
skf = StratifiedKFold(n_splits=5,shuffle=True)
score, permutation_scores, pvalue = permutation_test_score(
clf_LR, X, y, scoring="accuracy", cv=skf, n_permutations=100)
#
plt.hist(permutation_scores)
plt.axvline(score)
sns.despine()
plt.xlabel('accuracy')
print(pvalue)
Explanation: neither the svm or the logistic reg is doing well
End of explanation
# image folder
target_img_folder ='../data_img_classes/class_red_pink/'
df = create_image_class_dataframe(target_img_folder)
df.head()
print('target class')
plt.figure(figsize=(12,3))
for i in range(5):
img_path= df['img_path'][i+1]
img = image.load_img(img_path, target_size=(224, 224))
plt.subplot(1,5,i+1)
plt.imshow(img)
plt.grid(b=False)
# split data
xcolumns=['x'+str(i) for i in np.arange(2024)]
X = df.loc[:,xcolumns].as_matrix().astype('float')
y= df.loc[:,'label'].as_matrix().astype('float')
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X,y,stratify=y,test_size=.33)
print(' training shape {0} \n testing shape {1}').format(X_train.shape,X_test.shape)
print('\n target/non-target \n (train) {0}\{1} \n (test) {2}\{3}').format(y_train.sum(),(1-y_train).sum(),y_test.sum(),(1-y_test).sum())
# Train
clf_svm.fit(X_train, y_train)
# test
y_pred = clf_svm.predict(X_test)
(TP,FP,TN,FN) =perf_measure(y_test,y_pred)
print('TruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
Explanation: the accuracy achieved is above chance (as determined by permutation testing)
Red / Pink Data
End of explanation
clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
skf = StratifiedKFold(n_splits=5,shuffle=True)
score, permutation_scores, pvalue = permutation_test_score(
clf_LR, X, y, scoring="accuracy", cv=skf, n_permutations=100)
plt.hist(permutation_scores)
plt.axvline(score)
sns.despine()
plt.xlabel('accuracy')
plt.title('permutation test on test set classification')
print(pvalue)
Explanation: classification performance is mucher better on this dataset
End of explanation |
4,520 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Theory and Practice of Visualization Exercise 1
Imports
Step1: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
I receive a zero for the section where I was supposed to upload an image, but the image I described below, where my description got a perfect score, shows up when I run this code. | Python Code:
from IPython.display import Image
Explanation: Theory and Practice of Visualization Exercise 1
Imports
End of explanation
# Add your filename and uncomment the following line:
Image(filename='good data viz.png')
Explanation: Graphical excellence and integrity
Find a data-focused visualization on one of the following websites that is a positive example of the principles that Tufte describes in The Visual Display of Quantitative Information.
Vox
Upshot
538
BuzzFeed
Upload the image for the visualization to this directory and display the image inline in this notebook.
I receive a zero for the section where I was supposed to upload an image, but the image I described below, where my description got a perfect score, shows up when I run this code.
End of explanation |
4,521 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Coroutines
A coroutine is a type of subroutine which can have multiple entry and exit points. They're useful for a number of concurrency patterns.
Python introduced coroutines as "Enhanced Generators" in PEP 342 (https
Step1: Sending values
When using a yield expression, the coroutine receives a value through a call to its send method. Like next, sending a value progresses to the next yield and suspends execution. In fact, given an instance of a generator, say my_gen, calling next(my_gen) is the same as my_gen.send(None).
Also note that before a value can be sent to a coroutine, it must be initialized with a call to next. This can be automated with a decorator.
Step2: Can a generator both produce and receive?
Yes, but it can make the behaviour of the code difficult to understand.
The example below is a natural use of taking in and returning values in one function. track_largest is a function that yields the largest value that it's seen so far. The semantics of how this works is somewhat confusing, and it would not be appropriate to use this in a for loop.
The execution pauses after a value has been produced, but before a value has been recieved. This partly explains why you must first initialize a coroutine with a call to next. | Python Code:
def print_if_error(error_is="error"):
while True:
line = yield
if line.startswith(error_is):
print(line)
Explanation: Coroutines
A coroutine is a type of subroutine which can have multiple entry and exit points. They're useful for a number of concurrency patterns.
Python introduced coroutines as "Enhanced Generators" in PEP 342 (https://www.python.org/dev/peps/pep-0342/), by introducing the "yield expression". That is by making it possible to have yield on the right hand side of assignment.
End of explanation
logs = [
"[WARN]: A new user subscribed to cat facts!",
"[INFO]: Todays fact: Adult cats only meow to communicate with humans.",
"[ERR]: The user has unsubscribed from cat facts!",
"[ERR]: 0 users are subscribed",
"[WARN]: shutting down."
]
err_printer = print_if_error("[ERR]")
next(err_printer)
for log in logs:
err_printer.send(log)
Explanation: Sending values
When using a yield expression, the coroutine receives a value through a call to its send method. Like next, sending a value progresses to the next yield and suspends execution. In fact, given an instance of a generator, say my_gen, calling next(my_gen) is the same as my_gen.send(None).
Also note that before a value can be sent to a coroutine, it must be initialized with a call to next. This can be automated with a decorator.
End of explanation
def track_largest(start=0):
largest = start
while True:
next_val = yield largest
largest = next_val if next_val > largest else largest
tracker = track_largest(0)
print(next(tracker))
print(tracker.send(3))
print(tracker.send(2))
print(tracker.send(100))
print(tracker.send(9000))
Explanation: Can a generator both produce and receive?
Yes, but it can make the behaviour of the code difficult to understand.
The example below is a natural use of taking in and returning values in one function. track_largest is a function that yields the largest value that it's seen so far. The semantics of how this works is somewhat confusing, and it would not be appropriate to use this in a for loop.
The execution pauses after a value has been produced, but before a value has been recieved. This partly explains why you must first initialize a coroutine with a call to next.
End of explanation |
4,522 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Kalman Filter applied to 1D Movement with constant velocity
Step1: Parameters
The system under consideration is an object traveling under constant velocity.
Its motion (in both time and space) can be parametrized as a straight line with
intercept $x_0$ and inclination $v$. The position is measured $N$ times at time
intervals $dt$, or alternatively at some fixed positions given by $k$ surfaces.
Step2: True trajectory
Step3: The measurement is noisy and the results are normally distributed with variance
$\sigma^2$.
Measured trajectory
Step4: Kalman Filter
System equation
In this simplest case the state vector $\mathbf{p}_k = [x_0, v]$ at surface $k$
is left unchanged by the time evolution of the system. An alternative
parametrization is given by the The deterministic function $\mathbf{f}_k$
(which has a linear approximation $\mathbf{F}_k$ that describes how the track
parameter would change from one surface to another is just the identity.
Additionaly, future track parameters are affected by process noise
$\mathbf{\delta}_k$. Usually only a subset of the track parameters are affected
by process noise. This is expressed by multiplying the matrix representation of
process noise with a projection matrix $\mathbf{P}_k$.
The covariance matrix of $\mathbf{\delta}_k$ is denoted $\mathbf{Q}_k$.
Measurement equation
The deterministic function $\mathbf{h}_k$ with linear expansion $\mathbf{H}_k$
maps the track parameters $\mathbf{p}_k$ to measurable quantities (p.ex. space
time points). The covariance of the measurement noise is denoted $\mathbf{V}_k$
Noattion
Step5: Plot results
Step6: 2. Same problem but with unknown velocity
Step7: 3. Same problem but with unknown velocity that is also measured
In principle should be better than 2. - why isn't ?? Additional measurement
(on x_velocity) should improve kalman\
But kalman already knows about the velocity from the transformation matrix A /
the initial value we give to xkal and xpredict? | Python Code:
# allow use of python3 syntax
from __future__ import division, print_function, absolute_import
import numpy as np
# local script with often used
import kalman as k
# contents of local file kalman.py
# %load kalman.py
import numpy as np
import matplotlib.pyplot as plt
def kalman_predict( A, # transition matrix
r, # measurement error matrix
H, # transformation matrix from state vector to measurement
p, # initial variance on prediction
xkal, # estimated state vector
xpredict, # predicted state vector
xmeas): # measurements
for i in range(1, xkal.shape[0]): # for each measurement do
# prediction: recursive formula
xpredict[:, i] = np.dot(A, xkal[:, i - 1])
# predict covariance
p = np.dot(np.dot(A, p), A.T)
# construct kalman gain matrix according to prediction equations
# higher gain leads to higher influence of measurement,
# lower gain to higher influence of predicion
K = np.dot(np.dot(p, H.T), np.linalg.inv(np.dot(np.dot(H, p), H.T) + r))
# construct estimate from prediction and gain
xkal[:, i] = xpredict3[:, i] + np.dot(K, (xmeas[:, i] - H*xpredict[:, i]))
# update covariance with gain
p = np.dot(np.identity(K.shape[0]) - K, p)
return xkal, xpredict
def plot_results(xkal, xpredict, xmeas, xtrue):
fig1 = plt.figure()
ax1 = plt.axes()
plt.plot(xtrue, 'b-', label = 'True')
plt.plot(xmeas[0].T, 'rx', label = 'Measuement')
plt.plot(xpredict[0].T, 'g.', label = 'Prediction')
plt.plot(xkal[0].T, 'ko', label = 'Kalman')
plt.xlabel('Iteration')
plt.ylabel('X')
fig2 = plt.figure()
ax2 = plt.axes()
plt.axhline(v)
plt.axhline(np.mean(xmeas[1]))
plt.plot(xpredict[1].T, 'g.', label = 'Prediction')
plt.plot(xmeas[1].T, 'rx', label = 'Measurement')
plt.plot(xkal[1].T, 'ko', label = 'Kalman')
plt.xlabel('Iteration')
plt.ylabel('Velocity')
return [[fig1, fig2], [ax1, ax2]]
Explanation: 1. Kalman Filter applied to 1D Movement with constant velocity
End of explanation
# number of measurements
N = 10
# time step
dt = 1.
# final time
T = N * dt
# velocity
v = -10.
Explanation: Parameters
The system under consideration is an object traveling under constant velocity.
Its motion (in both time and space) can be parametrized as a straight line with
intercept $x_0$ and inclination $v$. The position is measured $N$ times at time
intervals $dt$, or alternatively at some fixed positions given by $k$ surfaces.
End of explanation
# initial position
x0 = 100.
# elementwise add offset x0 to array of positions at different times
xtrue = x0 + v * np.linspace(0, T, N)
print(xtrue)
Explanation: True trajectory
End of explanation
sigma = 10
noise = np.random.normal(loc=0, scale=sigma, size=xtrue.shape)
xmeas = xtrue + noise
print(xmeas)
Explanation: The measurement is noisy and the results are normally distributed with variance
$\sigma^2$.
Measured trajectory
End of explanation
# estimated track parameters at times k
xkal = np.zeros(xmeas.shape)
# prediction for new track parameters based on previous ones
xpredict = np.zeros(xmeas.shape)
# covariance matrices (here only numbers) of the measurements
p = np.zeros(xmeas.shape)
# Kalman gain matrices
K = np.zeros(xmeas.shape)
# initial position
xpredict[0] = xkal[0] = xmeas[0]
# initial variance on prediction
p[0] = 20
# measurement error
r = sigma**2
# transformation matrix (from state to measurement)
H = 1
for i in range(1, N):
# prediction: recursive formula
xpredict[i] = xkal[i - 1] + v * dt
p[i] = p[i - 1]
# constructing Kalman gain matrix
# in this case, the gain shrinks with each recursion
# makes sense, as one outlier should not influence a prediction based on many points
K[i] = p[i] / (p[i] + r)
# final estimate of local track paramters based on prediction and
# measurement
xkal[i] = xpredict[i] + K[i] * (xmeas[i] - H * xpredict[i])
# update covariance
p[i] = (1 - K[i]) * p[i]
Explanation: Kalman Filter
System equation
In this simplest case the state vector $\mathbf{p}_k = [x_0, v]$ at surface $k$
is left unchanged by the time evolution of the system. An alternative
parametrization is given by the The deterministic function $\mathbf{f}_k$
(which has a linear approximation $\mathbf{F}_k$ that describes how the track
parameter would change from one surface to another is just the identity.
Additionaly, future track parameters are affected by process noise
$\mathbf{\delta}_k$. Usually only a subset of the track parameters are affected
by process noise. This is expressed by multiplying the matrix representation of
process noise with a projection matrix $\mathbf{P}_k$.
The covariance matrix of $\mathbf{\delta}_k$ is denoted $\mathbf{Q}_k$.
Measurement equation
The deterministic function $\mathbf{h}_k$ with linear expansion $\mathbf{H}_k$
maps the track parameters $\mathbf{p}_k$ to measurable quantities (p.ex. space
time points). The covariance of the measurement noise is denoted $\mathbf{V}_k$
Noattion:
[1] Frühwirth, Rudolf, and Meinhard Regler. Data analysis techniques for high-
energy physics. Vol. 11. Cambridge University Press, 2000.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plot
plot.plot(xtrue, 'b-', label = 'True')
plot.plot(xmeas, 'rx', label = 'Measuement')
plot.plot(xpredict, 'g.', label = 'Prediction')
plot.plot(xkal, 'ko', label = 'Kalman')
plot.xlabel('Iteration')
plot.ylabel('X')
plot.legend()
plot.show()
plot.subplot(3,1,1)
plot.plot(p,'o')
plot.ylabel('Prediction cov')
plot.subplot(3,1,2)
plot.plot(K,'o')
plot.ylabel('Kalman gain')
plot.xlabel('Iteration')
plot.show()
Explanation: Plot results
End of explanation
xpredict2 = np.matrix (np.linspace(0,10,N*2).reshape((2, N)))
xkal2 = np.matrix (np.linspace(0,10,N*2).reshape((2, N)))
# initial position and velocity
xpredict2[:,0] = xkal2[:,0] = np.array ( [[xmeas[0]], [np.random.normal(v,1.5) ] ])
# initial variance on prediction
p2 = np.matrix ( [[20, 0],
[0, 20]] )
# measurement error
r = np.matrix([[sigma^2]])
# prediction matrix
A = np.matrix ( [[1, dt],
[0, 1]] )
# transformation matrix (from measurement to state vector)
H = np.matrix ( [[1 , 0]] )
for i in range(1,N):
# prediction: recursive formula
xpredict2[:,i] = np.dot(A, xkal2[:,i-1] )
p2 = A*p2*A.T
K2 = np.dot(p2*H.T, np.linalg.inv(H*p2*H.T+r))
xkal2[:,i] = xpredict2[:,i] + K2*(xmeas[i] - H*xpredict2[:,i])
p2 = (np.identity(2)-K2) * p2
plot.plot(xtrue, 'b-', label = 'True')
plot.plot(xmeas, 'rx', label = 'Measuement')
plot.plot(xpredict2[0].T, 'g.', label = 'Prediction')
plot.plot(xkal2[0].T, 'ko', label = 'Kalman')
plot.xlabel('Iteration')
plot.ylabel('X')
plot.show()
plot.axhline(v)
plot.plot(xpredict2[1].T, 'g.', label = 'Prediction')
plot.plot(xkal2[1].T, 'ko', label = 'Kalman')
plot.xlabel('Iteration')
plot.ylabel('Velocity')
plot.show()
Explanation: 2. Same problem but with unknown velocity
End of explanation
xmeas3 = np.matrix (np.linspace(0,10,N*2).reshape((2, N)))
sigma3 = 1
for i in range(0,N):
xmeas3[0,i] = np.random.normal(xtrue[i], sigma)
xmeas3[1,i] = np.random.normal(v, sigma3)
print(xmeas3.T)
xpredict3 = np.matrix (np.linspace(0,10,N*2).reshape((2, N)))
xkal3 = np.matrix (np.linspace(0,10,N*2).reshape((2, N)))
# initial position
xpredict3[:,0] = xkal3[:,0] = np.array ( [[xmeas3[0,0]], [xmeas3[1,0]] ] )
# initial variance on prediction
p2 = np.matrix ( [[20, 0],
[0, 20]] )
# measurement error
r3 = np.matrix([[0.001*sigma*sigma, 0],
[0 , 0.001*sigma3*sigma3]])
# prediction matrix
A = np.matrix ( [[1, dt],
[0, 1]] )
# transformation matrix (from measurement to state vector)
H3 = np.matrix ( [[1 , 0],
[0, 1]] )
xkal3, xpredict3 = k.kalman_predict(A, r3, H3, p2, xkal3, xpredict3, xmeas3)
figs = plot_results(xkal3, xpredict3, xmeas3, xtrue)
plt.show()
Explanation: 3. Same problem but with unknown velocity that is also measured
In principle should be better than 2. - why isn't ?? Additional measurement
(on x_velocity) should improve kalman\
But kalman already knows about the velocity from the transformation matrix A /
the initial value we give to xkal and xpredict?
End of explanation |
4,523 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
python libs for all vis things
Step1: bokeh
Step2: bokeh + ipywidgets | Python Code:
%pylab inline
t = arange(0.0, 1.0, 0.01)
y1 = sin(2*pi*t)
y2 = sin(2*2*pi*t)
import pandas as pd
df = pd.DataFrame({'t': t, 'y1': y1, 'y2': y2})
df.head(10)
Explanation: python libs for all vis things
End of explanation
from bokeh.plotting import figure, output_notebook, show
# output inline with notebook
output_notebook()
# create a new plot with a title and axis labels
p = figure(title="simple sine example", x_axis_label='t', y_axis_label='sin(2*pi*t)')
# add a line renderer with legend and line thickness
p.line(t, sin(2*pi*t), legend="Temp.", line_width=2)
# show the results
show(p)
Explanation: bokeh
End of explanation
# this uses Bokeh for plotting + ipywidgets for widgets
# translation: no Bokeh server required
from ipywidgets import interact
import numpy as np
from bokeh.io import push_notebook
from bokeh.plotting import figure, show, output_notebook
x = np.linspace(0, 2*np.pi, 2000)
y = np.sin(x)
output_notebook()
p = figure(title="simple curvy example", plot_height=300, plot_width=600, y_range=(-5,5))
r = p.line(x, y, color="#2222aa", line_width=3)
def update(f, w=1, A=1, phi=0):
if f == "sin": func = np.sin
elif f == "cos": func = np.cos
elif f == "tan": func = np.tan
r.data_source.data['y'] = A * func(w * x + phi)
push_notebook() # only updates *last* shown object
interact(update, f=["sin", "cos", "tan"], w=(0,100), A=(1,5), phi=(0, 20, 0.1))
show(p)
from bokeh.plotting import figure, output_file, show
# prepare some data
x = [0.1, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
y0 = [i**2 for i in x]
y1 = [10**i for i in x]
y2 = [10**(i**2) for i in x]
output_notebook()
# create a new plot
p = figure(
tools="pan,box_zoom,reset,save",
y_axis_type="log", y_range=[0.001, 10**11], title="log axis example",
x_axis_label='sections', y_axis_label='particles'
)
# add some renderers
p.line(x, x, legend="y=x")
p.circle(x, x, legend="y=x", fill_color="white", size=8)
p.line(x, y0, legend="y=x^2", line_width=3)
p.line(x, y1, legend="y=10^x", line_color="red")
p.circle(x, y1, legend="y=10^x", fill_color="red", line_color="red", size=6)
p.line(x, y2, legend="y=10^x^2", line_color="orange", line_dash="4 4")
# show the results
show(p)
Explanation: bokeh + ipywidgets
End of explanation |
4,524 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Performance Overview
Here, we will example the performance of FNGS as a function of time on several datasets. These investigations were performed on a 4 core machine (4 threads) with a 4.0 GhZ processor. These investigations were performed on the version of FNGS in ndmg/eric-dev-gkiar-fmri on 03/27.
Step1: BNU 1
Step2: HNU Dataset
Step3: DC1 Dataset
Step4: NKI 1 | Python Code:
%%script false
## disklog.sh
#!/bin/bash -e
# run this in the background with nohup ./disklog.sh > disk.txt &
#
while true; do
echo "$(du -s $1 | awk '{print $1}')"
sleep 30
done
##cpulog.sh
import psutil
import time
import argparse
def cpulog(outfile):
with open(outfile, 'w') as outf:
while(True):
cores = psutil.cpu_percent(percpu=True)
corestr = ",".join([str(core) for core in cores])
outf.write(corestr + '\n')
outf.flush()
time.sleep(1) # delay for 1 second
def main():
parser = argparse.ArgumentParser()
parser.add_argument('outfile', help='the file to write core usage to.')
args = parser.parse_args()
cpulog(args.outfile)
if __name__ == "__main__":
main()
## memlog.sh
#!/bin/bash -e
# run this in the background with nohup ./memlog.sh > mem.txt &
#
while true; do
echo "$(free -m | grep buffers/cache | awk '{print $3}')"
sleep 1
done
## runonesub.sh
# A function for generating memory and cpu summaries for fngs pipeline.
#
# Usage: ./generate_statistics.sh /path/to/rest /path/to/anat /path/to/output
rm -rf $3
mkdir $3
./memlog.sh > ${3}/mem.txt &
memkey=$!
python cpulog.py ${3}/cpu.txt &
cpukey=$!
./disklog.sh $3 > ${3}/disk.txt &
diskkey=$!
res=2mm
atlas='/FNGS_server/atlases/atlas/MNI152_T1-${res}.nii.gz'
atlas_brain='/FNGS_server/atlases/atlas/MNI152_T1-${res}_brain.nii.gz'
atlas_mask='/FNGS_server/atlases/mask/MNI152_T1-${res}_brain_mask.nii.gz'
lv_mask='/FNGS_server/atlases/mask/HarvOx_lv_thr25-${res}.nii.gz'
label='/FNGS_server/atlases/label/desikan-${res}.nii.gz'
exec 4<$1
exec 5<$2
fngs_pipeline $1 $2 $atlas $atlas_brain $atlas_mask $lv_mask $3 none $label --fmt graphml
kill $memkey $cpukey $diskkey
%matplotlib inline
import numpy as np
import re
import matplotlib.pyplot as plt
from IPython.display import Image, display
def memory_function(infile, dataset):
with open(infile, 'r') as mem:
lines = mem.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('memory usage in GB')
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' Memory Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
def cpu_function(infile, dataset):
with open(infile, 'r') as cpuf:
lines = cpuf.readlines()
testar = [re.split(',',line.strip()) for line in lines][0:-1]
corear = np.zeros((len(testar), len(testar[0])))
for i in range(0, len(testar)):
corear[i,:] = np.array([float(cpu) for cpu in testar[i]])
fig=plt.figure()
ax = fig.add_subplot(111)
lines = [ax.plot(corear[:,i], '--', label='cpu '+ str(i),
alpha=0.5)[0] for i in range(0, corear.shape[1])]
total = corear.sum(axis=1)
lines.append(ax.plot(total, label='all cores')[0])
labels = [h.get_label() for h in lines]
fig.legend(handles=lines, labels=labels, loc='lower right', prop={'size':6})
ax.set_ylabel('CPU usage (%)')
ax.set_ylim([0, max(total)+10])
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' Processor Usage; max = %.1f per; mean = %.1f per' % (max(total), np.mean(total)))
return fig
def disk_function(infile, dataset):
with open(infile, 'r') as disk:
lines = disk.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar)
ax.set_ylabel('Disk usage GB')
ax.set_xlabel('Time (30 s)')
ax.set_title(dataset + ' Disk Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
Explanation: Performance Overview
Here, we will example the performance of FNGS as a function of time on several datasets. These investigations were performed on a 4 core machine (4 threads) with a 4.0 GhZ processor. These investigations were performed on the version of FNGS in ndmg/eric-dev-gkiar-fmri on 03/27.
End of explanation
memfig = memory_function('mem.txt', 'BNU 1 single')
diskfig = disk_function('disk.txt', 'BNU 1 single')
cpufig = cpu_function('cpu.txt', 'BNU 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: BNU 1
End of explanation
memfig = memory_function('/data/HNU_sub/HNU_single/mem.txt', 'HNU 1 single')
diskfig = disk_function('/data/HNU_sub/HNU_single/disk.txt', 'HNU 1 single')
cpufig = cpu_function('/data/HNU_sub/HNU_single/cpu.txt', 'HNU 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: HNU Dataset
End of explanation
memfig = memory_function('/data/DC_sub/DC_single/mem.txt', 'DC 1 single')
diskfig = disk_function('/data/DC_sub/DC_single/disk.txt', 'DC 1 single')
cpufig = cpu_function('/data/DC_sub/DC_single/cpu.txt', 'DC 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: DC1 Dataset
End of explanation
memfig = memory_function('/data/NKI_sub/NKI_single/mem.txt', 'NKI 1 single')
diskfig = disk_function('/data/NKI_sub/NKI_single/disk.txt', 'NKI 1 single')
cpufig = cpu_function('/data/NKI_sub/NKI_single/cpu.txt', 'NKI 1 single')
memfig.show()
diskfig.show()
cpufig.show()
Explanation: NKI 1
End of explanation |
4,525 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast brain decoding with random sampling and random projections
Andres HOYOS-IDROBO, Gael VAROQUAUX and Bertrand THIRION
PARIETAL TEAM, INRIA, CEA, University Paris-Saclay
Presented on
Step1: Testing on Haxby 2001, discriminating between faces and places
Step2: Prediction using the whole brain (non-reduced)
Step4: Prediction on reduced data
Step5: Correlation between non-reduced and Nystrom | Python Code:
%matplotlib inline
import numpy as np
import time
import matplotlib.pyplot as plt
from nilearn.plotting import plot_stat_map
from nilearn.input_data import NiftiMasker
Explanation: Fast brain decoding with random sampling and random projections
Andres HOYOS-IDROBO, Gael VAROQUAUX and Bertrand THIRION
PARIETAL TEAM, INRIA, CEA, University Paris-Saclay
Presented on: the 6th International workshop on Pattern Recognition in Neuroimaging(PRNI) 2016. Trento, Italy
link to the paper
End of explanation
# Fetching haxby dataset
from nilearn import datasets
data_files = datasets.fetch_haxby(n_subjects=1)
masker = NiftiMasker(smoothing_fwhm=4, standardize=True, mask_strategy='epi',
memory='cache', memory_level=1)
labels = np.recfromcsv(data_files.session_target[0], delimiter=" ")
# Restrict to face and house conditions
target = labels['labels']
condition_mask = np.logical_or(target == b"face", target == b"house")
# Split data into train and test samples, using the chunks
condition_mask_train = np.logical_and(condition_mask, labels['chunks'] <= 6)
condition_mask_test = np.logical_and(condition_mask, labels['chunks'] > 6)
X_masked = masker.fit_transform(data_files['func'][0])
X_train = X_masked[condition_mask_train]
X_test = X_masked[condition_mask_test]
y_train = target[condition_mask_train]
y_test = target[condition_mask_test]
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer(pos_label=1, neg_label=-1)
y_train = lb.fit_transform(y_train).ravel()
y_test = lb.transform(y_test).ravel()
Explanation: Testing on Haxby 2001, discriminating between faces and places
End of explanation
# Fit model on train data and predict on test data
from sklearn.linear_model import LogisticRegressionCV
clf = LogisticRegressionCV(Cs=10, penalty='l2')
ti = time.time()
clf.fit(X_train, y_train)
to_raw = time.time() - ti
y_pred = clf.predict(X_test)
accuracy = (y_pred == y_test).mean() * 100.
raw_coef = masker.inverse_transform(clf.coef_)
print("classification accuracy : %g%%, time %.4fs" % (accuracy, to_raw))
Explanation: Prediction using the whole brain (non-reduced)
End of explanation
from sklearn.kernel_approximation import Nystroem
class LinearNistroem(Nystroem):
We are using a linear kernel only and adding the invertion method.
Parameters
-----------
n_components: int, the number of components should be at most n
random_state: int, the random seed (optional)
def __init__(self, n_components=100, random_state=None):
super(LinearNistroem, self).__init__(
n_components=n_components, kernel='linear',
random_state=random_state)
def fit_transform(self, X, y=None):
self.fit(X)
return self.transform(X)
def inverse_transform(self, X):
return X.dot(self.normalization_).dot(self.components_)
nystroem = LinearNistroem(n_components=80)
X_train_nys = nystroem.fit_transform(X_train)
X_test_nys = nystroem.transform(X_test)
ti = time.time()
clf.fit(X_train_nys, y_train)
to_nys = time.time() - ti
y_pred = clf.predict(X_test_nys)
accuracy = (y_pred == y_test).mean() * 100.
nys_coef = masker.inverse_transform(nystroem.inverse_transform(clf.coef_))
print("classification accuracy : %g%%, time %.4fs" % (accuracy, to_nys))
Explanation: Prediction on reduced data: adding Nystrom method
End of explanation
from nilearn.plotting import plot_stat_map
bg_img = data_files['anat'][0]
plot_stat_map(raw_coef, display_mode='yz', bg_img=bg_img, title=r'$non-reduced$', cut_coords=(-34, -16))
plot_stat_map(nys_coef, display_mode='yz', bg_img=bg_img, title=r'$Nystr\"om$', cut_coords=(-34, -16))
from scipy.stats import pearsonr
raw_masked = masker.transform(raw_coef).squeeze()
nys_masked = masker.transform(nys_coef).squeeze()
correlation = pearsonr(raw_masked, nys_masked)[0]
print("correlation %.4f" % correlation)
Explanation: Correlation between non-reduced and Nystrom
End of explanation |
4,526 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial on the Analytical Advection kernel in Parcels
While Lagrangian Ocean Analysis has been around since at least the 1980s, the Blanke and Raynaud (1997) paper has really spurred the use of Lagrangian particles for large-scale simulations. In their 1997 paper, Blanke and Raynaud introduce the so-called Analytical Advection scheme for pathway integration. This scheme has been the base for the Ariane and TRACMASS tools. We have also implemented it in Parcels, particularly to facilitate comparison with for example the Runge-Kutta integration scheme.
In this tutorial, we will briefly explain what the scheme is and how it can be used in Parcels. For more information, see for example Döös et al (2017).
Most advection schemes, including for example Runge-Kutta schemes, calculate particle trajectories by integrating the velocity field through time-stepping. The Analytical Advection scheme, however, does not use time-stepping. Instead, the trajectory within a grid cell is analytically computed assuming that the velocities change linearly between grid cells. This yields Ordinary Differential Equations for the time is takes to cross a grid cell in each direction. By solving these equations, we can compute the trajectory of a particle within a grid cell, from one face to another. See Figure 2 of Van Sebille et al (2018) for a schematic comparing the Analytical Advection scheme to the fourth order Runge-Kutta scheme.
Note that the Analytical scheme works with a few limitations
Step1: Radial rotation example
As in Figure 4a of Lange and Van Sebille (2017), we define a circular flow with period 24 hours, on a C-grid
Step2: Now simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel. Keep track of how the radius of the Particle trajectory changes during the run.
Step3: Now plot the trajectory and calculate how much the radius has changed during the run.
Step5: Double-gyre example
Define a double gyre fieldset that varies in time
Step6: Now simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel
Step7: And then show the particle trajectories in an animation
Step8: Now, we can also compute these trajectories with the AdvectionRK4 kernel
Step9: And we can then compare the final locations of the particles from the AdvectionRK4 and AdvectionAnalytical simulations
Step11: The final locations are similar, but not exactly the same. Because everything else is the same, the difference has to be due to the different kernels. Which one is more correct, however, can't be determined from this analysis alone.
Bickley Jet example
Let's as a second example, do a similar analysis for a Bickley Jet, as detailed in e.g. Hadjighasem et al (2017).
Step12: Add a zonal halo for periodic boundary conditions in the zonal direction
Step13: And simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel
Step14: And then show the particle trajectories in an animation
Step15: Like with the double gyre above, we can also compute these trajectories with the AdvectionRK4 kernel
Step16: And finally, we can again compare the end locations from the AdvectionRK4 and AdvectionAnalytical simulations | Python Code:
%pylab inline
from parcels import FieldSet, ParticleSet, ScipyParticle, JITParticle, Variable
from parcels import AdvectionAnalytical, AdvectionRK4, plotTrajectoriesFile
import numpy as np
from datetime import timedelta as delta
import matplotlib.pyplot as plt
Explanation: Tutorial on the Analytical Advection kernel in Parcels
While Lagrangian Ocean Analysis has been around since at least the 1980s, the Blanke and Raynaud (1997) paper has really spurred the use of Lagrangian particles for large-scale simulations. In their 1997 paper, Blanke and Raynaud introduce the so-called Analytical Advection scheme for pathway integration. This scheme has been the base for the Ariane and TRACMASS tools. We have also implemented it in Parcels, particularly to facilitate comparison with for example the Runge-Kutta integration scheme.
In this tutorial, we will briefly explain what the scheme is and how it can be used in Parcels. For more information, see for example Döös et al (2017).
Most advection schemes, including for example Runge-Kutta schemes, calculate particle trajectories by integrating the velocity field through time-stepping. The Analytical Advection scheme, however, does not use time-stepping. Instead, the trajectory within a grid cell is analytically computed assuming that the velocities change linearly between grid cells. This yields Ordinary Differential Equations for the time is takes to cross a grid cell in each direction. By solving these equations, we can compute the trajectory of a particle within a grid cell, from one face to another. See Figure 2 of Van Sebille et al (2018) for a schematic comparing the Analytical Advection scheme to the fourth order Runge-Kutta scheme.
Note that the Analytical scheme works with a few limitations:
1. The velocity field should be defined on a C-grid (see also the Parcels NEMO tutorial).
And specifically for the implementation in Parcels
2. The AdvectionAnalytical kernel only works for Scipy Particles.
3. Since Analytical Advection does not use timestepping, the dt parameter in pset.execute() should be set to np.inf. For backward-in-time simulations, it should be set to -np.inf.
4. For time-varying fields, only the 'intermediate timesteps' scheme (section 2.3 of Döös et al 2017) is implemented. While there is also a way to also analytically solve the time-evolving fields (section 2.4 of Döös et al 2017), this is not yet implemented in Parcels.
We welcome contributions to the further development of this algorithm and in particular the analytical time-varying case. See here for the code of the AdvectionAnalytical kernel.
Below, we will show how this AdvectionAnalytical kernel performs on one idealised time-constant flow and two idealised time-varying flows: a radial rotation, the time-varying double-gyre as implemented in e.g. Froyland and Padberg (2009) and the Bickley Jet as implemented in e.g. Hadjighasem et al (2017).
First import the relevant modules.
End of explanation
def radialrotation_fieldset(xdim=201, ydim=201):
# Coordinates of the test fieldset (on C-grid in m)
a = b = 20000 # domain size
lon = np.linspace(-a/2, a/2, xdim, dtype=np.float32)
lat = np.linspace(-b/2, b/2, ydim, dtype=np.float32)
dx, dy = lon[2]-lon[1], lat[2]-lat[1]
# Define arrays R (radius), U (zonal velocity) and V (meridional velocity)
U = np.zeros((lat.size, lon.size), dtype=np.float32)
V = np.zeros((lat.size, lon.size), dtype=np.float32)
R = np.zeros((lat.size, lon.size), dtype=np.float32)
def calc_r_phi(ln, lt):
return np.sqrt(ln**2 + lt**2), np.arctan2(ln, lt)
omega = 2 * np.pi / delta(days=1).total_seconds()
for i in range(lon.size):
for j in range(lat.size):
r, phi = calc_r_phi(lon[i], lat[j])
R[j, i] = r
r, phi = calc_r_phi(lon[i]-dx/2, lat[j])
V[j, i] = -omega * r * np.sin(phi)
r, phi = calc_r_phi(lon[i], lat[j]-dy/2)
U[j, i] = omega * r * np.cos(phi)
data = {'U': U, 'V': V, 'R': R}
dimensions = {'lon': lon, 'lat': lat}
fieldset = FieldSet.from_data(data, dimensions, mesh='flat')
fieldset.U.interp_method = 'cgrid_velocity'
fieldset.V.interp_method = 'cgrid_velocity'
return fieldset
fieldsetRR = radialrotation_fieldset()
Explanation: Radial rotation example
As in Figure 4a of Lange and Van Sebille (2017), we define a circular flow with period 24 hours, on a C-grid
End of explanation
def UpdateR(particle, fieldset, time):
particle.radius = fieldset.R[time, particle.depth, particle.lat, particle.lon]
class MyParticle(ScipyParticle):
radius = Variable('radius', dtype=np.float32, initial=0.)
radius_start = Variable('radius_start', dtype=np.float32, initial=fieldsetRR.R)
pset = ParticleSet(fieldsetRR, pclass=MyParticle, lon=0, lat=4e3, time=0)
output = pset.ParticleFile(name='radialAnalytical.nc', outputdt=delta(hours=1))
pset.execute(pset.Kernel(UpdateR) + AdvectionAnalytical,
runtime=delta(hours=24),
dt=np.inf, # needs to be set to np.inf for Analytical Advection
output_file=output)
Explanation: Now simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel. Keep track of how the radius of the Particle trajectory changes during the run.
End of explanation
output.close()
plotTrajectoriesFile('radialAnalytical.nc')
print('Particle radius at start of run %f' % pset.radius_start[0])
print('Particle radius at end of run %f' % pset.radius[0])
print('Change in Particle radius %f' % (pset.radius[0] - pset.radius_start[0]))
Explanation: Now plot the trajectory and calculate how much the radius has changed during the run.
End of explanation
def doublegyre_fieldset(times, xdim=51, ydim=51):
Implemented following Froyland and Padberg (2009), 10.1016/j.physd.2009.03.002
A = 0.25
delta = 0.25
omega = 2 * np.pi
a, b = 2, 1 # domain size
lon = np.linspace(0, a, xdim, dtype=np.float32)
lat = np.linspace(0, b, ydim, dtype=np.float32)
dx, dy = lon[2]-lon[1], lat[2]-lat[1]
U = np.zeros((times.size, lat.size, lon.size), dtype=np.float32)
V = np.zeros((times.size, lat.size, lon.size), dtype=np.float32)
for i in range(lon.size):
for j in range(lat.size):
x1 = lon[i]-dx/2
x2 = lat[j]-dy/2
for t in range(len(times)):
time = times[t]
f = delta * np.sin(omega * time) * x1**2 + (1-2 * delta * np.sin(omega * time)) * x1
U[t, j, i] = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * x2)
V[t, j, i] = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * x2) * (2 * delta * np.sin(omega * time) * x1 + 1 - 2 * delta * np.sin(omega * time))
data = {'U': U, 'V': V}
dimensions = {'lon': lon, 'lat': lat, 'time': times}
allow_time_extrapolation = True if len(times) == 1 else False
fieldset = FieldSet.from_data(data, dimensions, mesh='flat', allow_time_extrapolation=allow_time_extrapolation)
fieldset.U.interp_method = 'cgrid_velocity'
fieldset.V.interp_method = 'cgrid_velocity'
return fieldset
fieldsetDG = doublegyre_fieldset(times=np.arange(0, 3.1, 0.1))
Explanation: Double-gyre example
Define a double gyre fieldset that varies in time
End of explanation
X, Y = np.meshgrid(np.arange(0.15, 1.85, 0.1), np.arange(0.15, 0.85, 0.1))
psetAA = ParticleSet(fieldsetDG, pclass=ScipyParticle, lon=X, lat=Y)
output = psetAA.ParticleFile(name='doublegyreAA.nc', outputdt=0.1)
psetAA.execute(AdvectionAnalytical,
dt=np.inf, # needs to be set to np.inf for Analytical Advection
runtime=3,
output_file=output)
Explanation: Now simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel
End of explanation
output.close()
plotTrajectoriesFile('doublegyreAA.nc', mode='movie2d_notebook')
Explanation: And then show the particle trajectories in an animation
End of explanation
psetRK4 = ParticleSet(fieldsetDG, pclass=JITParticle, lon=X, lat=Y)
psetRK4.execute(AdvectionRK4, dt=0.01, runtime=3)
Explanation: Now, we can also compute these trajectories with the AdvectionRK4 kernel
End of explanation
plt.plot(psetRK4.lon, psetRK4.lat, 'r.', label='RK4')
plt.plot(psetAA.lon, psetAA.lat, 'b.', label='Analytical')
plt.legend()
plt.show()
Explanation: And we can then compare the final locations of the particles from the AdvectionRK4 and AdvectionAnalytical simulations
End of explanation
def bickleyjet_fieldset(times, xdim=51, ydim=51):
Bickley Jet Field as implemented in Hadjighasem et al 2017, 10.1063/1.4982720
U0 = 0.06266
L = 1770.
r0 = 6371.
k1 = 2 * 1 / r0
k2 = 2 * 2 / r0
k3 = 2 * 3 / r0
eps1 = 0.075
eps2 = 0.4
eps3 = 0.3
c3 = 0.461 * U0
c2 = 0.205 * U0
c1 = c3 + ((np.sqrt(5)-1)/2.) * (k2/k1) * (c2 - c3)
a, b = np.pi*r0, 7000. # domain size
lon = np.linspace(0, a, xdim, dtype=np.float32)
lat = np.linspace(-b/2, b/2, ydim, dtype=np.float32)
dx, dy = lon[2]-lon[1], lat[2]-lat[1]
U = np.zeros((times.size, lat.size, lon.size), dtype=np.float32)
V = np.zeros((times.size, lat.size, lon.size), dtype=np.float32)
P = np.zeros((times.size, lat.size, lon.size), dtype=np.float32)
for i in range(lon.size):
for j in range(lat.size):
x1 = lon[i]-dx/2
x2 = lat[j]-dy/2
for t in range(len(times)):
time = times[t]
f1 = eps1 * np.exp(-1j * k1 * c1 * time)
f2 = eps2 * np.exp(-1j * k2 * c2 * time)
f3 = eps3 * np.exp(-1j * k3 * c3 * time)
F1 = f1 * np.exp(1j * k1 * x1)
F2 = f2 * np.exp(1j * k2 * x1)
F3 = f3 * np.exp(1j * k3 * x1)
G = np.real(np.sum([F1, F2, F3]))
G_x = np.real(np.sum([1j * k1 * F1, 1j * k2 * F2, 1j * k3 * F3]))
U[t, j, i] = U0 / (np.cosh(x2/L)**2) + 2 * U0 * np.sinh(x2/L) / (np.cosh(x2/L)**3) * G
V[t, j, i] = U0 * L * (1./np.cosh(x2/L))**2 * G_x
data = {'U': U, 'V': V, 'P': P}
dimensions = {'lon': lon, 'lat': lat, 'time': times}
allow_time_extrapolation = True if len(times) == 1 else False
fieldset = FieldSet.from_data(data, dimensions, mesh='flat', allow_time_extrapolation=allow_time_extrapolation)
fieldset.U.interp_method = 'cgrid_velocity'
fieldset.V.interp_method = 'cgrid_velocity'
return fieldset
fieldsetBJ = bickleyjet_fieldset(times=np.arange(0, 1.1, 0.1)*86400)
Explanation: The final locations are similar, but not exactly the same. Because everything else is the same, the difference has to be due to the different kernels. Which one is more correct, however, can't be determined from this analysis alone.
Bickley Jet example
Let's as a second example, do a similar analysis for a Bickley Jet, as detailed in e.g. Hadjighasem et al (2017).
End of explanation
fieldsetBJ.add_constant('halo_west', fieldsetBJ.U.grid.lon[0])
fieldsetBJ.add_constant('halo_east', fieldsetBJ.U.grid.lon[-1])
fieldsetBJ.add_periodic_halo(zonal=True)
def ZonalBC(particle, fieldset, time):
if particle.lon < fieldset.halo_west:
particle.lon += fieldset.halo_east - fieldset.halo_west
elif particle.lon > fieldset.halo_east:
particle.lon -= fieldset.halo_east - fieldset.halo_west
Explanation: Add a zonal halo for periodic boundary conditions in the zonal direction
End of explanation
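The wrap-around arithmetic in ZonalBC can be checked in isolation with plain numbers before running the simulation; a minimal sketch with a hypothetical domain [0, 100) that mirrors the kernel logic.
# Sketch: the same periodic wrap as ZonalBC, applied to plain floats (domain bounds are made up)
def wrap_lon(lon, halo_west=0.0, halo_east=100.0):
    width = halo_east - halo_west
    if lon < halo_west:
        lon += width
    elif lon > halo_east:
        lon -= width
    return lon

print(wrap_lon(-3.0), wrap_lon(104.5), wrap_lon(42.0))  # 97.0 4.5 42.0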
X, Y = np.meshgrid(np.arange(0, 19900, 100), np.arange(-100, 100, 100))
psetAA = ParticleSet(fieldsetBJ, pclass=ScipyParticle, lon=X, lat=Y, time=0)
output = psetAA.ParticleFile(name='bickleyjetAA.nc', outputdt=delta(hours=1))
psetAA.execute(AdvectionAnalytical+psetAA.Kernel(ZonalBC),
dt=np.inf,
runtime=delta(days=1),
output_file=output)
Explanation: And simulate a set of particles on this fieldset, using the AdvectionAnalytical kernel
End of explanation
output.close()
plotTrajectoriesFile('bickleyjetAA.nc', mode='movie2d_notebook')
Explanation: And then show the particle trajectories in an animation
End of explanation
psetRK4 = ParticleSet(fieldsetBJ, pclass=JITParticle, lon=X, lat=Y)
psetRK4.execute(AdvectionRK4+psetRK4.Kernel(ZonalBC),
dt=delta(minutes=5), runtime=delta(days=1))
Explanation: Like with the double gyre above, we can also compute these trajectories with the AdvectionRK4 kernel
End of explanation
plt.plot(psetRK4.lon, psetRK4.lat, 'r.', label='RK4')
plt.plot(psetAA.lon, psetAA.lat, 'b.', label='Analytical')
plt.legend()
plt.show()
Explanation: And finally, we can again compare the end locations from the AdvectionRK4 and AdvectionAnalytical simulations
End of explanation |
4,527 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The aim is to describe how the TICPE excise duty amounts have evolved since 1993
Import general modules
Step1: Import functions specific to OpenFisca Indirect Taxation
Step2: Retrieve the legislation parameters
Step3: Produce the graphs | Python Code:
import seaborn
seaborn.set_palette(seaborn.color_palette("Set2", 12))
%matplotlib inline
Explanation: The aim is to describe how the TICPE excise duty amounts have evolved since 1993
Import general modules
End of explanation
from openfisca_france_indirect_taxation.examples.utils_example import graph_builder_bar_list
from openfisca_france_indirect_taxation.examples.dataframes_from_legislation.get_accises import \
get_accise_ticpe_majoree
Explanation: Import functions specific to OpenFisca Indirect Taxation
End of explanation
liste = ['ticpe_gazole', 'ticpe_super9598', 'super_plombe_ticpe']
df_accises = get_accise_ticpe_majoree()
Explanation: Retrieve the legislation parameters
End of explanation
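Before plotting, it can be useful to inspect the DataFrame returned by get_accise_ticpe_majoree; a quick sketch (only the three columns plotted below are guaranteed by this notebook, any other columns are not).
# Quick inspection of the legislation-parameters DataFrame
print(df_accises.columns.tolist())
df_accises.head()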
graph_builder_bar_list(df_accises['accise majoree sans plomb'], 1, 1)
graph_builder_bar_list(df_accises['accise majoree diesel'], 1, 1)
graph_builder_bar_list(df_accises['accise majoree super plombe'], 1, 1)
Explanation: Produce the graphs
End of explanation |
4,528 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
import numpy as np
import scipy as sp
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv("hanford.csv")
df
Explanation: 2. Read in the hanford.csv file
End of explanation
df.describe()
Explanation: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
df.plot(kind="scatter",x="Exposure",y="Mortality")
r = df['Exposure'].corr(df['Mortality'])
print(r)
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
lm = smf.ols(formula="Mortality ~ Exposure", data=df).fit()
intercept, slope = lm.params
lm.params
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
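Step 6 asks for the regression line overlay and the coefficient of determination, which the cells below do not actually produce; a minimal sketch reusing the fitted lm and the matplotlib import above.
# Sketch: overlay the fitted line on the scatter plot and report r^2
ax = df.plot(kind="scatter", x="Exposure", y="Mortality")
ax.plot(df["Exposure"], lm.predict(df), color="red")
plt.show()
print("r^2 =", lm.rsquared)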
def predict_mr(exposure):
return intercept + float(exposure) * slope
predict_mr(10)
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
4,529 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read in parquet files from pre-processing
Step1: To make templates
Step2: Add cluster labels to sentences and mentions (entities)
Step3: Get the size of each cluster
Step4: Get the distribution of CUIs in each cluster
How many clusters on average does a CUI appear in
Step5: Max and average number of clusters that CUIs appear in
Step6: The preferred text of cuis that occur in the most number of clusters
Step7: Average number of unique CUIs in a cluster
Step8: Get the cluster label frequency by sentence position
Step9: Get the number of documents in each cluster
Step10: Generating Notes
Get all the entities for the document
Step11: Drop templates that contain entities not in the document
Step12: Choose a cluster based on cluster frequency for that sentence position
Step13: Choose a template from the cluster based on frequency for that sentence position
Step14: Fill template blank
Choosing text
Select text to fill the template blank based on the frequency of strings for the CUI associated with the mention
Step15: Write a full note
Step16: Write until all mentions have been used | Python Code:
# do the reading
templates = pd.read_parquet('data/processed_dfs/templates.parquet' )
sentences = pd.read_parquet('data/processed_dfs/sentences.parquet')
mentions = pd.read_parquet('data/processed_dfs/mentions.parquet')
umls = pd.read_parquet('data/processed_dfs/umls.parquet')
sentences.head()
mentions.head()
templates.head()
Explanation: Read in parquet files from pre-processing
End of explanation
print(len(templates))
# templates = templates.drop_duplicates('sem_template')
# print(len(templates))
def get_vectors(df):
tf = TfidfVectorizer()
return tf.fit_transform(df['sem_template'])
# Only use unique templates
vectors = get_vectors(templates)
vecd = vectors.todense()
print(vectors.shape)
cluster_sizes = [70, 80, 90, 100, 110, 120, 125, 130, 140, 150, 200]
for n_cluster in cluster_sizes:
km = KMeans( init='k-means++', max_iter=100, n_init=1,
n_clusters=n_cluster, verbose=False)
km.fit(vectors)
predictions = km.predict(vectors)
sil_score = silhouette_score(vectors, predictions, metric='euclidean')
print(f"Silhouette score for n_clusters={n_cluster}:")
print(sil_score)
km = KMeans( init='k-means++', max_iter=100, n_init=1,
n_clusters=120, verbose=False)
km.fit(vectors)
predictions = km.predict(vectors)
sil_score = silhouette_score(vectors, predictions, metric='euclidean')
# print(km.cluster_centers_.shape)
# order_centroids = km.cluster_centers_.argsort()[:, ::-1]
# terms = tf.get_feature_names()
# for i in range(50):
# print("Cluster %d:" % i, end='')
# for ind in order_centroids[i, :15]:
# print(' %s' % terms[ind], end='')
# print()
predictions = km.predict(vectors)
silhouette_score(vectors, predictions, metric='euclidean')
templates['cluster'] = predictions
templates.head()
sentences.shape
Explanation: To make templates:
1. Make an empty data frame with the fields to hold template info
2. For each sentence:
* Get the predicates for that sentence
* trim the frameset after the '.'
* Get the mentions
* Get mention type
* Append umls cui to end of mention (just take the first one)
* Order the predicates and mentions by begin offset
* Combine into a string separated by spaces
* Write the template and semantic template to the dataframe
End of explanation
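The template-construction procedure listed above was run during pre-processing and is not repeated in this notebook; the following sketch shows what one pass over a sentence could look like. It assumes a predicates DataFrame with sent_id, begin and frameset columns (hypothetical here) plus the begin, cui and mention_type columns used elsewhere in this notebook.
# Sketch: build one semantic template for a sentence (predicates_df is hypothetical)
def build_template(sent_id, mentions_df, predicates_df):
    ments = mentions_df[mentions_df.sent_id == sent_id]
    preds = predicates_df[predicates_df.sent_id == sent_id]
    parts = []
    for _, m in ments.iterrows():
        parts.append((m.begin, f"{m.mention_type}_{m.cui}"))   # mention type with its first CUI appended
    for _, p in preds.iterrows():
        parts.append((p.begin, p.frameset.split('.')[0]))      # trim the frameset after the '.'
    parts.sort(key=lambda x: x[0])                             # order by begin offset
    return ' '.join(token for _, token in parts)               # combine into a space-separated template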
sentences = sentences.merge(templates[['sent_id', 'cluster']], on='sent_id')
mentions = mentions.merge(templates[['sent_id', 'cluster']], on='sent_id')
sentences.head()
mentions.head()
Explanation: Add cluster labels to sentences and mentions (entities)
End of explanation
pdf = pd.DataFrame(predictions, columns=['cluster'])
cluster_counts = pdf.groupby('cluster').size().reset_index(name='count')
cluster_counts['count'].plot(kind='bar')
cluster_counts['frequency'] = cluster_counts['count'] / cluster_counts['count'].sum()
cluster_counts.head()
Explanation: Get the size of each cluster
End of explanation
cui_clust_freq = mentions.groupby(['cui', 'cluster']).size().reset_index(name='cluster_count')
cui_clust_freq.sort_values('cluster_count', ascending=False).head(10)
num_clusters_per_cui = cui_clust_freq.groupby('cui').size().reset_index(name='num_clusters')
# avg_num_clusters = .agg({'num_clusters': 'mean'})
num_clusters_per_cui.sort_values('num_clusters', ascending=False).head(10)
Explanation: Get the distribution of CUIs in each cluster
How many clusters on average does a CUI appear in
End of explanation
print("Max number of clusters that a cui appears in")
print(num_clusters_per_cui.agg({'num_clusters': 'max'}))
print('Average number of clusters that cuis appear in:')
print(num_clusters_per_cui.agg({'num_clusters': 'mean'}))
max_clusters = num_clusters_per_cui[num_clusters_per_cui['num_clusters'] == 23]
max_clusters
Explanation: Max and average number of clusters that CUIs appear in
End of explanation
mentions[mentions['cui'].isin(max_clusters['cui'])]['preferred_text'].unique()
Explanation: The preferred text of cuis that occur in the most number of clusters
End of explanation
num_cuis_in_cluster_freq = cui_clust_freq[['cui', 'cluster']] \
.groupby('cluster') \
.size() \
.reset_index(name="num_cuis_in_cluster")
num_cuis_in_cluster_freq.sort_values('num_cuis_in_cluster', ascending=False)
num_cuis_in_cluster_freq.agg({'num_cuis_in_cluster': 'mean'})
Explanation: Average number of unique CUIs in a cluster
End of explanation
cluster_label_by_sentence_pos = pd.crosstab(templates['cluster']
,templates['sentence_number']
).apply(lambda x: x / x.sum(), axis=0)
cluster_label_by_sentence_pos
Explanation: Get the cluster label frequency by sentence position
End of explanation
mentions[mentions['cluster'] == 1]
umls[umls['xmi_id'].isin([17309, 11768, 11337, 4456, 15539, 16616, 10061, 13422]) ]
sentences[sentences['sent_id'] == 'f918cc4a-2f8b-4c5e-a904-3de84efe714b']
notes = pd.read_parquet('data/note-events.parquet', engine='fastparquet')
notes[notes['ROW_ID'] == 333908]['TEXT'].iloc[0][1368:1372]
Explanation: Get the number of documents in each cluster
End of explanation
doc_ids = templates['doc_id'].unique()
notes = notes[notes['ROW_ID'].isin(doc_ids)]
notes = notes.reset_index(drop=True)
# notes = notes.drop(['CHARTDATE','CHARTTIME','STORETIME','CGID','ISERROR'],axis=1)
doc = notes.sample(n=1)
doc_id = doc['ROW_ID'].iloc[0]
doc_id
Explanation: Generating Notes
Get all the entities for the document
End of explanation
ents_in_doc = mentions[mentions['doc_id'] == doc['ROW_ID'].iloc[0]]
ments_in_doc = ents_in_doc.mention_type.unique()
# print(ments_in_doc)
ents_in_doc.head()
# get mentions where mention_type is in doc entity types
print(len(mentions))
doc_ments = mentions[mentions.cui.isin(ents_in_doc.cui.unique())]
# print(len(doc_ments))
doc_ments.head()
# get templates that have the corresponding sentence ids from doc_ments
template_candidates = templates[templates.sent_id.isin(doc_ments.sent_id)]
template_candidates.head()
Explanation: Drop templates that contain entities not in the document
End of explanation
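The filter above only requires that a template's sentence shares some CUIs with the document; the TODO later in this notebook notes that a stricter check is eventually needed. A sketch of that stricter filter, using only columns already present in mentions and ents_in_doc.
# Sketch: keep only templates whose mention types are all available in the document
def template_fully_covered(sent_id, all_mentions, doc_entities):
    needed = set(all_mentions[all_mentions.sent_id == sent_id].mention_type)
    return needed.issubset(set(doc_entities.mention_type))

strict_mask = template_candidates.sent_id.apply(template_fully_covered, args=(mentions, ents_in_doc))
print(len(template_candidates), len(template_candidates[strict_mask]))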
candidate_cluster_labels = template_candidates.cluster.sort_values().unique()
candidate_clusters = cluster_label_by_sentence_pos.iloc[candidate_cluster_labels]
sent_pos = 0
# remove cluster labels not present in template candidates
selected_cluster = candidate_clusters.sample(
n=1,
weights=candidate_clusters.loc[:,sent_pos]
).iloc[0].name
selected_cluster
# templates_in_cluster = template_candidates[template_candidates['cluster'] == selected_cluster.iloc[0].index]
cluster_templates = template_candidates[template_candidates.cluster == selected_cluster]
cluster_templates.head()
Explanation: Choose a cluster based on cluster frequency for that sentence position
End of explanation
# templates_at_pos = cluster_templates[cluster_templates.sentence_number == sent_pos]
template = cluster_templates.sample(n=1)
template
# sentences[sentences.sent_id == 'deef8a81-b222-4d1f-aa3f-7dfc160cb428'].iloc[0].text
Explanation: Choose a template from the cluster based on frequency for that sentence position
End of explanation
# get mentions in this template
template_id = template.iloc[0]['sent_id']
ments_in_temp = mentions[mentions.sent_id == template_id]
ments_in_temp
# Get the sentence for that template
raw_sentence = sentences[sentences.sent_id == template_id]
raw_sentence.iloc[0].text
# Select entities from entities in the document that match that entity type
#
ments_in_temp
# ments_in_temp.drop(ments_in_temp.loc[482].name, axis=0)
concepts = umls[umls.cui == ments_in_temp.iloc[0].cui]
concepts.head()
# ents_in_doc
# txt_counts.sample(n=1, weights=txt_counts.cnt).iloc[0].text
def template_filler(template, sentences, entities, all_mentions):
# print(template.sem_template)
num_start = len(entities)
template_id = template.iloc[0]['sent_id']
ments_in_temp = all_mentions[all_mentions.sent_id == template_id]
raw_sentence = sentences[sentences.sent_id == template_id]
# print(f'raw sent df size: {len(raw_sentence)}')
# print(template_id)
sent_begin = raw_sentence.iloc[0].begin
sent_end = raw_sentence.iloc[0].end
raw_text = raw_sentence.iloc[0].text
replacements = []
# rows_to_drop = []
# print('Mention types in template')
# print(ments_in_temp.mention_type.unique())
# print('types in entities')
# print(entities.mention_type.unique())
for i, row in ments_in_temp.iterrows():
ents_subset = entities[entities.mention_type == row.mention_type]
if len(ents_subset) == 0:
print('Empty list of doc entities')
print(entities.mention_type)
print(row.mention_type)
break
rand_ent = ents_subset.sample(n=1)
entities = entities[entities['id'] != rand_ent.iloc[0]['id']]
# rows_to_drop.append(rand_ent.iloc[0].name)
ent_cui = rand_ent.iloc[0].cui
# print(ent_cui)
span_text = get_text_for_mention(ent_cui, all_mentions)
replacements.append({
'text' : span_text,
'begin' : row.begin - sent_begin,
'end' : row.end - sent_begin,
})
new_sentence = ''
for i, r in enumerate(replacements):
if i == 0:
new_sentence += raw_text[0 : r['begin'] ]
else:
new_sentence += raw_text[replacements[i-1]['end'] : r['begin']]
new_sentence += r['text']
if(len(replacements) > 1):
new_sentence += raw_text[replacements[-1]['end'] : ]
# clean up
num_end = len(entities)
# print(f"Dropped {num_start - num_end} rows")
return new_sentence, entities
# Find all the text associated with the cui of the mention in the template
# choose a text span based on frequency
def get_text_for_mention(cui, mentions):
txt_counts = mentions[mentions.cui == cui].groupby('text').size().reset_index(name='cnt')
return txt_counts.sample(n=1, weights=txt_counts.cnt).iloc[0].text
Explanation: Fill template blank
Choosing text
Select text to fill the template blank based on the frequency of strings for the CUI associated with the mention
End of explanation
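A quick single-template check of the two helpers above, using the template and document entities sampled earlier; output will vary because of the random sampling.
# Sketch: fill the sampled template once and show what it produced
filled_sentence, remaining_ents = template_filler(template, sentences, ents_in_doc.copy(), doc_ments)
print(filled_sentence)
print(f"entities left for this document: {len(remaining_ents)}")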
# Select document to write note for
# doc = notes.sample(n=1)
# doc_id = doc['ROW_ID'].iloc[0]
doc_id = 374185
# Get all the entities in the chosen document
ents_in_doc = mentions[mentions['doc_id'] == doc_id]
new_doc_sentences = []
sent_pos = 0
while len(ents_in_doc) > 0:
# print(f"Sentence position: {sent_pos}")
# print(f"Length of remaining entities: {len(ents_in_doc)}")
# Get list of possible mentions based on CUIs found in the document
mentions_pool = mentions[(mentions.cui.isin(ents_in_doc.cui.unique()))
& (mentions.mention_type.isin(ents_in_doc.mention_type.unique()))]
# Get template pool based on mentions pool
# TODO: Need to only choose templates where all the mentions are in `ents_in_doc`
template_candidates = templates[templates.sent_id.isin(mentions_pool.sent_id)]
# ts = len(template_candidates.sent_id.unique())
# ms = len(mentions_pool.sent_id.unique())
# print(ts, ms)
def all_ents_present(row, doc_ents, ments_pool):
# Get mentions in this template
all_temp_ments = ments_pool[ments_pool['sent_id'] == row['sent_id']]
available_mentions = all_temp_ments[all_temp_ments['mention_type'].isin(doc_ents['mention_type'])]
return (len(available_mentions) > 0)
mask = template_candidates.apply(all_ents_present,
args=(ents_in_doc, mentions_pool),
axis=1)
template_candidates = template_candidates[mask]
# print(f'num templates: {len(template_candidates)}')
#If there are no more possible templates then break
if len(template_candidates) == 0:
break
# Get candidate clusters based on template pool
# Remove the cluster labels that aren't present in template bank
candidate_cluster_labels = template_candidates.cluster.sort_values().unique()
candidate_clusters = cluster_label_by_sentence_pos.iloc[candidate_cluster_labels]
# print(f"Num clusters: {len(candidate_clusters)}")
# Select cluster based on frequency at sentence position
selected_cluster = None
try:
selected_cluster = candidate_clusters.sample(
n=1,
weights=candidate_clusters.loc[:,sent_pos]
).iloc[0].name
except:
# It's possible the clusters we chose don't appear at that position
# so we can choose randomly
# print('choosing random cluster')
selected_cluster = candidate_clusters.sample(n=1).iloc[0].name
# print('selected cluster:')
# print(selected_cluster)
cluster_templates = template_candidates[template_candidates.cluster == selected_cluster]
# Choose template from cluster at random
template = cluster_templates.sample(n=1)
template_id = template.iloc[0]['sent_id']
# Get mentions in the template
ments_in_temp = mentions[mentions.sent_id == template_id]
# Write the sentence and update entities found in the document !!!
t, ents_in_doc = template_filler(template, sentences, ents_in_doc, mentions_pool)
new_doc_sentences.append(t)
sent_pos += 1
'\n'.join(new_doc_sentences)
notes[notes.ROW_ID == 374185].iloc[0].TEXT
Explanation: Write a full note
End of explanation
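As a rough sanity check, the generated note can be compared with the original text of the same document; a short sketch using objects already defined above.
# Sketch: compare generated vs. original note for the chosen document
generated_note = '\n'.join(new_doc_sentences)
original_note = notes[notes.ROW_ID == doc_id].iloc[0].TEXT
print(f"generated: {len(generated_note)} chars in {len(new_doc_sentences)} sentences")
print(f"original : {len(original_note)} chars")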
mentions.groupby('doc_id').size().reset_index(name='cnt').sort_values('cnt').head(10)
mentions[mentions.doc_id == 476781]
Explanation: Write until all mentions have been used
End of explanation |
4,530 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST classification with Vowpal Wabbit
Step1: Train
I found some help with parameters here
Step2: Predict
-t
is for test file
-i
specifies the model file created earlier
-p
where to store the class predictions [1,10]
Step4: Analyze | Python Code:
from __future__ import division
import re
import numpy as np
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
#%qtconsole
Explanation: MNIST classification with Vowpal Wabbit
End of explanation
!rm train.vw.cache
!rm mnist_train.model
!vw -d data/mnist_train.vw -b 19 --oaa 10 -f mnist_train.model -q ii --passes 30 -l 0.4 --early_terminate 3 --cache_file train.vw.cache --power_t 0.6
Explanation: Train
I found some help with parameters here:
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Tutorial
* https://github.com/JohnLangford/vowpal_wabbit/wiki/Command-line-arguments
--cache_file train.vw.cache
converts the training data (data/mnist_train.vw here) to a binary file for faster future processing.
Next time we go through the model building, we will use the cache file
and not the text file.
--passes
is the number of passes
--oaa 10
refers to the one-against-all (OAA) learning algorithm with 10 classes (1 to 10)
-q ii
creates interaction between variables in the two referred to namespaces
which here are the same i.e. 'image' Namespace.
An interaction variable is created from two variables 'A' and 'B'
by multiplying the values of 'A' and 'B'.
-f mnist_train.model
refers to file where model will be saved.
-b
refers to number of bits in the feature table.
The default is 18 bits, but since the -q ii interaction features greatly increase the number of features,
the value of '-b' has been increased (to 19 in the command above).
-l rate
Adjust the learning rate. Defaults to 0.5
--power_t p
This specifies the power on the learning rate decay. You can adjust this --power_t p where p is in the range [0,1]. 0 means the learning rate does not decay, which can be helpful when state tracking, while 1 is very aggressive. Defaults to 0.5
End of explanation
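The -q ii flag described above builds quadratic interaction features inside the single 'image' namespace; a tiny pure-Python illustration of what such an interaction term is (this is not VW's hashed implementation, just the idea).
# Sketch: quadratic interactions = pairwise products of the original feature values
pixels = {"p3": 0.5, "p7": 0.2}            # two toy pixel features
interactions = {a + "*" + b: va * vb
                for a, va in pixels.items()
                for b, vb in pixels.items()}
print(interactions)                        # e.g. p3*p7 -> 0.1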
!rm predict.txt
!vw -t data/mnist_test.vw -i mnist_train.model -p predict.txt
Explanation: Predict
-t
is for test file
-i
specifies the model file created earlier
-p
where to store the class predictions [1,10]
End of explanation
y_true=[]
with open("data/mnist_test.vw", 'rb') as f:
for line in f:
m = re.search('^\d+', line)
if m:
found = m.group()
y_true.append(int(found))
y_pred = []
with open("predict.txt", 'rb') as f:
for line in f:
m = re.search('^\d+', line)
if m:
found = m.group()
y_pred.append(int(found))
target_names = ["1", "2", "3", "4", "5", "6", "7", "8", "9", "10"] # NOTE: plus one
def plot_confusion_matrix(cm,
target_names,
title='Proportional Confusion matrix: VW on 784 pixels',
cmap=plt.cm.Paired):
"""Given a confusion matrix (cm), make a nice plot;
see the scikit-learn documentation for the original done for the iris dataset."""
plt.figure(figsize=(8, 6))
plt.imshow((cm/cm.sum(axis=1)), interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(target_names))
plt.xticks(tick_marks, target_names, rotation=45)
plt.yticks(tick_marks, target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
cm = confusion_matrix(y_true, y_pred)
print(cm)
model_accuracy = sum(cm.diagonal())/len(y_pred)
model_misclass = 1 - model_accuracy
print("\nModel accuracy: {0}, model misclass rate: {1}".format(model_accuracy, model_misclass))
plot_confusion_matrix(cm, target_names)
Explanation: Analyze
End of explanation |
4,531 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using spaCy for Text Preprocessing
Date
Step1: 1. What is spaCy
spaCy is a free, open-source library for NLP in Python
Providing optimized pipelines for taking models to production, i.e., facilitating integration with other components, and scalability
Current version (spaCy v3, released in Feb 2021) comes with pre-trained deep learning models, including state-of-the-art transformers, trained over huge data sets of documents
Available models can be fine-tuned to better fit specific document collections characteristics
SpaCy is intended to be used as a component of a more complex system, not as final application itself, i.e., it cannot be directly used to implement a chatbot system, a sentiment analyzer, etc ... but it provides a lot of tools that are easy to integrate for taking such systems into production.
1.1. SpaCy Features
spaCy provides a lot of features similar to those we have already discussed for the NLTK library.
spaCy makes it very easy to concatenate several of these operations
Step2: In order to use a specific model you need to download it first. If working locally, you will need to download the model just once; however, in the cloud your environment resets and you will need to download the model on each session.
For this tutorial we will use an English model of medium size, the smallest model that incorporates word embeddings. For a complete list of available models, please refer to the spaCy website.
Step3: 2.2. Obtaining Model Info
You can retrieve the most relevant information about available language models using the following command
spacy.info('model_name')
Note that you can only apply this command on models that have already been downloades. Otherwise, an exception is thrown.
Exercise 1
Step4: 3. Spacy Data Structures and Processing Pipelines
3.1. Introduction and basic usage
Processing texts with spaCy is really easy. You just need to load the model, and pass any text you wish to process. SpaCy will execute a series of transformations (a pipeline) and return a Doc object. The returned object has all information extracted from the original text, and provides a number of features to facilitate accessing the desired information.
<figure>
<center>
<img src='https
Step5: Note how in the example we could easily access all lemmas and entities found by iterating over the document (variable doc) itself or over its entitities (doc.ents)
3.2. Architecture
Central data structures
Step6: Exercise 3
Step7: 3.3. Usual Pipelines Components and Annotations
4. Linguistic Features
In this Section we will review a set of liguistic features provided by most spaCy pretrained pipelines. We will focus mainly on pipeline components that are relevant to build Bag of Words (BoW) representations of text.
4.1. Tokenizer
4.1.1. Word Tokenization
The Tokenizer is always the first component of the spaCy pretrained pipelines.
It has the important role of producing a Doc object out of a text string
It first splits the string using blank spaces
Then tokens are processed sequentially from left to right performing two operations
First, language-specific rules and exceptions are applied (e.g., in English "don't" is splitted into two separate tokens, but U.K. is kept as one token)
Second, prefixes or suffixes are identified. This is relevant to separate punctuation marks from the main tokens
It is important to note that tokenization rules, as well as exceptions are language specific. This means you need to make sure that the languages of the text and the selected Tokenizer match, otherwise you could get unexpected results.
Once the Doc object has been created, you can easily iterate over the identified tokens. Note also that the original text is preserved. You can access the string representation of Doc, Span, Token and even Lexeme objects by using the text attribute.
Step8: Unlike other spaCy components, the Tokenizer is not a statistical model. A finite set of rules and exceptions are encoded. If you wish to modify its behavior, you cannot retrain the component using labeled data. Instead, you would need to extend the list of rules and exceptions.
The following example adds an exception to expand word MUSD into tokens MUSD. Newly added exceptions are always applied after previous rules. Note also that exceptions must preserve the original text. Otherwise, an exception will be raised.
Step9: 4.1.2. Sentence Tokenization
Note that with the Doc object you can also iterate over sentences
Step10: However, be aware that sentences are not identified by the Tokenizer element we have just described, but sentence tokenization is carried out instead as a subproduct of the dependency extraction component, that we will shortly review.
This can be a problem for multilingual documents, since all components of the previously used pipeline assumes an input in English language. In this case, what we normally do is
Step11: Example
Step12: 4.2. Part of Speech Tagging
The next component available in most spaCy pipelines is a POS tagger. The role of this component is to predict the part of speech of every detected token. This is where the statistical models come in. These components have been trained using machine learning methods over a large set of annotated data. If necessary, the models can be fine-tuned providing additional training data. This can be sometimes helpful when working in specialized domains.
The attributes calculated by all pipeline components after the Tokenizer are added as additional attributes to each of the patterns.
POS tags can be accessed as Token.pos_ (POS strings are hashed, and the underscore facilitates accessing the readable format of the variable).
Finer tags can be accessed as Token.tag_
The following code fragment allows us to represent the calculated POS for each token in the provided text.
Step13: Exercise 5
Step14: 4.3. Dependency Parser
The dependency parser aims at syntatic analysis of the text. It is the component that identifies the relation among the tokens in the given text, e.g., noun-chunks, verb objects, dependent clauses, etc.
Since our goal here is to use the pipeline to obtain BoW representation of the documents, we will not go deeper in the description of this component. If you are interested in learning more about dependy parsing in spaCy, you can check the official documentation
Since we will not be using this information, we can disable the component in the pipeline. This will also speed up document preprocessing.
Step15: 4.4. Named Entity Recognition
According to spaCy documentation
Step16: Discussion
Step17: 4.5. Lemmatization
English lemmatizer in spaCy consists of the following elements
Step18: 4.6. Other Annotations
Exercise 6 | Python Code:
# Common imports
import numpy as np
import pandas as pd
import zipfile as zp
from termcolor import colored
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
#To wrap long text lines
from IPython.display import HTML, display
def set_css():
display(HTML('''
<style>
pre {
white-space: pre-wrap;
}
</style>
'''))
get_ipython().events.register('pre_run_cell', set_css)
#For fancy table Display
%load_ext google.colab.data_table
Explanation: Using spaCy for Text Preprocessing
Date: Mar 16, 2021
Author: Jerónimo Arenas-García ([email protected])
Version 1.0
This notebook is based on the spaCy 101 course and documentation available at the spaCy website.
Our goal here is to present a basic overview of spacy that covers the elements necessary for implementing the preprocessing pipelines that we will need for obtaining the Bag of Word Representation of the document.
A more Advanced Tutorial by Ines Montani, one of the main developers of the library, is proposed for further study of interested students. In that tutorial, you can learn how to use spaCy matching functionalities, or how to retrain neural network models using your own training data.
End of explanation
!pip install --upgrade spacy
import spacy
Explanation: 1. What is spaCy
spaCy is a free, open-source library for NLP in Python
Providing optimized pipelines for taking models to production, i.e., facilitating integration with other components, and scalability
Current version (spaCy v3, released in Feb 2021) comes with pre-trained deep learning models, including state-of-the-art transformers, trained over huge data sets of documents
Available models can be fine-tuned to better fit specific document collections characteristics
SpaCy is intended to be used as a component of a more complex system, not as final application itself, i.e., it cannot be directly used to implement a chatbot system, a sentiment analyzer, etc ... but it provides a lot of tools that are easy to integrate for taking such systems into production.
1.1. SpaCy Features
spaCy provides a lot of features similar to those we have already discussed for the NLTK library.
spaCy makes it very easy to concatenate several of these operations:
Pipelines allow to concatenate a number of components to carry out the desired preprocessing tasks
Specific components can be enabled or disabled if necessary
It is possible to add ad-hoc components
Other developers are providing specific components ready to use in spaCy, e.g., spaCy langdetect is a wrapper for the langdetect library for language classification.
1.2. Language and Models
spaCy v3 comes with 55 pre-trained models for 17 languages. Details and installation instructions can be found here.
For most of these languages, three models are available, e.g.:
- en_core_web_sm
- en_core_web_md
- en_core_web_lg
[Convention for the model name is language_core_source_size]
These models are optimized for CPU usage, but they still incorporate neural networks for certain components.
Medium and Large models come with word-embeddings available, while small model does not
The larger the model, the higher the accuracy, but also the longer it takes to analyze a text fragment. I.e., accuracy comes at the cost of larger networks and, therefore, more computation
Accuracy of pipeline components are provided for specific annotated datasets
For English, Spanish, French, German, and Chinese, a fourth model (e.g. en_core_web_trf) based on transformers is also provided. These models are optimized to run over a GPU
1.3. Performance
--- WPS: Words per second
2. Using spaCy in Google Colab
2.1. Installing spaCy and loading language models
You can check that Google Colab already comes with spaCy v2 preinstalled. However, since in this notebook we will be using the new v3 release, you will need to upgrade to the latest available version
End of explanation
!python -m spacy download en_core_web_md
Explanation: In order to use a specific model you need to download it first. If working locally, you will need to download the model just once; however, in the cloud your environment resets and you will need to download the model on each session.
For this tutorial we will use an English model of medium size, the smallest model that incorporates word embeddings. For a complete list of available models, please refer to the spaCy website.
End of explanation
spacy.info('en_core_web_md')
Explanation: 2.2. Obtaining Model Info
You can retrieve the most relevant information about available language models using the following command
spacy.info('model_name')
Note that you can only apply this command to models that have already been downloaded. Otherwise, an exception is thrown.
Exercise 1: Run the following command and find the information related to
- Components included in the pipeline
- Are all components enabled?
- How many types of entities can be recognized by the corresponding component?
- What Part-of-Speech elements can you recognize?
- What is the dimension of the word-embeddings incorporated in the model?
Detailed information about some specific components of the pipeline, as well as how they can be used, will be studied in the next sections.
End of explanation
text = 'Modern condensed matter physics research has produced novel materials with fundamental properties that underpin a remarkable number of cutting-edge technologies. It is now generally accepted that novel materials are necessary for critical advances in technologies and whoever discovers novel materials generally controls the science and technology of the future. Transition metal oxides have attracted enormous interest within both the basic and applied science communities. However, for many decades, the overwhelming balance of effort was focused on the 3d-elements (such as iron, copper, etc.) and their compounds; the heavier 4d- and 5d-elements (such as ruthenium, iridium, etc., which constitute two thirds of the d-elements listed in the Periodic Table) and their compounds have been largely ignored until recently. The principal investigator seeks to discover novel materials containing 4d- and/or 5d-elements and understand how they offer wide-ranging opportunities for the discovery of new physics and, ultimately, new device paradigms. This project also provides rigorous training to all students involved, focusing on synthesis and characterization techniques covering a broad spectrum of materials and experimental probes available in the principal investigator\'s laboratory. Technical Abstract: Physics driven by spin-orbit interactions is among the most important topics in contemporary condensed matter physics. Since the spin-orbit interaction is comparable to the on-site Coulomb and other relevant interactions, it creates a unique balance between competing interactions that drive complex behaviors and exotic states not observed in other materials. The project encompasses a systematic effort to elucidate physics of novel phenomena in spin-orbit-coupled and correlated materials and a rigorous search for new materials having exotic ground states. This project focuses on the following areas: (1) Novel phenomena at high pressures and high magnetic fields, (2) Unusual correlations between the insulating gap and magnetic transition in iridates and ruthenates, (3) Exotic metallic and superconducting states in iridates, (4) Mott insulators with "intermediate-strength" spin-orbit interaction and other competing energies, and (5) Single-crystal synthesis and search for novel materials. The principal investigator is one of a few key pioneers who have initiated seminal studies on iridates and, before that, ruthenates, and has comprehensive facilities and proven expertise for single-crystal synthesis and wide-ranging studies of structural, transport, magnetic, thermal and dielectric properties as functions of temperature, magnetic field, pressure and doping.'
print(text)
nlp = spacy.load('en_core_web_md')
doc = nlp(text)
print(colored('============= Original Text =============', 'blue'))
print(doc)
print(colored('\n============= Lemmatized Text =============', 'red'))
print(' '.join([tk.lemma_ for tk in doc]))
print(colored('\n============= Entities Found =============', 'green'))
print('\n'.join([ent.text for ent in doc.ents]))
Explanation: 3. Spacy Data Structures and Processing Pipelines
3.1. Introduction and basic usage
Processing texts with spaCy is really easy. You just need to load the model, and pass any text you wish to process. SpaCy will execute a series of transformations (a pipeline) and return a Doc object. The returned object has all information extracted from the original text, and provides a number of features to facilitate accessing the desired information.
<figure>
<center>
<img src='https://spacy.io/pipeline-fde48da9b43661abcdf62ab70a546d71.svg' width="800"></img>
<figcaption>Source: https://spacy.io/pipeline-fde48da9b43661abcdf62ab70a546d71.svg</figcaption></center>
</figure>
End of explanation
#<SOL>
#</SOL>
Explanation: Note how in the example we could easily access all the lemmas and entities found by iterating over the document (variable doc) itself or over its entities (doc.ents)
3.2. Architecture
Central data structures:
Language: is instantiated when loading the model, and contains the pipeline. Transforms text into spaCy documents.
Doc: Sequence of tokens with annotations. We can iterate over tokens, access individual tokens (doc[3]) or a span of tokens (doc[5:15]).
Vocab: Unique vocabulary associated to the language. Vocabulary is composed of Lexemes that are hashed and stored in the vocabulary with word vectors and attributes. This is memory efficient and assures a unique ground truth.
<figure>
<center>
<img src='https://spacy.io/architecture-415624fc7d149ec03f2736c4aa8b8f3c.svg' width="600"></img>
<figcaption>Source: https://spacy.io/architecture-415624fc7d149ec03f2736c4aa8b8f3c.svg</figcaption></center>
</figure>
The Tokenizer component of the pipeline is special, since this is where the Doc object is generated from the text. Subsequent pipeline components perform operations in place, obtaining new attributes that are stored as annotations in the tokens.
Pipeline components can be fine-tuned using annotated data
New components can be easily implemented and added to the Pipeline
Exercise 2:
- Find the Spans associated to the following text fragments contained in the original text:
* structural, transport, magnetic, thermal and dielectric properties
* temperature, magnetic field, pressure and doping
* This project also provides rigorous training to all students involved
- Use command dir to examine what are the different methods and attributes of the Span object
- Recover the vector representation associated to each of the previous strings
- Compute the Euclidean distances between the selected Spans
--Hint: To compute Euclidean distances at this point, it can be convenient to use numpy function np.linalg.norm. Later in the notebook you will find that spaCy provides functions to carry out these calculations.
End of explanation
#<SOL>
#</SOL>
#<SOL>
#</SOL>
#<SOL>
#</SOL>
Explanation: Exercise 3: You can access all vocab elements as nlp.vocab. Each element of the vocabulary is known as a Lexeme
- Use command dir to examine what are the different methods and attributes of Lexeme objects.
- For each element in the vocabulary, print the text representation, the hash representation, and whether the term should be considered as a stopword or not.
- Find all stopwords in the Vocabulary
- Which is the current size of your vocabulary? Create an additional doc object from a text with words that have not been previously used, and check the new size of the vocabulary after processing the new document.
--Hint: For displaying the vocabulary in a convenient format, you can store the requested information in a Pandas DataFrame, and print the DataFrame instead
End of explanation
shortext = 'Natural Language Processing is a key component of many relevant Artificial Intelligence Applications.' \
' Libraries such as spaCy v3 make it simple to benefit from statistical NLP models based on neural networks.' \
' It is estimated that NLP market in the U.S. will grow to around 30000 MUSD during the next five years.' \
' I don\'t know how accurate this is, but a solid growth is guaranteed'
shortdoc = nlp(shortext)
print(colored('============= The original text information is still kept in the Doc object =============', 'blue'))
print(shortdoc)
print(colored('\n============= Identified Tokens =============', 'red'))
for token in shortdoc:
print(token.text, end='\t\t')
#print('\t\t'.join([token.text for token in shortdoc]))
Explanation: 3.3. Usual Pipelines Components and Annotations
4. Linguistic Features
In this Section we will review a set of liguistic features provided by most spaCy pretrained pipelines. We will focus mainly on pipeline components that are relevant to build Bag of Words (BoW) representations of text.
4.1. Tokenizer
4.1.1. Word Tokenization
The Tokenizer is always the first component of the spaCy pretrained pipelines.
It has the important role of producing a Doc object out of a text string
It first splits the string using blank spaces
Then tokens are processed sequentially from left to right performing two operations
First, language-specific rules and exceptions are applied (e.g., in English "don't" is splitted into two separate tokens, but U.K. is kept as one token)
Second, prefixes or suffixes are identified. This is relevant to separate punctuation marks from the main tokens
It is important to note that tokenization rules, as well as exceptions are language specific. This means you need to make sure that the languages of the text and the selected Tokenizer match, otherwise you could get unexpected results.
Once the Doc object has been created, you can easily iterate over the identified tokens. Note also that the original text is preserved. You can access the string representation of Doc, Span, Token and even Lexeme objects by using the text attribute.
End of explanation
# Add special case rule
from spacy.symbols import ORTH
special_case = [{ORTH: "M"}, {ORTH: "USD"}]
nlp.tokenizer.add_special_case("MUSD", special_case)
shortdoc = nlp(shortext)
print(colored('============= The original text information is still kept in the Doc object =============', 'blue'))
print(shortdoc)
print(colored('\n============= Identified Tokens =============', 'red'))
for token in shortdoc:
print(token.text, end='\t\t')
#print('\t\t'.join([token.text for token in shortdoc]))
Explanation: Unlike other spaCy components, the Tokenizer is not a statistical model. A finite set of rules and exceptions are encoded. If you wish to modify its behavior, you cannot retrain the component using labeled data. Instead, you would need to extend the list of rules and exceptions.
The following example adds a special case that splits the token MUSD into the two tokens M and USD. Newly added exceptions are always applied after the default rules. Note also that exceptions must preserve the original text; otherwise, an error will be raised.
End of explanation
for sentence in shortdoc.sents:
print(sentence.text)
Explanation: 4.1.2. Sentence Tokenization
Note that with the Doc object you can also iterate over sentences
End of explanation
!python -m spacy download xx_sent_ud_sm
!pip install --upgrade spacy_langdetect
multilingualtext = 'Natural Language Processing is a key component of many relevant Artificial Intelligence Applications.' \
' El Procesamiento de Lenguaje Natural es un componente de gran importancia en multitud de aplicaciones de la Inteligencia Artificial.' \
' Libraries such as spaCy v3 make it simple to benefit from statistical NLP models based on neural networks.' \
' SpaCy v3 y otras librerías similares hacen posible emplear métodos de NLP basados en redes neuronales de manera sencilla.' \
' It is estimated that NLP market in the U.S. will grow to around 30000 MUSD during the next five years.' \
' Se estima que el mercado del NLP en USA será de alrededor de 30.000 millones de dolares en cinco años.'
#<SOL>
#</SOL>
print(colored('\n============= English sentences =============', 'green'))
print(english_text)
print(colored('\n============= Spanish sentences =============', 'green'))
print(spanish_text)
Explanation: However, be aware that sentences are not identified by the Tokenizer element we have just described, but sentence tokenization is carried out instead as a subproduct of the dependency extraction component, that we will shortly review.
This can be a problem for multilingual documents, since all components of the previously used pipeline assumes an input in English language. In this case, what we normally do is:
- Split the document in sentences using a multilingual sentence tokenizer
- Detect the language of each sentence
- Use the appropriate pipeline for each sentence depending on its language
Exercise 4: Split the following paragraph into two variables english_text and spanish_text using multilingual sentence tokenizers and language detection libraries.
Sentence tokenization: If you opt to use spaCy, you can use the multilingual pipeline xx_sent_ud_sm which provides just a basic (rule-based) sentence tokenizer. You may also use NLTK library (from nltk.tokenize import sent_tokenize)
Language detection: You can use python library langdetect, or the pipeline component spacy-langdetect for spaCy, which is just a wrapper for the previous library
End of explanation
from spacy.language import Language
from spacy_langdetect import LanguageDetector
# Add LanguageDetector and assign it a string name
@Language.factory("language_detector")
def create_language_detector(nlp, name):
return LanguageDetector(language_detection_function=None)
mult_nlp = spacy.load('xx_sent_ud_sm')
mult_nlp.add_pipe('language_detector', last=True)
mult_doc = mult_nlp(multilingualtext)
# document level language detection. Think of it like average language of the document!
print(colored('============= Document level language detection =============', 'blue'))
print(mult_doc._.language)
# sentence level language detection
print(colored('\n============= Sentence level language detection =============', 'red'))
for sent in mult_doc.sents:
print(sent, sent._.language)
# English and Spanish Texts
print(colored('\n============= English sentences =============', 'green'))
english_text = ' '.join([sent.text for sent in mult_doc.sents if sent._.language['language']=='en'])
print(english_text)
print(colored('\n============= Spanish sentences =============', 'green'))
spanish_text = ' '.join([sent.text for sent in mult_doc.sents if sent._.language['language']=='es'])
print(spanish_text)
Explanation: Example: The following code fragment adapts the example provided in the documenation for spacy-langdetect to construct a new pipeline that concatenates xx_sent_ud_sm and spacy-langdetect.
The new pipeline is then used to calculate variables english_text and spanish_text
End of explanation
text = 'Modern condensed matter physics research has produced novel materials with fundamental properties that underpin a remarkable number of cutting-edge technologies. It is now generally accepted that novel materials are necessary for critical advances in technologies and whoever discovers novel materials generally controls the science and technology of the future. Transition metal oxides have attracted enormous interest within both the basic and applied science communities. However, for many decades, the overwhelming balance of effort was focused on the 3d-elements (such as iron, copper, etc.) and their compounds; the heavier 4d- and 5d-elements (such as ruthenium, iridium, etc., which constitute two thirds of the d-elements listed in the Periodic Table) and their compounds have been largely ignored until recently. The principal investigator seeks to discover novel materials containing 4d- and/or 5d-elements and understand how they offer wide-ranging opportunities for the discovery of new physics and, ultimately, new device paradigms. This project also provides rigorous training to all students involved, focusing on synthesis and characterization techniques covering a broad spectrum of materials and experimental probes available in the principal investigator\'s laboratory. Technical Abstract: Physics driven by spin-orbit interactions is among the most important topics in contemporary condensed matter physics. Since the spin-orbit interaction is comparable to the on-site Coulomb and other relevant interactions, it creates a unique balance between competing interactions that drive complex behaviors and exotic states not observed in other materials. The project encompasses a systematic effort to elucidate physics of novel phenomena in spin-orbit-coupled and correlated materials and a rigorous search for new materials having exotic ground states. This project focuses on the following areas: (1) Novel phenomena at high pressures and high magnetic fields, (2) Unusual correlations between the insulating gap and magnetic transition in iridates and ruthenates, (3) Exotic metallic and superconducting states in iridates, (4) Mott insulators with "intermediate-strength" spin-orbit interaction and other competing energies, and (5) Single-crystal synthesis and search for novel materials. The principal investigator is one of a few key pioneers who have initiated seminal studies on iridates and, before that, ruthenates, and has comprehensive facilities and proven expertise for single-crystal synthesis and wide-ranging studies of structural, transport, magnetic, thermal and dielectric properties as functions of temperature, magnetic field, pressure and doping.'
nlp = spacy.load('en_core_web_md')
doc = nlp(text)
df = pd.DataFrame([[token.text, token.pos_, token.tag_] for token in doc],
columns = ['Token', 'POS', 'TAG'])
df
Explanation: 4.2. Part of Speech Tagging
The next component available in most spaCy pipelines is a POS tagger. The role of this component is to predict the part of speech of every detected token. This is where the statistical models come in. These components have been trained using machine learning methods over a large set of annotated data. If necessary, the models can be fine-tuned providing additional training data. This can be sometimes helpful when working in specialized domains.
The attributes calculated by all pipeline components after the Tokenizer are added as additional attributes to each of the tokens.
POS tags can be accessed as Token.pos_ (POS strings are hashed, and the underscore facilitates accessing the readable format of the variable).
Finer tags can be accessed as Token.tag_
The following code fragment allows us to represent the calculated POS for each token in the provided text.
End of explanation
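For building BoW representations it is often useful to filter tokens by their POS tag; a small sketch on the doc and df created above, using only attributes already shown in this notebook.
# Sketch: keep content words (nouns, verbs, adjectives, proper nouns) and look at the POS distribution
content_pos = {'NOUN', 'VERB', 'ADJ', 'PROPN'}
content_lemmas = [tk.lemma_ for tk in doc if tk.pos_ in content_pos]
print(content_lemmas[:20])
print(df.groupby('POS').size().sort_values(ascending=False))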
# Descriptions for POS values
#<SOL>
#</SOL>
# Descriptions for TAGS values
#<SOL>
#</SOL>
Explanation: Exercise 5: Use spaCy command spacy.explain() to obtain the descriptions of all POS and TAG values that you got for the previous text fragment. Avoid repetitions.
End of explanation
nlp.disable_pipe("parser")
#If you wish to completely remove the component from the pipeline, you can use the following command
#nlp.remove_pipe("parser")
Explanation: 4.3. Dependency Parser
The dependency parser performs syntactic analysis of the text. It is the component that identifies the relations among the tokens in the given text, e.g., noun chunks, verb objects, dependent clauses, etc.
Since our goal here is to use the pipeline to obtain BoW representations of the documents, we will not go deeper into the description of this component. If you are interested in learning more about dependency parsing in spaCy, you can check the official documentation
Since we will not be using this information, we can disable the component in the pipeline. This will also speed up document preprocessing.
End of explanation
doc = nlp(english_text)
df_ents = pd.DataFrame([[ent.text, ent.label_, spacy.explain(ent.label_)] for ent in doc.ents], columns=['Entity', 'Type', 'Description'])
df_ents
Explanation: 4.4. Named Entity Recognition
According to spaCy documentation:
spaCy features an extremely fast statistical entity recognition system, that assigns labels to contiguous spans of tokens. The default trained pipelines can indentify a variety of named and numeric entities, including companies, locations, organizations and products. You can add arbitrary classes to the entity recognition system, and update the model with new examples.
A named entity is a “real-world object” that’s assigned a name – for example, a person, a country, a product or a book title. spaCy can recognize various types of named entities in a document, by asking the model for a prediction.
Because models are statistical and strongly depend on the examples they were trained on, this doesn’t always work perfectly and might need some tuning later, depending on your use case
You can iterate over the entities found in the text using doc.ents, as illustrated in the following example that displays also the types of the entities found.
End of explanation
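A compact way to summarise the detected entities is to count them by type; a short sketch reusing doc.ents.
# Sketch: frequency of each entity type found in the document
from collections import Counter
ent_type_counts = Counter(ent.label_ for ent in doc.ents)
print(ent_type_counts.most_common())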
from spacy import displacy
wiki_text = 'Albert Einstein (14 March 1879 – 18 April 1955) was a German-born theoretical physicist, widely acknowledged to be one of the greatest physicists of all time.' \
' Einstein is known widely for developing the theory of relativity, but he also made important contributions to the development of the theory of quantum mechanics.' \
' He received the 1921 Nobel Prize in Physics "for his services to theoretical physics, and especially for his discovery of the law of the photoelectric effect".' \
' Einstein was born in the German Empire, but moved to Switzerland in 1895, forsaking his German citizenship the following year.' \
' Einstein was awarded a PhD by the University of Zürich.' \
' On the eve of World War II, he endorsed a letter to President Franklin D. Roosevelt.'
wiki_doc = nlp(wiki_text)
displacy.render(wiki_doc, style="ent", jupyter=True, options={'distance': 90})
entypes = set([ent.label_ for ent in wiki_doc.ents])
df_ent = pd.DataFrame([[enttyp, spacy.explain(enttyp)] for enttyp in entypes], columns=['Entity type', 'Description'])
df_ent
Explanation: Discussion: You can check that the NER algorithm is not always very accurate, both with respect to detection of entities and entity type classification. Note, however, that for some other texts performance can be much better. The following example contains an excerpt from the Albert Einstein wikipedia web page.
How does NER accuracy in this example compares to the previous case?
What do you believe is the reason for this?
End of explanation
doc = nlp(text)
print(colored('============= Original text =============', 'blue'))
print(doc.text)
print(colored('\n============= Lemmas =============', 'red'))
print(' '.join([token.lemma_ for token in doc]))
Explanation: 4.5. Lemmatization
The English lemmatizer in spaCy consists of the following elements:
Lookup tables
Rule-based lemmatizer, that exploits POS information
List-based exceptions acquired from WordNet
The annotation attribute can be easily accessed as Token.lemma_
End of explanation
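To see what the lemmatizer actually changes, one can list only the tokens whose lemma differs from the surface form; a small sketch on the doc used above.
# Sketch: tokens whose lemma differs from the original text
changed = [(tk.text, tk.lemma_) for tk in doc if tk.text.lower() != tk.lemma_.lower()]
print(changed[:15])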
mult_nlp = spacy.load('xx_sent_ud_sm')
mult_nlp.add_pipe('language_detector', last=True)
nlp = spacy.load('en_core_web_md')
nlp.disable_pipe('parser')
nlp.disable_pipe('ner')
valid_POS = set(['VERB', 'NOUN', 'ADJ', 'PROPN'])
specific_stw = set(['relevant', 'simple', 'base'])
def text_preprocessing(rawtext):
#<SOL>
#</SOL>
print(colored('============= Original text =============', 'blue'))
print(multilingualtext)
print(colored('\n============= Lemmatized text =============', 'red'))
print(text_preprocessing(multilingualtext))
Explanation: 4.6. Other Annotations
Exercise 6: Have a look at the available attributes and functions of spaCy tokens using Python's dir command.
Find out the significance of the following attributes: is_stop, is_alpha, is_digit, like_url, like_email, like_num, vector, and test your findings using text examples of your own.
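A small sketch of how such an exploration could start (an illustrative addition, not part of the original exercise; it reuses the nlp pipeline loaded above, and the example sentence is made up):
for token in nlp("Contact me at test@example.com or visit https://spacy.io, it costs 0 dollars."):
    print(token.text, token.is_stop, token.is_alpha, token.is_digit,
          token.like_url, token.like_email, token.like_num, token.vector.shape)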
5. Final Implementation of a pre-processing pipeline
Exercise 7: Implement a function that takes a string object and outputs the lemmatized text, ready for calculating BoW representation. The function should carry out the following steps:
Sentence tokenization and filtering of non-English sentences
Tokenization
POS
Lemmatization
Keep only alphanumeric tokens
Keep nouns, verbs, and adjectives
Generic Stopword removal
Specific Stopword removal
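Following the list of steps above, one possible sketch of such a pipeline is shown below. It is an illustration only, not necessarily the intended solution: the function name lemmatize_for_bow is ours, it reuses mult_nlp, nlp, valid_POS and specific_stw defined earlier, and it assumes that the 'language_detector' component stores a {'language': ..., 'score': ...} dict under ._.language on a processed Doc (as spacy_langdetect-style components do).
def lemmatize_for_bow(rawtext):
    # 1) sentence split with the multilingual model, keeping English sentences only
    english_sents = []
    for sent in mult_nlp(rawtext).sents:
        # assumption: the detector exposes the detected language under Doc._.language
        if mult_nlp(sent.text)._.language.get('language') == 'en':
            english_sents.append(sent.text)
    # 2) tokenize, POS-tag and lemmatize the remaining sentences
    lemmas = []
    for doc in nlp.pipe(english_sents):
        for tok in doc:
            lemma = tok.lemma_.lower()
            # 3) keep alphabetic nouns/verbs/adjectives/proper nouns, drop generic and specific stopwords
            if tok.is_alpha and tok.pos_ in valid_POS and not tok.is_stop and lemma not in specific_stw:
                lemmas.append(lemma)
    return ' '.join(lemmas)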
End of explanation |
4,532 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Pie Chart
Step1: Update Data
Step2: Display Values
Step3: Enable sort
Step4: Set different styles for selected slices
Step5: For more on piechart interactions, see the Mark Interactions notebook
Modify label styling
Step6: Update pie shape and style
Step7: Change pie dimensions
Step8: Move the pie around
x and y attributes control the position of the pie in the figure.
If no scales are passed for x and y, they are taken in absolute
figure coordinates, between 0 and 1.
Step9: Change slice styles
Pie slice colors cycle through the colors and opacities attribute, as the Lines Mark.
Step10: Represent an additional dimension using Color
The Pie allows for its colors to be determined by data, that is passed to the color attribute.
A ColorScale with the desired color scheme must also be passed.
Step11: Position the Pie using custom scales
Pies can be positioned, via the x and y attributes,
using either absolute figure scales or custom 'x' or 'y' scales | Python Code:
import string
import numpy as np
from bqplot import Pie, Figure

data = np.random.rand(3)
pie = Pie(sizes=data, display_labels="outside", labels=list(string.ascii_uppercase))
fig = Figure(marks=[pie], animation_duration=1000)
fig
Explanation: Basic Pie Chart
End of explanation
n = np.random.randint(1, 10)
pie.sizes = np.random.rand(n)
Explanation: Update Data
End of explanation
with pie.hold_sync():
pie.display_values = True
pie.values_format = ".1f"
Explanation: Display Values
End of explanation
pie.sort = True
Explanation: Enable sort
End of explanation
pie.selected_style = {"opacity": 1, "stroke": "white", "stroke-width": 2}
pie.unselected_style = {"opacity": 0.2}
pie.selected = [1]
pie.selected = None
Explanation: Set different styles for selected slices
End of explanation
pie.label_color = "Red"
pie.font_size = "20px"
pie.font_weight = "bold"
Explanation: For more on piechart interactions, see the Mark Interactions notebook
Modify label styling
End of explanation
pie1 = Pie(sizes=np.random.rand(6), inner_radius=0.05)
fig1 = Figure(marks=[pie1], animation_duration=1000)
fig1
Explanation: Update pie shape and style
End of explanation
# As of now, the radius sizes are absolute, in pixels
with pie1.hold_sync():
pie1.radius = 150
pie1.inner_radius = 100
# Angles are in radians, 0 being the top vertical
with pie1.hold_sync():
pie1.start_angle = -90
pie1.end_angle = 90
Explanation: Change pie dimensions
End of explanation
pie1.y = 0.1
pie1.x = 0.6
pie1.radius = 180
Explanation: Move the pie around
x and y attributes control the position of the pie in the figure.
If no scales are passed for x and y, they are taken in absolute
figure coordinates, between 0 and 1.
End of explanation
pie1.stroke = "brown"
pie1.colors = ["orange", "darkviolet"]
pie1.opacities = [0.1, 1]
fig1
Explanation: Change slice styles
Pie slice colors cycle through the colors and opacities attributes, as with the Lines Mark.
End of explanation
from bqplot import ColorScale, ColorAxis
Nslices = 7
size_data = np.random.rand(Nslices)
color_data = np.random.randn(Nslices)
sc = ColorScale(scheme="Reds")
# The ColorAxis gives a visual representation of its ColorScale
ax = ColorAxis(scale=sc)
pie2 = Pie(sizes=size_data, scales={"color": sc}, color=color_data)
Figure(marks=[pie2], axes=[ax])
Explanation: Represent an additional dimension using Color
The Pie allows for its colors to be determined by data that is passed to the color attribute.
A ColorScale with the desired color scheme must also be passed.
End of explanation
from datetime import datetime
from bqplot.traits import convert_to_date
from bqplot import DateScale, LinearScale, Axis
avg_precipitation_days = [
(d / 30.0, 1 - d / 30.0) for d in [2, 3, 4, 6, 12, 17, 23, 22, 15, 4, 1, 1]
]
temperatures = [9, 12, 16, 20, 22, 23, 22, 22, 22, 20, 15, 11]
dates = [datetime(2010, k, 1) for k in range(1, 13)]
sc_x = DateScale()
sc_y = LinearScale()
ax_x = Axis(scale=sc_x, label="Month", tick_format="%b")
ax_y = Axis(scale=sc_y, orientation="vertical", label="Average Temperature")
pies = [
Pie(
sizes=precipit,
x=date,
y=temp,
display_labels="none",
scales={"x": sc_x, "y": sc_y},
radius=30.0,
stroke="navy",
apply_clip=False,
colors=["navy", "navy"],
opacities=[1, 0.1],
)
for precipit, date, temp in zip(avg_precipitation_days, dates, temperatures)
]
Figure(
title="Kathmandu Precipitation",
marks=pies,
axes=[ax_x, ax_y],
padding_x=0.05,
padding_y=0.1,
)
Explanation: Position the Pie using custom scales
Pies can be positioned, via the x and y attributes,
using either absolute figure scales or custom 'x' or 'y' scales
End of explanation |
4,533 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tensorflow/tf_low_level_training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Step1: Transforming an input to a known output
Step2: relation between input and output is linear
Step3: Defining the model to train
An untrained single unit (neuron) also outputs a line from the same input, although a different one
The Artificial Neuron
Step4: Output of a single untrained neuron
Step5: Loss - Mean Squared Error
Loss function is the prerequisite to training. We need an objective to optimize for. We calculate the difference between what we get as output and what we would like to get.
Mean Squared Error
$MSE = \frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2}$
https://en.wikipedia.org/wiki/Mean_squared_error
Step6: Minimize Loss by changing parameters of neuron
Move in parameter space in the direction of a descent
<img src='https://djcordhose.github.io/ai/img/gradients.jpg'>
Step7: Learning Curve after training
Step8: Line drawn by neuron after training
The result after training is not perfect, but it almost looks like the same line
https | Python Code:
# import and check version
import tensorflow as tf
# tf can be really verbose
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
# a small sanity check, does tf seem to work ok?
sess = tf.Session()
hello = tf.constant('Hello TF!')
print(sess.run(hello))
sess.close()
Explanation: <a href="https://colab.research.google.com/github/DJCordhose/ai/blob/master/notebooks/tensorflow/tf_low_level_training.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Low Level TensorFlow, Part III: Layers and Training
https://www.tensorflow.org/guide/low_level_intro#training
https://developers.google.com/machine-learning/glossary/#gradient_descent
https://developers.google.com/machine-learning/glossary/#optimizer
End of explanation
input = [[-1], [0], [1], [2], [3], [4]]
output = [[2], [1], [0], [-1], [-2], [-3]]
import matplotlib.pyplot as plt
plt.xlabel('input')
plt.ylabel('output')
plt.plot(input, output, 'kX')
Explanation: Transforming an input to a known output
End of explanation
plt.plot(input, output)
plt.plot(input, output, 'ro')
x = tf.constant(input, dtype=tf.float32)
y_true = tf.constant(output, dtype=tf.float32)
y_true
Explanation: relation between input and output is linear
End of explanation
# short version, though harder to inspect
# y_pred = tf.layers.dense(inputs=x, units=1)
# matrix multiplication under the hood
# tf.matmul(x, w) + b
linear_model = tf.layers.Dense(units=1)
y_pred = linear_model(x)
y_pred
# single neuron and single input: one weight and one bias
# weights and biases are represented as variables
# https://www.tensorflow.org/guide/variables
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
weights = sess.run(linear_model.trainable_weights)
print(weights)
Explanation: Defining the model to train
An untrained single unit (neuron) also outputs a line from the same input, although a different one
The Artificial Neuron: Foundation of Deep Neural Networks (simplified, more later)
a neuron takes a number of numerical inputs
multiplies each with a weight, sums up all weighted input and
adds bias (constant) to that sum
from this it creates a single numerical output
for one input (one dimension) this would be a description of a line
for more dimensions this describes a hyperplane that can serve as a decision boundary
this is typically expressed as a matrix multiplication plus an addition
<img src='https://djcordhose.github.io/ai/img/insurance/neuron211.jpg'>
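To make the matrix-multiplication-plus-bias idea concrete, here is a tiny NumPy sketch (the weight and bias values are made up for illustration, they are not the trained values):
import numpy as np

w = np.array([[-1.0]])   # made-up weight (slope)
b = np.array([1.0])      # made-up bias (intercept)
x = np.array([[-1.0], [0.0], [1.0], [2.0]])  # column of inputs

y = np.matmul(x, w) + b  # the matmul-plus-bias described above
print(y.ravel())         # [ 2.  1.  0. -1.]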
From single neuron to network in the TensorFlow Playground
<img src='https://djcordhose.github.io/ai/img/tf-plaground.png'>
https://playground.tensorflow.org/#activation=linear&batchSize=10&dataset=circle®Dataset=reg-plane&learningRate=0.01®ularizationRate=0&noise=0&networkShape=1&seed=0.98437&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false
End of explanation
# when you execute this cell, you should see a different line, as the initialization is random
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
output_pred = sess.run(y_pred)
print(output_pred)
weights = sess.run(linear_model.trainable_weights)
print(weights)
plt.plot(input, output_pred)
plt.plot(input, output, 'ro')
Explanation: Output of a single untrained neuron
End of explanation
loss = tf.losses.mean_squared_error(labels=y_true, predictions=y_pred)
loss
# when this loss is zero (which it is not right now) we get the desired output
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(loss))
Explanation: Loss - Mean Squared Error
Loss function is the prerequisite to training. We need an objective to optimize for. We calculate the difference between what we get as output and what we would like to get.
Mean Squared Error
$MSE = \frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\hat{Y}_{i})^{2}$
https://en.wikipedia.org/wiki/Mean_squared_error
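A quick hand-computed illustration of the formula (the numbers are made up, they are not the notebook's data):
import numpy as np

y_true = np.array([2.0, 1.0, 0.0, -1.0])
y_hat = np.array([1.5, 1.0, 0.5, -2.0])

mse = np.mean((y_true - y_hat) ** 2)
print(mse)  # (0.5**2 + 0 + 0.5**2 + 1**2) / 4 = 0.375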
End of explanation
# move the parameters of our single neuron in the right direction with a pretty high intensity (learning rate)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(loss)
train
losses = []
sess = tf.Session()
sess.run(tf.global_variables_initializer())
# iterations aka epochs, optimizing the parameters of the neuron
for i in range(500):
# executing optimizer and current loss, but only displaying current loss
_, loss_value = sess.run((train, loss))
losses.append(loss_value)
print(sess.run(loss))
Explanation: Minimize Loss by changing parameters of neuron
Move in parameter space in the direction of a descent
<img src='https://djcordhose.github.io/ai/img/gradients.jpg'>
https://twitter.com/colindcarroll/status/1090266016259534848
Job of the optimizer
<img src='https://djcordhose.github.io/ai/img/manning/optimizer.png' height=500>
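For intuition, a framework-free sketch of the update the optimizer performs on this 1-D linear model (the initial parameters and learning rate are illustrative assumptions; tf.train.GradientDescentOptimizer applies the equivalent update to the graph variables):
import numpy as np

x = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 1.0, 0.0, -1.0, -2.0, -3.0])

w, b, lr = 0.0, 0.0, 0.01          # made-up starting point and learning rate
for _ in range(500):
    err = w * x + b - y
    grad_w = 2 * np.mean(err * x)  # d(MSE)/dw
    grad_b = 2 * np.mean(err)      # d(MSE)/db
    w -= lr * grad_w               # step against the gradient
    b -= lr * grad_b
print(w, b)                        # approaches w = -1, b = 1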
End of explanation
# wet dream of every machine learning person (typically you see a noisy curve only sort of going down)
plt.yscale('log')
plt.ylabel("loss")
plt.xlabel("epochs")
plt.plot(losses)
Explanation: Learning Curve after training
End of explanation
output_pred = sess.run(y_pred)
print(output_pred)
plt.plot(input, output_pred)
plt.plot(input, output, 'ro')
# single neuron and single input: one weight and one bias
# slope m ~ -1
# y-axis offset y0 ~ 1
# https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form
weights = sess.run(linear_model.trainable_weights)
print(weights)
Explanation: Line drawn by neuron after training
The result after training is not perfect, but it almost looks like the same line
https://en.wikipedia.org/wiki/Linear_equation#Slope%E2%80%93intercept_form
End of explanation |
4,534 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practice 2 - Forward kinematics and dynamics of manipulators
Once the dynamics of the manipulator have been obtained, we need to build a function f in order to simulate the behaviour of the manipulator; let us start by writing the equation
Step1: Mandamos llamar al simulador | Python Code:
def f(t, x):
# Import the required mathematical functions
from numpy import matrix, sin, cos
# Unpack the variables that make up the state
q1, q2, q̇1, q̇2 = x
# Define the system constants
g = 9.81
m1, m2, J1, J2 = 0.3, 0.2, 0.0005, 0.0002
l1, l2 = 0.4, 0.3
τ1, τ2 = 0, 0
# Group terms into vectors
q̇ = matrix([[q̇1], [q̇2]])
τ = matrix([[τ1], [τ2]])
# Compute common terms
μ1 = m2*l2**2
μ2 = m2*l1*l2
c1 = cos(q1)
c2 = cos(q2)
s2 = sin(q2)
c12 = cos(q1 + q2)
# Compute the matrices of the equation of motion
# WRITE YOUR CODE HERE
raise NotImplementedError
# Compute the variables to be returned by the system
# WRITE YOUR CODE HERE
raise NotImplementedError
q1pp = qpp.item(0)
q2pp = qpp.item(1)
# Return the derivative of the input variables
return [q1p, q2p, q1pp, q2pp]
from numpy.testing import assert_almost_equal
assert_almost_equal(f(0, [0, 0, 0, 0]), [0,0,-1392.38, 3196.16], 2)
assert_almost_equal(f(0, [1, 1, 0, 0]), [0,0,-53.07, 104.34], 2)
print("Sin errores")
Explanation: Practice 2 - Forward kinematics and dynamics of manipulators
Once the dynamics of the manipulator have been obtained, we need to build a function f in order to simulate the behaviour of the manipulator; let us start by writing the equation:
$$
\tau =
\begin{bmatrix}
J_1 + J_2 + m_1 l_1^2 + m_2 l_1^2 + \mu_1 + 2 \mu_2 c_2 & J_2 + \mu_1 + \mu_2 c_2 \\
J_2 + \mu_1 + \mu_2 c_2 & J_2 + \mu_1
\end{bmatrix}\ddot{q} - \mu_2 s_2
\begin{bmatrix}
2 \dot{q}_2 & \dot{q}_2 \\ -\dot{q}_1 & 0
\end{bmatrix} + g
\begin{bmatrix}
m_1 l_1 c_1 + m_2 l_1 c_1 + m_2 l_2 c_{12} \\ m_2 l_2 c_{12}
\end{bmatrix}
$$
where $\mu_1 = m_2 l_2^2$ and $\mu_2 = m_2 l_1 l_2$; so from here on we can characterise the dynamics of this manipulator by the following equation:
$$
\tau = M(q)\ddot{q} + C(q, \dot{q}) + G(q)
$$
If we now turn our attention to the problem of constructing the function
$$
\dot{x} = f(x, t)
$$
we have to start with what the state $x$ represents.
In the previous exercise our manipulator had a single degree of freedom, so the state ended up being:
$$
x =
\begin{pmatrix}
q_1 \\ \dot{q}_1
\end{pmatrix}
$$
In this case, our manipulator has two degrees of freedom, so we need the state to include the position of both degrees of freedom as well as their velocities:
$$
x =
\begin{pmatrix}
q_1 \\ q_2 \\ \dot{q}_1 \\ \dot{q}_2
\end{pmatrix}
$$
Therefore, to construct $f(x,t)$ we need to compute the following terms:
$$
\dot{x} =
\begin{pmatrix}
\dot{q}_1 \\ \dot{q}_2 \\ \ddot{q}_1 \\ \ddot{q}_2
\end{pmatrix}
$$
where the first two terms are trivial, since they are the same ones we get from the state of the system ($\dot{q}_1$, $\dot{q}_2$), and the last two terms can be obtained from the manipulator's equation of motion:
$$
\ddot{q} = M^{-1}\left( \tau - C(q, \dot{q})\dot{q} - G(q) \right)
$$
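For reference, a minimal sketch of the final step only (this is not the graded solution: the matrices below are made-up placeholders and must be replaced by the M, C and G built from the formulas above):
import numpy as np

M = np.matrix([[0.20, 0.05], [0.05, 0.02]])  # placeholder inertia matrix
C = np.matrix([[0.0, 0.0], [0.0, 0.0]])      # placeholder Coriolis matrix
G = np.matrix([[2.5], [0.6]])                # placeholder gravity vector
τ = np.matrix([[0.0], [0.0]])
q̇ = np.matrix([[0.0], [0.0]])

qpp = np.linalg.inv(M) * (τ - C*q̇ - G)       # q̈ = M⁻¹(τ - C q̇ - G)
q1pp, q2pp = qpp.item(0), qpp.item(1)
print(q1pp, q2pp)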
End of explanation
from robots.simuladores import simulador
%matplotlib widget
ts, xs = simulador(puerto_zmq="5551", f=f, x0=[0, 0, 0, 0], dt=0.02)
Explanation: We call the simulator
End of explanation |
4,535 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SAP Router
The following subsections show a graphical representation of the main protocol packets and how to generate them.
First we need to perform some setup to import the packet classes
Step1: SAP Router Admin packets
Step2: SAP Router Error Information / Control packets
Step3: SAP Router Route packet
Step4: SAP Router Pong packet | Python Code:
from pysap.SAPRouter import *
from IPython.display import display
Explanation: SAP Router
The following subsections show a graphical representation of the main protocol packets and how to generate them.
First we need to perform some setup to import the packet classes:
End of explanation
for command in router_adm_commands:
p = SAPRouter(type=SAPRouter.SAPROUTER_ADMIN, adm_command=command)
print(router_adm_commands[command])
display(p.canvas_dump())
Explanation: SAP Router Admin packets
End of explanation
for opcode in router_control_opcodes:
p = SAPRouter(type=SAPRouter.SAPROUTER_CONTROL, opcode=opcode)
if opcode in [70, 71]:
p.snc_frame = ""
print(router_control_opcodes[opcode])
display(p.canvas_dump())
Explanation: SAP Router Error Information / Control packets
End of explanation
router_string = [SAPRouterRouteHop(hostname="8.8.8.8", port=3299),
SAPRouterRouteHop(hostname="10.0.0.1", port=3200, password="S3cr3t")]
router_string_lens = list(map(len, map(str, router_string)))  # list() so it can be indexed and summed below (Python 3)
p = SAPRouter(type=SAPRouter.SAPROUTER_ROUTE,
route_entries=len(router_string),
route_talk_mode=1,
route_rest_nodes=1,
route_length=sum(router_string_lens),
route_offset=router_string_lens[0],
route_string=router_string)
display(p.canvas_dump())
for x in router_string:
display(x.canvas_dump())
Explanation: SAP Router Route packet
End of explanation
p = SAPRouter(type=SAPRouter.SAPROUTER_PONG)
p.canvas_dump()
Explanation: SAP Router Pong packet
End of explanation |
4,536 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CRA prediction with Regression
Reference link.
Importing packages
These packages will be used to analyze the data and train the models.
Step1: Reading data
This command shows a sample of the data read. We can observe that the predicted column is "CRA", and the others are features to train our model. The "Matricula" column is an irrelevant feature that indicates the student identification, so we do not consider it in the training phase.
Step2: Distribuition analysis
Plot graphics
To see the distribution of predict feature (CRA), we plot the graph of distribution. We can plot to all features, but to resume data, we'll visualize only predict feature.
Step3: Check skew in data
Skewness analysis
Let's check the skewness of the data: if a feature has skewness greater than 0.75 or less than -0.75, it indicates that the feature distribution is skewed. In this case, only the "LP2" feature has a strong skewness (-1.9).
Step4: It was observed that LP2 discipline has skew, so to treat this bias, it was used exponent function.
In the graphic below, we can observe that multiplies points in the board are very distant from red line. Data without skewness will appear close to red line.
Step5: Treating "LP2" skew
Step6: After applying exponential function in this feature, it was removed the skewness of -1.9.
After applying exponential function in this feature, the data closer to the red line.
Step7: Is necessary fill missing values?
Step8: No. How can be observed, all features have 88 values, or none line is the missing value because all matrix has 88 lines.
#### Preprocessing data
Step9: RIDGE Regression
Step10: Training Ridge Regression with different lambda
Step11: Ploting RMSE in Ridge Regression
Plotting the graphic
Step12: In this graphic, we can observe the Root Mean Square Error for different lambdas. The smallest error is desirable, in this case, when lambda = 50. And, without regularization (lambda=0), the results were worst in interval [0
Step13: In this graphic, we can observe the RMSE to each cross using lambda=50 (smallest RMSE) and without regularization (bigger RMSE in the interval [0-80]).
The variation in each cross (until 0.3) indicates that some cross is very different each other.
Training a model with all data
Step14: LASSO Regression
Training data with Lasso regression with and without regularization.
Step15: The smallest lambda was 0.075 with RMSE = 0.54. And the RMSE is bigger when regularization is not used (lambda=0).
Step16: In this graphic, we can observe the RMSE to each cross using lambda=0.075 (smallest RMSE) and without regularization (bigger RMSE in the interval [0-0.15]).
The variation in each cross (until 0.3) indicates that some cross is very different each other.
Besides that, using Lasso with lambda = 0.075 get RMSE = 0.54, while Ridge using lambda = 50 provides RMSE = 0.55. A small difference between the models.
Training a model with all data and comparing the coefficients
Step17: We can observe that with regularization, we obtain the smallest norma of coefficients.
KNN
Step18: In this graphic, it was observed that the smallest RMSE (0.736) is when using neighbor = 20.
Neighbor comparison
Step19: In this graphic, it was observed that the RMSE in each cross for 1, 2 and 20 neighbors.
Training K-NN with the best number of neighbors
Step20: Residual versus Prediction
Ridge
Step21: In this graphic, shows that features
Step22: This graphic shows that the trained model has no bias with data.
Lasso
Step23: The Lasso regression plot the same coefficients distribution that Ridge regression.
Step24: The Lasso regression has a little difference in comparison with Ridge regression.
K-NN
Step25: This graphic shows some patterns in data (some columns in prediction).
Test with real values
Step26: Print RMSE for all classifiers | Python Code:
#enconding=utf8
import copy
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import skew
from scipy.stats.stats import pearsonr
%config InlineBackend.figure_format = 'retina' #set 'png' here when working on notebook
%matplotlib inline
Explanation: CRA prediction with Regression
Reference link.
Importing packages
These packages will be used to analyze the data and train the models.
End of explanation
data = pd.read_csv("treino.csv")
data.columns = ['matricula', 'vetorial','lpt','p1','ic','lp1','calculo2','discreta','p2','grafos','fis_classica','lp2','cra','calculo1']
data.head()
Explanation: Reading data
This command shows a sample of the data read. We can observe that the predicted column is "CRA", and the others are features to train our model. The "Matricula" column is an irrelevant feature that indicates the student identification, so we do not consider it in the training phase.
End of explanation
'''
pd.DataFrame(data.vetorial).hist()
pd.DataFrame(data.lpt).hist()
pd.DataFrame(data.p1).hist()
pd.DataFrame(data.ic).hist()
pd.DataFrame(data.lp1).hist()
pd.DataFrame(data.calculo1).hist()
pd.DataFrame(data.calculo2).hist()
pd.DataFrame(data.discreta).hist()
pd.DataFrame(data.p2).hist()
pd.DataFrame(data.grafos).hist()
pd.DataFrame(data.fis_classica).hist()
pd.DataFrame(data.lp2).hist()
'''
pd.DataFrame(data.cra).hist()
Explanation: Distribution analysis
Plot graphics
To see the distribution of the predicted feature (CRA), we plot its distribution. We could plot all features, but to keep things short we visualize only the predicted feature.
End of explanation
def check_skewness(data, thresh=0.75):
numeric_feats = data.dtypes[data.dtypes != "object"].index
skewed_feats = data[numeric_feats].apply(lambda x: skew(x.dropna())) #compute skewness
features_index = skewed_feats[(skewed_feats < -thresh) | (skewed_feats > thresh)]
return features_index
features_index = check_skewness(data)
print(features_index)
Explanation: Check skew in data
Skewness analysis
Let's check the skewness of the data: if a feature has skewness greater than 0.75 or less than -0.75, it indicates that the feature distribution is skewed. In this case, only the "LP2" feature has a strong skewness (-1.9).
End of explanation
from scipy import stats
import matplotlib.pyplot as plt
index = features_index.index[0]
lp2 = data[index]
res = stats.probplot(lp2, plot=plt)
Explanation: It was observed that the LP2 discipline is skewed, so the exponential function was used to treat this bias.
In the plot below, we can observe that many points are very distant from the red line. Data without skewness would appear close to the red line.
End of explanation
#exp transform skewed numeric features:
data[features_index.index] = np.exp(data[features_index.index]) # ".index" get the column name
features_index = check_skewness(data)
print(features_index)
Explanation: Treating "LP2" skew:
End of explanation
lp2 = data[index]
res = stats.probplot(lp2, plot=plt)
Explanation: After applying the exponential function to this feature, the skewness of -1.9 was removed.
After applying the exponential function, the data lies closer to the red line.
End of explanation
data.isnull().apply(pd.value_counts)
Explanation: Is it necessary to fill missing values?
End of explanation
H = data.drop(['matricula', 'cra'], 1)
y = data['cra']
Explanation: No. As can be observed, all features have 88 values, i.e. no line has a missing value, because the matrix has 88 lines.
#### Preprocessing data
End of explanation
from sklearn.linear_model import Ridge, RidgeCV, LassoCV, LassoLarsCV
from sklearn.model_selection import cross_val_score
def rmse_cv(model, X_train, y, cv_num):
rmse = np.sqrt(-cross_val_score(model, X_train, y, scoring="neg_mean_squared_error", cv = cv_num))
return(rmse)
Explanation: RIDGE Regression
End of explanation
alphas = [0.0, 0.1, 0.3, 1, 3, 5, 10, 15, 30, 40, 50, 60, 70, 80]
cv_num = 5
cv_ridge = [rmse_cv(Ridge(alpha = alpha), H, y, cv_num)
for alpha in alphas]
Explanation: Training Ridge Regression with different lambda
End of explanation
def plot_graphic(X, index, title, xlabel, ylabel, label):
cv_ridge = pd.Series(X, index = index)
cv_ridge.plot(title = title, label = label)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
cv_ridge_mean = np.mean(cv_ridge, axis=1)
title = "Validation"
xlabel = "Lambda"
ylabel = "RMSE"
label = "Ridge Regression"
plot_graphic(cv_ridge_mean, alphas, title, xlabel, ylabel, label)
small_pos_r = np.argmin(cv_ridge_mean) # small position
small_lambda_r = alphas[small_pos_r]
small_rmse_r = cv_ridge_mean[small_pos_r]
print("RMSE with lambda = {0}: {1}".format(small_lambda_r, small_rmse_r))
print("RMSE without regularization: {0}".format(cv_ridge_mean[0]))
Explanation: Plotting RMSE in Ridge Regression
Plotting the graphic
End of explanation
seq = np.arange(1, cv_num+1) # 1, ..., cv_num
small_rmse_ridge = cv_ridge[small_pos_r]
plot_graphic(small_rmse_ridge, seq, "RMSE x Fold", "Fold", "RMSE", "Lambda: {0}".format(small_lambda_r))
plot_graphic(cv_ridge[0], seq, "RMSE x Fold", "Fold", "RMSE", "Without Regularization") #Without Regularization
Explanation: In this graphic, we can observe the Root Mean Square Error for different lambdas. The smallest error is desirable, in this case when lambda = 50. Without regularization (lambda=0), the results were the worst in the interval [0:80].
Besides that, the RMSE without regularization is bigger than Ridge using lambda = 50.
Plotting the per-fold error for the lambda with the smallest RMSE
End of explanation
ols = Ridge(alpha=0.0)
ridge_adjusted = Ridge(alpha=small_lambda_r)
#Training data
ols.fit(H, y)
ridge_adjusted.fit(H, y)
Explanation: In this graphic, we can observe the RMSE for each fold using lambda=50 (smallest RMSE) and without regularization (larger RMSE in the interval [0-80]).
The variation between folds (up to 0.3) indicates that some folds are quite different from each other.
Training a model with all data
End of explanation
from sklearn import linear_model
alphas = [0.0, 0.001, 0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.15]
cv_lasso = [rmse_cv(linear_model.Lasso(alpha = alpha), H, y, cv_num)
for alpha in alphas]
cv_lasso_mean = np.mean(cv_lasso, axis=1)
label = "Lasso Regression"
plot_graphic(cv_lasso_mean, alphas, title, xlabel, ylabel, label)
Explanation: LASSO Regression
Training data with Lasso regression with and without regularization.
End of explanation
small_pos = np.argmin(cv_lasso_mean) # small position
small_lambda = alphas[small_pos]
small_rmse = cv_lasso_mean[small_pos]
print("RMSE with lambda = {0}: {1}".format(small_lambda, small_rmse))
print("RMSE without regularization: {0}".format(cv_lasso_mean[0]))
small_rmse_lasso = cv_lasso[small_pos]
plot_graphic(small_rmse_lasso, seq, "RMSE x Fold", "Fold", "RMSE", "Lambda: {0}".format(small_lambda))
plot_graphic(cv_lasso[0], seq, "RMSE x Fold", "Fold", "RMSE", "Without Regularization") #Without Regularization
Explanation: The smallest lambda was 0.075 with RMSE = 0.54. And the RMSE is bigger when regularization is not used (lambda=0).
End of explanation
lasso_adjusted = linear_model.Lasso(alpha=small_lambda)
lasso_adjusted.fit(H, y)
from numpy import linalg as LA
print("Norma OLS: {0}".format(LA.norm(ols.coef_)))
print("Norma Ridge (lambda={0}): {1}".format(small_lambda_r, LA.norm(ridge_adjusted.coef_)))
print("Norma Lasso (lambda={0}): {1}".format(small_lambda, LA.norm(lasso_adjusted.coef_)))
Explanation: In this graphic, we can observe the RMSE for each fold using lambda=0.075 (smallest RMSE) and without regularization (larger RMSE in the interval [0-0.15]).
The variation between folds (up to 0.3) indicates that some folds are quite different from each other.
Besides that, Lasso with lambda = 0.075 gets RMSE = 0.54, while Ridge with lambda = 50 gives RMSE = 0.55, a small difference between the models.
Training a model with all data and comparing the coefficients
End of explanation
from sklearn.neighbors import KNeighborsRegressor
neighboor=[1, 2, 3, 4, 5, 10, 15, 20, 25, 30, 35]
cv_num = 5
cv_knn = [rmse_cv(KNeighborsRegressor(n_neighbors = n), H, y, cv_num) for n in neighboor]
cv_knn_mean = np.mean(cv_knn, axis=1)
xlabel = "Neighboors"
label = "K-NN"
plot_graphic(cv_knn_mean, neighboor, title, xlabel, ylabel, label)
Explanation: We can observe that with regularization, we obtain the smallest norm of the coefficients.
KNN
End of explanation
small_pos_knn = np.argmin(cv_knn_mean)
small_neighbor_knn = neighboor[small_pos_knn]
small_rmse_knn = cv_knn_mean[small_pos_knn]
small_rmse_knn = cv_knn[small_pos_knn]
plot_graphic(cv_knn[0], seq, "RMSE x Cross", "Fold", "RMSE", "1 Neighboor")
plot_graphic(cv_knn[1], seq, "RMSE x Cross", "Fold", "RMSE", "2 Neighbors")
plot_graphic(small_rmse_knn, seq, "RMSE x Cross", "Fold", "RMSE", "{0} Neighbors".format(small_neighbor_knn))
Explanation: In this graphic, it can be observed that the smallest RMSE (0.736) is obtained when using 20 neighbors.
Neighbor comparison
End of explanation
neighbor = KNeighborsRegressor(n_neighbors=small_neighbor_knn)
neighbor.fit(H, y)
Explanation: This graphic shows the RMSE in each fold for 1, 2 and 20 neighbors.
Training K-NN with the best number of neighbors
End of explanation
def plot_coeficients(values, names):
coef = pd.Series(values, index = names)
matplotlib.rcParams['figure.figsize'] = (8.0, 10.0)
coef.plot(kind = "barh")
plt.title("Coeficients importance")
plot_coeficients(ridge_adjusted.coef_, H.columns)
Explanation: Residual versus Prediction
Ridge
End of explanation
def plot_residual(model, X, y):
matplotlib.rcParams['figure.figsize'] = (6.0, 6.0)
preds = pd.DataFrame({"preds":model.predict(X), "true":y})
preds["residuals"] = preds["true"] - preds["preds"]
preds.plot(x = "preds", y = "residuals",kind = "scatter")
plot_residual(ridge_adjusted, H, y)
Explanation: This graphic shows that the features Grafos, P2, and Discreta are the most important for predicting CRA, while LP1, LP2, P1, and LPT are the least important.
End of explanation
plot_coeficients(lasso_adjusted.coef_, H.columns)
print("Lasso picked " + str(sum(lasso_adjusted.coef_ != 0)) + " variables and eliminated the other " + str(sum(lasso_adjusted.coef_ == 0)) + " variables")
Explanation: This graphic shows that the trained model has no bias with data.
Lasso
End of explanation
plot_residual(lasso_adjusted, H, y)
Explanation: The Lasso regression plots the same coefficient distribution as the Ridge regression.
End of explanation
plot_residual(neighbor, H, y)
Explanation: The Lasso regression has a little difference in comparison with Ridge regression.
K-NN
End of explanation
data_test = pd.read_csv("teste.csv")
y_p = data_test.drop(['matricula','Cálculo1','Vetorial','LPT','P1','IC','LP1','Cálculo2','Discreta','P2','Grafos','Fís.Clássica','LP2'], 1)
H_p = data_test.drop(['matricula','cra'], 1)
H_p = H_p[['Vetorial','LPT','P1','IC','LP1','Cálculo2','Discreta','P2','Grafos','Fís.Clássica','LP2','Cálculo1']]
y_knn = neighbor.predict(H_p)
H_p = np.c_[np.ones(len(H_p)), H_p]
w_ridge = np.r_[ridge_adjusted.intercept_, ridge_adjusted.coef_]
w_lasso = np.r_[lasso_adjusted.intercept_, lasso_adjusted.coef_]
y_ridge = np.dot(H_p, w_ridge)
y_lasso = np.dot(H_p, w_lasso)
Explanation: This graphic shows some patterns in data (some columns in prediction).
Test with real values
End of explanation
from sklearn.metrics import mean_squared_error
rmse_test_ridge = np.sqrt(mean_squared_error(y_ridge, y_p))
rmse_test_lasso = np.sqrt(mean_squared_error(y_lasso, y_p))
rmse_test_knn = np.sqrt(mean_squared_error(y_knn, y_p))
print("RMSE for Ridge Regression test: {0}".format(rmse_test_ridge))
print("RMSE for Lasso Regression test: {0}".format(rmse_test_lasso))
print("RMSE for K-NN test: {0}".format(rmse_test_knn))
Explanation: Print RMSE for all classifiers
End of explanation |
4,537 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: 10. Syntax — Lab exercises
Preparations
Introduction
In this lab, we are going to use the Python Natural Language Toolkit (nltk). It has an API that allows you to create, read, and parse with Context-free Grammars (CFG), as well as to convert parse trees to Chomsky Normal Form (CNF) and back and to display or pretty print them.
During the first few exercises, we are going to acquaint ourselves with nltk using a toy grammar. In the second part, you will be asked to implement the CKY algorithm and test it on a real world treebank.
Infrastructure
For today's exercises, you will need the docker image again. Provided you have already downloaded it last time, you can start it by
Step4: Disclaimer
NLTK is not the only NLP library for Python. [spaCy] is "industrial-strength" library which, like NLTK, implements various NLP tools for multiple languages. However, it also supports neural network models (on the GPU as well) and it integrates word vectors. A comparison is availabe here. We teach NLTK in this course because
1. it lends itself better to education and experimentation
1. of certain scandals
However, if you are doing serious NLP work, you should also consider spaCy.
Exercises
1. Get to know nltk
In this exercise, we are using the toy grammar from the lecture with a little modification so that it can handle ditransitives.
Step7: Unfortunately, generate() only generates the sentences in order. Also, it can run into problems with recursive grammars. Here is a version that generates random sentences.
Step8: Sentences can also be parsed
Step9: The parse returns an iterator of nltk.tree.Tree objects. This class has some useful functions, such as
Step10: Note that in nltk, one can convert a Tree to CNF, but not the whole grammar. nltk has some strange design choices - the other being their reliance on Tcl. If you run this notebook on your own machine, a nifty grammar editing tool will pop up if you run
Step12: 2. Arithmetics
2.1 Basics
Model the four elementary mathematical operations, namely +, -, * and /. Your tasks is to validate mathematical expressions that use them. Specifically
Step13: 2.2 Precedence
If you implemented the previous task with a single nonterminal, you will see that the grammar is undeterministic, and some parses do not reflect the precedence of mathematical operators. Fix the grammar so that it does!
Hints
Step14: 2.3 CNF
Parse an expression and convert the resulting tree into CNF. If you succeed, congratulations, you can skip this exercise.
However, most likely the function will throw an exception. This is because the NLTK algorithm cannot cope with rules that mix nonterminals and terminals in certain ways (e.g. A -> B '+' C). Fix your grammar by introducing a POS-like nonterminal (e.g. add for +) into each such rule.
Step16: 2.4 Evaluation*
Compute the value of the expression. Implement a recursive function that traverses the tree and returns an integer.
Note
Step17: 3. CKY
Up until now, we used NLTK's ChartParser to parse our grammar. In this exercise, we will replace it with our own implementation of CKY.
3.1 The parser class
First, create the CKYParser class. Imitate the interface of ChartParser. You don't need to look up the API
Step19: 3.2 Implement parse()
Implement the parse() method. You don't need to worry about the backpointers for now; just treat the cells of the matrix as a piece of paper and write strings to them. The functions should just return True if the sentence is grammatical and False if it isn't.
Hints
Step20: 3.3 The full monty
Modify parse() so that it returns the parse tree. In the original CKY algorithm, each nonterminal maintains backpointers to its children. Instead, we will build the Tree object directly (which is little more that a label and a list of backpointers, really).
There are two things you should do here
Step21: 4. Treebanks
NLTK also contains corpora. Amongst others, it contains about 10% of the Penn TreeBank (PTB).
4.1 Download
Download the corpus with the nltk.download() tool. It is under Corpora and is called treebank.
4.2 Corpus statistics
The functions below can be used to get the file ids, words, sentences, parse trees from the treebank.
Using them, get the following following corpus statistics | Python Code:
import graphviz
import nltk
from nltk import Nonterminal
from nltk.parse.generate import generate
from nltk.tree import Tree
def does_tcl_work():
"""Checks if Tcl is installed and works (e.g. it won't on a headless server)."""
tree = nltk.tree.Tree('test', [])
try:
tree._repr_png_()
return True
except:
return False
def draw_tree(tree):
"""Draws an NLTK parse tree via Graphviz."""
def draw_tree_rec(curr_root, graph, last_node):
node_id = str(int(last_node) + 1)
for child in curr_root:
if isinstance(child, nltk.tree.Tree):
graph.node(node_id, child.label(), penwidth='0')
graph.edge(last_node, node_id, color='darkslategray3', style='bold')
node_id = draw_tree_rec(child, graph, node_id)
else:
graph.node(node_id, child, penwidth='0')
graph.edge(last_node, node_id, color='darkslategray3', style='bold')
node_id = str(int(node_id) + 1)
return str(int(node_id) + 1)
graph = graphviz.Graph()
graph.graph_attr['ranksep'] = '0.2'
graph.node('0', tree.label(), penwidth='0')
draw_tree_rec(tree, graph, '0')
return graph._repr_svg_()
# Use Graphviz to draw the tree if the Tcl backend of nltk doesn't work
if not does_tcl_work():
svg_formatter = get_ipython().display_formatter.formatters['image/svg+xml']
svg_formatter.for_type(nltk.tree.Tree, draw_tree)
# Delete the nltk drawing function, just to be sure
delattr(Tree, '_repr_png_')
Explanation: 10. Syntax — Lab exercises
Preparations
Introduction
In this lab, we are going to use the Python Natural Language Toolkit (nltk). It has an API that allows you to create, read, and parse with Context-free Grammars (CFG), as well as to convert parse trees to Chomsky Normal Form (CNF) and back and to display or pretty print them.
During the first few exercises, we are going to acquaint ourselves with nltk using a toy grammar. In the second part, you will be asked to implement the CKY algorithm and test it on a real world treebank.
Infrastructure
For today's exercises, you will need the docker image again. Provided you have already downloaded it last time, you can start it by:
docker ps -a: lists all the containers you have created. Pick the one you used last time (with any luck, there is only one)
docker start <container id>
docker exec -it <container id> bash
In order to be able to run today's exercises, you will have to install some system- and Python packages as well:
bash
apt-get install python3-tk
pip install graphviz
When that's done, update your git repository:
bash
cd /nlp/python_nlp_2017_fall/
git pull
And start the notebook:
jupyter notebook --port=9999 --ip=0.0.0.0 --no-browser --allow-root
Boilerplate
The following code imports the packages we are going to use. It also defines a function that draws the parse trees with the Graphviz library. nltk can display the trees, but it depends on Tcl, which doesn't work on a headless (GUI-less) system.
End of explanation
# fromstring() returns a CFG instance from a string
# Observe the two ways one can specify alternations in the grammar
# and how terminal symbols are specified
toy_grammar = nltk.CFG.fromstring("""
S -> NP VP
NP -> Pronoun | ProperNoun | Det Nominal
Nominal -> Nominal Noun
Nominal -> Noun
VP -> Verb | Verb PP | Verb NP | Verb NP PP | Verb NP NP | Verb NP NP PP
PP -> Preposition NP
Pronoun -> 'he' | 'she' | 'him' | 'her'
ProperNoun -> 'John' | 'Mary' | 'Fido'
Det -> 'a' | 'an' | 'the'
Noun -> 'flower' | 'bone' | 'necklace' | 'dream' | 'hole' | 'café' | 'house' | 'bed'
Verb -> 'loves' | 'gives' | 'gave' | 'sleeps' | 'digs' | 'dag' | 'ate'
Preposition -> 'in' | 'on' | 'behind'
""")
# Now for some properties:
print('Max RHS length:', toy_grammar.max_len())
print('The start symbol is', toy_grammar.start())
print('Is it in CNF:', toy_grammar.is_chomsky_normal_form())
print('Is this a lexical grammar:', toy_grammar.is_lexical())
print('All productions:', toy_grammar.productions())
# Let's generate a few sentences
for sentence in generate(toy_grammar, n=10):
print(' '.join(sentence))
Explanation: Disclaimer
NLTK is not the only NLP library for Python. [spaCy] is an "industrial-strength" library which, like NLTK, implements various NLP tools for multiple languages. However, it also supports neural network models (on the GPU as well) and it integrates word vectors. A comparison is available here. We teach NLTK in this course because
1. it lends itself better to education and experimentation
1. of certain scandals
However, if you are doing serious NLP work, you should also consider spaCy.
Exercises
1. Get to know nltk
In this exercise, we are using the toy grammar from the lecture with a little modification so that it can handle ditransitives.
End of explanation
import random
from itertools import count
def generate_sample(grammar, start=None):
"""Generates a single sentence randomly."""
gen = [start or grammar.start()]
curr_p = 0
while curr_p < len(gen):
production = random.choice(grammar.productions(lhs=gen[curr_p]))
if production.is_lexical():
gen[curr_p] = production.rhs()[0]
curr_p += 1
else:
gen = gen[:curr_p] + list(production.rhs()) + gen[curr_p + 1:]
return ' '.join(gen)
def generate_random(grammar, start=None, n=None):
"""Generates sentences randomly."""
for i in count(0):
yield generate_sample(grammar, start)
if i == n:
break
for sentence in generate_random(toy_grammar, n=10):
print(sentence)
Explanation: Unfortunately, generate() only generates the sentences in order. Also, it can run into problems with recursive grammars. Here is a version that generates random sentences.
End of explanation
toy_parser = nltk.ChartParser(toy_grammar)
# the split() part is important
for tree in toy_parser.parse('John gave Mary a flower in the café'.split()):
display(tree)
Explanation: Sentences can also be parsed:
End of explanation
# Converts the tree to CNF
tree.chomsky_normal_form()
display(tree)
# Let's convert it back...
tree.un_chomsky_normal_form()
print('The tree has', len(tree), 'children.')
print('The first child is another tree:', tree[0])
print('All nonterminals are Trees. They have labels:', tree[1].label())
print('Terminals are just strings:', tree[0][0][0])
Explanation: The parse returns an iterator of nltk.tree.Tree objects. This class has some useful functions, such as
End of explanation
nltk.app.rdparser()
Explanation: Note that in nltk, one can convert a Tree to CNF, but not the whole grammar. nltk has some strange design choices - the other being their reliance on Tcl. If you run this notebook on your own machine, a nifty grammar editing tool will pop up if you run
End of explanation
# Your solution here
agr = nltk.CFG.fromstring("""
""")
aparser = nltk.ChartParser(agr)
# Test
for tree in aparser.parse('1 - 2 / ( 3 - 4 )'.split()):
display(tree)
Explanation: 2. Arithmetics
2.1 Basics
Model the four elementary mathematical operations, namely +, -, * and /. Your task is to validate mathematical expressions that use them. Specifically:
- single-digit numbers are valid expressions
- if expr1 and expr2 are valid expressions, these are also valid:
- expr1 + expr2
- expr1 - expr2
- expr1 * expr2
- expr1 / expr2
- (expr1)
Try to solve it with as few nonterminals as possible.
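One possible starting point, shown purely as an illustration (the names demo_grammar and demo_parser are ours; it deliberately uses a single nonterminal, which is exactly what the next exercise asks you to improve):
demo_grammar = nltk.CFG.fromstring("""
E -> E '+' E | E '-' E | E '*' E | E '/' E | '(' E ')'
E -> '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'
""")
demo_parser = nltk.ChartParser(demo_grammar)
print(len(list(demo_parser.parse('1 + 2 * 3'.split()))))  # > 0, but several (ambiguous) parses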
End of explanation
# Your solution here
# Test
for tree in aparser.parse('1 - 2 / ( 3 - 4 )'.split()):
display(tree)
assert len(list(aparser.parse('1 - 2 + 3 / ( 4 - 5 )'.split()))) > 0
Explanation: 2.2 Precedence
If you implemented the previous task with a single nonterminal, you will see that the grammar is ambiguous, and some parses do not reflect the precedence of mathematical operators. Fix the grammar so that it does!
Hints:
- + and - should be higher up the tree than * and /
- you will need at least 3 nonterminals
- allow chaining of the same operator types, e.g. 1 + 2 - 3. One of the nonterminals in the toy grammar above does something similar
- do not worry about unit productions, but don't create a unit recursion cycle (e.g. A -> B -> C -> A)
End of explanation
# Your solution here
# Test
tree = list(aparser.parse('1 - 2 / ( 3 - 4 )'.split()))[0]
tree.chomsky_normal_form()
display(tree)
Explanation: 2.3 CNF
Parse an expression and convert the resulting tree into CNF. If you succeed, congratulations, you can skip this exercise.
However, most likely the function will throw an exception. This is because the NLTK algorithm cannot cope with rules that mix nonterminals and terminals in certain ways (e.g. A -> B '+' C). Fix your grammar by introducing a POS-like nonterminal (e.g. add for +) into each such rule.
End of explanation
def evaluate_tree(tree):
"""Returns the value of the expression represented by tree."""
pass
# Test
assert evaluate_tree(next(aparser.parse('1+2'))) == 3
assert evaluate_tree(next(aparser.parse('1+2*3'))) == 7
assert evaluate_tree(next(aparser.parse('3/(2-3)-4/2-5'))) == -10
Explanation: 2.4 Evaluation*
Compute the value of the expression. Implement a recursive function that traverses the tree and returns an integer.
Note: if you implemented this function well, but get an AssertionError from the last line, it means that your grammar is probably right associative. Look at the (non-CNF) tree to confirm this. If so, make it left associative.
End of explanation
class CKYParser:
pass
Explanation: 3. CKY
Up until now, we used NLTK's ChartParser to parse our grammar. In this exercise, we will replace it with our own implementation of CKY.
3.1 The parser class
First, create the CKYParser class. Imitate the interface of ChartParser. You don't need to look up the API: support only the functions we used thus far.
End of explanation
import numpy
# Test
grammar = nltk.CFG.fromstring("""
S -> NP VP | ProperNoun VP | NP Verb | ProperNoun Verb
NP -> Det Nominal | Det Noun
Nominal -> Nominal Noun | Noun Noun
VP -> Verb NP | Verb ProperNoun
Det -> 'the'
Noun -> 'dog' | 'bit'
ProperNoun -> 'John'
Verb -> 'bit'
""")
parser = CKYParser(grammar)
print('Sentence is grammatical:', parser.parse('the dog bit John'.split()))
Explanation: 3.2 Implement parse()
Implement the parse() method. You don't need to worry about the backpointers for now; just treat the cells of the matrix as a piece of paper and write strings to them. The functions should just return True if the sentence is grammatical and False if it isn't.
Hints:
- the easiest format for the matrix is probably a 2D numpy array with a list in each cell (we might have multiple candidates in a cell). Use dtype=object. Don't forget to initialize it.
- the display() method works on arrays and is a useful tool for debugging
- in 2D numpy arrays, rows are numbered from top to bottom. That takes care of the cell indexing part, because a cell represents the words sentence[row:col+1].
- Implement just the main diagonal (lexical rules) first.
- Use the grammar.productions() function to get the list of production rules. To see how to use it, refer to
- the generate_sample function above
- help(grammar.productions)
- Note that in the production rules returned by grammar.productions(), terminals will be strings, and nonterminals instances of the Nonterminal object. You can get the actual symbol out of the latter with the symbol() method.
Use the CNF grammar below for development and the example sentence for testing.
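To make the hints concrete, here is a compact sketch of the recognition logic only (an illustration, not the full CKYParser class; it assumes a CNF grammar such as the one defined in the test cell below):
def cky_recognize(grammar, words):
    n = len(words)
    table = [[set() for _ in range(n)] for _ in range(n)]   # table[i][j] covers words[i..j]
    for i, word in enumerate(words):                        # lexical rules on the diagonal
        for prod in grammar.productions(rhs=word):
            if len(prod.rhs()) == 1:
                table[i][i].add(prod.lhs())
    for span in range(2, n + 1):                            # longer spans, bottom-up
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                           # split point
                for prod in grammar.productions():
                    if (len(prod.rhs()) == 2 and
                            prod.rhs()[0] in table[i][k] and
                            prod.rhs()[1] in table[k + 1][j]):
                        table[i][j].add(prod.lhs())
    return grammar.start() in table[0][n - 1]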
End of explanation
# Test
parser = CKYParser(grammar)
for tree in parser.parse('the dog bit John'.split()):
display(tree)
Explanation: 3.3 The full monty
Modify parse() so that it returns the parse tree. In the original CKY algorithm, each nonterminal maintains backpointers to its children. Instead, we will build the Tree object directly (which is little more than a label and a list of backpointers, really).
There are two things you should do here:
1. When filling a cell: instead of adding the name of the nonterminal to the list in the cell, add a Tree with the name as label and the right children. The constructor's signature is Tree(node, children), where the latter is a list.
2. Change your method to be a generator: yield all Trees from the top right cell whose label is S.
Don't forget that Tree.label()s are strings, so if you want to look for them in grammar.productions(), enclose them into a Nonterminal object.
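Two tiny illustrations of the pieces mentioned above (not part of the original notebook):
# A Tree is built from a label and a list of children
t = Tree('NP', [Tree('Det', ['the']), Tree('Noun', ['dog'])])
print(t)

# Labels are plain strings, so wrap them in Nonterminal before querying the grammar
print(grammar.productions(lhs=Nonterminal('NP')))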
End of explanation
from nltk.corpus import treebank
# PTB file ids
print('Ids:', treebank.fileids())
# Words in one of the files
print('Words:', treebank.words('wsj_0003.mrg'))
# Word - POS-tag pairs
print('Tagged words:', treebank.tagged_words('wsj_0003.mrg'))
display(treebank.parsed_sents('wsj_0003.mrg')[0])
# Your solution here
Explanation: 4. Treebanks
NLTK also contains corpora. Amongst others, it contains about 10% of the Penn TreeBank (PTB).
4.1 Download
Download the corpus with the nltk.download() tool. It is under Corpora and is called treebank.
4.2 Corpus statistics
The functions below can be used to get the file ids, words, sentences, parse trees from the treebank.
Using them, get the following corpus statistics:
- the number of sentences
- number of words
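A possible sketch for the two statistics (assuming the treebank sample has already been downloaded):
print('Number of sentences:', len(treebank.parsed_sents()))
print('Number of words:', len(treebank.words()))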
End of explanation |
4,538 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Illustration of A-projection
How to deal with direction- and baseline-dependent delays when imaging using $A$-kernels.
Step1: Generate baseline coordinates for a short observation with the VLA where the target is near the zenith. This means minimal w-values - the easy case.
Step2: Ionosphere
However, let us assume that the ionosphere introduces random delays (refraction) into our data based on the location of the antenna and direction. Normally this delay screen would depend on both time and frequency too, but let us ignore that for the moment to keep things simple
Step3: For example, here is the phase screen applying to our field of view at the centre of the telescope
Step4: Now let's simulate visibilities. The delay will depend on both antennas involved in the baseline and the target position. This introduces some considerable "noise" into the phases
Step5: Because we chose low $w$-values, we can use simple imaging here. However, the noise we added to the phases messes quite a bit with our ability to locate sources
Step6: The tricky aspect of this noise is that it is direction-dependent. This means that they have to be removed within imaging where we introduce direction again. As we will be working in the grid, we therefore make $A$-kernels that compensate for the introduced phase error.
Note that normally $A$-kernels are unknowns, so we would at best have approximations of those available
Step7: Our actual kernels will however we for antenna combinations (baselines). Therefore we make combinations. These are our actual kernels, so now we can do the FFT. We reduce the support a bit as well to make imaging faster.
Step8: Some examples for the kernels we just generated. Short baselines will see almost exactly the same ionosphere, so the kernel is going to be relatively trivial - dominated by a single dot at $(0,0)$
Step9: On the other hand, for long baselines there is much more turbulence to compensate for, so the kernels start looking increasingly chaotic
Step10: As random as these kernels look, they are exactly what we need to restore imaging performance
Step11: Not required, but we can also easily add an anti-aliasing function into the mix and oversample the kernel. Gridding complexity doesn't change, and we get more accuracy | Python Code:
%matplotlib inline
import sys
sys.path.append('../..')
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = 10, 10
import functools
import numpy
import scipy
import scipy.special
import astropy
import astropy.units as u
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from ipywidgets import interact
from crocodile.synthesis import *
from crocodile.simulate import *
from util.visualize import *
from arl.test_support import create_named_configuration
Explanation: Illustration of A-projection
How to deal with direction- and baseline-dependent delays when imaging using $A$-kernels.
End of explanation
theta = 0.04
lam = 18000
grid_size = int(theta * lam)
vlas = create_named_configuration('VLAA_north')
ha_range = numpy.arange(numpy.radians(-30),
numpy.radians(30),
numpy.radians(90 / 360))
dec = vlas.location.lat
vobs = xyz_to_baselines(vlas.data['xyz'], ha_range, dec)
# Create antenna mapping for visibilities
antennas = vlas.data['xyz']
nant = len(antennas)
ant1,ant2 = baseline_ids(nant, len(ha_range))
ant1xy = vlas.data['xyz'][ant1,:2]
ant2xy = vlas.data['xyz'][ant2,:2]
# Wavelength: 5 metres
wvl=5
uvw = vobs / wvl
ax = plt.figure().add_subplot(111, projection='3d')
ax.scatter(uvw[:,0], uvw[:,1] , uvw[:,2])
max_uvw = numpy.amax(uvw)
ax.set_xlabel('U [$\lambda$]'); ax.set_xlim((-max_uvw, max_uvw))
ax.set_ylabel('V [$\lambda$]'); ax.set_ylim((-max_uvw, max_uvw))
ax.set_zlabel('W [$\lambda$]'); ax.set_zlim((-max_uvw, max_uvw))
ax.view_init(20, 20)
pylab.show()
Explanation: Generate baseline coordinates for a short observation with the VLA where the target is near the zenith. This means minimal w-values - the easy case.
End of explanation
ion_res = 2000 # m
ion_height = 300000 # m
ion_fov = int(theta * ion_height)
print("Ionospheric field of view:", ion_fov//1000, "km")
ion_size = 74000 + ion_fov # m
print("Delay screen size:", ion_size//1000, "km")
ion_max_delay = 2e-8 # s
numpy.random.seed(0)
ion_delay = ion_max_delay * numpy.random.random((ion_size // ion_res, ion_size // ion_res))
# Visualise, including antenna (ground) positions (for ha=0) to give a sense of scale
ax = plt.subplot()
img = ax.imshow(ion_delay,interpolation='bilinear',
extent=(-ion_size/2,ion_size/2,-ion_size/2,ion_size/2));
ax.scatter(vlas.data['xyz'][:,0], vlas.data['xyz'][:,1], c='red')
ax.set_title("Ionospheric delay"); plt.colorbar(img)
ax.set_xlabel('X [m]'); ax.set_ylabel('Y [m]');
Explanation: Ionosphere
However, let us assume that the ionosphere introduces random delays (refraction) into our data based on the location of the antenna and direction. Normally this delay screen would depend on both time and frequency too, but let us ignore that for the moment to keep things simple:
End of explanation
def ion_sample(ant, l, m):
# Sample image at appropriate position over the antenna
d = sample_image(ion_delay, (ant[0] + l * ion_height) / ion_res,
(ant[1] + m * ion_height) / ion_res)
# Convert to phase difference for our wavelength
return(numpy.exp(2j * numpy.pi * d * astropy.constants.c.value / wvl))
ls, ms = theta * coordinates2(5*ion_fov // ion_res)
pylab.rcParams['figure.figsize'] = 16, 10
show_image(ion_sample((0,0), ls, ms), "phase screen", theta);
Explanation: For example, here is the phase screen applying to our field of view at the centre of the telescope:
End of explanation
def add_point(l, m):
phasor = ion_sample(numpy.transpose(antennas[ant1,:2]), l, m) / \
ion_sample(numpy.transpose(antennas[ant2,:2]), l, m)
return phasor, phasor * simulate_point(uvw, l,m)
# Grid of points in the middle
vis = numpy.zeros(len(uvw), dtype=complex)
import itertools
for il, im in itertools.product(range(-3, 4), range(-3, 4)):
vis += add_point(theta/10*il, theta/10*im)[1]
# Extra dot to mark upper-right corner
vis += add_point(theta*0.28, theta*0.28)[1]
# Extra dot to mark upper-left corner
vis += add_point(theta*-0.32, theta*0.28)[1]
Explanation: Now let's simulate visibilities. The delay will depend on both antennas involved in the baseline and the target position. This introduces some considerable "noise" into the phases:
End of explanation
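The division of the two antenna phasors in add_point above simply says that a baseline sees the difference of the per-antenna phase errors; a minimal standalone check with assumed phase values:
```
import numpy
phi1, phi2 = 0.7, 0.3                     # assumed per-antenna phase errors in radians
p1, p2 = numpy.exp(1j * phi1), numpy.exp(1j * phi2)
baseline_phasor = p1 / p2                 # what the baseline measures
print(numpy.angle(baseline_phasor), phi1 - phi2)   # both equal 0.4
```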
d,p,_=do_imaging(theta, lam, uvw, None, vis, simple_imaging)
show_image(d, "image", theta)
def zoom(l=0, m=0): show_image(d, "image", theta, xlim=(l-theta/10,l+theta/10), ylim=(m-theta/10,m+theta/10))
interact(zoom, l=(-theta/2,theta/2,theta/10), m=(-theta/2,theta/2,theta/10));
Explanation: Because we chose low $w$-values, we can use simple imaging here. However, the noise we added to the phases messes quite a bit with our ability to locate sources:
End of explanation
ion_oversample = 10
ls, ms = theta * coordinates2(ion_oversample * ion_fov // ion_res)
print("A pattern size: %dx%d" % ls.shape)
apattern = []
for ant in range(nant):
apattern.append(ion_sample(vlas.data['xyz'][ant], ls, ms))
show_image(apattern[0], "apattern", theta)
show_grid(fft(apattern[0]), "akern", theta)
Explanation: The tricky aspect of this noise is that it is direction-dependent. This means that these errors have to be removed within imaging, where we introduce direction again. As we will be working in the grid, we therefore make $A$-kernels that compensate for the introduced phase error.
Note that normally $A$-kernels are unknowns, so we would at best have approximations of those available:
End of explanation
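To see why the FFT of such a phase pattern acts as a gridding kernel, here is a numpy-only sketch (it does not use the crocodile fft/show helpers): a constant phase screen transforms into a single central pixel, while a turbulent screen spreads its energy across the kernel.
```
import numpy
N = 32
flat = numpy.exp(1j * 0.5 * numpy.ones((N, N)))                  # constant phase screen
rough = numpy.exp(2j * numpy.pi * numpy.random.random((N, N)))   # turbulent phase screen
def kern(pattern):
    # Far-field kernel of the screen: centred FFT, normalised by the number of pixels
    return numpy.fft.fftshift(numpy.fft.fft2(numpy.fft.ifftshift(pattern))) / pattern.size
for name, screen in [("flat", flat), ("rough", rough)]:
    k = kern(screen)
    print(name, numpy.abs(k[N//2, N//2]), numpy.abs(k).max())
```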
Nkern = min(25, ls.shape[0])
akern_combs = numpy.empty((nant, nant, 1, 1, Nkern, Nkern), dtype=complex)
for a1 in range(nant):
for a2 in range(a1+1,nant):
akern_combs[a1, a2, 0, 0] = extract_mid(fft(apattern[a2] / apattern[a1]), Nkern)
Explanation: Our actual kernels will however be for antenna combinations (baselines). Therefore we make combinations. These are our actual kernels, so now we can do the FFT. We reduce the support a bit as well to make imaging faster.
End of explanation
show_grid(akern_combs[0,1,0,0], "aakern", theta)
Explanation: Some examples for the kernels we just generated. Short baselines will see almost exactly the same ionosphere, so the kernel is going to be relatively trivial - dominated by a single dot at $(0,0)$:
End of explanation
longest = numpy.argmax(uvw[:,0]**2+uvw[:,1]**2)
show_grid(akern_combs[ant1[longest], ant2[longest],0,0], "aakern", theta)
Explanation: On the other hand, for long baselines there is much more turbulence to compensate for, so the kernels start looking increasingly chaotic:
End of explanation
d_w,p_w,_=do_imaging(theta, lam, uvw, numpy.transpose([ant1, ant2]), vis, conv_imaging, kv=akern_combs)
show_image(d_w, "image", theta)
def zoom(l=0, m=0): show_image(d_w, "image", theta, xlim=(l-theta/10,l+theta/10), ylim=(m-theta/10,m+theta/10))
interact(zoom, l=(-theta/2,theta/2,theta/10), m=(-theta/2,theta/2,theta/10));
Explanation: As random as these kernels look, they are exactly what we need to restore imaging performance:
End of explanation
Qpx = 8; c = 5
aa = anti_aliasing_function(ls.shape, 0, c)
akern_combs2 = numpy.empty((nant, nant, Qpx, Qpx, Nkern, Nkern), dtype=complex)
for a1 in range(nant):
for a2 in range(a1+1,nant):
akern_combs2[a1, a2] = kernel_oversample(aa * apattern[a2] / apattern[a1], Qpx, Nkern)
show_grid(akern_combs2[0,1,0,0], "aakern", theta)
show_grid(akern_combs2[ant1[longest], ant2[longest],0,0], "aakern", theta)
d_w2,p_w2,_=do_imaging(theta, lam, uvw, numpy.transpose([ant1, ant2]), vis, conv_imaging, kv=akern_combs2)
d_w2 /= anti_aliasing_function(d_w2.shape, 0, c)
show_image(d_w2, "image", theta)
def zoom(l=0, m=0): show_image(d_w2, "image", theta, xlim=(l-theta/10,l+theta/10), ylim=(m-theta/10,m+theta/10))
interact(zoom, l=(-theta/2,theta/2,theta/10), m=(-theta/2,theta/2,theta/10));
Explanation: Not required, but we can also easily add an anti-aliasing function into the mix and oversample the kernel. Gridding complexity doesn't change, and we get more accuracy:
End of explanation |
4,539 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
★ Monte Carlo Simulation To Calculate PI ★
Step1: Necessary Function For Monte Carlo Simulation
Step2: Monte Carlo Simulation (with Minimal standard random number generator)
Step3: Monte Carlo Simulation (with LCG where multiplier = 13, offset = 0 and modulus = 31)
Step4: Monte Carlo Simulation (with Quasi-random numbers) | Python Code:
# Import modules
import time
import math
import numpy as np
import scipy
import matplotlib.pyplot as plt
Explanation: ★ Monte Carlo Simulation To Calculate PI ★
End of explanation
def linear_congruential_generator(x, a, b, m):
x = (a * x + b) % m
u = x / m
return u, x, a, b, m
def stdrand(x):
return linear_congruential_generator(x, pow(7, 5), 0, pow(2, 31) - 1)[:2]
def halton(p, n):
# Quasi-random (van der Corput / Halton) sequence in base p: the j-th value is the
# radical inverse of j, i.e. the base-p digits of j mirrored about the radix point.
b = np.zeros(math.ceil(math.log(n + 1) / math.log(p)))  # base-p digit counter for the index
u = np.zeros(n)
for j in range(n):
# Increment the base-p counter, propagating any carries
i = 0
b[0] = b[0] + 1
while b[i] > p - 1 + np.finfo(float).eps:
b[i] = 0
i += 1
b[i] += 1
# Mirror the digits: digit k contributes b[k-1] * p**(-k)
u[j] = 0
for k in range(1, b.size + 1):
u[j] = u[j] + b[k-1] * pow(p, -k)
return u
Explanation: Necessary Function For Monte Carlo Simulation
End of explanation
def monte_carlo_process_std(toss):
x = time.time()
hit = 0
for i in range(toss):
u1, x = stdrand(x)
u2, x = stdrand(x)
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_std(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
Explanation: Monte Carlo Simulation (with Minimal standard random number generator)
End of explanation
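For context on the size of the printed err: with N tosses the hit count is binomial with p = pi/4, so the pi estimate has a standard error of about 4*sqrt(p*(1-p)/N), roughly 0.0012 for N = 2,000,000. A quick back-of-the-envelope check:
```
import math
N = 2000000
p = math.pi / 4.0
std_err = 4.0 * math.sqrt(p * (1.0 - p) / N)
print('expected standard error of the pi estimate: %.6f' % std_err)
```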
def monte_carlo_process_customized(toss):
x0 = time.time()
args = (x0, 13, 0, 31)
hit = 0
for i in range(toss):
u1, *args = linear_congruential_generator(*args)
u2, *args = linear_congruential_generator(*args)
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_customized(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
Explanation: Monte Carlo Simulation (with LCG where multiplier = 13, offset = 0 and modulus = 31)
End of explanation
def monte_carlo_process_quasi(toss):
hit = 0
px = halton(2, toss)
py = halton(3, toss)
for i in range(toss):
u1 = px[i]
u2 = py[i]
if pow(u1, 2) + pow(u2, 2) < 1.0:
hit += 1
return hit * 4.0 / toss
pi = monte_carlo_process_quasi(2000000)
print('pi = %.10f, err = %.10f' %(pi, abs(pi - np.pi)))
Explanation: Monte Carlo Simulation (with Quasi-random numbers)¶
End of explanation |
4,540 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='top'></a>
Complex vibration modes
Complex vibration modes arise in experimental research and numerical simulations with non proportional damping. In such cases the eigenproblem is nonlinear, thus requiring additional work in order to obtain the vibration frequencies and mode shapes of the dynamic system.
The solution to the free vibration response of the dynamic equilibrium equation involves some form of linearization of the quadratic eigenvalue problem it yields. For more information on this subject see for example this report.
Table of contents
Preamble
Dynamic equilibrium equation
State space formulation
Dynamic system setup
Undamped system
Proportionally damped system
Non proportionally damped system
Conclusions
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots
Step1: We will also need a couple of specific modules and a little "IPython magic" to show the plots
Step2: Back to top
Dynamic equilibrium equation
In structural dynamics the second order differential dynamic equilibrium equation can be written in terms of generalized coordinates (d[isplacement]) and their first (v[elocity]) and second (a[cceleration]) time derivatives
Step3: We will also setup two damping matrices, one proportional to the mass and stiffness matrices (C1) and the other non proportional (C2)
Step4: Back to top
Undamped system
In the undamped system the damping matrix is all zeros and therefore the eigenproblem is a linear one as it involves only the mass and stiffness matrices
Step5: The angular frequencies are computed as the square root of the eigenvalues
Step6: The modal vectors, the columns of the modal matrix, have unit norm
Step7: Contrary to what is normally done, we will visualize the modal vectors in a polar plot of the corresponding amplitudes and angles of equivalent complex values
Step8: Back to top
Proportionally damped system
This damping matrix is diagonalized by the undamped modal matrix because it is a linear combination of the mass and stiffness matrices
Step9: The system and input matrices are the following
Step10: The eigenanalysis yields the eigenvalues and eigenvectors
Step11: As we can see, the eigenvalues come in complex conjugate pairs. Let us take only the ones in the upper half-plane
Step12: These complex eigenvalues can be decomposed into angular frequency and damping coefficient
Step13: The columns of the modal matrix, the modal vectors, also come in conjugate pairs, each vector having unit norm
Step14: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step15: We will visualize again the complex valued modal vectors with a polar plot of the corresponding amplitudes and angles
Step16: Back to top
Non proportionally damped system
In non proportionally damped systems the damping matrix is proportional neither to the mass matrix nor to the stiffness matrix.
Non proportional damping means that the damping matrix is no longer diagonalized by the undamped modal matrix
Step17: The system and input matrices are the following
Step18: The eigenanalysis yields the eigenvalues and eigenvectors of the system matrix
Step19: As we can see, the eigenvalues come in complex conjugate pairs. Again, let us take only the ones in the upper half-plane
Step20: These complex eigenvalues can be decomposed into angular frequency and damping coefficient much like in the proportional damping case
Step21: Again, the columns of the modal matrix, the modal vectors, come in conjugate pairs, and each vector has unit norm
Step22: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices
Step23: Once more we will visualize the complex valued modal vectors through a polar plot of the corresponding amplitudes and angles | Python Code:
import sys
import numpy as np
import scipy as sp
import matplotlib as mpl
print('System: {}'.format(sys.version))
print('numpy version: {}'.format(np.__version__))
print('scipy version: {}'.format(sp.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
Explanation: <a id='top'></a>
Complex vibration modes
Complex vibration modes arise in experimental research and numerical simulations with non proportional damping. In such cases the eigenproblem is nonlinear, thus requiring additional work in order to obtain the vibration frequencies and mode shapes of the dynamic system.
The solution to the free vibration response of the dynamic equilibrium equation involves some form of linearization of the quadratic eigenvalue problem it yields. For more information on this subject see for example this report.
Table of contents
Preamble
Dynamic equilibrium equation
State space formulation
Dynamic system setup
Undamped system
Proportionally damped system
Non proportionally damped system
Conclusions
Odds and ends
Preamble
We will start by setting up the computational environment for this notebook. Furthermore, we will need numpy and scipy for the numerical simulations and matplotlib for the plots:
End of explanation
from numpy import linalg as LA
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: We will also need a couple of specific modules and a little "IPython magic" to show the plots:
End of explanation
MM = np.matrix(np.diag([1., 2.]))
print(MM)
KK = np.matrix([[20., -10.], [-10., 10.]])
print(KK)
Explanation: Back to top
Dynamic equilibrium equation
In structural dynamics the second order differential dynamic equilibrium equation can be written in terms of generalized coordinates (d[isplacement]) and their first (v[elocity]) and second (a[cceleration]) time derivatives:
\begin{equation}
\mathbf{M} \times \mathbf{a(t)} + \mathbf{C} \times \mathbf{v(t)} + \mathbf{K} \times \mathbf{d(t)} = \mathbf{F(t)}
\end{equation}
where:
$\mathbf{M}$ is the mass matrix
$\mathbf{C}$ is the damping matrix
$\mathbf{K}$ is the stiffness matrix
$\mathbf{a(t)}$ is the acceleration vector
$\mathbf{v(t)}$ is the velocity vector
$\mathbf{d(t)}$ is the displacement vector
$\mathbf{F(t)}$ is the force input vector
Considering a dynamic system where $NDOF$ is the number of generalized degrees of freedom, the vectors will have dimensions of $NDOF \times 1$ and the matrices $NDOF \times NDOF$.
When the system is undamped, the damping matrix will be null. In this case the eigenproblem is linear:
\begin{equation}
\left[ -\mathbf{M} \times \mathbf{\omega^2} + \mathbf{K} \right] \times \mathbf{v} = \mathbf{0}
\end{equation}
In a proportionally damped system, the damping matrix is proportional to the mass and stiffness matrices:
\begin{equation}
\mathbf{C} = \alpha \times \mathbf{M} + \beta \times \mathbf{K}
\end{equation}
where $\alpha$ and $\beta$ are mass and stiffness proportionality coefficients. These are typically very small positive numbers. The resulting eigenproblem is still linear because the damping matrix can be decomposed by the modal vectors.
In a system with non proportional damping, the damping matrix will be proportional neither to the mass matrix nor to the stiffness matrix.
Back to top
Dynamic system setup
In this example we will use the following mass and stiffness matrices:
End of explanation
C1 = 0.1*MM+0.04*KK
print(C1)
C2 = np.matrix([[0.1, 0.2], [0.2, 0.2]])
print(C2)
Explanation: We will also setup two damping matrices, one proportional to the mass and stiffness matrices (C1) and the other non proportional (C2):
End of explanation
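As a side note, the Rayleigh coefficients in C = alpha*M + beta*K are often chosen by prescribing damping ratios at two angular frequencies, using zeta(w) = (alpha/w + beta*w)/2. A minimal sketch with assumed target frequencies and ratios:
```
import numpy as np
w1, w2 = 2.0, 12.0        # assumed target angular frequencies in rad/s
z1, z2 = 0.02, 0.05       # assumed target damping ratios at those frequencies
# Solve [[1/w1, w1], [1/w2, w2]] * [alpha, beta]^T = 2*[z1, z2]^T
A = np.array([[1.0 / w1, w1], [1.0 / w2, w2]])
alpha, beta = np.linalg.solve(A, 2.0 * np.array([z1, z2]))
print(alpha, beta)
# Check: recover the prescribed ratios
for w in (w1, w2):
    print(w, 0.5 * (alpha / w + beta * w))
```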
W2, F1 = LA.eig(LA.solve(MM,KK)) # eigenanalysis
ix = np.argsort(np.absolute(W2)) # sort eigenvalues in ascending order
W2 = W2[ix] # sorted eigenvalues
F1 = F1[:,ix] # sorted eigenvectors
print(np.round_(W2, 4))
print(np.round_(F1, 4))
Explanation: Back to top
Undamped system
In the undamped system the damping matrix is all zeros and therefore the eigenproblem is a linear one as it involves only the mass and stiffness matrices:
End of explanation
print(np.sqrt(W2))
Explanation: The angular frequencies are computed as the square root of the eigenvalues:
End of explanation
print(LA.norm(F1, axis=0))
Explanation: The modal vectors, the columns of the modal matrix, have unit norm:
End of explanation
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(F1[dof,mode])])
t = np.array([0, np.angle(F1[dof,mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
Explanation: Contrary to what is normally done, we will visualize the modal vectors in a polar plot of the corresponding amplitudes and angles of equivalent complex values:
End of explanation
print(np.round_(F1.T*C1*F1, 4))
Explanation: Back to top
Proportionally damped system
This damping matrix is diagonalized by the undamped modal matrix (the off-diagonal terms of the product above vanish) because it is a linear combination of the mass and stiffness matrices:
End of explanation
A = np.bmat([[np.zeros_like(MM), MM], [MM, C1]])
print(A)
B = np.bmat([[MM, np.zeros_like(MM)], [np.zeros_like(MM), -KK]])
print(B)
Explanation: The system and input matrices are the following:
End of explanation
w1, v1 = LA.eig(LA.solve(A,B))
ix = np.argsort(np.absolute(w1))
w1 = w1[ix]
v1 = v1[:,ix]
print(np.round_(w1, 4))
print(np.round_(v1, 4))
Explanation: The eigenanalysis yields the eigenvalues and eigenvectors:
End of explanation
print(np.round_(w1[::2], 4))
Explanation: As we can see, the eigenvalues come in complex conjugate pairs. Let us take only the ones in the upper half-plane:
End of explanation
zw = -w1.real # damping coefficient times angular frequency
wD = w1.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
Explanation: These complex eigenvalues can be decomposed into angular frequency and damping coefficient:
End of explanation
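Equivalently, for an underdamped pole lambda = -zn*wn + i*wn*sqrt(1-zn^2) we have wn = |lambda| and zn = -Re(lambda)/|lambda|, which gives the same numbers as the computation above. A quick self-contained check with an assumed eigenvalue:
```
import numpy as np
lam = complex(-0.1, 2.0)          # an assumed underdamped eigenvalue
wn = abs(lam)                     # undamped angular frequency
zn = -lam.real / abs(lam)         # damping coefficient
# Rebuild the eigenvalue from (wn, zn) and compare
lam_check = -zn * wn + 1j * wn * np.sqrt(1.0 - zn**2)
print(wn, zn, lam, lam_check)
```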
print(LA.norm(v1[:,::2], axis=0))
Explanation: The columns of the modal matrix, the modal vectors, also come in conjugate pairs, each vector having unit norm:
End of explanation
AA = v1[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w1[[0,2]])
BB = BA.conjugate()
v1_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v1_new[:,[0,2,1,3]], 4))
Explanation: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices:
End of explanation
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v1[dof,2*mode])])
t = np.array([0, np.angle(v1[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
Explanation: We will visualize again the complex valued modal vectors with a polar plot of the corresponding amplitudes and angles:
End of explanation
print(np.round_(F1.T*C2*F1, 4))
Explanation: Back to top
Non proportionally damped system
In non proportionally damped systems the damping matrix is proportional neither to the mass matrix nor to the stiffness matrix.
Non proportional damping means that the damping matrix is no longer diagonalized by the undamped modal matrix:
End of explanation
A = np.bmat([[np.zeros_like(MM), MM], [MM, C2]])
print(A)
B = np.bmat([[MM, np.zeros_like(MM)], [np.zeros_like(MM), -KK]])
print(B)
Explanation: The system and input matrices are the following:
End of explanation
w2, v2 = LA.eig(LA.solve(A,B))
ix = np.argsort(np.absolute(w2))
w2 = w2[ix]
v2 = v2[:,ix]
print(np.round_(w2, 4))
print(np.round_(v2, 4))
Explanation: The eigenanalysis yields the eigenvalues and eigenvectors of the system matrix:
End of explanation
print(np.round_(w2[[0,2]], 4))
Explanation: As we can see, the eigenvalues come in complex conjugate pairs. Again, let us take only the ones in the upper half-plane:
End of explanation
zw = -w2.real # damping coefficient times angular frequency
wD = w2.imag # damped angular frequency
zn = 1./np.sqrt(1.+(wD/-zw)**2) # the minus sign is formally correct!
wn = zw/zn # undamped angular frequency
print('Angular frequency: {}'.format(wn[[0,2]]))
print('Damping coefficient: {}'.format(zn[[0,2]]))
Explanation: These complex eigenvalues can be decomposed into angular frequency and damping coefficient much like in the proportional damping case:
End of explanation
print(LA.norm(v2[:,[0,2]], axis=0))
Explanation: Again, the columns of the modal matrix, the modal vectors, come in conjugate pairs, and each vector has unit norm:
End of explanation
AA = v2[:2,[0,2]]
AB = AA.conjugate()
BA = np.multiply(AA,w2[[0,2]])
BB = BA.conjugate()
v2_new = np.bmat([[AA, AB], [BA, BB]])
print(np.round_(v2_new[:,[0,2,1,3]], 4))
Explanation: Moreover, the modal matrix is composed of four blocks, each with $NDOF \times NDOF$ dimension. Some column reordering is necessary in order to match both modal matrices:
End of explanation
fig, ax = plt.subplots(1, 2, subplot_kw=dict(polar=True))
for mode in range(2):
ax[mode].set_title('Mode #{}'.format(mode+1))
for dof in range(2):
r = np.array([0, np.absolute(v2[dof,2*mode])])
t = np.array([0, np.angle(v2[dof,2*mode])])
ax[mode].plot(t, r, 'o-', label='DOF #{}'.format(dof+1))
plt.legend(loc='lower left', bbox_to_anchor=(1., 0.))
plt.show()
Explanation: Once more we will visualize the complex valued modal vectors through a polar plot of the corresponding amplitudes and angles:
End of explanation |
4,541 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting Babyweight Using BigQuery ML
Learning Objectives
- Explore the machine learning capabilities of BigQuery
- Learn how to train a linear regression model in BigQuery
- Examine the TRAINING_INFO produced by training a model
- Make predictions with a trained model in BigQuery using ML.PREDICT
Introduction
BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat. But, for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow. That being said, what BQML gives up in complexity, it gains in ease of use.
Please see this notebook for more context on this problem and how the features were chosen.
We'll start as usual by setting our environment variables.
Step1: Exploring the data
Here, we will be taking natality data and training on features to predict the birth weight.
The CDC's Natality data has details on US births from 1969 to 2008 and is available in BigQuery as a public data set. More details
Step2: Define features
Looking over the data set, there are a few columns of interest that could be leveraged into features for a reasonable prediction of approximate birth weight.
Further, some feature engineering may be accomplished with the BigQuery CAST function -- in BQML, all strings are considered categorical features and all numeric types are considered continuous ones.
The hashmonth is added so that we can repeatably split the data without leakage -- we want all babies that share a birthday to be either in training set or in test set and not spread between them (otherwise, there would be information leakage when it comes to triplets, etc.)
Step3: Train a model in BigQuery
With the relevant columns chosen to accomplish predictions, it is then possible to create (train) the model in BigQuery. First, a dataset will be needed store the model. (if this throws an error in Datalab, simply create the dataset from the BigQuery console).
Step4: With the demo dataset ready, it is possible to create a linear regression model to train the model.
This will take approximately 4 minutes to run and will show Done when complete.
Exercise 1
Complete the TODOs in the cell below to train a linear regression model in BigQuery using weight_pounds as the label. Name your model babyweight_model_asis; it will reside within the demo dataset we created above.
Have a look at the documentation for CREATE MODEL in BQML to see examples of the correct syntax.
Step5: Explore the training statistics
During the model training (and after the training), it is possible to see the model's training evaluation statistics. For each training run, a table named <model_name>_eval is created. This table has basic performance statistics for each iteration.
While the new model is training, review the training statistics in the BigQuery UI to see the below model training
Step6: Some of these columns are obvious; although, what do the non-specific ML columns mean (specific to BQML)?
training_run - Will be zero for a newly created model. If the model is re-trained using warm_start, this will increment for each re-training.
iteration - Number of the associated training_run, starting with zero for the first iteration.
duration_ms - Indicates how long the iteration took (in ms).
Note, you can also see these stats by refreshing the BigQuery UI window, finding the <model_name> table, selecting on it, and then the Training Stats sub-header.
Let's plot the training and evaluation loss to see if the model has an overfit.
Step7: As you can see, the training loss and evaluation loss are essentially identical. We do not seem to be overfitting.
Make a prediction with BQML using the trained model
With a trained model, it is now possible to make a prediction on the values. The only difference from the second query above is the reference to the model. The data has been limited (LIMIT 100) to reduce amount of data returned.
When the ml.predict function is leveraged, output prediction column name for the model is predicted_<label_column_name>.
Exercise 3
Complete the TODO in the cell below to make predictions in BigQuery with our newly trained model demo.babyweight_model_asis on the public.samples.natality table. You'll need to preprocess the data for training by selecting only those examples which have
- year greater than 2000
- gestation_weeks greater than 0
- mother_age greater than 0
- plurality greater than 0
- weight_pounds greater than 0
Look at the expected syntax for the ML.PREDICT Function.
Hint
Step8: More advanced...
In the original example, we were taking into account the idea that if no ultrasound has been performed, some of the features (e.g. is_male) will not be known. Therefore, we augmented the dataset with such masked features and trained a single model to deal with both these scenarios.
In addition, during data exploration, we learned that the data size set for mothers older than 45 was quite sparse, so we will discretize the mother age.
Step9: On the same dataset, will also suppose that it is unknown whether the child is male or female (on the same dataset) to simulate that an ultrasound was not been performed.
Step10: Bringing these two separate data sets together, there is now a dataset for male or female children determined with ultrasound or unknown if without.
Step11: Create a new model
With a data set which has been feature engineered, it is ready to create model with the CREATE or REPLACE MODEL statement
This will take 5-10 minutes and will show Done when complete.
Exercise 4
As in Exercise 1 above, below you are asked to complete the TODO in the cell below to train a linear regression model in BigQuery using weight_pounds as the label. This time, since we're using the supplemented dataset containing without_ultrasound data, name your model babyweight_model_fc. This model will reside within the demo dataset.
Have a look at the documentation for CREATE MODEL in BQML to see examples of the correct syntax.
Step12: Training Statistics
While the new model is training, review the training statistics in the BigQuery UI to see the below model training
Step13: Make a prediction with the new model
Perhaps it is of interest to make a prediction of the baby's weight given a number of other factors | Python Code:
PROJECT = 'cloud-training-demos' # Replace with your PROJECT
BUCKET = 'cloud-training-bucket' # Replace with your BUCKET
REGION = 'us-central1' # Choose an available region for Cloud MLE
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%load_ext google.cloud.bigquery
Explanation: Predicting Babyweight Using BigQuery ML
Learning Objectives
- Explore the machine learning capabilities of BigQuery
- Learn how to train a linear regression model in BigQuery
- Examine the TRAINING_INFO produced by training a model
- Make predictions with a trained model in BigQuery using ML.PREDICT
Introduction
BQML is a great option when a linear model will suffice, or when you want a quick benchmark to beat. But, for more complex models such as neural networks you will need to pull the data out of BigQuery and into an ML Framework like TensorFlow. That being said, what BQML gives up in complexity, it gains in ease of use.
Please see this notebook for more context on this problem and how the features were chosen.
We'll start as usual by setting our environment variables.
End of explanation
%%bigquery --project $PROJECT
SELECT
*
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
Explanation: Exploring the data
Here, we will be taking natality data and training on features to predict the birth weight.
The CDC's Natality data has details on US births from 1969 to 2008 and is available in BigQuery as a public data set. More details: https://bigquery.cloud.google.com/table/publicdata:samples.natality?tab=details
Let's start by looking at the data since 2000 with useful values; i.e. those greater than zero!
End of explanation
%%bigquery --project $PROJECT
SELECT
weight_pounds, -- this is the label; because it is continuous, we need to use regression
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 10
Explanation: Define features
Looking over the data set, there are a few columns of interest that could be leveraged into features for a reasonable prediction of approximate birth weight.
Further, some feature engineering may be accomplished with the BigQuery CAST function -- in BQML, all strings are considered categorical features and all numeric types are considered continuous ones.
The hashmonth is added so that we can repeatably split the data without leakage -- we want all babies that share a birth month to be either in the training set or in the test set and not spread between them (otherwise, there would be information leakage when it comes to triplets, etc.)
End of explanation
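The repeatable-split idea is easy to illustrate outside of BigQuery. This Python sketch only mimics the concept (it is not FARM_FINGERPRINT): hashing the month string and taking a modulus always sends the same month to the same side of the split.
```
import hashlib
def month_bucket(year, month, num_buckets=4):
    key = '{}-{}'.format(year, month).encode('utf-8')
    h = int(hashlib.md5(key).hexdigest(), 16)
    return h % num_buckets
# Months with bucket < 3 go to training (~75%), the rest to evaluation
for ym in [(2005, 1), (2005, 2), (2006, 7)]:
    b = month_bucket(*ym)
    print(ym, b, 'train' if b < 3 else 'eval')
```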
%%bash
bq --location=US mk -d demo
Explanation: Train a model in BigQuery
With the relevant columns chosen to accomplish predictions, it is then possible to create (train) the model in BigQuery. First, a dataset will be needed to store the model. (if this throws an error in Datalab, simply create the dataset from the BigQuery console).
End of explanation
%%bigquery --project $PROJECT
# TODO: Your code goes here
WITH natality_data AS (
SELECT
weight_pounds,-- this is the label; because it is continuous, we need to use regression
CAST(is_male AS STRING) AS is_male,
mother_age,
CAST(plurality AS STRING) AS plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
natality_data
WHERE
ABS(MOD(hashmonth, 4)) < 3 -- select 75% of the data as training
Explanation: With the demo dataset ready, it is possible to create a linear regression model to train the model.
This will take approximately 4 minutes to run and will show Done when complete.
Exercise 1
Complete the TODOs in the cell below to train a linear regression model in BigQuery using weight_pounds as the label. Name your model babyweight_model_asis; it will reside within the demo dataset we created above.
Have a look at the documentation for CREATE MODEL in BQML to see examples of the correct syntax.
End of explanation
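For orientation while completing the exercise, the general shape of a BQML linear regression training statement can also be submitted from Python with the BigQuery client. The dataset, model, table, and column names below are placeholders, not the exercise solution:
```
from google.cloud import bigquery
client = bigquery.Client(project=PROJECT)  # assumes the PROJECT variable set earlier
sql = """
CREATE OR REPLACE MODEL mydataset.mymodel
OPTIONS(model_type='linear_reg', input_label_cols=['label_col']) AS
SELECT label_col, feature1, feature2 FROM mydataset.mytable
"""
# client.query(sql).result()  # uncomment to run against your own dataset and table
```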
%%bigquery --project $PROJECT
# TODO: Your code goes here
Explanation: Explore the training statistics
During the model training (and after the training), it is possible to see the model's training evaluation statistics. For each training run, a table named <model_name>_eval is created. This table has basic performance statistics for each iteration.
While the new model is training, you can review the training statistics in the BigQuery UI: https://bigquery.cloud.google.com/. Since these statistics are updated after each iteration of model training, you will see different values for each refresh while the model is training.
The training details may also be viewed after the training completes from this notebook.
Exercise 2
The cell below is missing the SQL query to examine the training statistics of our trained model. Complete the TODO below to view the results of our training job above.
Look back at the usage of the ML.TRAINING_INFO function and its correct syntax.
End of explanation
from google.cloud import bigquery
bq = bigquery.Client(project=PROJECT)
df = bq.query("SELECT * FROM ML.TRAINING_INFO(MODEL demo.babyweight_model_asis)").to_dataframe()
# plot both lines in same graph
import matplotlib.pyplot as plt
plt.plot( 'iteration', 'loss', data=df, marker='o', color='orange', linewidth=2)
plt.plot( 'iteration', 'eval_loss', data=df, marker='', color='green', linewidth=2, linestyle='dashed')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend();
Explanation: Some of these columns are obvious; although, what do the non-specific ML columns mean (specific to BQML)?
training_run - Will be zero for a newly created model. If the model is re-trained using warm_start, this will increment for each re-training.
iteration - Number of the associated training_run, starting with zero for the first iteration.
duration_ms - Indicates how long the iteration took (in ms).
Note, you can also see these stats by refreshing the BigQuery UI window, finding the <model_name> table, selecting on it, and then the Training Stats sub-header.
Let's plot the training and evaluation loss to see if the model has an overfit.
End of explanation
%%bigquery --project $PROJECT
SELECT
*
FROM
# TODO: Your code goes here
LIMIT 100
Explanation: As you can see, the training loss and evaluation loss are essentially identical. We do not seem to be overfitting.
Make a prediction with BQML using the trained model
With a trained model, it is now possible to make a prediction on the values. The only difference from the second query above is the reference to the model. The data has been limited (LIMIT 100) to reduce amount of data returned.
When the ml.predict function is leveraged, the output prediction column name for the model is predicted_<label_column_name>.
Exercise 3
Complete the TODO in the cell below to make predictions in BigQuery with our newly trained model demo.babyweight_model_asis on the public.samples.natality table. You'll need to preprocess the data for training by selecting only those examples which have
- year greater than 2000
- gestation_weeks greater than 0
- mother_age greater than 0
- plurality greater than 0
- weight_pounds greater than 0
Look at the expected syntax for the ML.PREDICT Function.
Hint: You will need to cast the features is_male and plurality as STRINGs
End of explanation
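For reference while completing the exercise, an ML.PREDICT call is a model reference plus a subquery that produces the feature columns. The names below are placeholders, not the exercise solution:
```
from google.cloud import bigquery
client = bigquery.Client(project=PROJECT)  # assumes the PROJECT variable set earlier
sql = """
SELECT * FROM ML.PREDICT(MODEL mydataset.mymodel,
  (SELECT feature1, feature2 FROM mydataset.mytable LIMIT 10))
"""
# df = client.query(sql).to_dataframe()  # result includes predicted_<label_column_name>
```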
%%bigquery --project $PROJECT
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
IF(mother_age < 18, 'LOW',
IF(mother_age > 45, 'HIGH',
CAST(mother_age AS STRING))) AS mother_age,
CAST(plurality AS STRING) AS plurality,
CAST(gestation_weeks AS STRING) AS gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 25
Explanation: More advanced...
In the original example, we were taking into account the idea that if no ultrasound has been performed, some of the features (e.g. is_male) will not be known. Therefore, we augmented the dataset with such masked features and trained a single model to deal with both these scenarios.
In addition, during data exploration, we learned that the data set for mothers older than 45 was quite sparse, so we will discretize the mother age.
End of explanation
%%bigquery --project $PROJECT
SELECT
weight_pounds,
'Unknown' AS is_male,
IF(mother_age < 18, 'LOW',
IF(mother_age > 45, 'HIGH',
CAST(mother_age AS STRING))) AS mother_age,
IF(plurality > 1, 'Multiple', 'Single') AS plurality,
CAST(gestation_weeks AS STRING) AS gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
LIMIT 25
Explanation: On the same dataset, we will also suppose that it is unknown whether the child is male or female, to simulate that an ultrasound has not been performed.
End of explanation
%%bigquery --project $PROJECT
WITH with_ultrasound AS (
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
IF(mother_age < 18, 'LOW',
IF(mother_age > 45, 'HIGH',
CAST(mother_age AS STRING))) AS mother_age,
CAST(plurality AS STRING) AS plurality,
CAST(gestation_weeks AS STRING) AS gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
),
without_ultrasound AS (
SELECT
weight_pounds,
'Unknown' AS is_male,
IF(mother_age < 18, 'LOW',
IF(mother_age > 45, 'HIGH',
CAST(mother_age AS STRING))) AS mother_age,
IF(plurality > 1, 'Multiple', 'Single') AS plurality,
CAST(gestation_weeks AS STRING) AS gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
),
preprocessed AS (
SELECT * from with_ultrasound
UNION ALL
SELECT * from without_ultrasound
)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
preprocessed
WHERE
ABS(MOD(hashmonth, 4)) < 3
LIMIT 25
Explanation: Bringing these two separate data sets together, there is now a dataset for male or female children determined with ultrasound or unknown if without.
End of explanation
%%bigquery --project $PROJECT
# TODO: Your code goes here
WITH with_ultrasound AS (
SELECT
weight_pounds,
CAST(is_male AS STRING) AS is_male,
IF(mother_age < 18, 'LOW',
IF(mother_age > 45, 'HIGH',
CAST(mother_age AS STRING))) AS mother_age,
CAST(plurality AS STRING) AS plurality,
CAST(gestation_weeks AS STRING) AS gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
),
without_ultrasound AS (
SELECT
weight_pounds,
'Unknown' AS is_male,
IF(mother_age < 18, 'LOW',
IF(mother_age > 45, 'HIGH',
CAST(mother_age AS STRING))) AS mother_age,
IF(plurality > 1, 'Multiple', 'Single') AS plurality,
CAST(gestation_weeks AS STRING) AS gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE
year > 2000
AND gestation_weeks > 0
AND mother_age > 0
AND plurality > 0
AND weight_pounds > 0
),
preprocessed AS (
SELECT * from with_ultrasound
UNION ALL
SELECT * from without_ultrasound
)
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks
FROM
preprocessed
WHERE
ABS(MOD(hashmonth, 4)) < 3
Explanation: Create a new model
With a data set which has been feature engineered, we are ready to create the model with the CREATE OR REPLACE MODEL statement.
This will take 5-10 minutes and will show Done when complete.
Exercise 4
As in Exercise 1 above, below you are asked to complete the TODO in the cell below to train a linear regression model in BigQuery using weight_pounds as the label. This time, since we're using the supplemented dataset containing without_ultrasound data, name your model babyweight_model_fc. This model will reside within the demo dataset.
Have a look at the documentation for CREATE MODEL in BQML to see examples of the correct syntax.
End of explanation
bq = bigquery.Client(project=PROJECT)
df = # TODO: Your code goes here
# plot both lines in same graph
import matplotlib.pyplot as plt
plt.plot( 'iteration', 'loss', data=df, marker='o', color='orange', linewidth=2)
plt.plot( 'iteration', 'eval_loss', data=df, marker='', color='green', linewidth=2, linestyle='dashed')
plt.xlabel('iteration')
plt.ylabel('loss')
plt.legend();
Explanation: Training Statistics
While the new model is training, you can review the training statistics in the BigQuery UI: https://bigquery.cloud.google.com/
The training details may also be viewed after the training completes from this notebook.
Exercise 5
Just as in Exercise 2 above, let's plot the train and eval curve using the TRAINING_INFO from the model training job for the babyweight_model_fc model we trained above. Complete the TODO to create a Pandas dataframe that has the TRAINING_INFO from the training job.
End of explanation
%%bigquery --project $PROJECT
SELECT
*
FROM
# TODO: Your code goes here
Explanation: Make a prediction with the new model
Perhaps it is of interest to make a prediction of the baby's weight given a number of other factors: Male, Mother is 28 years old, Mother will only have one child, and the baby was born after 38 weeks of pregnancy.
To make this prediction, these values will be passed into the SELECT statement.
Exercise 6
Use your newly trained babyweight_model_fc to predict the birth weight of a baby that has the following characteristics
- the baby is male
- the mother's age is 28
- there are not multiple babies (i.e., no twins, triplets, etc)
- the baby had 38 weeks of gestation
End of explanation |
4,542 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementation of time blocking for compression/serialization and de-serialization/de-compression of wavefields with Devito operators
Introduction
The goal of this tutorial is to prototype the compression/serialization and de-serialization/de-compression for wavefields in Devito. The motivation is using seismic modeling operators for full waveform inversion (FWI). Some of the steps in FWI require the use of previously computed wavefields, and of particular interest the adjoint of the Jacobian linearized operator -- an operator that maps data perturbation into velocity perturbation and is used to build FWI gradients -- requires a zero-lag temporal correlation with the wavefield that is computed with the nonlinear source.
There are implemented alternatives to serialization/de-serialization like checkpointing, but we investigate the serialization option here. For more information on checkpointing, please see the details for pyrevolve, a python implementation of optimal checkpointing for Devito (https://github.com/devitocodes/pyrevolve).
Step1: Imports
We have grouped all imports used in this notebook here for consistency.
Step2: Instantiate the model for a two dimensional problem
We are aiming at a small model as this is a POC.
- 101 x 101 cell model
- 20x20 m discretization
- Modeling sample rate explicitly chosen
Step3: Plot velocity and density models
Next we plot the velocity and density models for illustration, with source location shown as a large red asterisk and receiver line shown as a black line.
Step4: Implementation of the nonlinear forward
We copy the nonlinear forward PDE described in the 1st self-adjoint notebook linked above
Step5: Run the time blocking implementation over blocks of M time steps
After each block of $M$ time steps, we return control to Python to extract the Born term and serialize/compress.
The next cell exercises the time blocking operator over $N$ time blocks.
Step6: Correctness test for nonlinear forward wavefield
We now test correctness by measuring the maximum absolute difference between the wavefields computed with the Buffer and save-all-time-steps implementations.
Step7: Implementation of Jacobian linearized forward
As before we have two implementations
Step8: Run the time blocking implementation over blocks of M time steps
Before each block of $M$ time steps, we de-serialize and de-compress.
TODO
In the linearized op below figure out how to use a pre-allocated array to hold the compressed bytes, instead of returning a new bytearray each time we read. Note this will not be a problem when not using Python, and there surely must be a way.
Step9: Correctness test for Jacobian forward wavefield
We now test correctness by measuring the maximum absolute difference between the wavefields computed with the Buffer and save-all-time-steps implementations.
Step10: Implementation of Jacobian linearized adjoint
Again we have two implementations
Step11: Run the time blocking implementation over blocks of M time steps
Before each block of $M$ time steps, we de-serialize and de-compress.
Step12: Correctness test for Jacobian adjoint wavefield and computed gradient
We now test correctness by measuring the maximum absolute difference between the gradients computed with the Buffer and save-all-time-steps implementations.
Step13: Delete the file used for serialization | Python Code:
# NBVAL_IGNORE_OUTPUT
# Install pyzfp package in the current Jupyter kernel
import sys
!{sys.executable} -m pip install blosc
import blosc
Explanation: Implementation of time blocking for compression/serialization and de-serialization/de-compression of wavefields with Devito operators
Introduction
The goal of this tutorial is to prototype the compression/serialization and de-serialization/de-compression for wavefields in Devito. The motivation is using seismic modeling operators for full waveform inversion (FWI). Some of the steps in FWI require the use of previously computed wavefields, and of particular interest the adjoint of the Jacobian linearized operator -- an operator that maps data perturbation into velocity perturbation and is used to build FWI gradients -- requires a zero-lag temporal correlation with the wavefield that is computed with the nonlinear source.
There are implemented alternatives to serialization/de-serialization like checkpointing, but we investigate the serialization option here. For more information on checkpointing, please see the details for pyrevolve, a python implementation of optimal checkpointing for Devito (https://github.com/devitocodes/pyrevolve).
We aim to provide a proof of concept for compression/serialization and de-serialization/de-compression of the nonlinear wavefield. We will achieve this via time blocking: we will run a number of time steps in the generated c kernel, and then return control to Python for compression/serialization (for the nonlinear forward operator), and de-serialization/de-compression (for the linearized Jacobian forward and adjoint operators).
In order to illustrate the use case for serialization, we outline the workflow for computing the gradient of the FWI objective function, ignoring a lot of details, as follows:
Generate the nonlinear forward modeled data at the receivers $d_{mod}$
$$
d_{mod} = F m
$$
Compress and serialize the nonlinear source wavefield to disk in time blocks during computation of 1. The entire nonlinear wavefield is of size $[nt,nx,nz]$ in 2D, but we deal with a block of $M$ time steps at a time, so the size of the chunk to be compressed and serialized is $[M,nx,nz]$.
Compute the data residual $\delta r$ by differencing observed and modeled data at the receivers
$$
\delta r = d_{obs} - d_{mod}
$$
Backproject the data residual $\delta r$ via time reversal with the adjoint linearized Jacobian operator.
De-serialize and de-compress the nonlinear source wavefield from disk during computation in step 4, synchronizing time step between the nonlinear wavefield computed forward in time, and time reversed adjoint wavefield. We will deal with de-serialization and de-compression of chunks of $M$ time steps of size $[M,nx,nz]$.
Increment the model perturbation via zero lag correlation of the de-serialized nonlinear source wavefield and the backprojected receiver adjoint wavefield. Note that this computed model perturbation is the gradient of the FWI objective function.
$$
\delta m = \bigl( \nabla F\bigr)^\top\ \delta r
$$
Please see other notebooks in the seismic/self=adjoint directory for more details, in particular the notebooks describing the self-adjoint modeling operators.
| Self-adjoint notebook description | name |
|:---|:---|
| Nonlinear operator | sa_01_iso_implementation1.ipynb |
| Jacobian linearized operators | sa_02_iso_implementation2.ipynb |
| Correctness tests | sa_03_iso_correctness.ipynb |
Outline
Definition of symbols
Description of tests to verify correctness
Description of time blocking implementation
Compression note -- use of blosc
Create small 2D test model
Implement and test the Nonlinear forward operation
Save all time steps
Time blocking plus compression/serialization
Ensure differences are at machine epsilon
Implement and test the Jacobian linearized forward operation
Save all time steps
Time blocking plus compression/serialization
Ensure differences are at machine epsilon
Implement and test the Jacobian linearized adjoint operation
Save all time steps
Time blocking plus compression/serialization
Ensure differences are at machine epsilon
Discussion
Table of symbols
We show the symbols here relevant to the implementation of the linearized operators.
| Symbol | Description | Dimensionality |
|:---|:---|:---|
| $m_0(x,y,z)$ | Reference P wave velocity | function of space |
| $\delta m(x,y,z)$ | Perturbation to P wave velocity | function of space |
| $u_0(t,x,y,z)$ | Reference pressure wavefield | function of time and space |
| $\delta u(t,x,y,z)$ | Perturbation to pressure wavefield | function of time and space |
| $q(t,x,y,z)$ | Source wavefield | function of time, localized in space to source location |
| $r(t,x,y,z)$ | Receiver wavefield | function of time, localized in space to receiver locations |
| $\delta r(t,x,y,z)$ | Receiver wavefield perturbation | function of time, localized in space to receiver locations |
| $F[m; q]$ | Forward nonlinear modeling operator | Nonlinear in $m$, linear in $q$: $\quad$ maps $m \rightarrow r$ |
| $\nabla F[m; q]\ \delta m$ | Forward Jacobian modeling operator | Linearized at $[m; q]$: $\quad$ maps $\delta m \rightarrow \delta r$ |
| $\bigl( \nabla F[m; q] \bigr)^\top\ \delta r$ | Adjoint Jacobian modeling operator | Linearized at $[m; q]$: $\quad$ maps $\delta r \rightarrow \delta m$ |
Description of tests to verify correctness
In order to make sure we have implemented the time blocking correctly, we numerically compare the output from two runs:
1. all time steps saved implementation -- requires a lot of memory to hold wavefields at each time step
1. time blocking plus compression/serialization implementation -- requires only enough memory to hold the wavefields in a time block
We perform these tests for three phases of FWI modeling:
1. nonlinear forward: maps model to data, forward in time
1. Jacobian linearized forward: maps model perturbation to data perturbation, forward in time
1. Jacobian linearized adjoint: maps data perturbation to model perturbation, backward in time
We will design a small 2D test experiment with a source in the middle of the model and short enough elapsed modeling time that we do not need to worry about boundary reflections for these tests, or runnin out of memory saving all time steps.
Description of time blocking implementation
We gratefully acknowledge Downunder Geosolutions (DUG) for in depth discussions about their production time blocking implementation. Those discussions shaped the implementation shown here. The most important idea is to separate the programmatic containers used for propagation and serialization. To do this we utilize two distinct TimeFunction's.
Propagation uses TimeFunction(..., save=None)
We use a default constructed TimeFunction for propagation. This can be specified in the constructor via either save=None or no save argument at all. Devito backs such a default TimeFunction by a Buffer of size time_order+1, or 3 for second order in time. We show below the mapping from the monotonic ordinary time indices to the buffered modulo time indices as used by a Buffer in a TimeFuntion with time_order=2.
Modulo indexing for Buffer of size 3
Ordinary time indices: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Modulo time indices: 0 1 2 0 1 2 0 1 2 0 1 2 0 1 2 0
Important note: the modulo indexing of Buffer is the reason we will separate propagation from serialization. If we use a larger Bufffer as the TimeFunction for propagation, we would have to deal with the modulo indexing not just for the current time index, but also previous and next time indices (assuming second order in time). This means that the previous and next time steps can overwrite the locations of the ordinary time indices when you propagate for a block of time steps. This is the reason we do not use the same TimeFunction for both propagation and serialization.
Generated code for a second order in time PDE
We now show an excerpt from Devito generated code for a second order in time operator. A second order in time PDE requires two wavefields in order to advance in time: the wavefield at the next time step $u(t+\Delta t)$ is a function of the wavefield at previous time step $u(t-\Delta t)$ and the wavefield at the current time step $u(t)$. Remember that Devito uses a Buffer of size 3 to handle this.
In the generated code there are three modulo time indices that are a function of ordinary time and cycle through the values $[0,1,2]$:
* t0 -- current time step
* t1 -- next time step
* t2 -- previous time step
We show an excerpt at the beginning of the time loop from the generated code below, with ordinary time loop index time. Note that we have simplified the generated code by breaking a single for loop specification line into multiple lines for clarity. We have also added comments to help understand the mapping from ordinary to modulo time indices.
```
for (int time = time_m; time <= time_M; time += 1) {
t0 = (time + 0)%(3); // time index for the current time step
t1 = (time + 1)%(3); // time index for the next time step
t2 = (time + 2)%(3); // time index for the previous time step
// ... PIE: propagation, source injection, receiver extraction ...
}
```
It should be obvious that using a single container for both propagation and serialization is problematic because the loop runs over ordinary time indices time_m through time_M, but will access stored previous time step wavefields at indices time_m-1 through time_M-1 and store computed next time step wavefields in indices time_m+1 through time_M+1.
Serialization uses TimeFunction(..., save=Buffer(M))
We will use an independent second Buffer of size $M$ for serialization in our time blocking implementation. This second TimeFunction will also use modulo indexing, but by design we only access indices time_m through time_M in each time block. This means we do not need to worry about wrapping indices from previous time step or next time step wavefields.
Minimum and maximum allowed time index for second order PDE
It is important to note that for a second order in time system the minimum allowed time index time_m will be $1$, because time index $0$ would imply that the previous time step wavefield $u(t-\Delta t)$ exists at time index $-1$, and $0$ is the minimum array location.
Similarly, the maximum allowed time index time_M will be $nt-2$, because time index $nt-1$ would imply that the next time step wavefield $u(t+\Delta t)$ exists at time index $nt$, and $nt-1$ is the maximum array location.
Flow charts for time blocking
Putting this all together, here are flow charts outlining control flow the first two time blocks with $M=5$.
Time blocking for the nonlinear forward
```
Time block 1
Call generated code Operator(time_m=1, time_M=5)
Return control to Python
Compress indices 1,2,3,4,5
Serialize indices 1,2,3,4,5
(access modulo indices 1%5,2%5,3%5,4%5,5%5)
Time block 2
Call generated code Operator(time_m=6, time_M=10)
Return control to Python
Compress indices 6,7,8,9,10
Serialize indices 6,7,8,9,10
(access modulo indices 6%5,7%5,8%5,9%5,10%5)
```
Time blocking for the linearized Jacobian adjoint (time reversed)
```
Time block 2
De-serialize indices 6,7,8,9,10
De-compress indices 6,7,8,9,10
(access modulo indices 6%5,7%5,8%5,9%5,10%5)
Call generated code Operator(time_m=6, time_M=10)
Return control to Python
Time block 1
De-serialize indices 1,2,3,4,5
De-compress indices 1,2,3,4,5
(access modulo indices 1%5,2%5,3%5,4%5,5%5)
Call generated code Operator(time_m=1, time_M=5)
Return control to Python
```
Arrays used to save file offsets and compressed sizes
We use two arrays the length of the total number of time steps to save bookeeping information used for the serialization and compression. During de-serialization these offsets and lengths will be used to seek the correct location and read the correct length from the binary file saving the compressed data.
| Array | Description |
|:---|:---|
| file_offset | stores the location of the start of the compressed block for each time step |
| file_length | stores the length of the compressed block for each time step |
Compression note -- use of blosc
In this notebook we use blosc compression, which is not loaded by default in devito. The first operational cell immediately below ensures that the blosc library and its Python wrapper are installed in this jupyter kernel.
Note that blosc provides lossless compression, and in practice one would use lossy compression to achieve significantly better compression ratios. Consider the use of blosc here as a placeholder for your compression method of choice, providing all the essential characteristics of what might be used at scale.
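A minimal sketch of such an install-and-import cell (assuming pip is available in this kernel) might look like:
```
import importlib.util
import subprocess
import sys

# Install the python-blosc wrapper if it is not already present in this kernel
if importlib.util.find_spec("blosc") is None:
    subprocess.check_call([sys.executable, "-m", "pip", "install", "blosc"])

import blosc  # now available for the compression calls later in the notebook
```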
We will use the low level interface to blosc compression because it allows the easiest use of numpy arrays. A synopsis of the compression and decompression calls is shown below for devito TimeFunction $u$, employing compression level $9$ of the zstd type with the optional shuffle, at modulo time index kt%M.
```
c = blosc.compress_ptr(u._data[kt%M,:,:].__array_interface__['data'][0],
                       np.prod(u._data[kt%M,:,:].shape),
                       u._data[kt%M,:,:].dtype.itemsize, 9, True, 'zstd')
blosc.decompress_ptr(c, u._data[kt%M,:,:].__array_interface__['data'][0])
```
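The following self-contained round-trip check, on a plain numpy array rather than a devito TimeFunction, is a quick way to confirm that these calls behave as expected:
```
import numpy as np
import blosc

a = np.linspace(0, 1, 101 * 101, dtype=np.float32).reshape(101, 101)
c = blosc.compress_ptr(a.__array_interface__['data'][0], a.size,
                       a.dtype.itemsize, 9, True, 'zstd')
a2 = np.zeros_like(a)
blosc.decompress_ptr(c, a2.__array_interface__['data'][0])
assert np.array_equal(a, a2)  # blosc is lossless
print("compression ratio: %.2f" % (a.nbytes / len(c)))
```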
blosc Reference:
* c project and library: https://blosc.org
* Python wrapper: https://github.com/Blosc/python-blosc
* Python wrapper documentation: http://python-blosc.blosc.org
* low level interface to compression call: http://python-blosc.blosc.org/tutorial.html#compressing-from-a-data-pointer
End of explanation
import numpy as np
from examples.seismic import RickerSource, Receiver, TimeAxis
from devito import (Grid, Function, TimeFunction, SpaceDimension, Constant,
Eq, Operator, configuration, norm, Buffer)
from examples.seismic.self_adjoint import setup_w_over_q
import matplotlib as mpl
import matplotlib.pyplot as plt
from matplotlib import cm
import copy
import os
# These lines force images to be displayed in the notebook, and scale up fonts
%matplotlib inline
mpl.rc('font', size=14)
# Make white background for plots, not transparent
plt.rcParams['figure.facecolor'] = 'white'
# Set logging to debug, captures statistics on the performance of operators
# configuration['log-level'] = 'DEBUG'
configuration['log-level'] = 'INFO'
Explanation: Imports
We have grouped all imports used in this notebook here for consistency.
End of explanation
# NBVAL_IGNORE_OUTPUT
# Define dimensions for the interior of the model
nx,nz = 101,101
npad = 10
dx,dz = 20.0,20.0 # Grid spacing in m
shape = (nx, nz) # Number of grid points
spacing = (dx, dz) # Domain size is now 5 km by 5 km
origin = (0., 0.) # Origin of coordinate system, specified in m.
extent = tuple([s*(n-1) for s, n in zip(spacing, shape)])
# Define the dimensions
x = SpaceDimension(name='x', spacing=Constant(name='h_x', value=extent[0]/(shape[0]-1)))
z = SpaceDimension(name='z', spacing=Constant(name='h_z', value=extent[1]/(shape[1]-1)))
# Initialize the Devito grid
dtype = np.float32
grid = Grid(extent=extent, shape=shape, origin=origin, dimensions=(x, z), dtype=dtype)
print("shape; ", shape)
print("origin; ", origin)
print("spacing; ", spacing)
print("extent; ", extent)
print("")
print("grid.shape; ", grid.shape)
print("grid.extent; ", grid.extent)
print("grid.spacing_map;", grid.spacing_map)
# Create velocity and buoyancy fields.
space_order = 8
m0 = Function(name='m0', grid=grid, space_order=space_order)
b = Function(name='b', grid=grid, space_order=space_order)
m0.data[:] = 1.5
b.data[:,:] = 1.0 / 1.0
# Perturbation to velocity: a square offset from the center of the model
dm = Function(name='dm', grid=grid, space_order=space_order)
size = 4
x0 = (nx-1)//2 + 1
z0 = (nz-1)//2 + 1
dm.data[:] = 0.0
dm.data[(x0-size):(x0+size+1), (z0-size):(z0+size+1)] = 1.0
# Initialize the attenuation profile for the absorbing boundary
fpeak = 0.001
w = 2.0 * np.pi * fpeak
qmin = 0.1
wOverQ = Function(name='wOverQ_025', grid=grid, space_order=space_order)
setup_w_over_q(wOverQ, w, qmin, 100.0, npad)
# Time sampling
t0 = 0 # Simulation time start
tn = 250 # Simulation time end
dt = 2.5 # Simulation time step interval
time_range = TimeAxis(start=t0, stop=tn, step=dt)
nt = time_range.num
print("")
print("time_range; ", time_range)
# Source 10 Hz center frequency
src = RickerSource(name='src', grid=grid, f0=fpeak, npoint=1, time_range=time_range)
src.coordinates.data[0,:] = [dx * ((nx-1) / 2 - 10), dz * (nz-1) / 2]
# Receivers: for nonlinear forward and linearized forward
# one copy each for save all and time blocking implementations
nr = 51
z1 = dz * ((nz - 1) / 2 - 40)
z2 = dz * ((nz - 1) / 2 + 40)
nl_rec1 = Receiver(name='nl_rec1', grid=grid, npoint=nr, time_range=time_range)
nl_rec2 = Receiver(name='nl_rec2', grid=grid, npoint=nr, time_range=time_range)
ln_rec1 = Receiver(name='ln_rec1', grid=grid, npoint=nr, time_range=time_range)
ln_rec2 = Receiver(name='ln_rec2', grid=grid, npoint=nr, time_range=time_range)
nl_rec1.coordinates.data[:,0] = nl_rec2.coordinates.data[:,0] = \
ln_rec1.coordinates.data[:,0] = ln_rec2.coordinates.data[:,0] = dx * ((nx-1) / 2 + 10)
nl_rec1.coordinates.data[:,1] = nl_rec2.coordinates.data[:,1] = \
ln_rec1.coordinates.data[:,1] = ln_rec2.coordinates.data[:,1] = np.linspace(z1, z2, nr)
print("")
print("src_coordinate X; %+12.4f" % (src.coordinates.data[0,0]))
print("src_coordinate Z; %+12.4f" % (src.coordinates.data[0,1]))
print("rec_coordinates X min/max; %+12.4f %+12.4f" % \
(np.min(nl_rec1.coordinates.data[:,0]), np.max(nl_rec1.coordinates.data[:,0])))
print("rec_coordinates Z min/max; %+12.4f %+12.4f" % \
(np.min(nl_rec1.coordinates.data[:,1]), np.max(nl_rec1.coordinates.data[:,1])))
Explanation: Instantiate the model for a two dimensional problem
We are aiming at a small model as this is a POC.
- 101 x 101 cell model
- 20x20 m discretization
- Modeling sample rate explicitly chosen: 2.5 msec
- Time range 250 milliseconds (101 time steps)
- Wholespace model
- velocity: 1500 m/s
- density: 1 g/cm^3
- Source to left of center
- Vertical line of receivers to right of center
- Velocity perturbation box for linearized ops in center of model
- Visco-acoustic absorbing boundary from the self-adjoint operators linked above, 10 points on exterior boundaries.
- We generate a velocity perturbation for the linearized forward Jacobian operator
End of explanation
# note: flip sense of second dimension to make the plot positive downwards
plt_extent = [origin[0], origin[0] + extent[0], origin[1] + extent[1], origin[1]]
vmin, vmax = 1.4, 1.7
pmin, pmax = -1, +1
dmin, dmax = 0.9, 1.1
plt.figure(figsize=(12,14))
# plot velocity
plt.subplot(2, 2, 1)
plt.imshow(np.transpose(m0.data), cmap=cm.jet,
vmin=vmin, vmax=vmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Velocity (m/msec)')
plt.plot(nl_rec1.coordinates.data[:, 0], nl_rec1.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Constant velocity")
# plot density
plt.subplot(2, 2, 2)
plt.imshow(np.transpose(1 / b.data), cmap=cm.jet,
vmin=dmin, vmax=dmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Density (m^3/kg)')
plt.plot(nl_rec1.coordinates.data[:, 0], nl_rec1.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Constant density")
# plot velocity perturbation
plt.subplot(2, 2, 3)
plt.imshow(np.transpose(dm.data), cmap="seismic",
vmin=pmin, vmax=pmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='Velocity (m/msec)')
plt.plot(nl_rec1.coordinates.data[:, 0], nl_rec1.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("Velocity Perturbation")
# Plot the log of the generated Q profile
q = np.log10(w / wOverQ.data)
lmin, lmax = np.log10(qmin), np.log10(100)
plt.subplot(2, 2, 4)
plt.imshow(np.transpose(q), cmap=cm.jet, vmin=lmin, vmax=lmax, extent=plt_extent)
plt.colorbar(orientation='horizontal', label='log10(Q)')
plt.plot(nl_rec1.coordinates.data[:, 0], nl_rec1.coordinates.data[:, 1], \
'black', linestyle='-', label="Receiver")
plt.plot(src.coordinates.data[:, 0], src.coordinates.data[:, 1], \
'red', linestyle='None', marker='*', markersize=15, label="Source")
plt.xlabel("X Coordinate (m)")
plt.ylabel("Z Coordinate (m)")
plt.title("log10 of $Q$ model")
plt.tight_layout()
None
Explanation: Plot velocity and density models
Next we plot the velocity and density models for illustration, with source location shown as a large red asterisk and receiver line shown as a black line.
End of explanation
# NBVAL_IGNORE_OUTPUT
# Define M: number of time steps in each time block
M = 5
# Create TimeFunctions
u1 = TimeFunction(name="u1", grid=grid, time_order=2, space_order=space_order, save=None)
u2 = TimeFunction(name="u2", grid=grid, time_order=2, space_order=space_order, save=None)
v1 = TimeFunction(name="v1", grid=grid, time_order=2, space_order=space_order, save=nt)
v2 = TimeFunction(name="v2", grid=grid, time_order=2, space_order=space_order, save=Buffer(M))
# get time and space dimensions
t,x,z = u1.dimensions
# Source terms (see notebooks linked above for more detail)
src1_term = src.inject(field=u1.forward, expr=src * t.spacing**2 * m0**2 / b)
src2_term = src.inject(field=u2.forward, expr=src * t.spacing**2 * m0**2 / b)
nl_rec1_term = nl_rec1.interpolate(expr=u1.forward)
nl_rec2_term = nl_rec2.interpolate(expr=u2.forward)
# The nonlinear forward time update equation
update1 = (t.spacing**2 * m0**2 / b) * \
((b * u1.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + \
(b * u1.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) + \
(2 - t.spacing * wOverQ) * u1 + \
(t.spacing * wOverQ - 1) * u1.backward
update2 = (t.spacing**2 * m0**2 / b) * \
((b * u2.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + \
(b * u2.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) + \
(2 - t.spacing * wOverQ) * u2 + \
(t.spacing * wOverQ - 1) * u2.backward
stencil1 = Eq(u1.forward, update1)
stencil2 = Eq(u2.forward, update2)
# Equations for the Born term
v1_term = Eq(v1, (2 * b * m0**-3) * (wOverQ * u1.dt(x0=t-t.spacing/2) + u1.dt2))
v2_term = Eq(v2, (2 * b * m0**-3) * (wOverQ * u2.dt(x0=t-t.spacing/2) + u2.dt2))
# Update spacing_map (see notebooks linked above for more detail)
spacing_map = grid.spacing_map
spacing_map.update({t.spacing : dt})
# Build the Operators
nl_op1 = Operator([stencil1, src1_term, nl_rec1_term, v1_term], subs=spacing_map)
nl_op2 = Operator([stencil2, src2_term, nl_rec2_term, v2_term], subs=spacing_map)
# Run operator 1 for all time samples
u1.data[:] = 0
v1.data[:] = 0
nl_rec1.data[:] = 0
nl_op1(time_m=1, time_M=nt-2)
None
# NBVAL_IGNORE_OUTPUT
# Continuous integration hooks for the save all timesteps implementation
# We ensure the norm of these computed wavefields is repeatable
print("%.3e" % norm(u1))
print("%.3e" % norm(nl_rec1))
print("%.3e" % norm(v1))
assert np.isclose(norm(u1), 4.145e+01, atol=0, rtol=1e-3)
assert np.isclose(norm(nl_rec1), 2.669e-03, atol=0, rtol=1e-3)
assert np.isclose(norm(v1), 1.381e-02, atol=0, rtol=1e-3)
Explanation: Implementation of the nonlinear forward
We copy the nonlinear forward PDE described in the 1st self-adjoint notebook linked above:
$$
L_t[\cdot] \equiv \frac{\omega_c}{Q} \overleftarrow{\partial_t}[\cdot] + \partial_{tt}[\cdot]
$$
$$
\frac{b}{m^2} L_t[u] =
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ u \right) +
\overleftarrow{\partial_y}\left(b\ \overrightarrow{\partial_y}\ u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ u \right) + q
$$
Quantity that is serialized
Recall that for Jacobian operators we need the scaled time derivatives of the reference wavefield for both the Born source in the Jacobian forward, and the imaging condition in the Jacobian adjoint. The quantity shown below is used in those expressions, and this is what we will serialize and compress during the nonlinear forward.
$$
\frac{2\ b}{m_0^3}\ L_t[u_0]
$$
The two implementations
We borrow the stencil from the self-adjoint operators shown in the jupyter notebooks linked above, and make two operators. Here are details about the configuration.
Save all time steps implementation
wavefield for propagation: u1 = TimeFunction(..., save=None)
wavefield for serialization: v1 = TimeFunction(..., save=nt)
Run the operator in a single execution from time_m=1 to time_M=nt-2
Time blocking implementation
wavefield for propagation: u2 = TimeFunction(..., save=None)
wavefield for serialization: v2 = TimeFunction(..., save=Buffer(M))
Run the operator in a sequence of time blocks, each with $M$ time steps, from time_m=1 to time_M=nt-2
Note on code duplication
The stencils for the two operators you see below are exactly the same, the only significant difference is that we use two different TimeFunctions. We could therefore reduce code duplication in two ways:
Use the placeholder design pattern and stencil.subs to substitute the appropriate TimeFunction.
Please see the FAQ for more information https://github.com/devitocodes/devito/wiki/FAQ#how-are-abstractions-used-in-the-seismic-examples
Write a function and use it to build the stencils.
To increase the clarity of the exposition below, we do neither of these and duplicate the stencil code.
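As a sketch of the second option, the duplicated update expressions could be generated by a small helper function taking either u1 or u2:
```
def make_update(u):
    return (t.spacing**2 * m0**2 / b) * \
        ((b * u.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) +
         (b * u.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) + \
        (2 - t.spacing * wOverQ) * u + \
        (t.spacing * wOverQ - 1) * u.backward

# stencil1 = Eq(u1.forward, make_update(u1))
# stencil2 = Eq(u2.forward, make_update(u2))
```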
End of explanation
# NBVAL_IGNORE_OUTPUT
# We make an array the full size for correctness testing
v2_all = np.zeros(v1.data.shape, dtype=dtype)
# Number of time blocks
N = int((nt-1) / M) + 1
# Open a binary file in append mode to save the wavefield chunks
filename = "timeblocking.nonlinear.bin"
if os.path.exists(filename):
os.remove(filename)
f = open(filename, "ab")
# Arrays to save offset and length of compressed data
file_offset = np.zeros(nt, dtype=np.int64)
file_length = np.zeros(nt, dtype=np.int64)
# The length of the data type, 4 bytes for float32
itemsize = v2.data[0,:,:].dtype.itemsize
# The length in bytes of an uncompressed wavefield, used to compute compression ratio below
len0 = itemsize * np.prod(v2._data[0,:,:].shape)
# Loop over time blocks
v2_all[:] = 0
u2.data[:] = 0
v2.data[:] = 0
nl_rec2.data[:] = 0
for kN in range(0,N,1):
kt1 = max((kN + 0) * M, 1)
kt2 = min((kN + 1) * M - 1, nt-2)
nl_op2(time_m=kt1, time_M=kt2)
# Copy computed Born term for correctness testing
for kt in range(kt1,kt2+1):
# assign
v2_all[kt,:,:] = v2.data[(kt%M),:,:]
# compression
c = blosc.compress_ptr(v2._data[(kt%M),:,:].__array_interface__['data'][0],
np.prod(v2._data[(kt%M),:,:].shape),
v2._data[(kt%M),:,:].dtype.itemsize, 9, True, 'zstd')
# compression ratio
cratio = len0 / (1.0 * len(c))
# serialization
file_offset[kt] = f.tell()
f.write(c)
file_length[kt] = len(c)
# Uncomment these lines to see per time step output
# rms_v1 = np.linalg.norm(v1.data[kt,:,:].reshape(-1))
# rms_v2 = np.linalg.norm(v2_all[kt,:,:].reshape(-1))
# rms_12 = np.linalg.norm(v1.data[kt,:,:].reshape(-1) - v2_all[kt,:,:].reshape(-1))
# print("kt1,kt2,len,cratio,|u1|,|u2|,|v1-v2|; %3d %3d %3d %10.4f %12.6e %12.6e %12.6e" %
# (kt1, kt2, kt2 - kt1 + 1, cratio, rms_v1, rms_v2, rms_12), flush=True)
# Close the binary file
f.close()
# NBVAL_IGNORE_OUTPUT
# Continuous integration hooks for the time blocking implementation
# We ensure the norm of these computed wavefields is repeatable
# Note these are exactly the same norm values as the save all timesteps check above
print("%.3e" % norm(nl_rec1))
print("%.3e" % np.linalg.norm(v2_all))
assert np.isclose(norm(nl_rec2), 2.669e-03, atol=0, rtol=1e-3)
assert np.isclose(np.linalg.norm(v2_all), 1.381e-02, atol=0, rtol=1e-3)
Explanation: Run the time blocking implementation over blocks of M time steps
After each block of $M$ time steps, we return control to Python to extract the Born term and serialize/compress.
The next cell exercises the time blocking operator over $N$ time blocks.
End of explanation
# NBVAL_IGNORE_OUTPUT
norm_v1 = np.linalg.norm(v1.data.reshape(-1))
norm_v12 = np.linalg.norm(v1.data.reshape(-1) - v2_all.reshape(-1))
print("Relative norm of difference wavefield; %+.4e" % (norm_v12 / norm_v1))
assert norm_v12 / norm_v1 < 1e-7
Explanation: Correctness test for nonlinear forward wavefield
We now test correctness by measuring the relative norm of the difference between the wavefields computed with the Buffer and save all time steps implementations.
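Concretely, the acceptance criterion used in the cell above is
$$
\frac{\lVert v_1 - v_2 \rVert_2}{\lVert v_1 \rVert_2} < 10^{-7}
$$
where $v_1$ holds the Born term saved at every time step and $v_2$ the copy assembled from the time blocks (v2_all in the code).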
End of explanation
# NBVAL_IGNORE_OUTPUT
# Create TimeFunctions
duFwd1 = TimeFunction(name="duFwd1", grid=grid, time_order=2, space_order=space_order, save=nt)
duFwd2 = TimeFunction(name="duFwd2", grid=grid, time_order=2, space_order=space_order, save=nt)
ln_rec1_term = ln_rec1.interpolate(expr=duFwd1.forward)
ln_rec2_term = ln_rec2.interpolate(expr=duFwd2.forward)
# The Jacobian linearized forward time update equation
update1 = (t.spacing**2 * m0**2 / b) * \
((b * duFwd1.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + \
(b * duFwd1.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2) + \
(dm * v1)) + (2 - t.spacing * wOverQ) * duFwd1 + \
(t.spacing * wOverQ - 1) * duFwd1.backward
update2 = (t.spacing**2 * m0**2 / b) * \
((b * duFwd2.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) + \
(b * duFwd2.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2) + \
(dm * v2)) + (2 - t.spacing * wOverQ) * duFwd2 + \
(t.spacing * wOverQ - 1) * duFwd2.backward
stencil1 = Eq(duFwd1.forward, update1)
stencil2 = Eq(duFwd2.forward, update2)
# Build the Operators
lf_op1 = Operator([stencil1, ln_rec1_term], subs=spacing_map)
lf_op2 = Operator([stencil2, ln_rec2_term], subs=spacing_map)
# Run operator 1 for all time samples
duFwd1.data[:] = 0
ln_rec1.data[:] = 0
lf_op1(time_m=1, time_M=nt-2)
None
# NBVAL_IGNORE_OUTPUT
# Continuous integration hooks for the save all timesteps implementation
# We ensure the norm of these computed wavefields is repeatable
print("%.3e" % norm(duFwd1))
print("%.3e" % norm(ln_rec1))
assert np.isclose(norm(duFwd1), 6.438e+00, atol=0, rtol=1e-3)
assert np.isclose(norm(ln_rec1), 2.681e-02, atol=0, rtol=1e-3)
Explanation: Implementation of Jacobian linearized forward
As before we have two implementations:
1. operates on all time steps in a single implementation and consumes the all time steps saved version of the nonlinear forward wavefield
1. operates in time blocks, de-serializes and de-compresses $M$ time steps at a time, and consumes the compressed and serialized time blocking version of the nonlinear forward wavefield
One difference in the correctness testing for this case is that we will assign the propagated perturbed wavefields to two TimeFunction(..., save=nt) for comparison.
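For reference, the linearized forward below solves the same wave equation as the nonlinear forward, with an additional Born source built from the serialized quantity:
$$
\frac{b}{m_0^2} L_t[\delta u] =
\overleftarrow{\partial_x}\left(b\ \overrightarrow{\partial_x}\ \delta u \right) +
\overleftarrow{\partial_z}\left(b\ \overrightarrow{\partial_z}\ \delta u \right) +
\delta m\ \frac{2\ b}{m_0^3}\ L_t[u_0]
$$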
End of explanation
# NBVAL_IGNORE_OUTPUT
# Open the binary file in read only mode
f = open(filename, "rb")
# Temporary nd array for decompression (not actually used below)
d = copy.copy(v2._data[0,:,:])
# Array to hold compression ratio
cratio = np.zeros(nt, dtype=dtype)
# Loop over time blocks
duFwd2.data[:] = 0
ln_rec2.data[:] = 0
for kN in range(0,N,1):
kt1 = max((kN + 0) * M, 1)
kt2 = min((kN + 1) * M - 1, nt-2)
# 1. Seek to file_offset[kt]
    # 2. Read file_length[kt] bytes from file
# 3. Decompress wavefield and assign to v2 Buffer
for kt in range(kt1,kt2+1):
f.seek(file_offset[kt], 0)
c = f.read(file_length[kt])
blosc.decompress_ptr(c, v2._data[(kt%M),:,:].__array_interface__['data'][0])
cratio[kt] = len0 / (1.0 * len(c))
# Run the operator for this time block
lf_op2(time_m=kt1, time_M=kt2)
# Uncomment these lines to see per time step outputs
# for kt in range(kt1,kt2+1):
# rms_du1 = np.linalg.norm(duFwd1.data[kt,:,:].reshape(-1))
# rms_du2 = np.linalg.norm(duFwd2.data[kt,:,:].reshape(-1))
# rms_d12 = np.linalg.norm(duFwd1.data[kt,:,:].reshape(-1) - duFwd2.data[kt,:,:].reshape(-1))
# print("kt1,kt2,len,cratio,|du1|,|du2|,|du1-du2|; %3d %3d %3d %10.4f %12.6e %12.6e %12.6e" %
# (kt1, kt2, kt2 - kt1 + 1, cratio[kt], rms_du1, rms_du2, rms_d12), flush=True)
# NBVAL_IGNORE_OUTPUT
# Continuous integration hooks for the time blocking implementation
# We ensure the norm of these computed wavefields is repeatable
# Note these are exactly the same norm values as the save all timesteps check above
print("%.3e" % norm(duFwd2))
print("%.3e" % norm(ln_rec2))
assert np.isclose(norm(duFwd2), 6.438e+00, atol=0, rtol=1e-3)
assert np.isclose(norm(ln_rec2), 2.681e-02, atol=0, rtol=1e-3)
Explanation: Run the time blocking implementation over blocks of M time steps
Before each block of $M$ time steps, we de-serialize and de-compress.
TODO
In the linearized op below, figure out how to read into a pre-allocated buffer instead of allocating a new bytes object on every read. Note this is only an issue in the Python driver, and there surely must be a way (for example, f.readinto with a reusable bytearray).
End of explanation
# NBVAL_IGNORE_OUTPUT
norm_du1 = np.linalg.norm(duFwd1.data.reshape(-1))
norm_du12 = np.linalg.norm(duFwd1.data.reshape(-1) - duFwd2.data.reshape(-1))
print("Relative norm of difference wavefield; %+.4e" % (norm_du12 / norm_du1))
assert norm_du12 / norm_du1 < 1e-7
Explanation: Correctness test for Jacobian forward wavefield
We now test correctness by measuring the relative norm of the difference between the wavefields computed with the Buffer and save all time steps implementations.
End of explanation
# NBVAL_IGNORE_OUTPUT
# Create TimeFunctions for adjoint wavefields
duAdj1 = TimeFunction(name="duAdj1", grid=grid, time_order=2, space_order=space_order, save=nt)
duAdj2 = TimeFunction(name="duAdj2", grid=grid, time_order=2, space_order=space_order, save=nt)
# Create Functions to hold the computed gradients
dm1 = Function(name='dm1', grid=grid, space_order=space_order)
dm2 = Function(name='dm2', grid=grid, space_order=space_order)
# The Jacobian linearized adjoint time update equation
update1 = (t.spacing**2 * m0**2 / b) * \
((b * duAdj1.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) +
(b * duAdj1.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) +\
(2 - t.spacing * wOverQ) * duAdj1 + \
(t.spacing * wOverQ - 1) * duAdj1.forward
update2 = (t.spacing**2 * m0**2 / b) * \
((b * duAdj2.dx(x0=x+x.spacing/2)).dx(x0=x-x.spacing/2) +
(b * duAdj2.dz(x0=z+z.spacing/2)).dz(x0=z-z.spacing/2)) +\
(2 - t.spacing * wOverQ) * duAdj2 + \
(t.spacing * wOverQ - 1) * duAdj2.forward
stencil1 = Eq(duAdj1.backward, update1)
stencil2 = Eq(duAdj2.backward, update2)
# Equations to sum the zero lag correlations
dm1_update = Eq(dm1, dm1 + duAdj1 * v1)
dm2_update = Eq(dm2, dm2 + duAdj2 * v2)
# We will inject the Jacobian linearized forward receiver data, time reversed
la_rec1_term = ln_rec1.inject(field=duAdj1.backward, expr=ln_rec1 * t.spacing**2 * m0**2 / b)
la_rec2_term = ln_rec2.inject(field=duAdj2.backward, expr=ln_rec2 * t.spacing**2 * m0**2 / b)
# Build the Operators
la_op1 = Operator([dm1_update, stencil1, la_rec1_term], subs=spacing_map)
la_op2 = Operator([dm2_update, stencil2, la_rec2_term], subs=spacing_map)
# Run operator 1 for all time samples
duAdj1.data[:] = 0
dm1.data[:] = 0
la_op1(time_m=1, time_M=nt-2)
None
# NBVAL_IGNORE_OUTPUT
# Continuous integration hooks for the save all timesteps implementation
# We ensure the norm of these computed wavefields is repeatable
print("%.3e" % norm(duAdj1))
print("%.3e" % norm(dm1))
assert np.isclose(norm(duAdj1), 4.626e+01, atol=0, rtol=1e-3)
assert np.isclose(norm(dm1), 1.426e-04, atol=0, rtol=1e-3)
Explanation: Implementation of Jacobian linearized adjoint
Again we have two implementations:
1. operates on all time steps in a single implementation and consumes the all time steps saved version of the nonlinear forward wavefield
1. operates in time blocks, de-serializes and de-compresses $M$ time steps at a time, and consumes the compressed and serialized time blocking version of the nonlinear forward wavefield
For correctness testing here we will compare the final gradients computed via these two implementations.
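For reference, the gradient accumulated by the dm update equations is the zero-lag correlation of the adjoint wavefield with the serialized quantity:
$$
\delta m(x,z) = \sum_{t}\ \delta u^{\dagger}(t,x,z)\ \frac{2\ b}{m_0^3}\ L_t[u_0](t,x,z)
$$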
End of explanation
# NBVAL_IGNORE_OUTPUT
# Open the binary file in read only mode
f = open(filename, "rb")
# Temporary nd array for decompression (not actually used below)
d = copy.copy(v2._data[0,:,:])
# Array to hold compression ratio
cratio = np.zeros(nt, dtype=dtype)
# Loop over time blocks
duAdj2.data[:] = 0
dm2.data[:] = 0
for kN in range(N-1,-1,-1):
kt1 = max((kN + 0) * M, 1)
kt2 = min((kN + 1) * M - 1, nt-2)
# 1. Seek to file_offset[kt]
    # 2. Read file_length[kt] bytes from file
# 3. Decompress wavefield and assign to v2 Buffer
for kt in range(kt1,kt2+1,+1):
f.seek(file_offset[kt], 0)
c = f.read(file_length[kt])
blosc.decompress_ptr(c, v2._data[(kt%M),:,:].__array_interface__['data'][0])
cratio[kt] = len0 / (1.0 * len(c))
# Run the operator for this time block
la_op2(time_m=kt1, time_M=kt2)
# Uncomment these lines to see per time step outputs
# for kt in range(kt2,kt1-1,-1):
# rms_du1 = np.linalg.norm(duAdj1.data[kt,:,:].reshape(-1))
# rms_du2 = np.linalg.norm(duAdj2.data[kt,:,:].reshape(-1))
# rms_d12 = np.linalg.norm(duAdj1.data[kt,:,:].reshape(-1) - duAdj2.data[kt,:,:].reshape(-1))
# print("kt2,kt1,kt,cratio,|du1|,|du2|,|du1-du2|; %3d %3d %3d %10.4f %12.6e %12.6e %12.6e" %
# (kt2, kt1, kt, cratio[kt], rms_du1, rms_du2, rms_d12), flush=True)
# NBVAL_IGNORE_OUTPUT
# Continuous integration hooks for the time blocking implementation
# We ensure the norm of these computed wavefields is repeatable
# Note these are exactly the same norm values as the save all timesteps check above
print("%.3e" % norm(duAdj2))
print("%.3e" % norm(dm2))
assert np.isclose(norm(duAdj2), 4.626e+01, atol=0, rtol=1e-3)
assert np.isclose(norm(dm2), 1.426e-04, atol=0, rtol=1e-3)
Explanation: Run the time blocking implementation over blocks of M time steps
Before each block of $M$ time steps, we de-serialize and de-compress.
End of explanation
# NBVAL_IGNORE_OUTPUT
norm_du1 = np.linalg.norm(duAdj1.data.reshape(-1))
norm_du12 = np.linalg.norm(duAdj1.data.reshape(-1) - duAdj2.data.reshape(-1))
norm_dm1 = np.linalg.norm(dm1.data.reshape(-1))
norm_dm12 = np.linalg.norm(dm1.data.reshape(-1) - dm2.data.reshape(-1))
print("Relative norm of difference wavefield,gradient; %+.4e %+.4e" %
(norm_du12 / norm_du1, norm_dm12 /norm_dm1))
assert norm_du12 / norm_du1 < 1e-7
assert norm_dm12 / norm_dm1 < 1e-7
Explanation: Correctness test for Jacobian adjoint wavefield and computed gradient
We now test correctness by measuring the relative norm of the difference between the gradients computed with the Buffer and save all time steps implementations.
End of explanation
os.remove(filename)
Explanation: Delete the file used for serialization
End of explanation |
4,543 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This page briefly goes over the regression metrics found in scikit-learn. The metrics are first calculated with NumPy and then calculated using the higher level functions available in sklearn.metrics.
1. Generate data and fit with linear model
Step1: 2. Regression Metrics
Step2: Mean Squared Error
This metric is a component of one of the most popular regression metrics (Root Mean Squared Error). It penalizes outliers due to its squared component. It is calculated as the average of the squares of the difference between the predicted and true values of y.
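In symbols, with $y_i$ the true and $\hat{y}_i$ the predicted values:
$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$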
Step3: Root Mean Squared Error
RMSE (Root mean squared error) is commonly used as an evaluation metric in regression problems. It is calculated by taking the square root of Mean Squared Error. Conveniently, the RMSE has the same units as the quantity estimated (y).
Step4: R^2 Score
Also known as the coefficient of determination. It gives some idea of the "goodness of fit" of the model. It calculates the proportion of variance which is explained by the model. Ranges from 0 to 1, where a perfect explanation is denoted by 1.
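In symbols:
$$R^2 = 1 - \frac{\sum_i \left(y_i - \hat{y}_i\right)^2}{\sum_i \left(y_i - \bar{y}\right)^2}$$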
Step5: Explained Variance Score
Similar to R^2 Score.
Step6: Mean Absolute Error
A metric which is sensitive to outliers due to the fact that it is a mean. It is calculated by taking the mean value of the absolute differences between the predicted and true values of y. One advantage is that it is easily interpretable. Conveniently, its units are the same as y's.
Step7: Median Absolute Error
Similar to Mean Absolute Error but it is robust to outliers (due to its reliance on the median). It is calculated as the median of the absolute differences between the predicted and true values of y. Like MSE, its units are conveniently the same as y's.
Step8: Mean Squared Log Error
This metric penalizes errors in proportion with the size of y (even small errors are penalzied for small values of y, but small errors are not penalized for large values of y). | Python Code:
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression
import matplotlib.pyplot as plt
%matplotlib inline
#Generate data
regression_data, regression_values = make_regression(n_samples=100,n_features=1,n_informative=1,noise=10)
#Set X, y_true (and shift to quadrant 1)
X = regression_data[:,0].reshape(100,1)+200
y_true = regression_values.reshape(100,1)+200
##Fit data
lr_model = LinearRegression()
lr_model.fit(X,y_true)
#Make predictions
y_pred = lr_model.predict(X)
#Plot Data
plt.style.use('seaborn')
plt.scatter(X,y_true)
plt.plot(X,y_pred,'g-');
Explanation: This page briefly goes over the regression metrics found in scikit-learn. The metrics are first calculated with NumPy and then calculated using the higher level functions available in sklearn.metrics.
1. Generate data and fit with linear model
End of explanation
from sklearn.metrics import mean_squared_error, mean_squared_log_error, mean_absolute_error, median_absolute_error,explained_variance_score,r2_score
import numpy as np
Explanation: 2. Regression Metrics
End of explanation
MSE = np.mean((y_true-y_pred)**2)
#or use sklearn
MSE_sklearn = mean_squared_error(y_true,y_pred)
if MSE==MSE_sklearn:
print("Mean squared error: {}".format(MSE))
Explanation: Mean Squared Error
This metric is a component of one of the most popular regression metrics (Root Mean Squared Error). It penalizes outliers due to its squared component. It is calculated as the average of the squares of the difference between the predicted and true values of y.
End of explanation
RMSE = np.sqrt(MSE)
#no sklearn function available as of v0.19.0
print("Root mean squared error: {}".format(RMSE))
Explanation: Root Mean Squared Error
RMSE (Root mean squared error) is commonly used as an evaluation metric in regression problems. It is calculated by taking the square root of Mean Squared Error. Conveniently, the RMSE has the same units as the quantity estimated (y).
End of explanation
residuals_sum_of_squares = np.sum((y_true-y_pred)**2)
total_sum_of_squares = np.sum((y_true-np.mean(y_true))**2)
r2 = 1-residuals_sum_of_squares/total_sum_of_squares
#Sklearn convenience method
r2_sklearn = r2_score(y_true,y_pred)
if r2 == r2_sklearn:
print("R^2 Score: {}".format(r2))
Explanation: R^2 Score
Also known as the coefficient of determination. It gives some idea of the "goodness of fit" of the model. It calculates the proportion of variance which is explained by the model. Ranges from 0 to 1, where a perfect explanation is denoted by 1.
End of explanation
y_error = y_true-y_pred
numerator = np.sum((y_error-np.mean(y_error))**2)
explained_var = 1-numerator/total_sum_of_squares
#sklearn convenience method
explained_var_sklearn=explained_variance_score(y_true,y_pred)
if explained_var == explained_var_sklearn:
print("Explained variance score: {}".format(explained_var))
Explanation: Explained Variance Score
Similar to R^2 Score.
End of explanation
MAE = np.mean(np.abs(y_true-y_pred))
#or use sklearn
MAE_sklearn = mean_absolute_error(y_true,y_pred)
if MAE==MAE_sklearn:
print("MAE: {}".format(MAE))
Explanation: Mean Absolute Error
A metric which is sensitive to outliers due to the fact that it is a mean. It is calculated by taking the mean value of the absolute differences between the predicted and true values of y. One advantage is that it is easily interpretable. Conveniently, its units are the same as y's.
End of explanation
MedAE = np.median(np.abs(y_true-y_pred))
#or use sklearn
MedAE_sklearn = median_absolute_error(y_true,y_pred)
if MedAE==MedAE_sklearn:
print("MedAE: {}".format(MedAE))
Explanation: Median Absolute Error
Similar to Mean Absolute Error but it is robust to outliers (due to its reliance on the median). It is calculated as the median of the absolute differences between the predicted and true values of y. Like MSE, its units are conveniently the same as y's.
End of explanation
MSLE = np.mean((np.log(y_true+1)-np.log(y_pred+1))**2)
#or use sklearn
MSLE_sklearn = mean_squared_log_error(y_true,y_pred)
if MSLE==MSLE_sklearn:
print("Mean squared log error: {}".format(MSLE))
Explanation: Mean Squared Log Error
This metric penalizes errors in proportion with the size of y (even small errors are penalized for small values of y, but small errors are not penalized for large values of y).
End of explanation |
4,544 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convergence between teaching and doing quant finance with QuantSA
Quantitative finance is a broad term, here I am referring to solving pricing problems in the capital markets space (with all their regulatory and other side effects).
I am going to explore the possibility of creating a quant library that
Step2: Making a product
Step3: Setting up a model
We haven't described how to make a model or exactly what it does but the following code is fairly easy to understand
Step4: Valuing the product with the model
Step5: Aha, this is good. You can't value a FRA with a discounting model because its cashflow depends on 3 month Jibar and your model does not know anything about 3 month Jibar.
With this type of constraint (which is deeply embedded in the library)
Step6: Is the value right?
Step7: And just like that the cashflow definition can be turned into a value.
Same Product Different Model
I have hinted that models and products are independent.
Here is a demonstration of the same FRA with a Hull White model instead of deterministic curve discounting
Step8: Implementing a Model
I won't spend much time describing how to implement models, that remains roughly the same as in the "olden days"
Step9: Higher Order Measures
xVA etc.
Most "valuations" that we perform are based on
Step10: Early exercise
For completeness we should at least mention early exercise products.
Stopping times in quant finance are not complicated stochastic control problems.
The only exercise decisions that I have ever seen involve deciding to exercise or not at a set of dates.
Always, one knows the cashflows as functions of states of the world both
* when one exercises, and
* when one does not exercise .
The optimal stopping time for the person who owns this right is the one that chooses the alternative with the higher expected value.
This is again a general problem that does not need to be solved for each product and model.
Recall that we defined a product as
Step11: Final Notes
Students can be taught to implement toy models that match the textbooks.
These models can then work on | Python Code:
import clr # to be able to use the C# library
clr.AddReference("System.Collections")
clr.AddReference(r'C:\Dev\QuantSA\QuantSA\Valuation\bin\Debug\QuantSA.General.dll')
clr.AddReference(r'C:\Dev\QuantSA\QuantSA\Valuation\bin\Debug\QuantSA.Valuation.dll')
from System.Collections.Generic import List
from QuantSA.General import *
from QuantSA.Valuation import *
print("The library is ready to use!")
Explanation: Convergence between teaching and doing quant finance with QuantSA
Quantitative finance is a broad term, here I am referring to solving pricing problems in the capital markets space (with all their regulatory and other side effects).
I am going to explore the possibility of creating a quant library that:
Meets the requirements of a bank to solve pricing problems
Has a close link to the fundamental maths concepts so that it can be used for teaching
Put differently
It is fast and easy to use for actual products
You can find $(\Omega, \mathcal{F}, \mathbb{Q})$ and $\mathbb{E}^{\mathbb{Q}}\left[ H(T) \middle| \mathcal{F}_t \right]$ in the code.
To be useful on both sides of the job offer gap the library would need
Few and uncontroversial design decisions
An ability to extend and easily consume other people's extensions
A clear separation of problem domains:
Defining product cashflows
Simulating economies
Mapping products and economies into cashflow and discounted expectation simulations
Defining higher order measures
A distributable task runner (not in the public domain yet)
Overview
Summary of teaching and doing objectives
Describe the library's main components
Show the overlap between describing an economy (teaching) and implementing a product (doing)
Implement a product and value it
How do models work?
Forward values and regression as a general tool for higher order cashflow modelling
A comment about early exercise
Final notes
Conclusions
Teaching and Doing
Teaching
When we teach we are aiming to get students to:
Work with SDEs
Understand measure changes and risk neutrality
See contingent claims as random variables
Evaluate expectations of these random variables
Learn supporting numerical techniques to evaluate these expectations
Doing
When we do quant work we are aiming to
Understand a contract and business's cashflows in all states of the world
Select a model that realistically captures the uncertainty of those states of the world
Calibrate the model to a cross section of prices
Analyse the impacts of different modelling decisions
Provide the model user with an easy way to get the fair present value and sensitivities of these cashflows
Overlap
Unfortunately it seems that these two lists do not have a lot of overlap.
Sometimes we explicitly teach the cashflows for a particular contingent claim
* usually to derive a closed form price for it
* rather than the (messy) details of the flows (lags, physical exchanges, etc)
Doing requires delivering working software to the consumers of the models, but
* Not every quant wants to be a software developer
* Not every quant needs to be a software developer
* Someone must provide a mechanism of getting models into working software.
Can we teach in a way that increases the overlap?
That is what we are going to try:
Main Library Components
Market Observables
Products (that don't know about models)
Models (that don't know about products)
a Coordinator
A regression and early exercise layer
Higher order metrics (PV, EPE, CVA, FVA, ...)
Setting up an Economy
We assume that we have:
a final time $T$
a probability space, a filtration and a measure: $(\Omega, \mathcal{F}, \mathbb{Q})$
stochastic processes: $\textbf{W}(t)$ (not necessarily Brownian motions notwithstanding the use of $W$)
a single numeraire asset $N(t)$ which is a function of $\textbf{W}(t)$, and
$K$ market observables are labelled $S_1(t),...S_K(t)$ which are also all functions of $\textbf{W}(t)$
these are not necessarily all tradable assets.
could be things like forward rates or default indicators
just something you can see on a screen and agree on the value
We further assume that the measure $\mathbb{Q}$ is already the risk neutral one so that for any tradable asset $P$ (see later) with a cashflow only at $0<t_i<T$ we have
$$ \frac{P(t)}{N(t)} = \mathbb{E}^\mathbb{Q} \left[ \frac{P(t_i)}{N(t_i)} \middle| \mathcal{F}(t) \right] $$
Tradable Assets/Products
We assume the cashflows on any product or portfolio
* take place at fixed times
* are calculated as functions of the $K$ market observables.
(
This does not practically limit the types of products. E.g.
* a cashflow when a share price hits K, is a cashflow of zero everyday when it does not.
)
In general if there is a cashflow at $u_i$ it will depend on market observables at times on or before $u_i$
$$ X_i = f(S_{j_0}(v_{k_0}), S_{j_1}(v_{k_1}), ...) $$
with $(j_l, k_l)$ in some set $\mathcal{J}(u_i)$ that depends on the product and $u_i$; and $v_{k_l} \leq u_i$
Value of Product
The value of any product at $t_0$ is then:
$$ V(t_0) = \mathbb{E}^\mathbb{Q} \left[ \sum_{u_i>t_0}{\frac{S_{xn}(u_i)X_i}{N(u_i)}} \right] $$
<div style="text-align: right"><b> Equation 1</b></div>
Where
* $N(u_i)$ is the numeraire in the value currency and
* $S_{xn}$ is that market observable that converts units of the cashflow currency into units of the numeraire currency, i.e. the exchange rate.
Any product without optionality can then be represented by
* the set of random variables and
* the times at which the cashflows represented by the random variables take place
$$ P = \left{ \left(X_1, u_1\right), ..., \left(X_M, u_M\right) \right} $$
Link to Doing
This all seems more like teaching quant finance rather than doing quant finance.
What do the $S$ look like and what does a $P$ look like?
The types of $S$ that currently exist in QuantSA are:
CurrencyPair
DefaultRecovery
DefaultTime
Dividend
FloatingIndex
Share
And each of these has a specific sub-type, e.g.:
* CurrencyPair will have a base and counter currency,
* DefaultTime will have a company and default type and
* FloatingIndex will be one of the world's named floating indices such as 3 month Jibar.
The specific sub-type of each of these has a value that is observable on a well defined screen at a well defined time.
The cashflows on a product can be written as functions of these observables.
Example FRA
The cashflow on a South African FRA depends only on 3m Jibar, let's call that say $S_0$, observed on a single date $u_1$, a fixed rate $K$ and an accrual fraction $\Delta t$:
$$ P_{FRA} = \left{ \left( (S_0(u_1)-K) \Delta t \frac{1}{1+ S_0(u_1) \Delta t} , u_1 \right) \right}$$
Example Equity Call Option
The cashflow on a cash settled equity call option with exercise date $u_1$ on a single share, say $S_1$, with strike $K$:
$$ P_{CALL} = \left{ \left( \max(S_1(u_1)-K,0), u_1 \right) \right}$$
Implementing a Product
We have seen the maths of how products are defined, now let us see how to implement them in QuantSA.
Example code FRA
```cs
Date date = new Date(2017, 08, 28);
FloatingIndex jibar = FloatingIndex.JIBAR3M;
double dt = 91.0/365.0;
double fixedRate = 0.071;
double notional = 1000000.0;
Currency currency = Currency.ZAR;
public override List<Cashflow> GetCFs()
{
double reset = Get(jibar, date);
double cfAmount = notional * (reset - fixedRate)dt/(1+dtreset);
return new List<Cashflow>() { new Cashflow(date, cfAmount, currency) };
}
```
Example code Call
```cs
Date exerciseDate = new Date(2017, 08, 28);
Share share = new Share("AAA", Currency.ZAR);
double strike = 100.0;
public override List<Cashflow> GetCFs()
{
double amount = Math.Max(0, Get(share, exerciseDate) - strike);
return new List<Cashflow>() {new Cashflow(exerciseDate, amount, share.currency) };
}
```
Code Explanation
We simply
* define the market observables,
* specify other contract details and
* implement a formula to describe the cashflow.
The only apparent magic is the function call:
Get(jibar, date)
This product description script is common to many quant libraries available on the market.
* similar to Portfolio Aggregation Language (PAL) described in Cesari et al (2010).
* smaller vocabulary,
* the syntax is straight C#, and
* there is no information other than what would be contained in the trade confirmation sheet.
As you can see we have "taught" exactly what a FRA and a call option are in a completely model independent way.
The next step is to look at how to value these products.
Examples of using the Library
Technicalities
The library is written in C#. There are many reasons why this makes sense:
It is a type-safe, object-oriented language - good for building and maintaining large code bases.
It is easier to learn and write than C++
It is faster than Python or Matlab
It is even becoming portable
It plays nicely with the typical bank employee's Microsoft Windows and Office environment
Nevertheless Python (or Matlab) remain much more convenient for scientific computing where you are experimenting with different models and methods.
In the following we will use QuantSA from Python.
Letting Python see the C# Library:
End of explanation
source = Date date = new Date(2017, 8, 28);
FloatingIndex jibar = FloatingIndex.JIBAR3M;
double dt = 91.0/365.0;
double fixedRate = 0.069;
double notional = 1000000.0;
Currency currency = Currency.ZAR;
public override List<Cashflow> GetCFs()
{
double reset = Get(jibar, date);
double cfAmount = notional * (reset - fixedRate)*dt/(1+dt*reset);
return new List<Cashflow>() { new Cashflow(date, cfAmount, currency) };
}
# Make a product at runtime
fra = RuntimeProduct.CreateFromString("MyFRA", source);
print("Now we have a FRA:")
print(fra)
Explanation: Making a product
End of explanation
# Set up the model
valueDate = Date(2016, 9, 17)
maximumDate = Date(2026, 9, 17)
dates = [Date(2016, 9, 17) , Date(2026, 9, 17)]
rates = [ 0.07, 0.07 ]
discountCurve = DatesAndRates(Currency.ZAR, valueDate, dates, rates, maximumDate)
numeraireModel = DeterminsiticCurves(discountCurve);
otherModels = List[Simulator]() # no model other than discounting for now.
coordinator = Coordinator(numeraireModel, otherModels, 1) # the magic ingredient that gets
# models and products to work
# together
print("A model is ready.")
Explanation: Setting up a model
We haven't described how to make a model or exactly what it does but the following code is fairly easy to understand:
End of explanation
# Run the valuation
portfolio = [fra]
try:
value = coordinator.Value(portfolio, valueDate)
except Exception as e:
print(e)
Explanation: Valuing the product with the model
End of explanation
# add a forecast curve
forwardRates = [0.070614, 0.070614]
forecastCurve = ForecastCurve(valueDate, FloatingIndex.JIBAR3M, dates, forwardRates) # use flat 7% rates for forecasting
numeraireModel.AddRateForecast(forecastCurve) # add the forecast curve to the model
# value the product
portfolio = [fra]
value = coordinator.Value(portfolio, valueDate)
print("value is: {:.2f}".format(value))
Explanation: Aha, this is good. You can't value a FRA with a discounting model because its cashflow depends on 3 month Jibar and your model does not know anything about 3 month Jibar.
With this type of constraint (which is deeply embedded in the library):
You will never work under the wrong numeraire again
You will never use the wrong curve to forecast a rate or asset price
You will never incorrectly combine cashflows in different currencies
etc.
For our problem at hand we need to fix the model by setting it up to forecast some rates:
End of explanation
# check the value
import numpy as np
date = Date(2017, 8, 28)
t = (date.value - valueDate.value) / 365.0 # C# operator overloading does not work in Python
dt = 91.0 / 365.0
fixedRate = 0.069
notional = 1000000.0
fwdRate = 0.070614
refValue = (notional * (fwdRate - fixedRate) * dt / (1 + fwdRate * dt) *
np.exp(-t * 0.07))
print("value is: {:.2f}. Expected {:.2f}".format(value, refValue))
Explanation: Is the value right?
End of explanation
valueDate = Date(2016, 9, 17)
flatRate = 0.07
newModel = HullWhite1F(Currency.ZAR, 0.05, 0.01, flatRate, flatRate, valueDate)
# tell HW model it is allowed to make some forecasts
newModel.AddForecast(FloatingIndex.JIBAR3M)
newCoordinator = Coordinator(newModel, List[Simulator](), 100000)
value = newCoordinator.Value(portfolio, valueDate)
print("value with the new model is: {:.2f}".format(value))
Explanation: And just like that the cashflow definition can be turned into a value.
Same Product Different Model
I have hinted that models and products are independent.
Here is a demonstration of the same FRA with a Hull White model instead of deterministic curve discounting:
End of explanation
# Set up a swap, which has a more interesting value profile than a FRA
rate = 0.08
payFixed = True
notional = 1000000
startDate = Date(2016, 9, 17)
tenor = Tenor.Years(5)
swap = IRSwap.CreateZARSwap(rate, payFixed, notional, startDate, tenor)
print("Now we have a swap:")
print(swap)
# Set up a stochastic model
valueDate = Date(2016, 9, 17)
a = 0.05
vol = 0.01
flatCurveRate = 0.07
hullWiteSim = HullWhite1F(Currency.ZAR, a, vol, flatCurveRate, flatCurveRate, valueDate)
hullWiteSim.AddForecast(FloatingIndex.JIBAR3M)
hwCoordinator = Coordinator(hullWiteSim, List[Simulator](), 5000)
print("A stochastic rate model is ready (Hull White).")
# make the forward dates on which the values are required
from datetime import datetime
from datetime import timedelta
step = timedelta(days=10)
pyDates = []
date = datetime(2016, 9, 17)
endDate = datetime(2021,9,17)
while date<endDate:
date += step
pyDates.append(date)
csDates = [Date(d.year, d.month, d.day) for d in pyDates]
# Get the simulated forward values and the regressors used to obtain them
hwCoordinator.SetThreadedness(True)
valuePaths = hwCoordinator.GetValuePaths([swap], valueDate, csDates)
print("Available data:")
for s in valuePaths.GetNames():
print(" " + s)
import sys
sys.path.insert(0, r'..\Python')
import quantsa as qsa
fwdCashflowPVs = qsa.getnumpy(valuePaths.Get("fwdCashflowPVs"))
regressor0 = qsa.getnumpy(valuePaths.Get("regressor0"))
regressedFwdsPVs = qsa.getnumpy(valuePaths.Get("regressedFwdsPVs"))
%matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim([0, 0.15])
ax.set_ylim([-150000, 100000])
col = 17 # up to 181
plt.plot(regressor0[:,col], fwdCashflowPVs[:,col],'.')
plt.plot(regressor0[:,col], regressedFwdsPVs[:,col],'k.')
Explanation: Implementing a Model
I won't spend much time describing how to implement models, that remains roughly the same as in the "olden days":
You calibrate to market data,
obtain some parametrization of the stochastic processes $\textbf{W}(t)$ and functions $S_i(t)$, and then
simulate the $S$s.
The coordinating component moves the simulated values and cashflows between the products and models.
The canonical numerical scheme is Monte Carlo.
* For many real world pricing problems Monte Carlo is the only feasible option once
* sufficiently realistic dynamics and
* enough of the side effects are modelled.
Other numerical schemes can be implemented:
* The first FRA model above uses the same machinery but is a curve based valuation.
* The curve based model is fast even in this general framework:
* In a test it valued a large portfolio of swaps averaging 0.5ms per swap.
* In principle other numerical schemes could also be possible
Expectation via Simulation
The simulation based valuation boils down to estimating:
$$ V(t_0) = \mathbb{E}^\mathbb{Q} \left[ \sum_{u_i>t_0}{\frac{S_{xn}(u_i)X_i}{N(u_i)}} \right] $$
as
$$ V(t_0) \approx \frac{1}{j_{max}+1} \sum_{j=0}^{j_{max}} \left[ \sum_{u_i>t_0}{\frac{S^{(j)}_{xn}(u_i)X^{(j)}_i}{N^{(j)}(u_i)}} \right] $$
The deterministic model does one simulation
Forward Values
In addition to the cashflows on a product that are explicitly defined by the bilateral contract, there are many other financial effects of trading a product such as:
The need to fund the cashflows
The capital required to be held against the position
The loss in the event the counterparty defaults
The need to place collateral and fund that collateral position
The gain in the event that we default ourselves...
In general these depend on the bank's fair value of these products at future dates
Simulating forward values
If we require the forward value at time $t_i > t_0$ we need to evaluate
$$
V(t_i) = N(t_i)\mathbb{E}^\mathbb{Q}\left[ \sum_{u_i>t_i}{\frac{S_{xn}(u_i)X_i}{N(u_i)}} \middle| \mathcal{F}(t_i) \right]
$$
Note that $V(t_0)$ is not random since all states of the world agree up to $t_0$ the time we are at when we fit the model and perform the valuation.
If $t_i>t_0$ then $V(t_i)$ is random and will be a function of the world observed up to $t_i$
This could be evaluated with another Monte Carlo simulation beyond $t_i$ for each state of the world observed up to $t_i$ but this is prohibitively expensive.
We rather assume that the $X$s, $S$s and $N$ are Markov and note (see Shreve [4] Def 2.3.6) that
$$
V(t_i) = g(t_i, \textbf{W}(t_i))
$$
Longstaff and Schwartz [3] describe how to use regression to estimate this function $g$ given realizations of $\sum_{u_i>t_i}{\frac{S_{xn}(u_i)X_i}{N(u_i)}}$
In general $g$ is exactly that function of $\textbf{W}(t_i)$ that minimizes $\mathbb{E}^\mathbb{Q}\left[ \left(g(\textbf{W}(t_i)) - \sum_{u_i>t_i}{\frac{S_{xn}(u_i)X_i}{N(u_i)}} \right)^2 \right]$
The approximation comes because we are estimating it from a finite sample.
Because of the finite sample:
* we need to apply our own regularity conditions to $g$
* otherwise it would be possible to set the square error to zero for one set of paths
* we need a $g$ that will work with out-of-sample paths.
In the QuantSA library this problem is solved by the coordinating component
* models and products do not need to worry about it.
* It is a separate component where development can be done that would benefit all models and products
* The problem is well defined and is amenable to all the modern tools of datascience.
* e.g. could train offline with cross validation to choose the best set of basis functions then online fit to all the data.
Lets look at an example of it working:
Example of obtaining forward values
End of explanation
positive_mtm = regressedFwdsPVs
positive_mtm[positive_mtm<0] = 0
epe = np.mean(positive_mtm, 0)
plt.plot(epe)
Explanation: Higher Order Measures
xVA etc.
Most "valuations" that we perform are based on:
* Fair value simulations,
* Possibly some other market observables (eg. an FVA could need a funding rate)
* A operation on these two
Example 1: The expected positive exposure depends on
* expected future fair values
* no other market observables
* a simple positive operator
Example 2: The CVA depends on:
* expected future fair values
* A default indicator on each path at each time
* The product of default on a path at a time and the positive part of the future value at the same point
Again these are separate from the models and the products.
If you implement a way to estimate, say, the pricing impact of initial margin it will work with anybody else's models and or products.
Code Example: Expected Positive Exposure
End of explanation
# Make a Bermudan Swaption
exDates = List[Date]()
exDates.Add(Date(2017, 9, 17))
exDates.Add(Date(2018, 9, 17))
exDates.Add(Date(2019, 9, 17))
exDates.Add(Date(2020, 9, 17))
bermudan = BermudanSwaption(swap, exDates, True)
# Get the simulated forward values and the regressors used to obtain them
valuePaths = hwCoordinator.GetValuePaths([bermudan], valueDate, csDates)
fwdCashflowPVs = qsa.getnumpy(valuePaths.Get("fwdCashflowPVs"))
regressor0 = qsa.getnumpy(valuePaths.Get("regressor0"))
regressedFwdsPVs = qsa.getnumpy(valuePaths.Get("regressedFwdsPVs"))
# Examine the values after optimal exercise rule is applied
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.set_xlim([0, 0.15])
ax.set_ylim([-150000, 100000])
col = 20 # up to 181.
plt.plot(regressor0[:,col], fwdCashflowPVs[:,col],'.')
plt.plot(regressor0[:,col], regressedFwdsPVs[:,col],'k.')
Explanation: Early exercise
For completeness we should at least mention early exercise products.
Stopping times in quant finance are not complicated stochastic control problems.
The only exercise decisions that I have ever seen involve deciding to exercise or not at a set of dates.
Always, one knows the cashflows as functions of states of the world both
* when one exercises, and
* when one does not exercise .
The optimal stopping time for the person who owns this right is the one that chooses the alternative with the higher expected value.
This is again a general problem that does not need to be solved for each product and model.
Recall that we defined a product as:
$$ P = \left{ \left(X_1, u_1\right), ..., \left(X_M, u_M\right) \right} $$
Similarly we can define a product with early exercise as
$$
O = P_{noex} \text{ and } \left{ \left(Q_1, e_1\right), ..., \left(Q_M, e_M\right) \right}
$$
Where
* the $Q_i$ are the products that will be exercised into if the optimal stopping time is equal to $e_i$, and
* the cashflows in $P_{noex}$ will stop at the optimal stopping time.
The extra pieces that need to be implemented on a product are then:
cs
List<Product> GetPostExProducts();
List<Date> GetExerciseDates()
Code Example: Bermudan Swaption
For a Bermudan swaption, there are no cashflows before exercise.
Once exercise has taken place the cashflows are those of a swap.
We have a swap from above, this forms the $Q_i$ at each of the exercise dates
We simply need to set up the exercise dates:
End of explanation
import numpy as np
with plt.xkcd():
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(30)
plt.annotate(
'THE DAY PEOPLE STARTED\nUSING QUANTSA',
xy=(70, 0.9), arrowprops=dict(arrowstyle='->'), xytext=(15, -10))
plt.plot(data)
plt.xlabel('year')
plt.ylabel('quant hours wasted \n repeating the same work')
plt.show()
with plt.xkcd():
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.bar([-0.125, 1.0-0.125], [100, 10], 0.25)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks([0, 1])
ax.set_xlim([-0.5, 1.5])
ax.set_ylim([0, 110])
ax.set_xticklabels(['BUSINESS AS\n USUAL', 'QUANTSA ON \n LEFT AND RIGHT'])
plt.yticks([])
plt.ylabel('Difficulty')
plt.title("TRANSITIONING FROM THEORY TO PRACTICE")
plt.show()
Explanation: Final Notes
Students can be taught to implement toy models that match the textbooks.
These models can then work on:
* toy products or real products
* toy xVAs or real xVAs
When students go to work they
* make the same models, in the same framework but
* work with more factors
* make simulation and calibration faster
* use these enhanced models on the bank's real portfolios and higher order metrics
The definition of product/model/measure interaction in terms only of market observables is fundamental and is unlikely to prove inadequate in the future.
This library does not change the way all courses would be taught.
For me there are also other topics in the course that I teach that do not have a place directly in the main library framework such as:
Measure change
Getting closed form prices in HW and Black
How to build trees and
How to derive and solve PDEs
etc
But, these tools and more will be required for calibrating the models in the library...
until that also just becomes another module :)
Conclusion
With careful software management we can have a library that acts as a bridge between teaching and doing.
The library has a separable design and a plugin framework that will meet the needs for proprietary models, special cases and user customization without damaging the clarity and teaching suitability.
I am hoping that many parts of quant finance can become like optimization: we all learn how to do it but in real life we use standard implementations.
The Future
Finally Python offers us a great way to visualize the future of quant finance with QuantSA:
End of explanation |
4,545 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Character-based LSTM
Grab all Chesterton texts from Gutenberg
Step1: Create the Training set
Build a training and test dataset. Take 40 characters and then save the 41st character. We will teach the model that a certain 40 char sequence should generate the 41st char. Use a step size of 3 so there is overlap in the training set and we get a lot more 40/41 samples.
Step2: One-hot encode
Step3: Create the Model
Step4: Train the Model
Step5: Generate new sequence | Python Code:
from nltk.corpus import gutenberg
gutenberg.fileids()
text = ''
for txt in gutenberg.fileids():
if 'chesterton' in txt:
text += gutenberg.raw(txt).lower()
chars = sorted(list(set(text)))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
'corpus length: {} total chars: {}'.format(len(text), len(chars))
print(text[:100])
Explanation: Character-based LSTM
Grab all Chesterton texts from Gutenberg
End of explanation
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(text) - maxlen, step):
sentences.append(text[i: i+maxlen])
next_chars.append(text[i + maxlen])
print("sequences: ", len(sentences))
print(sentences[0])
print(sentences[1])
print(next_chars[0])
Explanation: Create the Training set
Build a training and test dataset. Take 40 characters and then save the 41st character. We will teach the model that a certain 40 char sequence should generate the 41st char. Use a step size of 3 so there is overlap in the training set and we get a lot more 40/41 samples.
End of explanation
import numpy as np
X = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
X[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
Explanation: One-hot encode
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras.optimizers import RMSprop
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars)))
model.add(Activation('softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
Explanation: Create the Model
End of explanation
epochs = 2
batch_size = 128
model.fit(X, y, batch_size=batch_size, epochs=epochs)
Explanation: Train the Model
End of explanation
import random
def sample(preds, temperature=1.0):
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
import sys
start_index = random.randint(0, len(text) - maxlen - 1)
for diversity in [0.2, 0.5, 1.0]:
print()
print('----- diversity:', diversity)
generated = ''
sentence = text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x[0, t, char_indices[char]] = 1.
preds = model.predict(x, verbose=0)[0]
next_index = sample(preds, diversity)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
Explanation: Generate new sequence
End of explanation |
4,546 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading the data and a brief overview
Step1: Compare the number of people who repaid the loan with the number who did not.
Step2: Helper methods
Step3: Task 1. Credit limit size (LIMIT_BAL)
Consider 2 groups of people: those who repaid the loan (default = 0) and those who did not (default = 1).
Plot histograms of the credit limit distribution in both groups.
Step4: From the histograms we can see that in every credit-limit interval there are more people who repaid the loan than people who did not.
Test the hypothesis that the median credit limits are equal against the general alternative
Step5: The 95% confidence interval for the difference of medians does not contain 0, so the null hypothesis is rejected in favour of the alternative that the medians differ.
Test the hypothesis that the distributions of LIMIT_BAL are equal
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
We apply a permutation test.
A necessary condition for using it is that the two samples have approximately equal variances. In this case
Step6: The variances do not differ much (same order of magnitude), so the permutation test can be applied.
Step7: The achieved significance level is well below 0.05, so the null hypothesis of equal distributions is rejected in favour of the alternative.
The result about unequal medians is fairly easy to explain from general reasoning
Step8: From the histograms we may suppose that the gender composition differs. Among those who did not repay the loan it is more balanced.
The counts can also be presented in a table
Step9: The confidence interval for the difference of proportions does not contain zero, so we can reject the hypothesis that the share of men is equal among those who repaid and those who did not. Thus the gender composition differs.
Now let us test the same hypothesis with a Z-test. Its only requirement is that the sample contains just two types of values, which is satisfied.
Step10: The achieved significance level is far below 0.05, so the hypothesis of equal proportions is rejected. The gender composition differs.
Task 3. Education (EDUCATION)
Plot a histogram showing the ratio of people who repaid and did not repay the loan, by education level.
Step11: From the histogram we may suppose that among people with a higher education level the share of those who did not repay the loan is smaller.
We test the hypothesis
$H_0\colon$ education level does not affect loan repayment
$H_1\colon$ education level affects loan repayment
Build a contingency table of "education" versus "loan repayment"
| repaid loan | did not repay loan
------------- | -------------|
PhD | 14 | 0
master's | 8549 | 2036
bachelor's | 10700 | 3330
high school | 3680 | 1237
primary education | 116 | 7
other | 262 | 18
no data | 43 | 8
$\sum$ | 23364 | 6636
We apply the chi-squared test. This is allowed because its applicability conditions are met
Step12: The achieved significance level is far below 0.05, so hypothesis $H_0$ is rejected. Education level affects loan repayment.
Display the expected counts for the groups described above
Step13: For clarity, let us also build a contingency table whose cells hold the difference between the expected and observed numbers of people who repaid and did not repay the loan, by education.
| repaid loan | did not repay loan
------------- | -------------|
PhD | -4 | 3
master's | -306 | 305
bachelor's | 226 | -227
high school | 149 | -150
primary education | -21 | 20
other | -44 | 43
no data | -4 | 3
$\sum$ | -4 | -3
To bring the table to a common scale, it can be normalised by the number of clients in each group.
The best indicator that a person will repay the debt is a PhD-level education; the best indicators that they will not are bachelor's level or high-school level.
Task 4. Marital status (MARRIAGE)
Plot a histogram showing the ratio of people who repaid and did not repay the loan, by marital status.
Step14: From the histogram we may suppose that the probability of repaying the loan is higher among single people.
Build a contingency table of "marital status" versus "loan repayment"
| repaid loan | did not repay loan
------------- | -------------|
refused | 49 | 5
married | 10453 | 3206
single | 12623 | 3341
no data | 239 | 84
$\sum$ | 23364 | 6636
To measure the possible association between these variables we compute Cramér's V coefficient. It is based on the chi-squared test, whose applicability conditions must be checked: sample size above 40, no more than 20% of cells with counts below 5, independent samples. These conditions are satisfied.
Step15: Cramér's V turned out to be quite close to 0, so we can conclude that marital status does not strongly affect loan repayment.
Task 5. Age (AGE)
Consider 2 groups of people: those who repaid the loan (default = 0) and those who did not (default = 1). Plot histograms of the age distribution in both groups.
Step16: From the histograms we can see that in every age interval there are more people who repaid the loan than people who did not.
Test the hypothesis that the median ages are equal against the general alternative
Step17: The 95% confidence interval for the difference of medians borders on 0, so the null hypothesis cannot be rejected unambiguously.
Test the hypothesis that the distributions of AGE are equal
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
We apply a permutation test. A necessary condition for using it is that the two samples have approximately equal variances. In this case
Step18: The variances do not differ much (same order of magnitude), so the permutation test can be applied.
Step19: The achieved significance level is below 0.05, so the null hypothesis of equal distributions is rejected in favour of the alternative.
The obtained result has some practical significance, since it shows that younger people were more likely to have repaid the loan.
For extra clarity, plot a histogram of the number of people who repaid and did not repay the loan by age. To do this we add a new column, "Age category" | Python Code:
data = pd.read_csv('credit_card_default_analysis.csv', index_col=0)
data.head()
data.shape
data.describe()
Explanation: Loading the data and a brief overview
End of explanation
data.default.value_counts()
Explanation: Compare the number of people who repaid the loan with the number who did not.
End of explanation
def make_hist_target_by_category(data, factor, names, width=0.3):
group0 = np.bincount(data[data['default'] == 0][factor].astype('int32').values)
group1 = np.bincount(data[data['default'] == 1][factor].astype('int32').values)
if group0[0] == 0 and group1[0] == 0:
group0 = group0[1:]
group1 = group1[1:]
ind = np.arange(2)
fig, ax = plt.subplots()
rects_data = []
cmap = plt.get_cmap('Spectral')
    colors = cmap(np.linspace(0, 1, len(group0)))
for i in xrange(len(group0)):
rects = ax.bar(ind + i*width, (group0[i], group1[i]), width, color=colors[i])
rects_data.append(rects)
ax.set_ylabel('Amount')
ax.set_title('Histogram of clients by ' + factor)
ax.set_xticks(ind + width * (len(group0)) / float(2))
ax.set_xticklabels(('default == 0', 'default == 1'))
ax.legend(tuple(rects[0] for rects in rects_data), names)
for rects in rects_data:
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height, '%d' % int(height),ha='center', va='bottom')
plt.show()
def make_hist_category_by_target(data, factor, names, width=0.3):
group0 = np.bincount(data[data['default'] == 0][factor].astype('int32').values)
group1 = np.bincount(data[data['default'] == 1][factor].astype('int32').values)
if group0[0] == 0 and group1[0] == 0:
group0 = group0[1:]
group1 = group1[1:]
ind = np.arange(len(group0))
fig, ax = plt.subplots()
rects1 = ax.bar(ind, tuple(group0), width, color='r')
rects2 = ax.bar(ind + width, tuple(group1), width, color='y')
ax.set_ylabel('Amount')
ax.set_title('Histogram of clients by ' + factor)
ax.set_xticks(ind + width)
ax.set_xticklabels(names)
ax.legend((rects1[0], rects2[0]), ('default == 0', 'default == 1'))
def autolabel(rects):
for rect in rects:
height = rect.get_height()
ax.text(rect.get_x() + rect.get_width()/2., 1.05*height,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1)
autolabel(rects2)
plt.show()
def make_hist_numerical(data, factor, rng, bins=10):
pylab.figure(figsize(12, 5))
pylab.subplot(1,2,1)
pylab.hist(data[data['default'] == 0][factor], bins = bins, color = 'b', range = rng,
label = 'default == 0')
pylab.legend()
pylab.subplot(1,2,2)
pylab.hist(data[data['default'] == 1][factor], bins = bins, color = 'r', range = rng,
label = 'default == 1')
pylab.legend()
pylab.show()
def get_bootstrap_samples(data, n_samples):
indices = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indices]
return samples
def stat_intervals(stat, alpha):
boundaries = np.percentile(stat, [100 * alpha / 2., 100 * (1 - alpha / 2.)])
return boundaries
def permutation_t_stat_ind(sample1, sample2):
return np.mean(sample1) - np.mean(sample2)
def get_random_combinations(n1, n2, max_combinations):
index = range(n1 + n2)
indices = set([tuple(index)])
for i in range(max_combinations - 1):
np.random.shuffle(index)
indices.add(tuple(index))
return [(index[:n1], index[n1:]) for index in indices]
def permutation_zero_dist_ind(sample1, sample2, max_combinations = None):
joined_sample = np.hstack((sample1, sample2))
n1 = len(sample1)
n = len(joined_sample)
if max_combinations:
indices = get_random_combinations(n1, len(sample2), max_combinations)
else:
indices = [(list(index), filter(lambda i: i not in index, range(n))) \
for index in itertools.combinations(range(n), n1)]
distr = [joined_sample[list(i[0])].mean() - joined_sample[list(i[1])].mean() \
for i in indices]
return distr
def permutation_test(sample, mean, max_permutations = None, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
t_stat = permutation_t_stat_ind(sample, mean)
zero_distr = permutation_zero_dist_ind(sample, mean, max_permutations)
if alternative == 'two-sided':
return sum([1. if abs(x) >= abs(t_stat) else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'less':
return sum([1. if x <= t_stat else 0. for x in zero_distr]) / len(zero_distr)
if alternative == 'greater':
return sum([1. if x >= t_stat else 0. for x in zero_distr]) / len(zero_distr)
def proportions_confint_diff_ind(a1, n1, a2, n2, alpha = 0.05):
z = sc.stats.norm.ppf(1 - alpha / 2.)
p1 = float(a1) / n1
p2 = float(a2) / n2
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ n1 + p2 * (1 - p2)/ n2)
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ n1 + p2 * (1 - p2)/ n2)
return (left_boundary, right_boundary)
def proportions_diff_z_stat_ind(a, n1, b, n2):
p1 = float(a) / n1
p2 = float(b) / n2
P = float(p1*n1 + p2*n2) / (n1 + n2)
return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))
def proportions_diff_z_test(z_stat, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
if alternative == 'two-sided':
return 2 * (1 - sc.stats.norm.cdf(np.abs(z_stat)))
if alternative == 'less':
return sc.stats.norm.cdf(z_stat)
if alternative == 'greater':
return 1 - sc.stats.norm.cdf(z_stat)
Explanation: Helper methods
End of explanation
make_hist_numerical(data, 'LIMIT_BAL', (10000, 1000000), bins=10)
Explanation: Task 1. Credit limit size (LIMIT_BAL)
Consider 2 groups of people: those who repaid the loan (default = 0) and those who did not (default = 1).
Plot histograms of the credit limit distribution in both groups.
End of explanation
limit_bal0 = data[data['default'] == 0]['LIMIT_BAL'].values
limit_bal1 = data[data['default'] == 1]['LIMIT_BAL'].values
np.random.seed(0)
limit_bal0_median_scores = map(np.median, get_bootstrap_samples(limit_bal0, 1000))
limit_bal1_median_scores = map(np.median, get_bootstrap_samples(limit_bal1, 1000))
delta_median_scores = map(lambda x: x[0] - x[1], zip(limit_bal0_median_scores, limit_bal1_median_scores))
print "95% confidence interval for the difference between medians", stat_intervals(delta_median_scores, 0.05)
Explanation: From the histograms we can see that in every credit-limit interval there are more people who repaid the loan than people who did not.
Test the hypothesis that the median credit limits are equal against the general alternative:
$H_0\colon$ the median credit limits in the two groups are equal
$H_1\colon$ the median credit limits in the two groups are not equal
We use an interval estimate: compute a 95% bootstrap confidence interval for the difference of the median values in the two samples.
End of explanation
limit_bal0.std()**2 / limit_bal1.std()**2
Explanation: The 95% confidence interval for the difference of medians does not contain 0, so the null hypothesis is rejected in favour of the alternative that the medians differ.
Test the hypothesis that the distributions of LIMIT_BAL are equal
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
We apply a permutation test.
A necessary condition for using it is that the two samples have approximately equal variances. In this case:
End of explanation
print "p-value: %f" % permutation_test(limit_bal0, limit_bal1, max_permutations = 10000)
Explanation: The variances do not differ much (same order of magnitude), so the permutation test can be applied.
End of explanation
make_hist_target_by_category(data, 'SEX', ('Men', 'Women'), width=0.4)
Explanation: The achieved significance level is well below 0.05, so the null hypothesis of equal distributions is rejected in favour of the alternative.
The result about unequal medians is fairly easy to explain from general reasoning: if the loan-approval system is reasonably reliable, the largest credit limits go to the people the bank trusts most, and such people, as a rule, repay their loans. Conversely, people the bank considers less reliable may still get a loan, but usually not a large one, and among them the share of those who will not repay is higher.
From a practical point of view this result is not particularly useful, because it obviously cannot be applied in the "reverse" direction: it would be absurd to treat "give a person a larger loan and the probability of repayment goes up" as a stand-alone criterion.
Task 2. Gender (SEX)
Plot histograms of the gender distribution among those who repaid and those who did not repay the loan
End of explanation
sex0 = np.bincount(data[data['default'] == 0]['SEX'].astype('int32').values)
sex1 = np.bincount(data[data['default'] == 1]['SEX'].astype('int32').values)
print "confidence interval: [%f, %f]" % proportions_confint_diff_ind(sex0[1], sum(sex0), sex1[1], sum(sex1))
Explanation: From the histograms we may suppose that the gender composition differs. Among those who did not repay the loan it is more balanced.
The counts can also be presented in a table:
| repaid loan | did not repay loan
------------- | -------------|
men | 9015 | 2873
women | 14349 | 3763
$\sum$ | 23364 | 6636
We will test the hypothesis
$H_0\colon$ the gender composition of those who repaid and did not repay the loan is the same
$H_1\colon$ the gender composition differs
We build a confidence interval for the difference of proportions (the samples are independent), looking at the share of men among clients who repaid and did not repay the loan.
End of explanation
proportions_diff_z_test(proportions_diff_z_stat_ind(sex0[1], sum(sex0), sex1[1], sum(sex1)))
Explanation: The confidence interval for the difference of proportions does not contain zero, so we can reject the hypothesis that the share of men is equal among those who repaid and those who did not. Thus the gender composition differs.
Now let us test the same hypothesis with a Z-test. Its only requirement is that the sample contains just two types of values, which is satisfied.
End of explanation
make_hist_category_by_target(data, 'EDUCATION', ('PhD', 'MS', 'BS', 'School', 'Elementary', 'Other', 'N/A'),
width=0.4)
Explanation: The achieved significance level is far below 0.05, so the hypothesis of equal proportions is rejected. The gender composition differs.
Task 3. Education (EDUCATION)
Plot a histogram showing the ratio of people who repaid and did not repay the loan, by education level.
End of explanation
conting = [[14, 0], [8549, 2036], [10700, 3330], [3680, 1237], [116, 7], [262, 18], [43, 8]]
chi2, p, dof, expected = sc.stats.chi2_contingency(conting)
p
Explanation: From the histogram we may suppose that among people with a higher education level the share of those who did not repay the loan is smaller.
We test the hypothesis
$H_0\colon$ education level does not affect loan repayment
$H_1\colon$ education level affects loan repayment
Build a contingency table of "education" versus "loan repayment"
| repaid loan | did not repay loan
------------- | -------------|
PhD | 14 | 0
master's | 8549 | 2036
bachelor's | 10700 | 3330
high school | 3680 | 1237
primary education | 116 | 7
other | 262 | 18
no data | 43 | 8
$\sum$ | 23364 | 6636
We apply the chi-squared test. This is allowed because its applicability conditions are met: sample size above 40, no more than 20% of cells with counts below 5, independent samples.
End of explanation
[(int(pair[0]), int(pair[1])) for pair in expected]
Explanation: The achieved significance level is far below 0.05, so hypothesis $H_0$ is rejected. Education level affects loan repayment.
Display the expected counts for the groups described above:
End of explanation
make_hist_category_by_target(data, 'MARRIAGE', ('Refused', 'Married', 'Single', 'N/A'), width=0.4)
Explanation: For clarity, let us also build a contingency table whose cells hold the difference between the expected and observed numbers of people who repaid and did not repay the loan, by education.
| repaid loan | did not repay loan
------------- | -------------|
PhD | -4 | 3
master's | -306 | 305
bachelor's | 226 | -227
high school | 149 | -150
primary education | -21 | 20
other | -44 | 43
no data | -4 | 3
$\sum$ | -4 | -3
To bring the table to a common scale, it can be normalised by the number of clients in each group.
The best indicator that a person will repay the debt is a PhD-level education; the best indicators that they will not are bachelor's level or high-school level.
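A small sketch of that normalisation, reusing the education-level `conting` and `expected` objects computed earlier (the choice of scaling by group size is just one reasonable option):
import numpy as np
observed = np.array(conting, dtype=float)
diff = observed - expected                    # observed minus expected counts
group_sizes = observed.sum(axis=1, keepdims=True)
normalised_diff = diff / group_sizes          # difference as a share of each education group
normalised_diff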
Task 4. Marital status (MARRIAGE)
Plot a histogram showing the ratio of people who repaid and did not repay the loan, by marital status.
End of explanation
conting = [[49, 5],[10453, 3206], [12623, 3341], [239, 84]]
chi2, p, dof, expected = sc.stats.chi2_contingency(conting)
v_c = np.sqrt(chi2 / (np.sum(conting) * (2-1)))
v_c
Explanation: From the histogram we may suppose that the probability of repaying the loan is higher among single people.
Build a contingency table of "marital status" versus "loan repayment"
| repaid loan | did not repay loan
------------- | -------------|
refused | 49 | 5
married | 10453 | 3206
single | 12623 | 3341
no data | 239 | 84
$\sum$ | 23364 | 6636
To measure the possible association between these variables we compute Cramér's V coefficient. It is based on the chi-squared test, whose applicability conditions must be checked: sample size above 40, no more than 20% of cells with counts below 5, independent samples. These conditions are satisfied.
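For reference, the coefficient computed in the code is
$$V = \sqrt{\frac{\chi^2}{n\,\left(\min(r,\,c) - 1\right)}}$$
where $n$ is the total number of observations and $r$, $c$ are the numbers of rows and columns of the contingency table; with two columns this reduces to the $\chi^2 / (n \cdot 1)$ form used above.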
End of explanation
make_hist_numerical(data, 'AGE', (20, 80), bins=12)
Explanation: Cramér's V turned out to be quite close to 0, so we can conclude that marital status does not strongly affect loan repayment.
Task 5. Age (AGE)
Consider 2 groups of people: those who repaid the loan (default = 0) and those who did not (default = 1). Plot histograms of the age distribution in both groups.
End of explanation
age0 = data[data['default'] == 0]['AGE'].values
age1 = data[data['default'] == 1]['AGE'].values
age0_median_scores = map(np.median, get_bootstrap_samples(age0, 5000))
age1_median_scores = map(np.median, get_bootstrap_samples(age1, 5000))
delta_age_median_scores = map(lambda x: x[0] - x[1], zip(age0_median_scores, age1_median_scores))
print "95% confidence interval for the difference between medians", stat_intervals(delta_age_median_scores, 0.05)
Explanation: From the histograms we can see that in every age interval there are more people who repaid the loan than people who did not.
Test the hypothesis that the median ages are equal against the general alternative:
$H_0\colon$ the median ages in the two groups are equal
$H_1\colon$ the median ages in the two groups are not equal
We use an interval estimate: compute a 95% bootstrap confidence interval for the difference of the median values in the two samples.
End of explanation
age0.std()**2 / age1.std()**2
Explanation: The 95% confidence interval for the difference of medians borders on 0, so the null hypothesis cannot be rejected unambiguously.
Test the hypothesis that the distributions of AGE are equal
$H_0\colon F_{X_1}(x) = F_{X_2}(x)$
$H_1\colon F_{X_1}(x) = F_{X_2}(x + \Delta), \Delta\neq 0$
We apply a permutation test. A necessary condition for using it is that the two samples have approximately equal variances. In this case:
End of explanation
print "p-value: %f" % permutation_test(age0, age1, max_permutations = 10000)
Explanation: The variances do not differ much (same order of magnitude), so the permutation test can be applied.
End of explanation
def label_age(row):
if row['AGE'] < 30:
return 0
elif row['AGE'] < 40:
return 1
elif row['AGE'] < 50:
return 2
elif row['AGE'] < 60:
return 3
elif row['AGE'] < 70:
return 4
else:
return 5
data['age_cat'] = data.apply(lambda row: label_age(row),axis=1)
make_hist_category_by_target(data, 'age_cat', ('<30', '30-40', '40-50', '50-60', '60-70', '>70'), width=0.4)
Explanation: The achieved significance level is below 0.05, so the null hypothesis of equal distributions is rejected in favour of the alternative.
The obtained result has some practical significance, since it shows that younger people were more likely to have repaid the loan.
For extra clarity, plot a histogram of the number of people who repaid and did not repay the loan by age. To do this we add a new column, "Age category"
End of explanation |
4,547 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro analysis to the dataset
Providing a simple visualization and classic statistical data analysis.
Imports
Import dependencies
Step1: Import data
Step2: Cleaning
Data ranges
Step3: Observations
Step4: Prune outliers (similar approaches: trimming or winsorizing data)
Step5: EDA
Distributions
Step6: Conclusions on cleaned data without normalization
Most histograms appear to be unimodal, most of them symmetric with the last 2 being left-skewed.
Scatterplot clusters | Python Code:
import numpy as np
import pandas as pd
import matplotlib.style as style
style.use('ggplot')
#print(style.available)
import matplotlib.pyplot as plt
%matplotlib inline
import csv
import datetime
from IPython.core.display import display, HTML
Explanation: Intro analysis to the dataset
Providing a simple visualization and classic statistical data analysis.
Imports
Import dependencies
End of explanation
data = pd.read_csv("train.csv", header = 0)
data.head()
Explanation: Import data
End of explanation
print("Dataset length: %i" % len(data))
# ---
print("vendor_id: [%s, %s]" % (min(data["vendor_id"]), max(data["vendor_id"])))
print(" ")
print("pickup_datetime: \t[%s, %s]" % (min(data["pickup_datetime"]), max(data["pickup_datetime"])))
print("dropoff_datetime: \t[%s, %s]" % (min(data["dropoff_datetime"]), max(data["dropoff_datetime"])))
print(" ")
print("pickup_longitude: \t[%s, %s]" % (min(data["pickup_longitude"]), max(data["pickup_longitude"])))
print("pickup_latitude: \t[%s, %s]" % (min(data["pickup_latitude"]), max(data["pickup_latitude"])))
print(" ")
print("dropoff_longitude: \t[%s, %s]" % (min(data["dropoff_longitude"]), max(data["dropoff_longitude"])))
print("dropoff_latitude: \t[%s, %s]" % (min(data["dropoff_latitude"]), max(data["dropoff_latitude"])))
print(" ")
print("store_and_fwd_flag: [%s, %s]" % (min(data["store_and_fwd_flag"]), max(data["store_and_fwd_flag"])))
print("passenger_count: [%s, %s]" % (min(data["passenger_count"]), max(data["passenger_count"])))
max_trip_d = max(data["trip_duration"])
m, s = divmod(max_trip_d, 60)
h, m = divmod(m, 60)
max_trip_d_f = "%d:%02d:%02d" % (h, m, s)
print("trip_duration: [%s, %s]" % (min(data["trip_duration"]), max_trip_d_f))
print("---max (seconds): ", max_trip_d)
Explanation: Cleaning
Data ranges
End of explanation
#print(data.columns.values)
print("vendor_id: %s" % data["vendor_id"].dtypes)
print("pickup_datetime: %s" % data["pickup_datetime"].dtypes)
print("dropoff_datetime: %s" % data["dropoff_datetime"].dtypes)
print()
print("pickup_longitude: %s" % data["pickup_longitude"].dtypes)
print("pickup_latitude: %s" % data["pickup_latitude"].dtypes)
print("dropoff_longitude: %s" % data["dropoff_longitude"].dtypes)
print("dropoff_latitude: %s" % data["dropoff_latitude"].dtypes)
print()
print("store_and_fwd_flag: %s" % data["store_and_fwd_flag"].dtypes)
print("passenger_count: %s" % data["passenger_count"].dtypes)
print("trip_duration: %s" % data["trip_duration"].dtypes)
# resolution
data_clean = data
data_clean["pickup_datetime"] = data_clean["pickup_datetime"].astype("datetime64")
data_clean["dropoff_datetime"] = data_clean["dropoff_datetime"].astype("datetime64")
Explanation: Observations:
+ Last maximum seems to be quite a big outlier.
Check if data types are appropriate.
End of explanation
# chosen as ~91 min (by analyzing the distribution below beforehand)
data_clean = data_clean[data.trip_duration < 5500]
Explanation: Prune outliers (similar approaches: trimming or winsorizing data)
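As an alternative to simply dropping the tail, winsorizing would clip the extreme values instead of removing rows; a hedged sketch (the 1% upper limit is an arbitrary choice for illustration):
from scipy.stats.mstats import winsorize
# clip only the top 1% of trip durations instead of removing those rows
data_winsorized = data.copy()
data_winsorized["trip_duration"] = winsorize(data["trip_duration"].values, limits=(0, 0.01))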
End of explanation
display(HTML('<center><h1>Analyzing distribution for the series</h1></center>'))
data["pickup_datetime"].groupby(data["pickup_datetime"].dt.month).count().plot(kind="bar", rot=0)
data["dropoff_datetime"].groupby(data["dropoff_datetime"].dt.month).count().plot(kind="bar")
# initial
# xlim = [-80, -70]
# ylim = [40, 42]
xlim = [-74.2, -73.7]
ylim = [40.55, 40.95]
data_normalized = data_clean
data_normalized = data_normalized[(data_normalized.dropoff_latitude < ylim[1]) & (data_normalized.pickup_latitude < ylim[1])]
data_normalized = data_normalized[(data_normalized.dropoff_latitude > ylim[0]) & (data_normalized.pickup_latitude > ylim[0])]
data_normalized = data_normalized[(data_normalized.dropoff_longitude < xlim[1]) & (data_normalized.pickup_longitude < xlim[1])]
data_normalized = data_normalized[(data_normalized.dropoff_longitude > xlim[0]) & (data_normalized.pickup_longitude > xlim[0])]
data_normalized.hist(
column=["pickup_longitude", "pickup_latitude", "dropoff_longitude", "dropoff_latitude"],
figsize=(10, 10),
weights = np.ones_like(data_normalized.index) / len(data_normalized.index),
bins = 30
#,sharey=True, sharex=True
)
display(HTML('<center><h1>Analyzing distribution for location (normalized)</h1></center>'))
data_clean.hist(
column=["passenger_count", "trip_duration"], #"store_and_fwd_flag",
figsize=(10, 3)
)
plt.twiny()
display(HTML('<center><h1>Analyzing distribution for the other parameters</h1></center>'))
Explanation: EDA
Distributions
End of explanation
# some cleaning needed
# some big outliers are clogging the view
## values determined empirically with 0.05 marker plot below
xlim = [-74.2, -73.7] # -74.2, -73.85
ylim = [40.55, 40.95]
data_viz = data_clean
data_viz = data_viz[(data_viz.pickup_longitude > xlim[0]) & (data_viz.pickup_longitude < xlim[1])]
data_viz = data_viz[(data_viz.dropoff_longitude> xlim[0]) & (data_viz.dropoff_longitude< xlim[1])]
data_viz = data_viz[(data_viz.pickup_latitude > ylim[0]) & (data_viz.pickup_latitude < ylim[1])]
data_viz = data_viz[(data_viz.dropoff_latitude > ylim[0]) & (data_viz.dropoff_latitude < ylim[1])]
longitude = list(data_viz.pickup_longitude) + list(data_viz.dropoff_longitude)
latitude = list(data_viz.pickup_latitude) + list(data_viz.dropoff_latitude)
print("longitude: \t[%s, %s]" % (min(longitude), max(longitude)))
print("latitude: \t[%s, %s]" % (min(latitude), max(latitude)))
display(HTML('<center><h1>Scatter plot for points (estimating map)</h1></center>'))
plt.figure(figsize = (10,10))
plt.plot(longitude,latitude,'.', alpha = 0.4, markersize = 0.05)
# 0.05 (less time, best for visualization) #10 (more time, best for outliers)
plt.show()
plt.figure(figsize = (10,5))
plt.subplots_adjust(wspace = 0.6) # wspace=None
# wspace = 0.2 # the amount of width reserved for blank space between subplots
display(HTML('<center><h1>Examining clusterization density (hex binning)</h1></center>'))
plt.subplot(121),\
plt.hexbin(longitude,latitude, gridsize=50, cmap='inferno'),\
plt.title('no log')
plt.colorbar().set_label('counts')
plt.subplot(122),\
plt.hexbin(longitude,latitude, gridsize=50, bins='log', cmap='inferno'),\
plt.title('Log'),\
#plt.plot(longitude,latitude,'.', alpha = 0.1, c='c', markersize = 0.05)
plt.colorbar().set_label('log10(N)')
plt.show()
Explanation: Conclusions on cleaned data without normalization
Most histograms appear to be unimodal, most of them symmetric with the last 2 being left-skewed.
Scatterplot clusters
End of explanation |
4,548 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modifying data in-place
Many of MNE-Python's data objects (~mne.io.Raw, ~mne.Epochs, ~mne.Evoked,
etc) have methods that modify the data in-place (either optionally or
obligatorily). This can be advantageous when working with large datasets
because it reduces the amount of computer memory needed to perform the
computations. However, it can lead to unexpected results if you're not aware
that it's happening. This tutorial provides a few examples of in-place
processing, and how and when to avoid it.
As usual we'll start by importing the modules we need and
loading some example data <sample-dataset>
Step1: Signal processing
Most MNE-Python data objects have built-in methods for filtering, including
high-, low-, and band-pass filters (~mne.io.Raw.filter), band-stop filters
(~mne.io.Raw.notch_filter),
Hilbert transforms (~mne.io.Raw.apply_hilbert),
and even arbitrary or user-defined functions (~mne.io.Raw.apply_function).
These typically always modify data in-place, so if we want to preserve
the unprocessed data for comparison, we must first make a copy of it. For
example
Step2: Channel picking
Another group of methods where data is modified in-place are the
channel-picking methods. For example
Step3: Note also that when picking only EEG channels, projectors that affected only
the magnetometers were dropped, since there are no longer any magnetometer
channels.
The copy parameter
Above we saw an example of using the ~mne.io.Raw.copy method to facilitate
comparing data before and after processing. This is not needed when using
certain MNE-Python functions, because they have a function parameter
where you can specify copy=True (return a modified copy of the data) or
copy=False (operate in-place). For example, mne.set_eeg_reference is
one such function; notice that here we plot original_raw after the
rereferencing has been done, but original_raw is unaffected because
we specified copy=True | Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
# the preload flag loads the data into memory now
raw = mne.io.read_raw_fif(sample_data_raw_file, preload=True)
raw.crop(tmax=10.) # raw.crop() always happens in-place
Explanation: Modifying data in-place
Many of MNE-Python's data objects (~mne.io.Raw, ~mne.Epochs, ~mne.Evoked,
etc) have methods that modify the data in-place (either optionally or
obligatorily). This can be advantageous when working with large datasets
because it reduces the amount of computer memory needed to perform the
computations. However, it can lead to unexpected results if you're not aware
that it's happening. This tutorial provides a few examples of in-place
processing, and how and when to avoid it.
As usual we'll start by importing the modules we need and
loading some example data <sample-dataset>:
End of explanation
original_raw = raw.copy()
raw.apply_hilbert()
print(f'original data type was {original_raw.get_data().dtype}, after '
f'apply_hilbert the data type changed to {raw.get_data().dtype}.')
Explanation: Signal processing
Most MNE-Python data objects have built-in methods for filtering, including
high-, low-, and band-pass filters (~mne.io.Raw.filter), band-stop filters
(~mne.io.Raw.notch_filter),
Hilbert transforms (~mne.io.Raw.apply_hilbert),
and even arbitrary or user-defined functions (~mne.io.Raw.apply_function).
These typically always modify data in-place, so if we want to preserve
the unprocessed data for comparison, we must first make a copy of it. For
example:
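For completeness, ~mne.io.Raw.apply_function behaves the same way on preloaded data; a small sketch (the scaling function here is only an illustration, not part of the tutorial):
scaled = raw.copy()                      # work on a copy so `raw` keeps its current data
scaled.apply_function(lambda x: x * 2.)  # applied in-place on the copy, channel by channel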
End of explanation
print(f'original data had {original_raw.info["nchan"]} channels.')
original_raw.pick('eeg') # selects only the EEG channels
print(f'after picking, it has {original_raw.info["nchan"]} channels.')
Explanation: Channel picking
Another group of methods where data is modified in-place are the
channel-picking methods. For example:
End of explanation
rereferenced_raw, ref_data = mne.set_eeg_reference(original_raw, ['EEG 003'],
copy=True)
original_raw.plot()
rereferenced_raw.plot()
Explanation: Note also that when picking only EEG channels, projectors that affected only
the magnetometers were dropped, since there are no longer any magnetometer
channels.
The copy parameter
Above we saw an example of using the ~mne.io.Raw.copy method to facilitate
comparing data before and after processing. This is not needed when using
certain MNE-Python functions, because they have a function parameter
where you can specify copy=True (return a modified copy of the data) or
copy=False (operate in-place). For example, mne.set_eeg_reference is
one such function; notice that here we plot original_raw after the
rereferencing has been done, but original_raw is unaffected because
we specified copy=True:
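Conversely, passing copy=False would have modified original_raw in place; a one-line sketch of that variant:
# this would operate in-place on original_raw instead of returning a modified copy
mne.set_eeg_reference(original_raw, ['EEG 003'], copy=False)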
End of explanation |
4,549 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Wrangling NI House Price Index Data
This is a 'messy' 'blog post' that's just a braindump of a notebook to step through NI House Price Index datasets I was playing around with.
It's mostly code, so if you were here from some 'insight', feck aff.
There is no analysis here, this is just data wrangling.
TLDR As always, Government Open Data has over the years gone from 'non-existent' to 'garbled' to 'inconsistent' and I feel is now in the stage of 'consistently inconsistent', which is progress in my eyes.
Preamble Code, move on.
Step1: Fix the Contents sheet to correctly reflect the Worksheet names
And fix the table headers and sheet-titles while we're at it.
Step2: Tidy up Data
General Methodology
Ignore figure data (pretty much completely....)
Tables have more or less the same structure; a header on row 3(1), a year and quarter 'index' (on time series; otherwise categorical index, see Table 2, Table 3).
Some TS tables also have totals subsections so these should be a) validated and b) ignored.
Any columns with no header in row 3(1) should be ignored (usually text notes)
Operate Sequentially (i.e. Table 1, Table 2, Table 2a; don't skip, even if it's tempting)
Use keys from 'Contents' to describe data, but may be suffixed by the date which could change between data sets!
There's also some really columns that look like checksums, so if there is an 'NI' column, or a data column that all valid values are '100', delete it.
Table 1
Step3: One down, 31 to go...
Table 2
Step4: Those '\n (Quarter 4 2021)' entries are unnecessary, so for this table, lets clear them
Step5: Table 2a
Step6: Table 2x
Step7: 6 down, 26 to go.
Table 3
Step8: Table 4
Step9: Of note; new offset for the header row at index 3 instead of index 1, due to lots of fluff at the start that is probably not going to be consistent between reports so that will almost certainly mess up my day in a few months.
Also, Quarter dates have now been shifted into 'Quarter 1' instead of 'Q1', which ... meh 🤷♂️. More egregiously, it looks like '\n' has leaked into some Sales Year values. Funtimes.
Finally, and possibly most annoying, the introduction of partial total lines is going to throw things off, and this isn't a validation study, to stuff-em
In an effort not to over-complicate basic_cleanup, we can try and clean these table specific issues first;
Step11: That's awkward enough to get its own function...
Step12: Table 5
Step13: For some reason; Mid-ulster has a 'Standardised HPI' which throws off the above trick, so we gotta make it ugly...
Step15: We could turn this into a proper multiindex but it would mean pushing the Period/Year/Quarter columns into keys which would be inconsistent behaviour with the rest of the 'cleaned' dataset, so that can be a downstream problem; at least we've got the relevant metrics consistent!
Step16: Table 5a
Step18: df.iloc[1,2]=c
Step19: Table 6
Step21: Table 7
Step22: Table 8
Step23: Table 9
Step24: Table 9x
Step25: Table 10x
Step26: And We're Done!
So, we can see that while government open data is a pain, at least it's a ... consistently inconsistent pain?
I hope this was helpful to someone else. | Python Code:
from bs4 import BeautifulSoup
import pandas as pd
import requests
# Pull the latest pages of https://www.finance-ni.gov.uk/publications/ni-house-price-index-statistical-reports and extract links
base_url= 'https://www.finance-ni.gov.uk/publications/ni-house-price-index-statistical-reports'
base_content = requests.get(base_url).content
base_soup = BeautifulSoup(base_content)
for a in base_soup.find_all('a'):
if a.attrs.get('href','').endswith('xlsx'):
source_name, source_url = a.contents[1],a.attrs['href']
source_df = pd.read_excel(source_url, sheet_name = None) # Load all worksheets in
source_df.keys()
source_df['Contents']
Explanation: Data Wrangling NI House Price Index Data
This is a 'messy' 'blog post' that's just a braindump of a notebook to step through NI House Price Index datasets I was playing around with.
It's mostly code, so if you were here from some 'insight', feck aff.
There is no analysis here, this is just data wrangling.
TLDR As always, Government Open Data has over the years gone from 'non-existent' to 'garbled' to 'inconsistent' and I feel is now in the stage of 'consistently inconsistent', which is progress in my eyes.
Preamble Code, move on.
End of explanation
new_header = source_df['Contents'].iloc[0]
source_df['Contents'] = source_df['Contents'][1:]
source_df['Contents'].columns = new_header
source_df['Contents'].columns = [*new_header[:-1],'Title']
[t for t in source_df['Contents']['Title'].values if t.startswith('Table')]
# Replace 'Figure' with 'Fig' in 'Worksheet Name'
with pd.option_context('mode.chained_assignment',None):
source_df['Contents']['Worksheet Name'] = source_df['Contents']['Worksheet Name'].str.replace('Figure','Fig')
Explanation: Fix the Contents sheet to correctly reflect the Worksheet names
And fix the table headers and sheet-titles while we're at it.
End of explanation
source_df['Table 1']
def basic_cleanup(df:pd.DataFrame, offset=1)->pd.DataFrame:
df = df.copy()
# Re-header from row 1 (which was row 3 in excel)
new_header = df.iloc[offset]
df = df.iloc[offset+1:]
df.columns = new_header
# remove 'NaN' trailing columns
df = df[df.columns[pd.notna(df.columns)]]
# 'NI' is a usually hidden column that appears to be a checksum;
#if it's all there and all 100, remove it, otherwise, complain.
# (Note, need to change this 'if' logic to just 'if there's a
# column with all 100's, but cross that bridge later)
if 'NI' in df:
assert df['NI'].all() and df['NI'].mean() == 100, "Not all values in df['NI'] == 100"
df = df.drop('NI', axis=1)
# Strip rows below the first all-nan row, if there is one
# (Otherwise this truncates the tables as there is no
# idxmax in the table of all 'false's)
if any(df.isna().all(axis=1)):
idx_first_bad_row = df.isna().all(axis=1).idxmax()
df = df.loc[:idx_first_bad_row-1]
# By Inspection, other tables use 'Sale Year' and 'Sale Quarter'
if set(df.keys()).issuperset({'Sale Year','Sale Quarter'}):
df = df.rename(columns = {
'Sale Year':'Year',
'Sale Quarter': 'Quarter'
})
# For 'Year','Quarter' indexed pages, there is an implied Year
# in Q2/4, so fill it downwards
if set(df.keys()).issuperset({'Year','Quarter'}):
df['Year'] = df['Year'].astype(float).fillna(method='ffill').astype(int)
# In Pandas we can represent Y/Q combinations as proper datetimes
#https://stackoverflow.com/questions/53898482/clean-way-to-convert-quarterly-periods-to-datetime-in-pandas
df.insert(loc=0,
column='Period',
value=pd.PeriodIndex(df.apply(lambda r:f'{r.Year}-{r.Quarter}', axis=1), freq='Q')
)
# reset index, try to fix dtypes, etc, (this should be the last
# operation before returning!
df = df.reset_index(drop=True).infer_objects()
return df
df = basic_cleanup(source_df['Table 1'])
df
dest_df = {
'Table 1': basic_cleanup(source_df['Table 1'])
}
len([k for k in source_df.keys() if k.startswith('Table')])
Explanation: Tidy up Data
General Methodology
Ignore figure data (pretty much completely....)
Tables have more or less the same structure; a header on row 3(1), a year and quarter 'index' (on time series; otherwise categorical index, see Table 2, Table 3).
Some TS tables also have totals subsections so these should be a) validated and b) ignored.
Any columns with no header in row 3(1) should be ignored (usually text notes)
Operate Sequentially (i.e. Table 1, Table 2, Table 2a; don't skip, even if it's tempting)
Use keys from 'Contents' to describe data, but may be suffixed by the date which could change between data sets!
There's also some really columns that look like checksums, so if there is an 'NI' column, or a data column that all valid values are '100', delete it.
Table 1: NI HPI Trends Q1 2005 - Q4 2021
TODO: Regexy way to get rid of the '\QX-YYYY -\QX YYYY' tail
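That TODO might end up looking something like this (untested against the live sheet titles, so treat it as a sketch rather than the done deal):
import re
# strip a trailing ' Q1 2005 - Q4 2021' style suffix from a table title
def strip_quarter_range(title: str) -> str:
    return re.sub(r'\s*Q[1-4] \d{4}\s*-\s*Q[1-4] \d{4}\s*$', '', title)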
End of explanation
df = basic_cleanup(source_df['Table 2'])
df
Explanation: One down, 31 to go...
Table 2: NI HPI & Standardised Price Statistics by Property Type Q4 2021'
End of explanation
df.columns = [c.split('\n')[0] for c in df.columns]
df
dest_df['Table 2'] = df
Explanation: Those '\n (Quarter 4 2021)' entries are unnecessary, so for this table, lets clear them
End of explanation
df = basic_cleanup(source_df['Table 2a'])
df
Explanation: Table 2a: NI Detached Property Price Index Q1 2005 - Q4 2021
End of explanation
dest_df['Table 2']['Property Type']
import re
table2s = re.compile('Table 2[a-z]')
assert table2s.match('Table 2') is None, 'Table 2 is matching itself!'
assert table2s.match('Table 20') is None, 'Table 2 is greedy!'
assert table2s.match('Table 2z') is not None, 'Table 2 is matching incorrectly!'
table2s = re.compile('Table 2[a-z]')
for table in source_df:
if table2s.match(table):
dest_df[table] = basic_cleanup(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 2x: NI XXX Property Price Index Q1 2005 - Q4 2021
This table structure is consistent against the rest of the Table 2x cohort; mapping to the Property Types listed in Table 2.
For the time being, we can ignore these, but this will probably become a pain later on...
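If/when they stop being ignorable, something like this could stack them into one frame keyed by worksheet (a sketch, assuming the Contents titles keep their current wording):
# hypothetical: clean and concatenate all the Table 2x worksheets in one go
frames = {t: basic_cleanup(source_df[t]) for t in source_df if table2s.match(t)}
table2x = pd.concat(frames, names=['Table'])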
End of explanation
df = basic_cleanup(source_df['Table 3'])
df.columns = [c.split('\n')[0] for c in df.columns] # Stolen from Table 2 Treatment
df
dest_df['Table 3'] = df
df = basic_cleanup(source_df['Table 3a'])
df
table3s = re.compile('Table 3[a-z]')
for table in source_df:
if table3s.match(table):
dest_df[table] = basic_cleanup(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: 6 down, 26 to go.
Table 3: NI HPI & Standardised Price Statistics by New/Existing Resold Dwelling Type Q4 2021
These appear to be a similar structure of the Table 2's... hopefully
End of explanation
df = source_df['Table 4']
df
Explanation: Table 4: Number of Verified Residential Property Sales Q1 2005 - Q4 2021
Table 4 is not looking great
End of explanation
df.iloc[:,1]=df.iloc[:,1].str.replace('Quarter ([1-4])',r'Q\1', regex=True)
df
df=df[~df.iloc[:,1].str.contains('Total').fillna(False)]
# Lose the year new-lines (needs astype because non str lines are
# correctly inferred to be ints, so .str methods nan-out
with pd.option_context('mode.chained_assignment',None):
df.iloc[:,0]=df.iloc[:,0].astype(str).str.replace('\n','')
df
basic_cleanup(df, offset=3)
Explanation: Of note; new offset for the header row at index 3 instead of index 1, due to lots of fluff at the start that is probably not going to be consistent between reports so that will almost certainly mess up my day in a few months.
Also, Quarter dates have now been shifted into 'Quarter 1' instead of 'Q1', which ... meh 🤷♂️. More Egrigiously, it looks like '\n' has leaked into some Sales Year values. Funtimes.
Finally, and possibly most annoying, the introduction of partial total lines is going to throw things off, and this isn't a validation study, to stuff-em
In an effort not to over-complicate basic_cleanup, we can try and clean these table specific issues first;
End of explanation
def cleanup_table_4(df):
Table 4: Number of Verified Residential Property Sales
* Regex 'Quarter X' to 'QX' in future 'Sales Quarter' column
* Drop Year Total rows
* Clear any Newlines from the future 'Sales Year' column
* call `basic_cleanup` with offset=3
df.iloc[:,1]=df.iloc[:,1].str.replace('Quarter ([1-4])',r'Q\1', regex=True)
df=df[~df.iloc[:,1].str.contains('Total').fillna(False)]
# Lose the year new-lines (needs astype because non str lines are
# correctly inferred to be ints, so .str methods nan-out
with pd.option_context('mode.chained_assignment',None):
df.iloc[:,0]=df.iloc[:,0].astype(str).str.replace('\n','')
return basic_cleanup(df, offset=3)
cleanup_table_4(source_df['Table 4'].copy())
dest_df['Table 4'] = cleanup_table_4(source_df['Table 4'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Thats awkward enough to get it's own function...
End of explanation
df = basic_cleanup(source_df['Table 5'])
df
# Two inner-columns per LGD
lgds = df.columns[3:].str.replace(' HPI','').str.replace(' Standardised Price','').unique()
lgds
Explanation: Table 5: HPI & Standardised Price for each Local Government District in NI
This nearly works but structurally requires a multi-index column to make sense....
End of explanation
lgds = df.columns[3:].str.replace(' Standardised HPI',' HPI')\
.str.replace(' HPI','')\
.str.replace(' Standardised Price','').unique()
lgds
df.columns = [*df.columns[:3], *pd.MultiIndex.from_product([lgds,['Index','Price']], names=['LGD','Metric'])]
df
Explanation: For some reason; Mid-ulster has a 'Standardised HPI' which throws off the above trick, so we gotta make it ugly...
End of explanation
def cleanup_table_5(df):
    """Table 5: Standardised House Price & Index for each Local Government District Northern Ireland
    * `basic_cleanup`, then build a (LGD, Metric) column MultiIndex over the paired Index/Price columns
    """
# Basic Cleanup first
df = basic_cleanup(df)
# Build multi-index of LGD / Metric [Index,Price]
# Two inner-columns per LGD
lgds = df.columns[3:].str.replace(' Standardised HPI',' HPI')\
.str.replace(' HPI','')\
.str.replace(' Standardised Price','')\
.unique()
df.columns = [*df.columns[:3], *pd.MultiIndex.from_product([lgds,['Index','Price']], names=['LGD','Metric'])]
return df
cleanup_table_5(source_df['Table 5'])
dest_df['Table 5']=cleanup_table_5(source_df['Table 5'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: We could turn this into a proper multiindex but it would mean pushing the Period/Year/Quarter columns into keys which would be inconsistent behaviour with the rest of the 'cleaned' dataset, so that can be a downstream problem; at least we've got the relevant metrics consistent!
End of explanation
df = source_df['Table 5a'].copy()
df
dates = df.iloc[:,0].str.extract('(Q[1-4]) ([0-9]{4})').rename(columns={0:'Quarter',1:'Year'})
for c in ['Quarter','Year']:# insert the dates in order, so they come out in reverse in the insert
df.insert(1,c,dates[c])
df.iloc[2,1]=c # Need to have the right colname for when `basic_cleanup` is called.
df.iloc[2,1]=c
df
df=df[~df.iloc[:,0].str.contains('Total').fillna(False)]
Explanation: Table 5a: Number of Verified Residential Property Sales by Local Government District
This one has a new problem; the Sale Year/Quarter is now squished together. This will do a few terrible things to our basic_cleanup so this needs to be done ahead of cleanup.
Also has annual total lines.
End of explanation
basic_cleanup(df,offset=2)
def cleanup_table_5a(df):
    """Table 5a: Number of Verified Residential Property Sales by Local Government District
    * Parse the 'Sale Year/Quarter' to two separate cols
    * Insert future-headers for Quarter and Year cols
    * Remove rows with 'total' in the first column
    * Disregard the 'Sale Year/Quarter' column
    * perform `basic_cleanup` with offset=2
    """
# Safety first
df=df.copy()
# Extract 'Quarter' and 'Year' columns from the future 'Sale Year/Quarter' column
dates = df.iloc[:,0].str.extract('(Q[1-4]) ([0-9]{4})').rename(columns={0:'Quarter',1:'Year'})
for c in ['Quarter','Year']:# insert the dates in order, so they come out in reverse in the insert
df.insert(1,c,dates[c])
df.iloc[2,1]=c # Need to have the right colname for when `basic_cleanup` is called.
# Remove 'total' rows from the future 'Sale Year/Quarter' column
df=df[~df.iloc[:,0].str.contains('Total').fillna(False)]
# Remove the 'Sale Year/Quarter' column all together
df = df.iloc[:,1:]
# Standard cleanup
df = basic_cleanup(df, offset=2)
return df
cleanup_table_5a(source_df['Table 5a'])
dest_df['Table 5a']=cleanup_table_5a(source_df['Table 5a'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Wrap the above steps up into a reusable cleanup function for Table 5a.
End of explanation
df = basic_cleanup(source_df['Table 6'])
df
dest_df['Table 6']=basic_cleanup(source_df['Table 6'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 6: Standardised House Price & Index for all Urban and Rural areas in NI
Wee buns, thankfully. Still mixing the 'HPI' vs 'Index', but that's a downstream problem
End of explanation
df = source_df['Table 7'].copy()
df.head()
df.iloc[1,0] = 'Year'
df.iloc[1,1] = 'Quarter'
df.head()
basic_cleanup(df).head()
def cleanup_table_7(df):
Table 7: Standardised House Price & Index for Rural Areas of Northern Ireland by drive times
* Insert Year/Quarter future-headers
* Clean normally
# TODO THIS MIGHT BE VALID FOR MULTIINDEXING ON DRIVETIME/[Index/Price]
df = df.copy()
df.iloc[1,0] = 'Year'
df.iloc[1,1] = 'Quarter'
df = basic_cleanup(df)
return df
cleanup_table_7(source_df['Table 7'])
dest_df['Table 7'] = cleanup_table_7(source_df['Table 7'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 7: Standardised House Price & Index for Rural Areas of Northern Ireland by drive times
Nearly-wee-buns; but this one doesn't have Year or Quarter headers, and the extra \n (Ref: Q1 2015) added, which will complicate downstream analysis if that changes over time...
End of explanation
cleanup_table_5a(source_df['Table 8']).head()
cleanup_table_8 = cleanup_table_5a
dest_df['Table 8'] = cleanup_table_8(source_df['Table 8'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 8: Number of Verified Residential Property Sales of properties in urban and rural areas and properties in rural areas by drive times within towns of 10,000 or more and within 1 hour of Belfast
We're now getting into the swing of this!
This one has two similar problems we've already seen; Munged Quarters/Years (this time with no header on that column...), and annual Total rows.
Vee must deeel with it
End of explanation
basic_cleanup(source_df['Table 9'])
dest_df['Table 9'] = basic_cleanup(source_df['Table 9'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 9: NI Average Sales Prices Q1 2005 - Q4 2021
Wee buns
End of explanation
cleanup_table_7(source_df['Table 9a'])
cleanup_table_9x = cleanup_table_7
table9s = re.compile('Table 9[a-z]')
for table in source_df:
if table9s.match(table):
dest_df[table] = cleanup_table_9x(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 9x: NI Average Sale Prices XXXXX Property Q1 2005 - Q4 2021
These are very similar to Tables 2x; i.e. they're broken down by property type.
Annoyingly, they don't follow the same structure as Tables 2x or Table 9 because they don't include the Year/Quarter headers.
If that reminds you of anything, it's because Table 7 was the same...
End of explanation
source_df['Table 10a']
cleanup_table_5a(source_df['Table 10a'])
cleanup_table_10x = cleanup_table_5a
table10s = re.compile('Table 10[a-z]')
for table in source_df:
if table10s.match(table):
dest_df[table] = cleanup_table_10x(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
Explanation: Table 10x: Number of Verified Residential Property Sales by Type in XXXXX
Surprisingly, we're in the home straight; the remaining tables are all of the same structure, with familiar awkwardness:
* Annual-Total Rows
* Munged Year/Quarter Column
* That column having a silly (but contextual) name
* A different offset
Fortunately, we already have something like that from dealing with Table 5a!
End of explanation
dest_df['Contents'] = source_df['Contents'][source_df['Contents']['Worksheet Name'].str.startswith('Table')]
with pd.ExcelWriter('NI Housing Price Index.xlsx') as writer:
# Thankfully these are semantically sortable otherwise this would be a _massive_ pain
for k,df in sorted(dest_df.items()):
df.to_excel(writer, sheet_name=k)
Explanation: And We're Done!
So, we can see that while government open data is a pain, at least it's a ... consistently inconsistent pain?
I hope this was helpful to someone else.
End of explanation |
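As a quick sanity check (a sketch, not part of the original notebook), the cleaned workbook can be read straight back into a dict of DataFrames:
# Read every sheet back; index_col=0 matches the index written by to_excel above
check = pd.read_excel('NI Housing Price Index.xlsx', sheet_name=None, index_col=0)
sorted(check.keys())[:5]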
4,550 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimum characters to be replaced to make frequency of all characters same
Function to find the minimum operations to convert given string to another with equal frequencies of characters ; Frequency of characters ; Loop to find the Frequency of each character ; Sort in decreasing order based on frequency ; Maximum possible answer ; Loop to find the minimum operations required such that frequency of every character is equal ; Driver Code
| Python Code:
def minOperations(s):
    # freq[i] counts occurrences of the i-th uppercase letter
    freq = [0] * 26
    n = len(s)
    # Count the frequency of each character
    for i in range(n):
        freq[ord(s[i]) - ord('A')] += 1
    # Sort in decreasing order based on frequency
    freq.sort(reverse=True)
    # Maximum possible answer
    answer = n
    # Try every count i of distinct characters that divides the length
    for i in range(1, 27):
        if n % i == 0:
            x = n // i  # target frequency for each kept character
            y = 0
            for j in range(i):
                y += min(freq[j], x)
            answer = min(answer, n - y)
    return answer
if __name__ == "__main__":
    s = "BBC"
    print(minOperations(s))
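A couple of extra checks on assumed inputs (not part of the original statement):
print(minOperations("AABB"))  # already balanced -> 0
print(minOperations("AAAB"))  # one replacement suffices -> 1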
|
4,551 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling spectral energy distributions
This notebook illustrates how to use the sed module of nmmn. This module is very convenient for dealing with spectral energy distributions (SEDs)—the distributions of luminosity $\nu L_\nu$ as a function of $\nu$.
Often, we have different SEDs, for example the emission from different components of a flow, or spectra corresponding to different AGNs, and we want to perform operations on them. For example, we may want to sum them up, or average them, or compute the bolometric luminosity. This module does it all and this notebook illustrates some of the functionalities.
Step1: Read the SED data files
We will import model SEDs corresponding to the components of the flow appropriate for NGC 3031 as reported in Nemmen et al (2014)
Step2: The arrays of frequency and luminosity are available at riaf.lognu and riaf.ll. There are several other arrays as well.
Just to make sure we read them correctly, let's plot the SEDs
Step3: sed support several different types of input formats for the SEDs. Check out the methods erac, hayden, prieto and grmonty.
Compute the bolometric luminosity and other properties
Get the bolometric luminosity
Step4: Compute the X-ray luminosity in the 2-10 keV band. The first number is the luminosity and the second is the photon index
Step5: Radio-loudness (Kellermann criterion)
Step6: Extract the luminosity at a given value of $\log \nu$, using interpolation. In this case, we choose the frequency corresponding to 5 GHz. The numbers correspond to the nearest frequency ($\log \nu$) to the input, and the corresponding $L_\nu$.
Step7: Get $\alpha_{\rm ox}$—the power-law index of a straight line in log-log space connecting the optical to the X-rays
Step8: Add SEDs
Let's add the three SEDs. Interpolation is performed as required.
Step9: Let's see the resulting SED with the sum
Step10: Average
We can also easily compute the mean SED. In this example, I will compute the average of the RIAF and jet SEDs because they extend from radio to X-rays. m below is a SED object with the resulting average. sdm is the standard deviation for plotting the range of variation in the shape of the SED.
You have to choose the reference frequency at which they will be normalized to the same luminosity (default 1E40 erg/s). The parameter xray=True normalizes all the SEDs to have the same X-ray luminosity in the 2-10 keV band. | Python Code:
%pylab inline
import nmmn.sed as sed
Explanation: Handling spectral energy distributions
This notebook illustrates how to use the sed module of nmmn. This module is very convenient for dealing with spectral energy distributions (SEDs)—the distributions of luminosity $\nu L_\nu$ as a function of $\nu$.
Often, we have different SEDs, for example the emission from different components of a flow, or spectra corresponding to different AGNs, and we want to perform operations on them. For example, we may want to sum them up, or average them, or compute the bolometric luminosity. This module does it all and this notebook illustrates some of the functionalities.
End of explanation
riaf=sed.SED(file='ngc3031.adaf',logfmt=1)
thin=sed.SED(file='ngc3031.ssd',logfmt=1)
jet=sed.SED(file='ngc3031.jet',logfmt=1)
Explanation: Read the SED data files
We will import model SEDs corresponding to the components of the flow appropriate for NGC 3031 as reported in Nemmen et al (2014): a radiatively inefficient accretion flow (aka RIAF), a thin disk and a relativistic jet.
Each file consists of two columns which are $\log \nu$ (Hz) and $\log \nu L_\nu$ (erg/s), respectively. The logfmt parameter makes sure we read them correctly.
End of explanation
plot(riaf.lognu,riaf.ll,label='RIAF')
plot(thin.lognu,thin.ll,label='Thin disk')
plot(jet.lognu,jet.ll,label='Jet')
ylim(37,41)
xlabel('log($\\nu$ / Hz)')
ylabel('log($\\nu L_\\nu$ / erg s$^{-1}$)')
legend()
Explanation: The arrays of frequency and luminosity are available at riaf.lognu and riaf.ll. There are several other arrays as well.
Just to make sure we read them correctly, let's plot the SEDs
End of explanation
lumbol=riaf.bol()
f'Lbol = {lumbol:.1e}'
Explanation: sed supports several different types of input formats for the SEDs. Check out the methods erac, hayden, prieto and grmonty.
Compute the bolometric luminosity and other properties
Get the bolometric luminosity
End of explanation
riaf.xrays()
Explanation: Compute the X-ray luminosity in the 2-10 keV band. The first number is the luminosity and the second is the photon index
End of explanation
jet.radioloud()
Explanation: Radio-loudness (Kellermann criterion)
End of explanation
jet.findlum(9.7)
Explanation: Extract the luminosity at a given value of $\log \nu$, using interpolation. In this case, we choose the frequency corresponding to 5 GHz. The numbers correspond to the nearest frequency ($\log \nu$) to the input, and the corresponding $L_\nu$.
End of explanation
riaf.alphaox()
Explanation: Get $\alpha_{\rm ox}$—the power-law index of a straight line in log-log space connecting the optical to the X-rays
End of explanation
summed=sed.sum([riaf,jet,thin])
Explanation: Add SEDs
Let's add the three SEDs. Interpolation is performed as required.
End of explanation
plot(riaf.lognu,riaf.ll,label='RIAF')
plot(thin.lognu,thin.ll,label='Thin disk')
plot(jet.lognu,jet.ll,label='Jet')
plot(summed.lognu,summed.ll,label='Sum',lw=3)
ylim(37,41)
xlabel('log($\\nu$ / Hz)')
ylabel('log($\\nu L_\\nu$ / erg s$^{-1}$)')
legend()
Explanation: Let's see the resulting SED with the sum
End of explanation
[m,sdm]=riaf.mean(seds=[riaf,jet], refnlnu=1e40, xray=1)
plot(riaf.lognu,riaf.ll,label='RIAF')
plot(jet.lognu,jet.ll,label='Jet')
plot(m.lognu,m.ll,label='Average',lw=3)
ylim(37,41)
xlabel('log($\\nu$ / Hz)')
ylabel('log($\\nu L_\\nu$ / erg s$^{-1}$)')
legend()
Explanation: Average
We can also easily compute the mean SED. In this example, I will compute the average of the RIAF and jet SEDs because they extend from radio to X-rays. m below is a SED object with the resulting average. sdm is the standard deviation for plotting the range of variation in the shape of the SED.
You have to choose the reference frequency at which they will be normalized to the same luminosity (default 1E40 erg/s). The parameter xray=True normalizes all the SEDs to have the same X-ray luminosity in the 2-10 keV band.
End of explanation |
4,552 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IA369Z - Reprodutibilidade em Pesquisa Computacional.
Description of the code for the devices and the data collection
Code Client Device
ESP8266 running a program written in the Lua language.
Step1: Local server
Step2: Export the database from the dashboard for the IoT device
CSV file | Python Code:
-- Campainha IoT - LHC - v1.1
-- ESP Inicializa pinos, Configura e Conecta no Wifi, Cria conexão TCP
-- e na resposta de um "Tocou" coloca o ESP em modo DeepSleep para economizar bateria.
-- Se nenhuma resposta for recebida em 15 segundos coloca o ESP em DeepSleep.
led_pin = 3
status_led = gpio.LOW
ip_servidor = "192.168.1.10"
ip_campainha = "192.168.1.20"
voltagem=3333
function desliga_circuito()
print("Colocando ESP em Deep Sleep")
node.dsleep(0)
end
function read_voltage()
-- Desconecta do wifi para poder ler a voltagem de alimentação do ESP.
wifi.sta.disconnect()
voltagem = adc.readvdd33()
print("Voltagem: "..voltagem)
-- Inicializa o Wifi e conecta no servidor
print("Inicializando WiFi")
init_wifi()
end
function pisca_led()
gpio.write(led_pin, status_led)
if status_led == gpio.LOW then
status_led = gpio.HIGH
else
status_led = gpio.LOW
end
end
function init_pins()
gpio.mode(led_pin, gpio.OUTPUT)
gpio.write(led_pin, status_led)
end
function init_wifi()
wifi.setmode(wifi.STATION)
wifi.sta.config("SSID", "password")
wifi.sta.connect()
wifi.sta.setip({ip=ip_campainha,netmask="255.255.255.0",gateway="192.168.1.1"})
-- Aguarda conexão com Wifi antes de enviar o request.
function try_connect()
if (wifi.sta.status() == 5) then
tmr.stop(0)
print("Conectado, mandando request")
manda_request()
-- Se nenhuma confirmação for recebida em 15 segundos, desliga o ESP.
tmr.alarm(2,15000,0, desliga_circuito)
else
print("Conectando...")
end
end
tmr.alarm(0,1000,1, function() try_connect() end )
end
function manda_request()
tmr.alarm(1, 200, 1, pisca_led)
print("Request enviado")
-- Cria a conexão TCP
conn=net.createConnection(net.TCP,false)
-- Envia o toque da campainha e voltagem para o servidor
conn:on("connection", function(conn)
conn:send("GET /?bateria=" ..voltagem.. " HTTP/1.0\r\n\r\n")
end)
-- Se receber "Tocou" do servidor, desliga o ESP.
conn:on("receive", function(conn, data)
if data:find("Tocou") ~= nil then
desliga_circuito()
end
end)
-- Conectar no servidor
conn:connect(9999,ip_servidor)
end
print("Inicializando pinos")
init_pins()
print ("Lendo voltagem")
read_voltage()
Explanation: IA369Z - Reprodutibilidade em Pesquisa Computacional.
Description of the code for the devices and the data collection
Code Client Device
ESP8266 running a program written in the Lua language.
End of explanation
# !/usr/bin/python2
import time
import BaseHTTPServer
import os
import random
import string
import requests
from urlparse import parse_qs, urlparse
HOST_NAME = '0.0.0.0'
PORT_NUMBER = 9999
# A variável MP3_DIR será construida tendo como base o diretório HOME do usuário + Music/Campainha
# (e.g: /home/usuario/Music/Campainha)
MP3_DIR = os.path.join(os.getenv('HOME'), 'Music', 'Campainha')
VALID_CHARS = set(string.ascii_letters + string.digits + '_.')
CHAVE_THINGSPEAK = 'XYZ11ZYX99XYZ1XX'
# Salva o arquivo de log no diretório do usuário (e.g: /home/usuário/campainha.log)
ARQUIVO_LOG = os.path.join(os.getenv('HOME'), 'campainha.log')
def filtra(mp3):
if not mp3.endswith('.mp3'):
return False
for c in mp3:
if not c in VALID_CHARS:
return False
return True
def log(msg, output_file=None):
if output_file is None:
output_file = open(ARQUIVO_LOG, 'a')
output_file.write('%s: %s\n' % (time.asctime(), msg))
output_file.flush()
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(s):
s.send_header("Content-type", "text/plain")
query = urlparse(s.path).query
if not query:
s.send_response(404)
s.end_headers()
s.wfile.write('Not found')
return
components = dict(qc.split('=') for qc in query.split('&'))
if not 'bateria' in components:
s.send_response(404)
s.end_headers()
s.wfile.write('Not found')
return
s.send_response(200)
s.end_headers()
s.wfile.write('Tocou')
s.wfile.flush()
log("Atualizando thingspeak")
r = requests.post('https://api.thingspeak.com/update',
data={'api_key': CHAVE_THINGSPEAK, 'field1': components['bateria']})
log("Thingspeak retornou: %d" % r.status_code)
log("Tocando MP3")
mp3s = [f for f in os.listdir(MP3_DIR) if filtra(f)]
mp3 = random.choice(mp3s)
os.system("mpv " + os.path.join(MP3_DIR, mp3))
if __name__ == '__main__':
server_class = BaseHTTPServer.HTTPServer
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
log("Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER))
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
log("Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER))
Explanation: Local server: runs on the local network and plays the notification sound.
Python program
End of explanation
import numpy as np
import csv
with open('database.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
print ', '.join(row)
Explanation: Export the database from the dashboard for the IoT device
CSV file
End of explanation |
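An alternative sketch for loading the exported dashboard data uses pandas; the file name matches the snippet above, but the exact columns depend on the dashboard export, so treat the inspection below as an assumption.
# Sketch: load the exported CSV with pandas and inspect it
import pandas as pd
df = pd.read_csv('database.csv')
print(df.head())
print(df.columns.tolist())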
4,553 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Preparing the Dataset
The
MNIST dataset is normalized to [0,1] range as per the explanation here
Step2: Make the train and test datasets
Step3: Visualization
Data loader
Step4: Model
The model consists of 3 convolutional blocks, followed by 3 linear blocks, with ReLU activation in between. The MNIST image is fed to the 1st Conv block, and the ranbdom number (one hot) to the first linear block (alongwith the output of the 3rd Conv block, concatenated). | Python Code:
#pip install --force-reinstall torch==1.2.0 torchvision==0.4.0 -f https://download.pytorch.org/whl/torch_stable.html
%pylab inline
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset
import torch.utils.data.dataloader as dataloader
import torch.optim as optim
from torch.utils.data import TensorDataset
from torch.autograd import Variable
from torchvision import transforms
from torchvision.datasets import MNIST
from torchsummary import summary
SEED = 1
# CUDA?
use_cuda = torch.cuda.is_available()
# For reproducibility
torch.manual_seed(SEED)
if use_cuda:
torch.cuda.manual_seed(SEED)
device = torch.device("cuda" if use_cuda else "cpu")
Explanation: <a href="https://colab.research.google.com/github/vpw/AndroidForBeginners/blob/master/Ass2.5/END3_MNIST_addnumber_ass2_5.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Quick Start PyTorch - MNIST
To run a Code Cell you can click on the ⏯ Run button in the Navigation Bar above or type Shift + Enter
End of explanation
class MNIST_add_dataset(Dataset):
def __init__(self, trainset=True):
self.trainset=trainset
self.mnist_set = MNIST('./data', train=self.trainset, download=True,
transform=transforms.Compose([
transforms.ToTensor(), # ToTensor does min-max normalization.
transforms.Normalize((0.1307,), (0.3081,))
]), )
# list of random integers for adding - part of training set
self.rand_int_num_list = [np.random.randint(0,10) for i in range(len(self.mnist_set))]
#print("Rand int ",self.rand_int_num_list[0])
# convert it to 1 hot encoding for use in training
self.num_onehot = np.identity(10)[self.rand_int_num_list]
#print("Rand int one hot ",self.num_onehot[0])
# get the list of MNIST digits
self.digit_list = list(map(lambda x:[x[1]], self.mnist_set))
#print("MNIST digit ",self.digit_list[0], len(self.digit_list))
# get the final train target label by summing the MNIST label and the random number
# and get the binary (5 digit) representation of the sum of the MNIST digit and the random number
self.bin_sum_list = list(map(lambda x,y: list(map(int,f'{x[1]+y:05b}')), self.mnist_set, self.rand_int_num_list))
#print("Binary of sum ",self.bin_sum_list[0])
# set the target as a concatenation of the MNIST label and the binary encoding of the sum of the
# MNIST number and the random number
self.target = list(map(lambda x,y:np.concatenate((x,y)),self.digit_list,self.bin_sum_list))
#print("Target ",self.target[0])
def __getitem__(self, index):
# MNIST image input
image = self.mnist_set[index][0]
# One hot encoding of the random number
oh_num = torch.as_tensor(self.num_onehot[index],dtype=torch.float32)
# concatenated target
target = torch.tensor(self.target[index])
return ([image, oh_num],target)
def __len__(self):
return len(self.mnist_set)
Explanation: Preparing the Dataset
The
MNIST dataset is normalized to [0,1] range as per the explanation here: https://stackoverflow.com/questions/63746182/correct-way-of-normalizing-and-scaling-the-mnist-dataset
The dataset has the input MNIST image, a onehot encoding of a random number generated per test case, and the output which consists of the concatenation of the MNIST number (onehot) and a binary (decimal to binary) encoding of the sum. As the sum can vary from 0 to 19, 5 binary digits are needed, the dimension of the model output is hence 10+5 = 15 nodes. The target for MNIST is the MNIST label (instead of the one host encoding) so it is appended to the 5 digit sum.
End of explanation
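As a quick sanity check (a sketch, not part of the original notebook), one item from the dataset class defined above can be inspected to confirm the shapes:
# Inspect a single sample: ([image, one-hot random number], target)
check_set = MNIST_add_dataset(trainset=False)
(img, rand_onehot), target = check_set[0]
print(img.shape)          # torch.Size([1, 28, 28]) -> MNIST image
print(rand_onehot.shape)  # torch.Size([10])        -> one-hot random number
print(target.shape)       # torch.Size([6])         -> MNIST label + 5-bit binary sum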
train_set = MNIST_add_dataset(trainset=True)
test_set = MNIST_add_dataset(trainset=False)
Explanation: Make the train and test datasets
End of explanation
dataloader_args = dict(shuffle=True, batch_size=256,num_workers=4, pin_memory=True) if use_cuda else dict(shuffle=True, batch_size=64)
train_loader = dataloader.DataLoader(train_set, **dataloader_args)
test_loader = dataloader.DataLoader(test_set, **dataloader_args)
Explanation: Visualization
Data loader
End of explanation
class Model(nn.Module):
def __init__(self):
super(Model, self).__init__()
# conv layer 1
self.conv1 = nn.Sequential(
nn.Conv2d(1,16,5), # 16x24x24
nn.ReLU(),
nn.MaxPool2d(2,2) # 16x12x12
)
self.conv2 = nn.Sequential(
nn.Conv2d(16,32,5), # 32x8x8
nn.ReLU(),
nn.MaxPool2d(2,2) # 32x4x4
)
self.conv3 = nn.Sequential(
nn.Conv2d(32,10,3), # 10x2x2
nn.MaxPool2d(2,2) # 10x1x1
)
self.relu = nn.ReLU()
self.fc1 = nn.Linear(10+10, 60) # adding random number one hot to the 1x10 MNIST output
self.fc2 = nn.Linear(60, 30)
self.fc3 = nn.Linear(30, 15) # 10 for MNIST 1-hot coding, and 5 for binary repr of sum of digits
def forward(self, image, number):
#print("0 ",image.shape)
x = self.conv1(image)
#print("1 ",x.shape)
x = self.conv2(x)
#print("2 ",x.shape)
x = self.conv3(x)
#print("3 ",x.shape)
x = x.view(-1,10)
#print("after ",x.shape)
# concatenate the number
x = torch.cat((x,number),1)
x = self.fc1(x)
x = self.relu(x)
x = self.fc2(x)
x = self.relu(x)
x = self.fc3(x)
x = x.view(-1,15)
#print("In forward x shape ",x.shape)
# The first 10 outputs should be the onehot encoding of the MNIST digit
# using a Log softmax (with NLL Loss) for this
        o1 = F.log_softmax(x[:,:10], dim=1)
#print("In forward o1 shape ",o1.shape)
# for the 5 digit sum outout - as it is a multi-label classification, I am using a Sigmoid and not a softmax as there
# will be multiple 1's in the output
# used Hardsigmoid as it has a more sharp curve
sig = nn.Hardsigmoid()
o2 = sig(x[:,10:])
#print("In forward o2 shape ",o2.shape)
return torch.cat((o1,o2),1)
model = Model().to(device)
print(model)
# random test
model.forward(torch.rand((1, 1, 28, 28)).to(device), torch.rand((1, 10)).to(device))
def train(model, device, train_loader, optimizer, epoch, losses):
print(f"EPOCH - {epoch}")
model.train()
for batch_idx, (input, target) in enumerate(train_loader):
image, number, target = input[0].to(device), input[1].to(device), target.to(device)
# clear the grad computation
optimizer.zero_grad()
y_pred = model(image, number) # Passing batch
#print("Input shape ", input.shape)
#print(len(image))
#print("Image shape ", image.shape)
#print("Number shape ", number.shape)
#print("Target shape ",target.shape)
#print("Ypred shape ",y_pred.shape)
# Calculate loss
#print(target[:,0].shape)
#print(y_pred[:,:10])
# using 2 losses - one for the MNIST prediction and one for the sum (binary)
# using Negative log likelihood for the MNIST prediction as we used Log Softmax for the activation
loss_nll = nn.NLLLoss()
loss1 = loss_nll(y_pred[:,:10],target[:,0])
# Using Binary cross entropy for the binary sum representation
loss_bce = torch.nn.BCELoss()
loss2 = loss_bce(y_pred[:,10:].float(),target[:,1:].float())
# Total loss
loss=loss1+loss2
#print("Loss1 ",loss1.cpu().data.item())
#print("Loss2 ",loss2.cpu().data.item())
#print("Loss ",loss.cpu().data.item())
losses.append(loss.cpu().data.item())
# Backpropagation
loss.backward()
optimizer.step()
# Display
if batch_idx % 100 == 0:
print('\r Train Epoch: {}/{} \
[{}/{} ({:.0f}%)]\
\tAvg Loss: {:.6f}'.format(
epoch+1,
EPOCHS,
batch_idx * len(image),
len(train_loader.dataset),
100. * batch_idx / len(train_loader),
loss.cpu().data.item()/256),
end='')
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct_MNIST = 0
correct_sum = 0
correct = 0
with torch.no_grad(): # dont compute gradients
for data, target in test_loader:
image, number, target = data[0].to(device), data[1].to(device), target.to(device)
# get prediction
output = model(image, number)
#print(output.shape, target.shape)
#print("Output ",output,"\nTarget ", target)
# compute loss
loss_nll = nn.NLLLoss()
loss1 = loss_nll(output[:,:10],target[:,0]).item()
loss_bce = torch.nn.BCELoss()
loss2 = loss_bce(output[:,10:].float(),target[:,1:].float()).item()
loss = loss1+loss2
#print("Loss1 ",loss1.cpu().data.item())
#print("Loss2 ",loss2.cpu().data.item())
#print("Loss ",loss.cpu().data.item())
test_loss += loss # sum up batch loss
#pred = output.argmax(dim=1, keepdim=True) # get the index of the max log-probability
#correct += pred.eq(target.view_as(pred)).sum().item()
pred_MNIST = output[:,:10].argmax(dim=1, keepdim=True) # get the index of the max log-probability
correct_MNIST_list = pred_MNIST.eq(target[:,0].view_as(pred_MNIST))
correct_MNIST += correct_MNIST_list.sum().item()
pred_sum = output[:,10:]
correct_sum_list = pred_sum.eq(target[:,1:].view_as(pred_sum))
correct_sum += correct_sum_list.sum().item()
correct_list = torch.logical_and(correct_MNIST_list, correct_sum_list)
correct += correct_list.sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, \
Accuracy-MNIST: {}/{} ({:.0f}%)\t\
Accuracy-sum: {}/{} ({:.0f}%)\t\
Accuracy-total: {}/{} ({:.0f}%)\n'.format(
test_loss,
correct_MNIST, len(test_loader.dataset),
100. * correct_MNIST / len(test_loader.dataset),
correct_sum, len(test_loader.dataset),
100. * correct_sum / len(test_loader.dataset),
correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
model = Model().to(device)
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
EPOCHS=20
losses = []
for epoch in range(EPOCHS):
train(model, device, train_loader, optimizer, epoch, losses)
test(model, device, test_loader)
plot(losses)
Explanation: Model
The model consists of 3 convolutional blocks, followed by 3 linear blocks, with ReLU activation in between. The MNIST image is fed to the 1st Conv block, and the random number (one hot) to the first linear block (along with the output of the 3rd Conv block, concatenated).
End of explanation |
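One way to read the two heads of the output back out (a sketch, assuming the model has been trained as above): take the argmax of the first 10 outputs for the digit, and threshold the last 5 sigmoid outputs to rebuild the binary sum.
# Decode a single prediction (sketch)
model.eval()
(img, rand_oh), tgt = test_set[0]
with torch.no_grad():
    out = model(img.unsqueeze(0).to(device), rand_oh.unsqueeze(0).to(device))
pred_digit = out[0, :10].argmax().item()
pred_sum = int(''.join(str(int(b > 0.5)) for b in out[0, 10:]), 2)
print('digit:', pred_digit, 'sum:', pred_sum, 'target:', tgt.tolist())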
4,554 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EX 4
Step1: (2) Prepare the test data
Use make_classification to generate classification data; n_features=2 means there are two features, and n_informative=2 means two of the features are informative
The generated X
Step2: (3) Test the classifiers and plot the results
The following block of code has two for loops: the outer loop iterates over the three datasets, and the inner loop iterates over all the classifiers.
For brevity, the code is summarized as follows:
1. Outer loop: the data loop. First plot the data distribution, then pass the data into the classifier loop
python
for ds in datasets | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.cross_validation import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Decision Tree",
"Random Forest", "AdaBoost", "Naive Bayes", "Linear Discriminant Ana.",
"Quadratic Discriminant Ana."]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
AdaBoostClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis()]
Explanation: EX 4: Classifier comparison
The main goals of this example are to
* compare various classifiers
* use plots to observe each classifier's decision boundaries and regions
(1) Import functions and prepare the classifiers
After importing the classifiers, store them in a list
Note that sklearn.discriminant_analysis requires sklearn 0.17 or above to run
End of explanation
X, y = make_classification(n_features=2, n_redundant=0, n_informative=2,
random_state=1, n_clusters_per_class=1)
rng = np.random.RandomState(2)
X += 2 * rng.uniform(size=X.shape)
linearly_separable = (X, y)
datasets = [make_moons(noise=0.3, random_state=0),
make_circles(noise=0.2, factor=0.5, random_state=1),
linearly_separable
]
Explanation: (2) Prepare the test data
Use make_classification to generate classification data; n_features=2 means there are two features, and n_informative=2 means two of the features are informative
The generated X is a 100 x 2 matrix and y is a vector of 100 elements; y only takes the values 0 or 1, representing the two classes
After adding moderate noise with X += 2 * rng.uniform(size=X.shape), the (X, y) dataset is named linearly_separable
Finally, make_moons() and make_circles() are used to generate moon-shaped and circular data distributions, which are stored together in the datasets variable
End of explanation
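A quick check of what was generated (a sketch): each entry in datasets is an (X, y) pair with 100 samples and two features.
for X_ds, y_ds in datasets:
    print(X_ds.shape, y_ds.shape)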
%matplotlib inline
figure = plt.figure(figsize=(30,20), dpi=300)
i = 1
# iterate over datasets
for ds in datasets:
# preprocess dataset, split into training and test part
X, y = ds
X = StandardScaler().fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
# just plot the dataset first
cm = plt.cm.RdBu
cm_bright = ListedColormap(['#FF0000', '#0000FF'])
ax = plt.subplot(len(datasets), (len(classifiers) + 1)//2, i)
# Plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
i += 1
# iterate over classifiers
for name, clf in zip(names[0:4], classifiers[0:4]):
ax = plt.subplot(len(datasets), (len(classifiers) + 1)//2, i)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
# Plot also the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# and testing points
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright,
alpha=0.6)
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(name,fontsize=28)
ax.text(xx.max() - .3, yy.min() + .3, ('%.2f' % score).lstrip('0'),
size=30, horizontalalignment='right')
i += 1
figure.subplots_adjust(left=.02, right=.98)
plt.show()
Explanation: (3) Test the classifiers and plot the results
The following block of code has two for loops: the outer loop iterates over the three datasets, and the inner loop iterates over all the classifiers.
For brevity, the code is summarized as follows:
1. Outer loop: the data loop. First plot the data distribution, then pass the data into the classifier loop
python
for ds in datasets:
X, y = ds
# scale the feature values into a specific range
X = StandardScaler().fit_transform(X)
# use train_test_split to split the data into a training set and a test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.4)
# build a mesh grid of points to evaluate the classifiers over a wide range; example EX 3 explains this usage in detail
xx, yy = np.meshgrid(..........omitted)
# plot the training points
ax.scatter(X_train[:, 0], X_train[:, 1], c=y_train, cmap=cm_bright)
# plot the test points, using alpha=0.6 to draw them more faintly
ax.scatter(X_test[:, 0], X_test[:, 1], c=y_test, cmap=cm_bright, alpha=0.6)
2. Inner loop: the classifier loop. Evaluate the classification accuracy and draw the decision boundaries and regions
```python
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
if hasattr(clf, "decision_function"):
Z = clf.decision_function(np.c_[xx.ravel(), yy.ravel()])
else:
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:, 1]
# Put the result into a color plot
Z = Z.reshape(xx.shape)
ax.contourf(xx, yy, Z, cmap=cm, alpha=.8)
```
For display purposes, I changed the inner loop of the original code to for name, clf in zip(names[0:4], classifiers[0:4]): so that only the first four classifiers are run.
End of explanation |
4,555 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate a Cubic Lattice with an Interpenetrating Dual Cubic Lattice
OpenPNM offers several options for generating dual networks. This tutorial will outline the use of the basic CubicDual class, while the DelaunayVoronoiDual is covered elsewhere. The main motivation for creating these dual networks is to enable the modeling of transport in the void phase on one network and through the solid phase on the other. These networks are interpenetrating but not overlapping or coincident so it makes the topology realistic or at least consistent. Moreover, these networks are interconnected to each other so they can exchange quantities between them, such as gas-solid heat transfer. The tutorial below outlines how to setup a CubicDual network object, describes the combined topology, and explains how to use labels to access different parts of the network.
As usual start by importing Scipy and OpenPNM
Step1: Let's create a CubicDual and visualize it in Paraview
Step2: The resulting network has two sets of pores, labelled as blue and red in the image below. By default, the main cubic lattice is referred to as the 'primary' network which is colored blue and is defined by the shape argument, and the interpenetrating dual is referred to as the 'secondary' network shown in red. These names are used to label the pores and throats associated with each network. These names can be changed by sending label_1 and label_2 arguments during initialization. The throats connecting the 'primary' and 'secondary' pores are labelled 'interconnect', and they can be seen as the diagonal connections below.
<img src="https
Step3: Inspection of this image shows that the 'primary' pores are located at expected locations for a cubic network including on the faces of the cube, and 'secondary' pores are located at the interstitial locations. There is one important nuance to note
Step4: Now that this topology is created, the next step would be to create Geometry objects for each network, and an additional one for the 'interconnect' throats | Python Code:
import scipy as sp
import numpy as np
import openpnm as op
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(10)
wrk = op.Workspace() # Initialize a workspace object
wrk.settings['loglevel'] = 50
Explanation: Generate a Cubic Lattice with an Interpenetrating Dual Cubic Lattice
OpenPNM offers several options for generating dual networks. This tutorial will outline the use of the basic CubicDual class, while the DelaunayVoronoiDual is covered elsewhere. The main motivation for creating these dual networks is to enable the modeling of transport in the void phase on one network and through the solid phase on the other. These networks are interpenetrating but not overlapping or coincident so it makes the topology realistic or at least consistent. Moreover, these networks are interconnected to each other so they can exchange quantities between them, such as gas-solid heat transfer. The tutorial below outlines how to setup a CubicDual network object, describes the combined topology, and explains how to use labels to access different parts of the network.
As usual start by importing Scipy and OpenPNM:
End of explanation
net = op.network.CubicDual(shape=[6, 6, 6])
Explanation: Let's create a CubicDual and visualize it in Paraview:
End of explanation
#NBVAL_IGNORE_OUTPUT
from openpnm.topotools import plot_connections, plot_coordinates
fig1 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
#NBVAL_IGNORE_OUTPUT
fig2 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig2 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig2, c='r')
#NBVAL_IGNORE_OUTPUT
fig3 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig3 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig3, c='r')
fig3 = plot_connections(network=net, throats=net.throats('primary'), fig=fig3, c='b')
#NBVAL_IGNORE_OUTPUT
fig4 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig4 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig4, c='r')
fig4 = plot_connections(network=net, throats=net.throats('primary'), fig=fig4, c='b')
fig4 = plot_connections(network=net, throats=net.throats('secondary'), fig=fig4, c='r')
#NBVAL_IGNORE_OUTPUT
fig5 = plot_coordinates(network=net, pores=net.pores('primary'), c='b')
fig5 = plot_coordinates(network=net, pores=net.pores('secondary'), fig=fig5, c='r')
fig5 = plot_connections(network=net, throats=net.throats('primary'), fig=fig5, c='b')
fig5 = plot_connections(network=net, throats=net.throats('secondary'), fig=fig5, c='r')
fig5 = plot_connections(network=net, throats=net.throats('interconnect'), fig=fig5, c='g')
Explanation: The resulting network has two sets of pores, labelled as blue and red in the image below. By default, the main cubic lattice is referred to as the 'primary' network which is colored blue and is defined by the shape argument, and the interpenetrating dual is referred to as the 'secondary' network shown in red. These names are used to label the pores and throats associated with each network. These names can be changed by sending label_1 and label_2 arguments during initialization. The throats connecting the 'primary' and 'secondary' pores are labelled 'interconnect', and they can be seen as the diagonal connections below.
<img src="https://i.imgur.com/3KRduQh.png" style="width: 60%" align="left"/>
The topotools module of openpnm also has handy visualization functions which can be used to consecutively build a picture of the network connections and coordinates.
Replace %matplotlib inline with %matplotlib notebook for 3D interactive plots.
End of explanation
print(f"No. of primary pores: {net.num_pores('primary')}")
print(f"No. of secondary pores: {net.num_pores('secondary')}")
print(f"No. of primary throats: {net.num_throats('primary')}")
print(f"No. of secondary throats: {net.num_throats('secondary')}")
print(f"No. of interconnect throats: {net.num_throats('interconnect')}")
Explanation: Inspection of this image shows that the 'primary' pores are located at expected locations for a cubic network including on the faces of the cube, and 'secondary' pores are located at the interstitial locations. There is one important nuance to note: some of the 'secondary' pores are also on the faces, and are offset 1/2 a lattice spacing from the internal 'secondary' pores. This means that each face of the network is a staggered tiling of 'primary' and 'secondary' pores.
The 'primary' and 'secondary' pores are connected to themselves in a standard 6-connected lattice, and connected to each other in the diagonal directions. Unlike a regular Cubic network, it is not possible to specify more elaborate connectivity in the CubicDual networks since the throats of each network would be conceptually entangled. The figure below shows the connections in the secondary (left), and primary (middle) networks, as well as the interconnections between them (right).
Using the labels it is possible to query the number of each type of pore and throat on the network:
End of explanation
geo_pri = op.geometry.GenericGeometry(network=net,
pores=net.pores('primary'),
throats=net.throats('primary'))
geo_sec = op.geometry.GenericGeometry(network=net,
pores=net.pores('secondary'),
throats=net.throats('secondary'))
geo_inter = op.geometry.GenericGeometry(network=net,
throats=net.throats('interconnect'))
Explanation: Now that this topology is created, the next step would be to create Geometry objects for each network, and an additional one for the 'interconnect' throats:
End of explanation |
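For example, simple size data could be assigned to each Geometry object; the values below are arbitrary placeholders just to illustrate the idea (a real workflow would attach pore-scale models instead).
# Sketch: assign placeholder geometric properties to each Geometry object
geo_pri['pore.diameter'] = np.random.rand(geo_pri.Np) * 1e-5
geo_sec['pore.diameter'] = np.random.rand(geo_sec.Np) * 1e-5
geo_inter['throat.diameter'] = np.random.rand(geo_inter.Nt) * 1e-6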
4,556 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Single-layer Perceptron For MNIST Dataset
Load the dataset
Step1: Visualize a sample subset of data
Step2: Side Note
Step3: Tensorflow Session
Step4: Evaluating the model | Python Code:
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist_data = input_data.read_data_sets('/tmp/data', one_hot=True)
Explanation: Single-layer Perceptron For MNIST Dataset
Load the dataset
End of explanation
## Visualize a sample subset of data
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
f,a = plt.subplots(5,10,figsize=(10,5))
for i in range(5):
for j in range(10):
        index = i*10 + j
a[i][j].imshow(np.reshape(mnist_data.test.images[index],(28,28)), cmap='Greys_r')
f.show()
Explanation: Visualize a sample subset of data
End of explanation
## set learning parameters
learning_rate = 0.5
batch_size = 128
trainig_iters = 2000
dispay_step = 20
# set network parameters
num_weights = 32
num_dims = 784 ## number of input pixels
num_classes = 10
num_layers = 1 ## number of hidden layers
# create placeholders for data
x = tf.placeholder(tf.float32, [None, num_dims])
y_ = tf.placeholder(tf.float32, [None, num_classes])
#### 2-D tensor of floating-point numbers, with a shape [None, 784].
#### --> None means that a dimension can be of any length
#### --> placeholder x stores a batch of data samples
#### --> placeholder y_ holds the true (one-hot encoded) labels
## define weights: intiailize using
weights = tf.Variable(tf.truncated_normal([num_dims, num_classes],
mean=0, stddev=1.0/num_dims))
biases = tf.Variable(tf.zeros(shape=[num_classes]))
# --> initial weights are normally distributed, with sigma=(1/n)
## define the model (network): keep the raw logits so the loss can use them
logits = tf.matmul(x, weights) + biases
y = tf.nn.softmax(logits)
## define the loss-function: Cross-Entropy Loss Function
### One way to define the loss is as follows
### (but it is numerically unstable and should be avoided)
# cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
#### --> tf.reduce_sum adds the elements in the dimension specified by reduction_indices
#### --> tf.reduce_mean computes the mean over all the examples in the batch
## Instead, we use tf.nn.softmax_cross_entropy_with_logits on the raw logits
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
## Training:
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)
Explanation: Side Note: What are Tensors?
A vector is one-dimensional
A matrix is 2-dimensional
A tensor can have $k$ dimensions
$$\left[\begin{array}{ccc} & & & \\ & & & \\ & & & \end{array}\right]_{n_1\times n_2 \times ... n_k}$$
Read about tensors: Animashree Anandkumar
Set Network and Learning Parameters
End of explanation
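As a tiny illustration of the side note above (a sketch): a rank-3 tensor simply has three dimensions.
example_tensor = tf.zeros([2, 3, 4])   # a rank-3 tensor
print(example_tensor.get_shape())      # (2, 3, 4)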
## define initialization of variables
init = tf.initialize_all_variables()
## start a Tensorflow session and intitalize variables
sess = tf.Session()
sess.run(init)
losses = []
for i in range(trainig_iters):
batch_xs, batch_ys = mnist_data.train.next_batch(batch_size)
##sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
_, loss_val = sess.run([train_step, cross_entropy],
feed_dict={x: batch_xs, y_: batch_ys})
losses.append(loss_val)
fig = plt.figure(figsize=(10,5))
plt.plot(np.arange(len(losses)), losses)
plt.show()
Explanation: Tensorflow Session
End of explanation
correct_prediction = tf.equal(tf.argmax(y,dimension=1), tf.argmax(y_,dimension=1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist_data.test.images, y_: mnist_data.test.labels}))
with sess.as_default():
W = weights.eval()
fig,ax = plt.subplots(2,5,figsize=(20,8))
for i in range(10):
ax[i/5][i%5].imshow(np.reshape(W[:,i], (28,28)), cmap='Greys_r')
fig.show()
Explanation: Evaluating the model
End of explanation |
4,557 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimization Exercise 1
Imports
Step1: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential"
Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$
Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beatiful and effective. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Optimization Exercise 1
Imports
End of explanation
def hat(x,a,b):
V=-a*x**2+b*x**4
return V
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
a = 5.0
b = 1.0
x=np.linspace(-3,3,100)
plt.plot(x,hat(x,a,b))
plt.xlabel('x')
plt.ylabel('y')
plt.title('Hat Potential')
assert True # leave this to grade the plot
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
x=np.linspace(-3,3,100)
x1=opt.minimize(hat,1,args=(a,b)).x
x2=opt.minimize(hat,-1,args=(a,b)).x
print(x1,x2)
plt.plot(x,hat(x,a,b),'b-')
plt.plot(x1,hat(x1,a,b),'ro')
plt.plot(x2,hat(x2,a,b),'ro')
plt.box(False)
plt.axvline(0,color='black')
plt.axhline(0,color='black')
plt.axvline(0)
assert True # leave this for grading the plot
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beatiful and effective.
End of explanation |
4,558 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cauchy noise inference example
The Cauchy distribution (a Student-t with 1 degree of freedom) has fatter tails than the normal, meaning that it can be a better approximation to the noise process in some time series models; when there is a high degree of uncertainty regarding the underlying process, leading to a large standard deviation in a model's errors. This case study explains to the user the differences between the Cauchy and the more popular normal distribution and shows how to estimate a model with Cauchy errors in Pints.
Plot Cauchy probability density function versus a normal.
Step1: Compare a Cauchy error process with a normal error process for the logistic model.
Step2: Specify a model using a Cauchy error process and use adaptive covariance to fit it to data. | Python Code:
import pints
import pints.toy as toy
import pints.plot
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
x = np.linspace(-15, 15, 1000)
y_c = scipy.stats.t.pdf(x, 1, loc=0, scale=1)
y_t = scipy.stats.t.pdf(x, 3, loc=0, scale=1)
y_norm = scipy.stats.norm.pdf(x, 0, 3)
plt.plot(x, y_c, label ='Cauchy(0, 1)')
plt.plot(x, y_t, label ='Student-t(df=3, scale=1)')
plt.plot(x, y_norm, label ='Gaussian(0, 3)')
plt.xlabel('x')
plt.ylabel('Probability density')
plt.legend()
plt.show()
Explanation: Cauchy noise inference example
The Cauchy distribution (a Student-t with 1 degree of freedom) has fatter tails than the normal, meaning that it can be a better approximation to the noise process in some time series models; when there is a high degree of uncertainty regarding the underlying process, leading to a large standard deviation in a model's errors. This case study explains to the user the differences between the Cauchy and the more popular normal distribution and shows how to estimate a model with Cauchy errors in Pints.
Plot Cauchy probability density function versus a normal.
End of explanation
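The fatter tails can also be checked numerically (a sketch): the probability mass beyond |x| > 10 under each of the three densities plotted above.
print(2 * scipy.stats.t.sf(10, 1, loc=0, scale=1))   # Cauchy(0, 1)
print(2 * scipy.stats.t.sf(10, 3, loc=0, scale=1))   # Student-t(df=3, scale=1)
print(2 * scipy.stats.norm.sf(10, 0, 3))             # Gaussian(0, 3)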
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
real_parameters = [0.015, 500]
times = np.linspace(0, 1000, 100)
signal_values = model.simulate(real_parameters, times)
# Add Cauchy noise
nu = 1
sigma = 10
observed_values_t = signal_values + scipy.stats.t.rvs(df=nu, loc=0, scale=sigma, size=signal_values.shape)
observed_values_norm = signal_values + scipy.stats.norm.rvs(loc=0, scale=sigma, size=signal_values.shape)
real_parameters = np.array(real_parameters + [sigma])
# Plot
fig = plt.figure(figsize=(12, 6))
plt.subplot(121)
plt.plot(times,signal_values,label = 'signal')
plt.plot(times,observed_values_t,label = 'observed')
plt.xlabel('Time')
plt.ylabel('Values')
plt.title('Cauchy errors')
plt.legend()
plt.subplot(122)
plt.plot(times,signal_values,label = 'signal')
plt.plot(times,observed_values_norm,label = 'observed')
plt.xlabel('Time')
plt.ylabel('Values')
plt.title('Gaussian errors')
plt.legend()
plt.show()
Explanation: Compare a Cauchy error process with a normal error process for the logistic model.
End of explanation
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, observed_values_t)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.CauchyLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, sigma*0.1],
[0.02, 600, sigma*100]
)
# Create a posterior log-likelihood (log(likelihood * prior))
log_posterior = pints.LogPosterior(log_likelihood, log_prior)
# Choose starting points for 3 mcmc chains
xs = [
real_parameters * 1.1,
real_parameters * 0.9,
real_parameters * 1.0,
]
# Create mcmc routine
mcmc = pints.MCMCController(log_posterior, 3, xs, method=pints.HaarioBardenetACMC)
# Add stopping criterion
mcmc.set_max_iterations(2000)
# Start adapting after 1000 iterations
mcmc.set_initial_phase_iterations(250)
# Disable logging mode
mcmc.set_log_to_screen(False)
# Run!
print('Running...')
chains = mcmc.run()
print('Done!')
# Show traces and histograms
pints.plot.trace(chains)
# Discard warm up
chains = chains[:, 1000:, :]
# Check convergence and other properties of chains
results = pints.MCMCSummary(chains=chains, time=mcmc.time(), parameter_names=['growth rate', 'capacity', 'sigma'])
print(results)
# Look at distribution in chain 0
pints.plot.pairwise(chains[0], kde=True, ref_parameters=real_parameters)
# Show graphs
plt.show()
Explanation: Specify a model using a Cauchy error process and use adaptive covariance to fit it to data.
End of explanation |
4,559 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DataFrame object
Create SparkContext and SparkSession
Step1: Create a DataFrame object
Creat DataFrame by reading a file
Step2: Create DataFrame with createDataFrame function
From an RDD
Elements in RDD has to be an Row object
Step3: From pandas DataFrame
Step4: From a list
Each element in the list becomes an Row in the DataFrame.
Step5: The following code generates a DataFrame consisting of two columns, each column is a vector column.
Why vector columns are generated in this case?
In this case, the list my_list has only one element, a tuple. Therefore, the DataFrame has only one row. This tuple has two elements. Therefore, it generates a two-columns DataFrame. Each element in the tuple is a list, so the resulting columns are vector columns. | Python Code:
# create entry points to spark
try:
sc.stop()
except:
pass
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
sc=SparkContext()
spark = SparkSession(sparkContext=sc)
Explanation: DataFrame object
Create SparkContext and SparkSession
End of explanation
mtcars = spark.read.csv(path='../../data/mtcars.csv',
sep=',',
encoding='UTF-8',
comment=None,
header=True,
inferSchema=True)
mtcars.show(n=5, truncate=False)
Explanation: Create a DataFrame object
Create a DataFrame by reading a file
End of explanation
from pyspark.sql import Row
rdd = sc.parallelize([
Row(x=[1,2,3], y=['a','b','c']),
Row(x=[4,5,6], y=['e','f','g'])
])
rdd.collect()
df = spark.createDataFrame(rdd)
df.show()
Explanation: Create DataFrame with createDataFrame function
From an RDD
Elements in the RDD have to be Row objects
End of explanation
import pandas as pd
pdf = pd.DataFrame({
'x': [[1,2,3], [4,5,6]],
'y': [['a','b','c'], ['e','f','g']]
})
pdf
df = spark.createDataFrame(pdf)
df.show()
Explanation: From pandas DataFrame
End of explanation
my_list = [['a', 1], ['b', 2]]
df = spark.createDataFrame(my_list, ['letter', 'number'])
df.show()
df.dtypes
my_list = [['a', 1], ['b', 2]]
df = spark.createDataFrame(my_list, ['my_column'])
df.show()
df.dtypes
Explanation: From a list
Each element in the list becomes a Row in the DataFrame.
End of explanation
my_list = [(['a', 1], ['b', 2])]
df = spark.createDataFrame(my_list, ['x', 'y'])
df.show()
Explanation: The following code generates a DataFrame consisting of two columns, each of which is a vector column.
Why are vector columns generated in this case?
In this case, the list my_list has only one element, a tuple, so the DataFrame has only one row. The tuple has two elements, so it generates a two-column DataFrame. Each element in the tuple is a list, so the resulting columns are vector columns.
End of explanation |
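To confirm the column types (a quick sketch), print the schema: both columns come out as array columns.
df.printSchema()
df.dtypes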
4,560 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cahn-Hilliard with Primtive and Legendre Bases
This example uses a Cahn-Hilliard model to compare two different bases representations to discretize the microstructure. One basis representaion uses the primitive (or hat) basis and the other uses Legendre polynomials. The example includes the background theory about using Legendre polynomials as a basis in MKS. The MKS with two different bases are compared with the standard spectral solution for the Cahn-Hilliard solution at both the calibration domain size and a scaled domain size.
Cahn-Hilliard Equation
The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodial decomposition and has the following form,
$$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$
where $\phi$ is a conserved ordered parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions, see Chang and Rutenberg for more details.
Basis Functions for the Microstructure Function and Influence Function
In this example, we will explore the differences when using the
Legendre polynomials as the basis function compared to the primitive
(or hat) basis for the microstructure function and the influence coefficients.
For more information about both of these basis please see the theory section.
Step1: Modeling with MKS
Generating Calibration Datasets
Because the microstructure is a continuous field that can have a range of values and changes over time, the first order influence coefficients cannot be calibrated with delta microstructures. Instead, a large number of simulations with random initial conditions will be used to calibrate the first order influence coefficients using linear regression. Let's show how this is done.
The function make_cahnHilliard from pymks.datasets provides a nice interface to generate calibration datasets for the influence coefficients. The function make_cahnHilliard requires the number of calibration samples, given by n_samples, and the size and shape of the domain, given by size.
Step2: The function make_cahnHilliard has generated n_samples number of random microstructures, X, and returned the same microstructures after they have evolved for one time step, given by y. Let's take a look at one of them.
Step3: Calibrate Influence Coefficients
In this example, we compare the difference between using the primitive (or hat) basis and the Legendre polynomial basis to represent the microstructure function. As mentioned above, the microstructures (concentration fields) are not discrete phases. This leaves the number of local states in local state space n_states as a free hyperparameter. In the next section, we look to see what a practical number of local states for bases would be.
Optimizing the Number of Local States
Below, we compare the difference in performance, as we vary the local state, when we choose the primitive basis and the Legendre polynomial basis.
The (X, y) sample data is split into training and test data. The code then optimizes n_states between 2 and 11 and the two bases with the params_to_tune variable. The GridSearchCV takes an MKSLocalizationModel instance, a scoring function (figure of merit) and params_to_tune, and then finds the optimal parameters with a grid search.
Step4: The optimal parameters are the LegendreBasis with only 4 local states. More terms don't improve the R-squared value.
Step5: As you can see, the LegendreBasis converges faster than the PrimitiveBasis. In order to further compare performance between the two models, let's select 4 local states for both bases.
Comparing the Bases for n_states=4
Step6: Now let's look at the influence coefficients for both bases.
First, the PrimitiveBasis influence coefficients
Step7: Now, the LegendreBasis influence coefficients
Step8: Now, let's do some simulations with both sets of coefficients and compare the results.
Predict Microstructure Evolution
In order to compare the difference between the two bases, we need to have the Cahn-Hilliard simulation and the two MKS models start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation, we need an instance of the class CahnHilliardSimulation.
Step9: Let's look at the initial concentration field.
Step10: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS models.
Step11: Let's take a look at the concentration fields.
Step12: Just by looking at the three microstructures, it is difficult to see any differences. Below, we plot the difference between the two MKS models and the simulation.
Step13: The LegendreBasis basis clearly outperforms the PrimitiveBasis for the same value of n_states.
Resizing the Coefficients to use on Larger Systems
Below we compare the bases after the coefficients are resized.
Step14: Let's take a look at the initial large concentration field.
Step15: Let's look at the resized coefficients.
First, the influence coefficients from the PrimitiveBasis.
Step16: Now, the influence coefficients from the LegendreBases.
Step17: Once again, we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS models.
Step18: Both MKS models seem to predict the concentration faily well. However, the Legendre polynomial basis looks to be better. Again, let's look at the difference between the simulation and the MKS models. | Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
Explanation: Cahn-Hilliard with Primitive and Legendre Bases
This example uses a Cahn-Hilliard model to compare two different basis representations used to discretize the microstructure. One basis representation uses the primitive (or hat) basis and the other uses Legendre polynomials. The example includes the background theory about using Legendre polynomials as a basis in MKS. The MKS models with the two different bases are compared with the standard spectral solution of the Cahn-Hilliard equation at both the calibration domain size and a scaled domain size.
Cahn-Hilliard Equation
The Cahn-Hilliard equation is used to simulate microstructure evolution during spinodal decomposition and has the following form,
$$ \dot{\phi} = \nabla^2 \left( \phi^3 - \phi \right) - \gamma \nabla^4 \phi $$
where $\phi$ is a conserved order parameter and $\sqrt{\gamma}$ represents the width of the interface. In this example, the Cahn-Hilliard equation is solved using a semi-implicit spectral scheme with periodic boundary conditions; see Chang and Rutenberg for more details.
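For intuition about the time stepping, here is a minimal sketch of one common semi-implicit spectral splitting (a rough illustration on a unit-spaced periodic grid, not the scheme used inside pymks or by Chang and Rutenberg; the dt and gamma defaults are illustrative):
import numpy as np

def cahn_hilliard_step(phi, dt=1e-2, gamma=1.0):
    # One update of d(phi)/dt = lap(phi**3 - phi) - gamma * lap**2(phi):
    # the nonlinear term is treated explicitly, the stiff fourth-order term implicitly.
    kx = 2 * np.pi * np.fft.fftfreq(phi.shape[0])
    ky = 2 * np.pi * np.fft.fftfreq(phi.shape[1])
    k2 = kx[:, None]**2 + ky[None, :]**2
    phi_hat = np.fft.fft2(phi)
    nonlinear_hat = np.fft.fft2(phi**3 - phi)
    phi_hat = (phi_hat - dt * k2 * nonlinear_hat) / (1.0 + dt * gamma * k2**2)
    return np.real(np.fft.ifft2(phi_hat))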
Basis Functions for the Microstructure Function and Influence Function
In this example, we will explore the differences when using the
Legendre polynomials as the basis function compared to the primitive
(or hat) basis for the microstructure function and the influence coefficients.
For more information about both of these bases, please see the theory section.
End of explanation
import pymks
from pymks.datasets import make_cahn_hilliard
length = 41
n_samples = 400
dt = 1e-2
np.random.seed(101)
size=(length, length)
X, y = make_cahn_hilliard(n_samples=n_samples, size=size, dt=dt)
Explanation: Modeling with MKS
Generating Calibration Datasets
Because the microstructure is a continuous field that can have a range of values and changes over time, the first order influence coefficients cannot be calibrated with delta microstructures. Instead, a large number of simulations with random initial conditions will be used to calibrate the first order influence coefficients using linear regression. Let's show how this is done.
The function make_cahn_hilliard from pymks.datasets provides a nice interface to generate calibration datasets for the influence coefficients. The function make_cahn_hilliard requires the number of calibration samples, given by n_samples, and the size and shape of the domain, given by size.
End of explanation
from pymks.tools import draw_concentrations
draw_concentrations((X[0], y[0]),('Calibration Input', 'Calibration Output'))
Explanation: The function make_cahn_hilliard has generated n_samples random microstructures, X, and returned the same microstructures after they have evolved for one time step, given by y. Let's take a look at one of them.
End of explanation
from pymks.bases import PrimitiveBasis
from sklearn.grid_search import GridSearchCV
from sklearn import metrics
mse = metrics.mean_squared_error
from pymks.bases import LegendreBasis
from pymks import MKSLocalizationModel
from sklearn.cross_validation import train_test_split
train_split_shape = (X.shape[0],) + (np.prod(X.shape[1:]),)
X_train, X_test, y_train, y_test = train_test_split(X.reshape(train_split_shape),
y.reshape(train_split_shape),
test_size=0.5, random_state=3)
prim_basis = PrimitiveBasis(2, [-1, 1])
leg_basis = LegendreBasis(2, [-1, 1])
params_to_tune = {'n_states': np.arange(2, 11),
'basis': [prim_basis, leg_basis]}
Model = MKSLocalizationModel(prim_basis)
scoring = metrics.make_scorer(lambda a, b: -mse(a, b))
fit_params = {'size': size}
gs = GridSearchCV(Model, params_to_tune, cv=5, fit_params=fit_params, n_jobs=3).fit(X_train, y_train)
Explanation: Calibrate Influence Coefficients
In this example, we compare the difference between using the primitive (or hat) basis and the Legendre polynomial basis to represent the microstructure function. As mentioned above, the microstructures (concentration fields) are not discrete phases. This leaves the number of local states in local state space n_states as a free hyperparameter. In the next section, we look to see what a practical number of local states for bases would be.
Optimizing the Number of Local States
Below, we compare the difference in performance, as we vary the local state, when we choose the primitive basis and the Legendre polynomial basis.
The (X, y) sample data is split into training and test data. The code then optimizes n_states between 2 and 11 and the two bases with the params_to_tune variable. The GridSearchCV takes an MKSLocalizationModel instance, a scoring function (figure of merit) and params_to_tune, and then finds the optimal parameters with a grid search.
End of explanation
print(gs.best_estimator_)
print(gs.score(X_test, y_test))
from pymks.tools import draw_gridscores
lgs = [x for x in gs.grid_scores_ \
if type(x.parameters['basis']) is type(leg_basis)]
pgs = [x for x in gs.grid_scores_ \
if type(x.parameters['basis']) is type(prim_basis)]
draw_gridscores([lgs, pgs], 'n_states', data_labels=['Legendre', 'Primitive'],
colors=['#f46d43', '#1a9641'], score_label='R-Squared',
param_label = 'L - Total Number of Local States')
Explanation: The optimal parameters are the LegendreBasis with only 4 local states. More terms don't improve the R-squared value.
End of explanation
prim_basis = PrimitiveBasis(n_states=4, domain=[-1, 1])
prim_model = MKSLocalizationModel(basis=prim_basis)
prim_model.fit(X, y)
leg_basis = LegendreBasis(4, [-1, 1])
leg_model = MKSLocalizationModel(basis=leg_basis)
leg_model.fit(X, y)
Explanation: As you can see, the LegendreBasis converges faster than the PrimitiveBasis. In order to further compare performance between the two models, let's select 4 local states for both bases.
Comparing the Bases for n_states=4
End of explanation
from pymks.tools import draw_coeff
draw_coeff(prim_model.coeff)
Explanation: Now let's look at the influence coefficients for both bases.
First, the PrimitiveBasis influence coefficients:
End of explanation
draw_coeff(leg_model.coeff)
Explanation: Now, the LegendreBasis influence coefficients:
End of explanation
from pymks.datasets.cahn_hilliard_simulation import CahnHilliardSimulation
np.random.seed(66)
phi0 = np.random.normal(0, 1e-9, ((1,) + size))
ch_sim = CahnHilliardSimulation(dt=dt)
phi_sim = phi0.copy()
phi_prim = phi0.copy()
phi_legendre = phi0.copy()
Explanation: Now, let's do some simulations with both sets of coefficients and compare the results.
Predict Microstructure Evolution
In order to compare the difference between the two bases, we need to have the Cahn-Hilliard simulation and the two MKS models start with the same initial concentration phi0 and evolve in time. In order to do the Cahn-Hilliard simulation, we need an instance of the class CahnHilliardSimulation.
End of explanation
draw_concentrations([phi0[0]], ['Initial Concentration'])
Explanation: Let's look at the initial concentration field.
End of explanation
time_steps = 50
for steps in range(time_steps):
ch_sim.run(phi_sim)
phi_sim = ch_sim.response
phi_prim = prim_model.predict(phi_prim)
phi_legendre = leg_model.predict(phi_legendre)
Explanation: In order to move forward in time, we need to feed the concentration back into the Cahn-Hilliard simulation and the MKS models.
End of explanation
from pymks.tools import draw_concentrations
draw_concentrations((phi_sim[0], phi_prim[0], phi_legendre[0]),
                     ('Simulation', 'Primitive', 'Legendre'))
Explanation: Let's take a look at the concentration fields.
End of explanation
from sklearn import metrics
mse = metrics.mean_squared_error
from pymks.tools import draw_differences
draw_differences([(phi_sim[0] - phi_prim[0]), (phi_sim[0] - phi_legendre[0])],
                 ['Simulation - Primitive', 'Simulation - Legendre'])
print('Primitive mse =', mse(phi_sim[0], phi_prim[0]))
print('Legendre mse =', mse(phi_sim[0], phi_legendre[0]))
Explanation: Just by looking at the three microstructures, it is difficult to see any differences. Below, we plot the difference between the two MKS models and the simulation.
End of explanation
big_length = 3 * length
big_size = (big_length, big_length)
prim_model.resize_coeff(big_size)
leg_model.resize_coeff(big_size)
phi0 = np.random.normal(0, 1e-9, (1,) + big_size)
phi_sim = phi0.copy()
phi_prim = phi0.copy()
phi_legendre = phi0.copy()
Explanation: The LegendreBasis basis clearly outperforms the PrimitiveBasis for the same value of n_states.
Resizing the Coefficients to use on Larger Systems
Below we compare the bases after the coefficients are resized.
End of explanation
draw_concentrations([phi0[0]], ['Initial Concentration'])
Explanation: Let's take a look at the initial large concentration field.
End of explanation
draw_coeff(prim_model.coeff)
Explanation: Let's look at the resized coefficients.
First, the influence coefficients from the PrimitiveBasis.
End of explanation
draw_coeff(leg_model.coeff)
Explanation: Now, the influence coefficients from the LegendreBases.
End of explanation
for steps in range(time_steps):
ch_sim.run(phi_sim)
phi_sim = ch_sim.response
phi_prim = prim_model.predict(phi_prim)
phi_legendre = leg_model.predict(phi_legendre)
draw_concentrations((phi_sim[0], phi_prim[0], phi_legendre[0]), ('Simulation', 'Primitive', 'Legendre'))
Explanation: Once again, we are going to march forward in time by feeding the concentration fields back into the Cahn-Hilliard simulation and the MKS models.
End of explanation
draw_differences([(phi_sim[0] - phi_prim[0]), (phi_sim[0] - phi_legendre[0])],
                 ['Simulation - Primitive', 'Simulation - Legendre'])
print('Primitive mse =', mse(phi_sim[0], phi_prim[0]))
print('Legendre mse =', mse(phi_sim[0], phi_legendre[0]))
Explanation: Both MKS models seem to predict the concentration fairly well. However, the Legendre polynomial basis looks to be better. Again, let's look at the difference between the simulation and the MKS models.
End of explanation |
4,561 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Generators
One-hot encoding classes
Step1: Stratified split into train/val
Step2: Generator class
Step3: Generator instances
Step4: PR-AUC-based Callback
The callback would be used
Step5: Callback instances
Step6: Classifier
Defining a model
Step7: Initial tuning of the added fully-connected layer
Step8: Fine-tuning the whole model
After unfreezing all the layers(except last 3) I set a less aggressive initial learning rate and train until early stopping (or 100 epochs max).
Step9: Visualizing train and val PR AUC
Step10: I left the model to train longer on my local GPU. I then upload the best model and plots from the model training.
Step11: Selecting postprocessing thresholds
Step12: Post-processing segmentation submission
Predicting cloud classes for test.
Step13: Estimating set of images without masks.
Step14: Segmentation results
Step17: Future work
estimate distribution of classes in test set using the classifier. Then, if necessary and doable, modify val set accordingly,
use the classifier with explainability technique Gradient-weighted Class Activation Mapping to generate a baseline, (please see GradCAM | Python Code:
train_df = pd.read_csv('/home/dex/Desktop/ml/cloud data/train.csv')
train_df.head()
train_df = train_df[~train_df['EncodedPixels'].isnull()]
train_df['Image'] = train_df['Image_Label'].map(lambda x: x.split('_')[0])
train_df['Class'] = train_df['Image_Label'].map(lambda x: x.split('_')[1])
classes = train_df['Class'].unique()
train_df = train_df.groupby('Image')['Class'].agg(set).reset_index()
for class_name in classes:
train_df[class_name] = train_df['Class'].map(lambda x: 1 if class_name in x else 0)
train_df.head()
# dictionary for fast access to ohe vectors
img_2_ohe_vector = {img:vec for img, vec in zip(train_df['Image'], train_df.iloc[:, 2:].values)}
Explanation: Data Generators
One-hot encoding classes
End of explanation
train_imgs, val_imgs = train_test_split(train_df['Image'].values,
test_size=0.2,
stratify=train_df['Class'].map(lambda x: str(sorted(list(x)))), # sorting present classes in lexicographical order, just to be sure
random_state=2019)
Explanation: Stratified split into train/val
End of explanation
class DataGenenerator(Sequence):
def __init__(self, images_list=None, folder_imgs=train_imgs_folder,
batch_size=32, shuffle=True, augmentation=None,
resized_height=260, resized_width=260, num_channels=3):
self.batch_size = batch_size
self.shuffle = shuffle
self.augmentation = augmentation
if images_list is None:
self.images_list = os.listdir(folder_imgs)
else:
self.images_list = deepcopy(images_list)
self.folder_imgs = folder_imgs
self.len = len(self.images_list) // self.batch_size
self.resized_height = resized_height
self.resized_width = resized_width
self.num_channels = num_channels
self.num_classes = 4
self.is_test = not 'train' in folder_imgs
if not shuffle and not self.is_test:
self.labels = [img_2_ohe_vector[img] for img in self.images_list[:self.len*self.batch_size]]
def __len__(self):
return self.len
def on_epoch_start(self):
if self.shuffle:
random.shuffle(self.images_list)
def __getitem__(self, idx):
current_batch = self.images_list[idx * self.batch_size: (idx + 1) * self.batch_size]
X = np.empty((self.batch_size, self.resized_height, self.resized_width, self.num_channels))
y = np.empty((self.batch_size, self.num_classes))
for i, image_name in enumerate(current_batch):
path = os.path.join(self.folder_imgs, image_name)
img = cv2.resize(cv2.imread(path), (self.resized_height, self.resized_width)).astype(np.float32)
if not self.augmentation is None:
augmented = self.augmentation(image=img)
img = augmented['image']
X[i, :, :, :] = img/255.0
if not self.is_test:
y[i, :] = img_2_ohe_vector[image_name]
return X, y
def get_labels(self):
if self.shuffle:
images_current = self.images_list[:self.len*self.batch_size]
labels = [img_2_ohe_vector[img] for img in images_current]
else:
labels = self.labels
return np.array(labels)
albumentations_train = Compose([
VerticalFlip(), HorizontalFlip(), Rotate(limit=20), GridDistortion()
], p=1)
Explanation: Generator class
End of explanation
data_generator_train = DataGenenerator(train_imgs, augmentation=albumentations_train)
data_generator_train_eval = DataGenenerator(train_imgs, shuffle=False)
data_generator_val = DataGenenerator(val_imgs, shuffle=False)
Explanation: Generator instances
End of explanation
class PrAucCallback(Callback):
def __init__(self, data_generator, num_workers=num_cores,
early_stopping_patience=5,
plateau_patience=3, reduction_rate=0.5,
stage='train', checkpoints_path='checkpoints/'):
super(Callback, self).__init__()
self.data_generator = data_generator
self.num_workers = num_workers
self.class_names = ['Fish', 'Flower', 'Sugar', 'Gravel']
self.history = [[] for _ in range(len(self.class_names) + 1)] # to store per each class and also mean PR AUC
self.early_stopping_patience = early_stopping_patience
self.plateau_patience = plateau_patience
self.reduction_rate = reduction_rate
self.stage = stage
self.best_pr_auc = -float('inf')
if not os.path.exists(checkpoints_path):
os.makedirs(checkpoints_path)
self.checkpoints_path = checkpoints_path
def compute_pr_auc(self, y_true, y_pred):
pr_auc_mean = 0
print(f"\n{'#'*30}\n")
for class_i in range(len(self.class_names)):
precision, recall, _ = precision_recall_curve(y_true[:, class_i], y_pred[:, class_i])
pr_auc = auc(recall, precision)
pr_auc_mean += pr_auc/len(self.class_names)
print(f"PR AUC {self.class_names[class_i]}, {self.stage}: {pr_auc:.3f}\n")
self.history[class_i].append(pr_auc)
print(f"\n{'#'*20}\n PR AUC mean, {self.stage}: {pr_auc_mean:.3f}\n{'#'*20}\n")
self.history[-1].append(pr_auc_mean)
return pr_auc_mean
def is_patience_lost(self, patience):
if len(self.history[-1]) > patience:
best_performance = max(self.history[-1][-(patience + 1):-1])
return best_performance == self.history[-1][-(patience + 1)] and best_performance >= self.history[-1][-1]
def early_stopping_check(self, pr_auc_mean):
if self.is_patience_lost(self.early_stopping_patience):
self.model.stop_training = True
def model_checkpoint(self, pr_auc_mean, epoch):
if pr_auc_mean > self.best_pr_auc:
# remove previous checkpoints to save space
for checkpoint in glob.glob(os.path.join(self.checkpoints_path, 'classifier_densenet169_epoch_*')):
os.remove(checkpoint)
self.best_pr_auc = pr_auc_mean
self.model.save(os.path.join(self.checkpoints_path, f'classifier_densenet169_epoch_{epoch}_val_pr_auc_{pr_auc_mean}.h5'))
print(f"\n{'#'*20}\nSaved new checkpoint\n{'#'*20}\n")
def reduce_lr_on_plateau(self):
if self.is_patience_lost(self.plateau_patience):
new_lr = float(keras.backend.get_value(self.model.optimizer.lr)) * self.reduction_rate
keras.backend.set_value(self.model.optimizer.lr, new_lr)
print(f"\n{'#'*20}\nReduced learning rate to {new_lr}.\n{'#'*20}\n")
def on_epoch_end(self, epoch, logs={}):
y_pred = self.model.predict_generator(self.data_generator, workers=self.num_workers)
y_true = self.data_generator.get_labels()
# estimate AUC under precision recall curve for each class
pr_auc_mean = self.compute_pr_auc(y_true, y_pred)
if self.stage == 'val':
        # early stop after early_stopping_patience epochs of no improvement in mean PR AUC
self.early_stopping_check(pr_auc_mean)
# save a model with the best PR AUC in validation
self.model_checkpoint(pr_auc_mean, epoch)
# reduce learning rate on PR AUC plateau
self.reduce_lr_on_plateau()
def get_pr_auc_history(self):
return self.history
Explanation: PR-AUC-based Callback
The callback would be used:
1. to estimate AUC under precision recall curve for each class,
2. to early stop after 5 epochs of no improvement in mean PR AUC,
3. save a model with the best PR AUC in validation,
4. to reduce learning rate on PR AUC plateau.
End of explanation
train_metric_callback = PrAucCallback(data_generator_train_eval)
val_callback = PrAucCallback(data_generator_val, stage='val')
Explanation: Callback instances
End of explanation
from keras.losses import binary_crossentropy
def dice_coef(y_true, y_pred, smooth=1):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def dice_loss(y_true, y_pred):
smooth = 1.
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = y_true_f * y_pred_f
score = (2. * K.sum(intersection) + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
return 1. - score
def bce_dice_loss(y_true, y_pred):
return binary_crossentropy(y_true, y_pred) + dice_loss(y_true, y_pred)
!pip install -U git+https://github.com/qubvel/efficientnet
import efficientnet.keras as efn
def get_model():
K.clear_session()
base_model = efn.EfficientNetB2(weights='imagenet', include_top=False, pooling='avg', input_shape=(260, 260, 3))
x = base_model.output
y_pred = Dense(4, activation='sigmoid')(x)
return Model(inputs=base_model.input, outputs=y_pred)
model = get_model()
from keras_radam import RAdam
Explanation: Classifier
Defining a model
End of explanation
for base_layer in model.layers[:-3]:
base_layer.trainable = False
model.compile(optimizer=RAdam(warmup_proportion=0.1, min_lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
history_0 = model.fit_generator(generator=data_generator_train,
validation_data=data_generator_val,
epochs=20,
callbacks=[train_metric_callback, val_callback],
workers=num_cores,
verbose=1
)
Explanation: Initial tuning of the added fully-connected layer
End of explanation
for base_layer in model.layers[:-3]:
base_layer.trainable = True
model.compile(optimizer=RAdam(warmup_proportion=0.1, min_lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
history_1 = model.fit_generator(generator=data_generator_train,
validation_data=data_generator_val,
epochs=20,
callbacks=[train_metric_callback, val_callback],
workers=num_cores,
verbose=1,
initial_epoch=1
)
Explanation: Fine-tuning the whole model
After unfreezing all the layers (except the last 3), I set a less aggressive initial learning rate and train until early stopping (or 100 epochs max).
End of explanation
def plot_with_dots(ax, np_array):
ax.scatter(list(range(1, len(np_array) + 1)), np_array, s=50)
ax.plot(list(range(1, len(np_array) + 1)), np_array)
pr_auc_history_train = train_metric_callback.get_pr_auc_history()
pr_auc_history_val = val_callback.get_pr_auc_history()
plt.figure(figsize=(10, 7))
plot_with_dots(plt, pr_auc_history_train[-1])
plot_with_dots(plt, pr_auc_history_val[-1])
plt.xlabel('Epoch', fontsize=15)
plt.ylabel('Mean PR AUC', fontsize=15)
plt.legend(['Train', 'Val'])
plt.title('Training and Validation PR AUC', fontsize=20)
plt.savefig('pr_auc_hist.png')
plt.figure(figsize=(10, 7))
plot_with_dots(plt, history_0.history['loss']+history_1.history['loss'])
plot_with_dots(plt, history_0.history['val_loss']+history_1.history['val_loss'])
plt.xlabel('Epoch', fontsize=15)
plt.ylabel('Binary Crossentropy', fontsize=15)
plt.legend(['Train', 'Val'])
plt.title('Training and Validation Loss', fontsize=20)
plt.savefig('loss_hist.png')
Explanation: Visualizing train and val PR AUC
End of explanation
from keras_radam import RAdam
model = load_model('/home/dex/Downloads/classifier_densenet169_epoch_9_val_pr_auc_0.8459108536857672.h5', custom_objects={'RAdam':RAdam})
Explanation: I left the model to train longer on my local GPU. I then upload the best model and plots from the model training.
End of explanation
class_names = ['Fish', 'Flower', 'Sugar', 'Gravel']
def get_threshold_for_recall(y_true, y_pred, class_i, recall_threshold=0.94, precision_threshold=0.90, plot=False):
precision, recall, thresholds = precision_recall_curve(y_true[:, class_i], y_pred[:, class_i])
i = len(thresholds) - 1
best_recall_threshold = None
while best_recall_threshold is None:
next_threshold = thresholds[i]
next_recall = recall[i]
if next_recall >= recall_threshold:
best_recall_threshold = next_threshold
i -= 1
    # concise, even though it unnecessarily passes through all the values
best_precision_threshold = [thres for prec, thres in zip(precision, thresholds) if prec >= precision_threshold][0]
if plot:
plt.figure(figsize=(10, 7))
plt.step(recall, precision, color='r', alpha=0.3, where='post')
plt.fill_between(recall, precision, alpha=0.3, color='r')
plt.axhline(y=precision[i + 1])
recall_for_prec_thres = [rec for rec, thres in zip(recall, thresholds)
if thres == best_precision_threshold][0]
plt.axvline(x=recall_for_prec_thres, color='g')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.ylim([0.0, 1.05])
plt.xlim([0.0, 1.0])
plt.legend(['PR curve',
f'Precision {precision[i + 1]: .2f} corresponding to selected recall threshold',
f'Recall {recall_for_prec_thres: .2f} corresponding to selected precision threshold'])
plt.title(f'Precision-Recall curve for Class {class_names[class_i]}')
return best_recall_threshold, best_precision_threshold
y_pred = model.predict_generator(data_generator_val, workers=num_cores)
y_true = data_generator_val.get_labels()
recall_thresholds = dict()
precision_thresholds = dict()
for i, class_name in tqdm(enumerate(class_names)):
recall_thresholds[class_name], precision_thresholds[class_name] = get_threshold_for_recall(y_true, y_pred, i, plot=True)
Explanation: Selecting postprocessing thresholds
End of explanation
data_generator_test = DataGenenerator(folder_imgs=test_imgs_folder, shuffle=False)
y_pred_test = model.predict_generator(data_generator_test, workers=num_cores)
Explanation: Post-processing segmentation submission
Predicting cloud classes for test.
End of explanation
recall_thresholds = {'Fish': 0.29,
'Flower': 0.4,
'Sugar': 0.29,
'Gravel': 0.29}
image_labels_empty = set()
class_names = ['Fish', 'Flower', 'Sugar', 'Gravel']
for i, (img, predictions) in enumerate(zip(os.listdir(test_imgs_folder), y_pred_test)):
for class_i, class_name in enumerate(class_names):
if predictions[class_i] < recall_thresholds[class_name]:
image_labels_empty.add(f'{img}_{class_name}')
recall_thresholds, precision_thresholds
Explanation: Estimating set of images without masks.
End of explanation
submission = pd.read_csv('sub_convex_0.6586.csv')
#pd.read_csv('/home/dex/Desktop/ml/cloud artgor/submissions/submission_Model_segmentation_segm_resnet152_bs_5_2019-11-05.csv')
#submission = pd.read_csv('../input/densenet201cloudy/densenet201.csv')
submission.head()
predictions_nonempty = set(submission.loc[~submission['EncodedPixels'].isnull(), 'Image_Label'].values)
print(f'{len(image_labels_empty.intersection(predictions_nonempty))} masks would be removed')
#removing masks
submission.loc[submission['Image_Label'].isin(image_labels_empty), 'EncodedPixels'] = np.nan
submission.to_csv('submission_segmentation_and_classifier_628.csv', index=None)
Explanation: Segmentation results:
End of explanation
import pandas as pd
import os
from tqdm import tqdm
test_imgs_folder = '/home/dex/Desktop/ml/cloud data//test_images/'
folder_images = '/home/dex/Desktop/ml/clouds/input/understanding-clouds-resized/test_images_525/test_images_525'
model_class_names=['Fish', 'Flower', 'Gravel', 'Sugar']
sub = pd.read_csv('/home/dex/Downloads/submission_segmentation_and_classifier (1).csv')
sub.head()
def rle_decode(mask_rle: str = '', shape = (1400, 2100)):
'''
Decode rle encoded mask.
:param mask_rle: run-length as string formatted (start length)
:param shape: (height, width) of array to return
Returns numpy array, 1 - mask, 0 - background
'''
s = mask_rle.split()
starts, lengths = [np.asarray(x, dtype=int) for x in (s[0:][::2], s[1:][::2])]
starts -= 1
ends = starts + lengths
img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
for lo, hi in zip(starts, ends):
img[lo:hi] = 1
return img.reshape(shape, order='F')
def mask2rle(img):
'''
Convert mask to rle.
img: numpy array, 1 - mask, 0 - background
Returns run length as string formated
'''
pixels= img.T.flatten()
pixels = np.concatenate([[0], pixels, [0]])
runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
runs[1::2] -= runs[::2]
return ' '.join(str(x) for x in runs)
min_size = [10000 ,10000, 10000, 10000]
def post_process_minsize(mask, min_size):
    '''
    Post processing of each predicted mask, components with lesser number of pixels
    than `min_size` are ignored
    '''
num_component, component = cv2.connectedComponents(mask.astype(np.uint8))
predictions = np.zeros(mask.shape)
num = 0
for c in range(1, num_component):
p = (component == c)
if p.sum() > min_size:
predictions[p] = 1
num += 1
return predictions #, num
def make_mask(df, image_label, shape = (1400, 2100), cv_shape = (525, 350),debug=False):
    '''
    Create mask based on df, image name and shape.
    '''
if debug:
print(shape,cv_shape)
df = df.set_index('Image_Label')
encoded_mask = df.loc[image_label, 'EncodedPixels']
# print('encode: ',encoded_mask[:10])
mask = np.zeros((shape[0], shape[1]), dtype=np.float32)
if encoded_mask is not np.nan:
mask = rle_decode(encoded_mask,shape=shape) # original size
return cv2.resize(mask, cv_shape)
def draw_convex_hull(mask, mode='convex'):
img = np.zeros(mask.shape)
contours, hier = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
if mode=='rect': # simple rectangle
x, y, w, h = cv2.boundingRect(c)
cv2.rectangle(img, (x, y), (x+w, y+h), (255, 255, 255), -1)
elif mode=='convex': # minimum convex hull
hull = cv2.convexHull(c)
cv2.drawContours(img, [hull], 0, (255, 255, 255),-1)
elif mode=='approx':
epsilon = 0.02*cv2.arcLength(c,True)
approx = cv2.approxPolyDP(c,epsilon,True)
cv2.drawContours(img, [approx], 0, (255, 255, 255),-1)
else: # minimum area rectangle
rect = cv2.minAreaRect(c)
box = cv2.boxPoints(rect)
box = np.int0(box)
cv2.drawContours(img, [box], 0, (255, 255, 255),-1)
return img/255.
mode='convex' # choose from 'rect', 'min', 'convex' and 'approx'
img_label_list = []
enc_pixels_list = []
test_imgs = os.listdir(folder_images)
for test_img_i, test_img in enumerate(tqdm(test_imgs)):
for class_i, class_name in enumerate(model_class_names):
path = os.path.join(folder_images, test_img)
img = cv2.imread(path).astype(np.float32) # use already-resized ryches' dataset
img = img/255.
img2 = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_label_list.append(f'{test_img}_{class_name}')
mask = make_mask(sub, test_img + '_' + class_name,shape=(350,525))
if True:
#if class_name == 'Flower' or class_name =='Sugar': # you can decide to post-process for some certain classes
mask = draw_convex_hull(mask.astype(np.uint8), mode=mode)
mask[img2<=2/255.] = 0
mask = post_process_minsize(mask, min_size[class_i])
if mask.sum() == 0:
enc_pixels_list.append(np.nan)
else:
mask = np.where(mask > 0.5, 1.0, 0.0)
enc_pixels_list.append(mask2rle(mask))
submission_df = pd.DataFrame({'Image_Label': img_label_list, 'EncodedPixels': enc_pixels_list})
submission_df.to_csv('sub_convex_0.6586.csv', index=None)
class_names = ['Fish', 'Flower', 'Sugar', 'Gravel']
Explanation: Future work
estimate distribution of classes in test set using the classifier. Then, if necessary and doable, modify val set accordingly,
use the classifier with explainability technique Gradient-weighted Class Activation Mapping to generate a baseline, (please see GradCAM: extracting masks from classifier),
improve the classifier,
use the classifier as backbone for UNet-like solution.
End of explanation |
4,562 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
barf
Step1: Gives a simple framework for defining what the different fields in the data should look like
The parsing is done with third-party libraries | Python Code:
import re
import string
class SequenceModel(object):
def __init__(self, alphabet, flags=re.IGNORECASE):
self.alphabet = alphabet
self.pattern = re.compile(r'[{alphabet}]*$'.format(alphabet=alphabet),
flags=flags)
def __str__(self):
return 'SequenceModel<{0}>'.format(self.alphabet)
def checkValid(self, data):
if self.pattern.match(data) is None:
raise AssertionError('{0} failed to match "{1}"'.format(self, data))
dnaModel = SequenceModel('ACGT')
dnanModel = SequenceModel('ACGTN')
iupacModel = SequenceModel('ARNDCQEGHILKMFPSTWXVBZX')
models = {'DNA': dnaModel,
'DNA+N': dnanModel,
'IUPAC': iupacModel}
Explanation: barf: a drop-in bioinformatics file format validator
Camille Scott
Lab for Data Intensive Biology
March 18, 2016
Background
High-throughput DNA sequencing creates HUGE volumes of data
Often times this data is processed through complex pipelines
Motivation
Most bioinformatics software is developed by academic labs; and
most academic labs don't have the time or money for formal verification; and
most academic labs can't even afford software engineers;
AND, most users of the software are barely computationally literate
The result?
Motivation: The Story of "L"
"L is a new graduate student with a background in bench biology who has been diving deeper into bioinformatics as a part of her PhD research. “L” is assembling a genome, and her analysis pipeline includes the widely-used program Trimmomatic [1] to remove low-quality sequences. Some days later, when the pipeline has completed, she starts to look more closely at her results, and realizes that one of the sequence files output by Trimmomatic is truncated: the FASTQ formatted file ends part-way through a DNA sequence, and includes no quality score. This does not trigger a failure until a few steps down the pipeline, when another program mysteriously crashes. As it turns out, Trimmomatic occasionally fails due to some unpredictable error which cannot be reproduced, and instead of returning an error code, returns 0 and truncates its output. Had the program behaved more appropriately, “L” would have identified the problem early-on and saved significant time."
Problem!
This story is common
Reporting bugs is time consuming, fixing them moreso
Many bugs are unpredictabe or system-dependent
Bad data gives bad results: junk in, junk out
barf tries to solve this problem by allowing easy drop-in data validation for any bioinformatics program.
Aside: why the name?
Our lab likes silly names, and we discussed this concept a while back. It goes along well with my mRNA annotator, dammit :)
Case: FASTA Format
This barf prototype targets FASTA format
Widely used, poorly defined, often broken
The expected format can be defined in BNF form as follows:
<file> ::= <token> | <token> <file>
<token> ::= <ignore> | <seq>
<ignore> ::= <whitespace> | <comment> <newline>
<seq> ::= <header> <molecule> <newline>
<header> ::= ">" <arbitrary text> <newline>
<molecule> ::= <mol-line> | <mol-line> <molecule>
<mol-line> ::= <nucl-line> | <prot-line>
<nucl-line>::= "^[ACGTURYKMSWBDHVNX-]+$"
<prot-line>::= "^[ABCDEFGHIKLMNOPQRSTUVWYZX*-]+$"
in reality....
In reality, this format is often toyed with
Many programs fail on the header, many mangle the sequence with line breaks, many parsers don't follow convention
The format itself is trivial to parse; the data is what needs to be checked
Approach
Instead of focusing on parsing, we focus on a limited model of the data
This is a crude type system based on regular expressions
Can be arbitrary python code
End of explanation
# a no-op
!cat test.500.fasta | ./barf --sequence-model DNA cat > test.out.fa
!head test.out.fa
# a bad sequence
!cat badfasta.fa | ./barf --sequence-model DNA cat > test.out.fa
# we don't check biological meaning
!cat badfasta.fa | ./barf --sequence-model IUPAC cat > test.out.fa
# adding in a new sequence
!cat test.500.fasta | ./barf --sequence-model DNA ./fraudster.py > /dev/null
Explanation: Gives a simple framework for defining what the different fields in the data should look like
The parsing is done with third-party libraries: we assume the parsers make a best effort to consume the data
In a way, we validate both the parser and the program
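As a small usage sketch of the SequenceModel and models objects defined above (the example sequences are made up):
# a record drawn from the declared alphabet passes silently
models['DNA'].checkValid('ACGTACGT')
# a record containing a character outside the alphabet raises AssertionError
try:
    models['DNA'].checkValid('ACGU')
except AssertionError as err:
    print(err)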
What about "L"?
Only validating data elements is not enough: we need to validate the data as a whole
Introduce a collection: keep track of what goes in and what comes out
We want $OUTPUT \subseteq INPUT$, where $INPUT$ and $OUTPUT$ are sets of some record (in this case, FASTA)
Bloom Filters
This data is BIG! Hundreds of millions of elements!
Exact counting not an option
Instead, use a bloom filter to represent the set
This way, we can assert that each element in the output is an element of the input.
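A minimal sketch of the idea (my own illustration, not barf's actual implementation; a production filter would size the bit array from the expected element count and target false-positive rate):
import hashlib

class BloomFilter(object):
    def __init__(self, num_bits=8 * 1024 * 1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, item):
        # derive several bit positions from salted hashes of the item
        for salt in range(self.num_hashes):
            digest = hashlib.sha1((str(salt) + item).encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        # may return false positives, never false negatives
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))
Each record seen on the input side is added to the filter; each output record is then checked for membership, so an output sequence that never appeared in the input gets flagged (up to the filter's false-positive rate).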
Implementation
The invocation format is based on GNU time
Pass the target program and arguments to barf; pipe input to barf; output on standard out
barf manages the subprocess in the background: validates input, sends it to a FIFO for the program to consume
End of explanation |
4,563 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Frost Number Model
Link to this notebook
Step1: Part 1
Adapt the base case configuration to a mean temperature of the coldest month of -13C, and of the warmest month +19.5C (the actual values for Vladivostok in Far East Russia).
Step2: Part 2
Now run the same simulation for Yakutsk on the Lena River in Siberia. There the warmest month is again 19.5C, but the coldest month is -40.9C.
Step3: Questions
Please answer the following questions in each box (double click the box to edit).
Q1 | Python Code:
# Import standard Python modules
import numpy as np
import pandas
import matplotlib.pyplot as plt
# Import the FrostNumber PyMT model
import pymt.models
frost_number = pymt.models.FrostNumber()
Explanation: Frost Number Model
Link to this notebook: https://github.com/csdms/pymt/blob/master/docs/demos/frost_number.ipynb
Install command:
$ conda install notebook pymt_permamodel
Download a local copy of the notebook:
$ curl -O https://raw.githubusercontent.com/csdms/pymt/master/docs/demos/frost_number.ipynb
Start a Jupyter Notebook session in the current directory:
$ jupyter notebook
Introduction to Permafrost Processes - Lesson 1
This lab has been designed and developed by Irina Overeem and Mark Piper, CSDMS, University of Colorado, CO
with assistance of Kang Wang, Scott Stewart at CSDMS, University of Colorado, CO, and Elchin Jafarov, at Los Alamos National Labs, NM.
These labs are developed with support from NSF Grant 1503559, ‘Towards a Tiered Permafrost Modeling Cyberinfrastructure’
Classroom organization
This lab is the first in a series of introduction to permafrost process modeling, designed for inexperienced users. In this first lesson, we explore the Air Frost Number model and learn to use the CSDMS Python Modeling Toolkit (PyMT). We implemented a basic configuration of the Air Frost Number (as formulated by Nelson and Outcalt in 1987). This series of labs is designed for inexperienced modelers to gain some experience with running a numerical model, changing model inputs, and analyzing model output. Specifically, this first lab looks at what controls permafrost occurrence and compares the occurrence of permafrost in Russia.
Basic theory on the Air Frost Number is presented in Frost Number Model Lecture 1.
This lab will likely take ~1.5 hours to complete in the classroom. This time assumes you are unfamiliar with PyMT and need to learn setting parameters, saving runs, downloading data and looking at output (otherwise it will be much faster).
We will use netcdf files for output, this is a standard output from all CSDMS models. If you have no experience with visualizing these files, Panoply software will be helpful. Find instructions on how to use this software.
Learning objectives
Skills
familiarize with a basic configuration of the Air Frost Number Model
hands-on experience with visualizing NetCDF output with Panoply.
Topical learning objectives:
what is the primary control on the occurrence of permafrost
freezing and thawing day indices and how to approximate these
where in Russia permafrost occurs
References and More information
Nelson, F.E., Outcalt, S.I., 1987. A computational method for prediction and regionalization of permafrost. Arct. Alp. Res. 19, 279–288.
Janke, J., Williams, M., Evans, A., 2012. A comparison of permafrost prediction models along a section of Trail Ridge Road, RMNP, CO. Geomorphology 138, 111-120.
The Air Frost number
The Air Frost number uses the mean annual air temperature of a location (MAAT), as well as the yearly temperature amplitude. In the Air Frost parametrization the Mean monthly temperature of the warmest month (Tw) and coldest month (Tc) set that amplitude. The 'degree thawing days' are above 0 C, the 'degree freezing days' are below 0 C. To arrive at the cumulative freezing degree days and thawing degree days the annual temperature curve is approximated by a cosine as defined by the warmest and coldest months, and one can integrate under the cosine curve (see figure, and more detailed notes in the associated presentation).
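A rough sketch of that calculation (my own illustration of the Nelson and Outcalt style air frost number, approximating the degree-day integrals with daily sums; the PyMT component used below has its own implementation):
import numpy as np

def air_frost_number(t_cold, t_warm):
    # cosine approximation of the annual temperature curve set by Tc and Tw
    days = np.arange(365)
    t_mean = 0.5 * (t_warm + t_cold)
    amplitude = 0.5 * (t_warm - t_cold)
    temperature = t_mean - amplitude * np.cos(2 * np.pi * days / 365.0)
    ddf = -temperature[temperature < 0].sum()  # cumulative freezing degree days
    ddt = temperature[temperature > 0].sum()   # cumulative thawing degree days
    return np.sqrt(ddf) / (np.sqrt(ddf) + np.sqrt(ddt))

air_frost_number(-13.0, 19.5)  # the Vladivostok-like values used in Part 1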
End of explanation
config_file, config_folder = frost_number.setup(T_air_min=-13., T_air_max=19.5)
frost_number.initialize(config_file, config_folder)
frost_number.update()
frost_number.output_var_names
frost_number.get_value('frostnumber__air')
Explanation: Part 1
Adapt the base case configuration to a mean temperature of the coldest month of -13C, and of the warmest month +19.5C (the actual values for Vladivostok in Far East Russia).
End of explanation
args = frost_number.setup(T_air_min=-40.9, T_air_max=19.5)
frost_number.initialize(*args)
frost_number.update()
frost_number.get_value('frostnumber__air')
Explanation: Part 2
Now run the same simulation for Yakutsk on the Lena River in Siberia. There the warmest month is again 19.5C, but the coldest month is -40.9C.
End of explanation
data = pandas.read_csv("https://raw.githubusercontent.com/mcflugen/pymt_frost_number/master/data/t_air_min_max.csv")
data
frost_number = pymt.models.FrostNumber()
config_file, run_folder = frost_number.setup()
frost_number.initialize(config_file, run_folder)
t_air_min = data["atmosphere_bottom_air__time_min_of_temperature"]
t_air_max = data["atmosphere_bottom_air__time_max_of_temperature"]
fn = np.empty(6)
for i in range(6):
frost_number.set_value("atmosphere_bottom_air__time_min_of_temperature", t_air_min.values[i])
frost_number.set_value("atmosphere_bottom_air__time_max_of_temperature", t_air_max.values[i])
frost_number.update()
fn[i] = frost_number.get_value('frostnumber__air')
years = range(2000, 2006)
plt.subplot(211)
plt.plot(years, t_air_min, years, t_air_max)
plt.subplot(212)
plt.plot(years, fn)
Explanation: Questions
Please answer the following questions in each box (double click the box to edit).
Q1: What is the Frost Number the model returned for each of the Vladivostok and Yakutsk temperature regimes?
A1: the answer in here.
Q2: What do these specific Frost numbers imply for the likelihood of permafrost occurrence?
A2:
Q3: How do you think the annual temperature distribution would look in regions of Russia bordering the Barents Sea?
A3:
Q4: Devise a scenario and run it; was the calculated Frost number what you expected?
A4:
Q5: On the map below, find how permafrost is mapped in far west coastal Russia at high latitude (e.g. Murmansk).
A5:
Q6: Discuss the factors that would make this first-order approach problematic?
A6:
Q7: When would the temperature in the first cm in the soil be significantly different from the air temperature?
A7:
Extra Credit
Now run a time series.
End of explanation |
4,564 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 3 - Generative Models
This chapter introduces learning from a bayesian perspective, giving some simple examples and showing how Naive Bayes relies on this kind of theory.
Bayesian concept learning
We can model learning behavior by introducing a hypothesis space $\mathcal{H}$ and, based on a dataset $\mathcal{D}$ we review our belief in $h\in\mathcal{H}$. This update is done by using Bayes rule.
Likelihood
The likelihood $p(\mathcal{D}|h)$ represents how probable is to get the data $\mathcal{D}$ uder the hypothesis $h$. We want want to avoid suspicious coincidences, so we want models with high likelihood values for our data. Simply using the $h$ with highest likelihhod is called the maximum likelihood estimate (or MLE).
Prior
Maybe we also want to encode some prior belief into a hypothesis $h$. This way, we can weight down hypothesis that are too strange but yet good to explain our data. We do this by introducing the prior probability of the hypothesis $p(h)$. This can be seen as subjective (excluding the fact that there are principled ways to choose the prior), but the final result is that, in the limit where we have too much data, the prior don't matter.
Posterior
The posterior is the updated belief in our hypothesis given its prior and it likelihood
Step1: As bayesians, we model the problem as finding the parameter $\theta$ of a bernoulli distribution given the data. For this, we start with an uniform prior, since with this prior we make the least number of assumptions about $\theta$
Step2: The likelihood of the dataset is given by $\theta^{N_{\mathrm{heads}}} (1 - \theta)^{N_{\mathrm{tails}}} \propto Beta(\theta|N_{\mathrm{heads}} + 1, N_{\mathrm{tails}} + 1)$
Step3: The posterior is, therefore, the product
Step4: If we, instead, had a prior $Beta(2, 5)$ we would have
Step5: We can see the difference between both priors in the following figure
Step6: Notice how, in both cases, the most probable value for $\theta$ is close to the real 0.7 value. We can also see that the posterior in the first case, where the prior is uniform, is equal to the likelihood. On the other hand, if our prior tends to other values of $\theta$, the posterior approaches the likelihood more slowly.
Dice
We want to compute the probability of a dice of $K$ faces comes up with the face $k$, using a set of observations $\mathcal{D}$. A single trial can be written as
Step7: Sex
Step8: Height
Step9: Weight
Step10: Shoe size
Step11: Predicting | Python Code:
coin = bernoulli(0.7)
samples = coin.rvs(20)
num_heads = sum(samples)
num_tails = len(samples) - num_heads
Explanation: Chapter 3 - Generative Models
This chapter introduces learning from a bayesian perspective, giving some simple examples and showing how Naive Bayes relies on this kind of theory.
Bayesian concept learning
We can model learning behavior by introducing a hypothesis space $\mathcal{H}$ and, based on a dataset $\mathcal{D}$, updating our belief in each $h\in\mathcal{H}$. This update is done using Bayes' rule.
Likelihood
The likelihood $p(\mathcal{D}|h)$ represents how probable it is to get the data $\mathcal{D}$ under the hypothesis $h$. We want to avoid suspicious coincidences, so we want models with high likelihood values for our data. Simply using the $h$ with the highest likelihood is called the maximum likelihood estimate (or MLE).
Prior
Maybe we also want to encode some prior belief about the hypothesis $h$. This way, we can weight down hypotheses that are too strange yet still explain our data well. We do this by introducing the prior probability of the hypothesis, $p(h)$. This can be seen as subjective (setting aside the fact that there are principled ways to choose the prior), but in the limit where we have a lot of data the prior doesn't matter.
Posterior
The posterior is the updated belief in our hypothesis given its prior and its likelihood:
$$p(h|\mathcal{D}) = \frac{p(\mathcal{D}|h)p(h)}{p(\mathcal{D})}= \frac{p(\mathcal{D}|h)p(h)}{\sum_h p(\mathcal{D}|h)p(h)}$$
But we can simplify to:
$$p(h|\mathcal{D}) \propto p(\mathcal{D}|h)p(h)$$
Simply using the hypothesis $h$ with the highest posterior probability is called the maximum a posteriori (or MAP) estimate.
Posterior predictive
If we want to know if a new observable $x$ satisfies our hypothesis given the data we can average through our beliefs in the values of $h$, by doing:
$$p(x \mathrm{\ satisfies\ hypothesis\ }|\mathcal{D}) = \sum_h p(x \mathrm{\ satisfies\ hypothesis\ }|h) p(h|\mathcal{D})$$
When using MAP, we simply plug $p(h|\mathcal{D})= \delta_{h_{MAP}}$ and when using MLE we plug $\delta_{h_{MLE}}$, but the bayesian way of computing this result is to perform the sum.
Examples
We will introduce the classification problem using Bayes' rule:
$$p(y = c| x,\theta) \propto p(x|y = c, \theta)p(y=c|\theta)$$
This is the probability of a point being of class $c$, given its feature vector $x$ and some parameter vector $\theta$. In this case, our hypothesis space will be the space of values of $\theta$ (the parameter we would like to estimate). Again, we will find the best hypothesis, i.e. the best $\theta$, given the data $\mathcal{D}$.
Coin
In this example we want to find the probability that a coin toss comes up heads, given a set of observations $\mathcal{D}$. We first suppose that the outcome of a single coin toss is distributed as:
$$Ber(x|\theta)$$
That is $p(x = \mathrm{head}|\theta) = \theta$ and $p(x = \mathrm{tails}|\theta) = 1 - \theta$. For a set of observations, the likelihood is written as:
$$p(\mathcal{D}|\theta) = \theta^{N_{\mathrm{heads}}} (1 - \theta)^{N_{\mathrm{tails}}}$$
where $N_{\mathrm{heads}}$ and $N_{\mathrm{tails}}$ are the numbers of occurrences of each outcome. The posterior is written as:
$$p(\theta|\mathcal{D}) \propto p(\mathcal{D}|\theta)p(\theta) = \theta^{N_{\mathrm{heads}}} (1 - \theta)^{N_{\mathrm{tails}}} p(\theta)$$
If our prior on $\theta$ is given by $Beta(\theta|a, b) \propto \theta^{a - 1} (1 - \theta)^{b - 1}$, the posterior will be:
$$p(\theta|\mathcal{D}) \propto Beta(\theta|N_{\mathrm{heads}} + a, N_{\mathrm{tails}} + b) \propto \theta^{N_{\mathrm{heads}} + a - 1} (1 - \theta)^{N_{\mathrm{tails}} + b - 1}$$
If the prior and the posterior have the same form, the prior is said to be conjugate to the likelihood. If we instead had two sets of observations $\mathcal{D}'$ and $\mathcal{D}''$ we would arrive at:
$$p(\theta|\mathcal{D}', \mathcal{D}'') \propto Beta(\theta|N_{\mathrm{heads}}' + N_{\mathrm{heads}}'' + a, N_{\mathrm{tails}}' + N_{\mathrm{tails}}'' + b)$$
Therefore, the algorithm is well suited for online learning.
MAP
If $\theta | \mathcal{D} \sim Beta(N_{\mathrm{heads}} + a, N_{\mathrm{tails}} + b)$, the MAP estimator of the parameter is given by:
$$\theta^{*} = \underset{\theta}{\operatorname{argmax}} p(\theta|\mathcal{D}) = \frac{N_{\mathrm{heads}} + a - 1}{N + a + b - 2}$$
where $N = N_{\mathrm{heads}} + N_{\mathrm{tails}}$
MLE
The MLE estimator of parameters, on the other hand, is given by:
$$\theta^* = \underset{\theta}{\operatorname{argmax}} p(\mathcal{D}|\theta) = \frac{N_{\mathrm{heads}}}{N}$$
Mean and Variance
The mean of $\theta$ given the data is given by:
$$\mathbb{E}[\theta|\mathcal{D}] = \frac{N_{\mathrm{heads}} + a}{N + a + b} = \lambda \frac{a}{a + b} + (1 - \lambda)\hat{\theta}_{\mathrm{MLE}}$$
where $\lambda = \frac{a + b}{N + a + b}$. That is, the expected value of the parameter is a convex combination of the MLE estimator and the prior mean, approaching the MLE when $N \rightarrow \infty$. Similarly, one can show that the posterior mode is a convex combination of the prior mode and the MLE, and also converges to the MLE.
The variance is given by:
$$\mathrm{var}[\theta|\mathcal{D}] = \frac{(N_{\mathrm{heads}} + a)(N_{\mathrm{tails}} + b)}{(N_{\mathrm{heads}} + N_{\mathrm{tails}} + a + b)^2 (N_{\mathrm{heads}} + N_{\mathrm{tails}} + a + b + 1)}$$
When $N \gg a, b$, we have:
$$\mathrm{var}[\theta|\mathcal{D}] = \frac{\hat{\theta}_{\mathrm{MLE}}(1 - \hat{\theta}_{\mathrm{MLE}})}{N}$$
So the error of the estimation of $\theta$ goes down at a rate of $\frac{1}{\sqrt{N}}$.
Posterior predictive
Finally, the probability of heads in a future trial is given by:
$$p(x = \mathrm{heads}| \mathcal{D}) = \int p(x = \mathrm{heads}|\theta)p(\theta|\mathcal{D})d\theta = \int \theta\, Beta(\theta|N_{\mathrm{heads}} + a, N_{\mathrm{tails}} + b)\, d\theta = \mathbb{E}[\theta|\mathcal{D}] = \frac{N_{\mathrm{heads}} + a}{N + a + b}$$
If we use a uniform prior $a = b = 1$, we obtain what are called pseudo-counts or add-one smoothing, which prevents the black swan effect, in which the probability of an unseen event would be zero.
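A quick numerical sketch of these estimators (reusing the num_heads and num_tails counts from the coin-flip code in this example, with a uniform Beta(1, 1) prior):
a, b = 1, 1
N = float(num_heads + num_tails)
theta_mle = num_heads / N
theta_map = (num_heads + a - 1) / (N + a + b - 2)
theta_predictive = (num_heads + a) / (N + a + b)   # probability of heads on the next toss
print(theta_mle, theta_map, theta_predictive)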
Example
Suppose we have a real coin which comes up heads with probability $\theta = 0.7$ and tails with probability $(1 - \theta)$. Suppose now that we don't know $\theta$, but we have access to 20 trials of this coin:
End of explanation
prior_1 = beta(1,1)
Explanation: As bayesians, we model the problem as finding the parameter $\theta$ of a Bernoulli distribution given the data. For this, we start with a uniform prior, since with this prior we make the fewest assumptions about $\theta$:
End of explanation
likelihood = beta(num_heads+1, num_tails+1)
Explanation: The likelihood of the dataset is given by $\theta^{N_{\mathrm{heads}}} (1 - \theta)^{N_{\mathrm{tails}}} \propto Beta(\theta|N_{\mathrm{heads}} + 1, N_{\mathrm{tails}} + 1)$:
End of explanation
posterior_1 = beta(num_heads+1, num_tails+1)
Explanation: The posterior is, therefore, the product:
$$p(\theta|\mathcal{D}) \propto Beta(\theta|N_{\mathrm{heads}} + 1, N_{\mathrm{tails}} + 1) Beta(\theta|1, 1) = Beta(\theta|N_{\mathrm{heads}} + 1, N_{\mathrm{tails}} + 1)$$
End of explanation
prior_2 = beta(2, 5)
posterior_2 = beta(num_heads + 2, num_tails + 5)
Explanation: If we, instead, had a prior $Beta(2, 5)$ we would have
End of explanation
colors = sns.color_palette('husl', 10)
fig, ax = plt.subplots(ncols=2, nrows=1, figsize=(16, 4))
thetas = np.linspace(0, 1, 1000)
ax[0].plot(thetas, list(map(prior_1.pdf, thetas)), color=colors[4], label='prior')
ax[0].plot(thetas, list(map(likelihood.pdf, thetas)), color=colors[7], label='likelihood')
ax[0].plot(thetas, list(map(posterior_1.pdf, thetas)), color=colors[0], label='posterior')
ax[0].legend(loc='upper left')
ax[0].set_xlabel('theta')
ax[0].set_ylabel('pdf')
ax[0].set_title('Beta(1,1) prior')
ax[1].plot(thetas, list(map(prior_2.pdf, thetas)), color=colors[4], label='prior')
ax[1].plot(thetas, list(map(likelihood.pdf, thetas)), color=colors[7], label='likelihood')
ax[1].plot(thetas, list(map(posterior_2.pdf, thetas)), color=colors[0], label='posterior')
ax[1].legend(loc='upper left')
ax[1].set_xlabel('theta')
ax[1].set_ylabel('pdf')
ax[1].set_title('Beta(2,5) prior')
plt.suptitle('Different priors with the same data')
plt.show()
Explanation: We can see the difference between both priors in the following figure:
End of explanation
people = pd.DataFrame([
['M', 6, 180, 12],
['M', 5.92, 190, 11],
['M', 5.58, 170, 12],
['M', 5.92, 165, 10],
['F', 5, 100, 6],
['F', 5.5, 150, 8],
['F', 5.42, 130, 7],
['F', 5.75, 150, 9]],
columns = ['sex', 'height', 'weight', 'size']
)
people
number_of_people = len(people)
Explanation: Notice how, in both cases, the most probable value for $\theta$ is close to the real 0.7 value. We can also see that the posterior in the first case, where the prior is uniform, is equal to the likelihood. On the other hand, if our prior tends to other values of $\theta$, the posterior approaches the likelihood more slowly.
Dice
We want to compute the probability that a die with $K$ faces comes up with face $k$, using a set of observations $\mathcal{D}$. A single trial can be written as:
$$p(x = k|\theta) = Cat(x|\theta)$$
The likelihood of the data under the parameter is given, therefore, by the product:
$$p(\mathcal{D}|\theta) = \prod_k \theta_k^{N_k}$$
where $N_k$ is the number of trials that came up with face $k$. The posterior is given as:
$$p(\theta|\mathcal{D}) \propto p(\theta) \prod_k \theta_k^{N_k}$$
So, if we use the Dirichlet distribution, the prior is conjugate:
$$p(\theta) = Dir(\theta|\alpha) = \frac{1}{B(\alpha)} \prod_k \theta_k^{\alpha_k - 1} \mathbb{I}(\theta \in S_K)$$
If this is the case, the posterior becomes simply:
$$p(\theta|\mathcal{D}) = Dir(\theta|\alpha_1 + N_1, \dots, \alpha_K + N_K)$$
MAP
Using the method of Lagrange multipliers with the restriction that $\sum_k \theta_k = 1$, we can find the $\theta$ that maximizes the posterior:
$$\hat{\theta}_{k\mathrm{MAP}} = \frac{N_k + \alpha_k - 1}{N + \alpha_0 - K}$$
where $\alpha_0 = \sum_k \alpha_k$.
MLE
Using the same method to find $\theta$ that maximizes the likelihood we find the MLE estimator:
$$\hat{\theta}_{k\mathrm{MLE}} = \frac{N_k}{N}$$
This result is recovered by the MAP estimator when using a uniform prior $\alpha_k = 1$.
Posterior predictive
Finally, we compute the probability that the next dice trial comes up with face $k$:
$$p(x = k|\mathcal{D}) = \int p(x = k|\theta)p(\theta|\mathcal{D})d\theta = \int d\theta_k\ p(x = k|\theta_k) \int p(\theta|\mathcal{D}) d\theta_{-k} = \int \theta_k p(\theta_k|\mathcal{D})d\theta_k = \mathbb{E}[\theta_k|\mathcal{D}]$$
$$\therefore p(x = k|\mathcal{D}) = \frac{\alpha_k + N_k}{\alpha_0 + N}$$
This also avoids the zero-count problem, since it adds the prior to the observations. This is even more important when we have more categories to fit.
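As a quick illustration of the three estimators above (this sketch is not part of the original derivation, and the counts below are hypothetical values chosen only for the example):
import numpy as np
N_k = np.array([12, 8, 10, 9, 7, 14])   # hypothetical observed counts per face
alpha = np.ones_like(N_k)               # uniform Dirichlet prior, alpha_k = 1
N, K, alpha_0 = N_k.sum(), len(N_k), alpha.sum()
theta_mle = N_k / N                                  # MLE
theta_map = (N_k + alpha - 1) / (N + alpha_0 - K)    # MAP (mode of the Dirichlet posterior)
theta_pred = (N_k + alpha) / (N + alpha_0)           # posterior predictive p(x = k | D)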
Naive Bayes classifier
Suppose we are still working on the classification problem:
$$p(y = c|x, \theta) \propto p(x|y = c, \theta)\, p(y = c|\theta)$$
But this time $x$ is a feature vector of $D$ dimensions. Naive Bayes consists of assuming the features are conditionally independent given the class:
$$p(x|y = c, \theta) = \prod_d p(x_d|y=c, \theta_{dc})$$
In the case of $x$ being a real-valued vector, the conditional probability of each feature can be modeled by a normal distribution, $p(x|y = c, \theta) = \prod_d N(x_d|\mu_{dc}, \sigma^2_{dc})$. In the case of binary features we use $Ber(x_d|\mu_{dc})$, and in the case of categorical features we use $Cat(x_d|\mu_{dc})$, where $\mu$ is a vector.
Fitting the parameters
Now we show how to train a Naive Bayes classifier. We will see how to compute the MLE, the posterior and the MAP of this kind of model.
MLE
For a single data point, the likelihood is written as:
$$p(x_i, y_i|\theta) = p(y_i|\pi) \prod_d p(x_{id}|y_i, \theta_d) = \pi_c \prod_d p(x_{id}|\theta_{dc})$$
where $c$ is the class of the given example. This is because $p(y_i|\pi) = Cat(y_i|\pi)$. For the dataset $\mathcal{D}$, the log-likelihood is given by:
$$\log p(\mathcal{D}|\theta) = \sum_{c} N_c \log \pi_c + \sum_d \sum_c \sum_{i: y_i = c} \log p(x_{id}|\theta_{dc})$$
This decouples the terms in $\pi$ and $\theta$, so we can optimize them separately. Finding the $\pi$ that maximizes its term, we get:
$$\hat{\pi}_{c\mathrm{MLE}} = \frac{N_c}{N}$$
If we suppose all features are binary, then $p(x_{id}|\theta_{dc}) = Ber(x_{id}|\theta_{dc})$. Therefore, the parameters that maximize this term are:
$$\hat{\theta}_{dc\mathrm{MLE}} = \frac{N_{dc}}{N_c}$$
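A minimal sketch of these MLE formulas (not part of the original text; the binary dataset below is hypothetical):
import numpy as np
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])   # hypothetical binary features, shape (N, D)
y = np.array([0, 0, 1, 1])                                   # hypothetical class labels
classes = np.unique(y)
pi_mle = np.array([np.mean(y == c) for c in classes])            # N_c / N
theta_mle = np.array([X[y == c].mean(axis=0) for c in classes])  # N_dc / N_c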
Posterior
If we factor our prior in terms of $\pi$ and $\theta$, we get:
$$p(\theta) = p(\pi)\prod_d \prod_c p(\theta_{dc})$$
If we use $p(\pi) = Dir(\pi|\alpha)$ and $p(\theta_{dc}) = Beta(\theta_{dc}|\beta_0, \beta_1)$ we get a conjugate prior, that is, our posterior becomes:
$$p(\pi|\mathcal{D}) = Dir(\pi|N_1 + \alpha_1, \dots, N_C + \alpha_C)$$
$$p(\theta_{dc}|\mathcal{D}) = Beta(\theta_{dc}|(N_c - N_{dc}) +\beta_0, N_{dc} + \beta_1)$$
Therefore:
$$p(\theta) = Dir(\pi|N_1 + \alpha_1, \dots, N_C + \alpha_C) \prod_d \prod_c Beta(\theta_{dc}|(N_c - N_{dc}) +\beta_0, N_{dc} + \beta_1)$$
The MAP estimator of the $\pi$ parameter is given by the mode of its distribution:
$$\hat{\pi}_{c\mathrm{MAP}} = \frac{\alpha_c + N_c - 1}{\alpha_0 + N- K}$$
while the MAP estimator of the $\theta_{dc}$ parameters is:
$$\hat{\theta}_{dc\mathrm{MAP}} = \frac{(N_c - N_{dc}) + \beta_0 - 1}{N_c + \beta_0 + \beta_1 - 2}$$
Posterior predictive
When performing predictions, we aim to compute:
$$p(y=c|x, \mathcal{D}) = p(y=c|\mathcal{D}) \prod_d p(x_d|y=c, \mathcal{D})$$
The Bayesian way to compute these probabilities is to integrate over the unknown parameters of the model:
$$p(y=c|x, \mathcal{D}) = \left(\int Cat(y = c|\pi)p(\pi|\mathcal{D}) d\pi\right) \prod_d \left(\int Ber(x_d|y=c, \theta_{dc}) p(\theta_{dc}|\mathcal{D}) d\theta_{dc}\right)$$
As we saw in the previous sections, if $p(\pi|\mathcal{D})$ is Dirichlet and $p(\theta_{dc}|\mathcal{D})$ is Beta distributed, we can substitute each of these integrals by the mean value of the corresponding parameter:
$$p(y=c|x, \mathcal{D}) = \bar{\pi}_c \prod_d \bar{\theta}_{dc}^{\mathbb{I}(x_d = 1)} (1 - \bar{\theta}_{dc})^{\mathbb{I}(x_d = 0)}$$
where:
$$\bar{\pi}_c = \frac{N_c + \alpha_c}{N + \alpha_0}$$
$$\bar{\theta}_{dc} = \frac{N_{dc} + \beta_1}{N_c + \beta_0 + \beta_1}$$
These pseudo-counts in both factors result in less overfitting, showing the advantage of computing the prediction in the fully Bayesian way.
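A minimal sketch of this smoothed prediction (again not part of the original text; it uses the same hypothetical data as the MLE sketch and assumes $\alpha_c = \beta_0 = \beta_1 = 1$):
import numpy as np
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0]])   # hypothetical binary features
y = np.array([0, 0, 1, 1])                                   # hypothetical class labels
classes = np.unique(y)
alpha_c, beta_0, beta_1 = 1.0, 1.0, 1.0                      # hypothetical hyperparameters
N = len(y)
N_c = np.array([np.sum(y == c) for c in classes])
N_dc = np.array([X[y == c].sum(axis=0) for c in classes])
pi_bar = (N_c + alpha_c) / (N + alpha_c * len(classes))      # (N_c + alpha_c) / (N + alpha_0)
theta_bar = (N_dc + beta_1) / (N_c[:, None] + beta_0 + beta_1)
x_new = np.array([1, 0, 1])                                  # hypothetical test point
scores = pi_bar * np.prod(theta_bar**x_new * (1 - theta_bar)**(1 - x_new), axis=1)
p_class = scores / scores.sum()                              # p(y = c | x, D)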
Example: real valued features
In this example we predict the sex of a person given a vector of real valued features (height, weight and shoe size), using the Naive Bayes approach.
Data
End of explanation
mask_male = people.sex == 'M'
mask_female = people.sex == 'F'
prob_male = sum(mask_male)/number_of_people
prob_female = sum(mask_female)/number_of_people
Explanation: Sex
End of explanation
males_height = people.loc[mask_male, 'height']
males_height_dist = norm(loc=males_height.mean(), scale=males_height.std())
females_height = people.loc[mask_female, 'height']
females_height_dist = norm(loc=females_height.mean(), scale=females_height.std())
colors = sns.color_palette('RdBu', 10)
heights = np.linspace(4, 7, 100)
plt.plot(heights, females_height_dist.pdf(heights), label='female', color=colors[0])
plt.plot(heights, males_height_dist.pdf(heights), label='male', color=colors[-1])
plt.xlabel('Height')
plt.ylabel('PDF')
plt.title('Distribution of heights by sex')
plt.legend()
plt.show()
Explanation: Height
End of explanation
males_weight = people.loc[mask_male, 'weight']
males_weight_dist = norm(loc=males_weight.mean(), scale=males_weight.std())
females_weight = people.loc[mask_female, 'weight']
females_weight_dist = norm(loc=females_weight.mean(), scale=females_weight.std())
colors = sns.color_palette("RdBu", 10)
weights = np.linspace(60, 220, 100)
plt.plot(weights, females_weight_dist.pdf(weights), label='female', color=colors[0])
plt.plot(weights, males_weight_dist.pdf(weights), label='male', color=colors[-1])
plt.xlabel('Weight')
plt.ylabel('PDF')
plt.title('Distribution of weights by sex')
plt.legend()
plt.show()
Explanation: Weight
End of explanation
males_size = people.loc[mask_male, 'size']
males_size_dist = norm(loc=males_size.mean(), scale=males_size.std())
females_size = people.loc[mask_female, 'size']
females_size_dist = norm(loc=females_size.mean(), scale=females_size.std())
colors = sns.color_palette("RdBu", 10)
sizes = np.linspace(3, 15, 100)
plt.plot(sizes, females_size_dist.pdf(sizes), label='female', color= colors[0])
plt.plot(sizes, males_size_dist.pdf(sizes), label='male', color= colors[-1])
plt.xlabel('Shoe size')
plt.ylabel('PDF')
plt.title('Distribution of shoe size by sex')
plt.legend()
plt.show()
Explanation: Shoe size
End of explanation
person = namedtuple('person', ['height', 'weight', 'size'])
def male(person):
likelihood = males_height_dist.pdf(person.height) * \
males_weight_dist.pdf(person.weight) * \
males_size_dist.pdf(person.size)
prior = prob_male
return likelihood * prior
def female(person):
likelihood = females_height_dist.pdf(person.height) * \
females_weight_dist.pdf(person.weight) * \
females_size_dist.pdf(person.size)
prior = prob_female
return likelihood * prior
mary = person(5.68, 120, 7.5)
mary_male, mary_female = male(mary), female(mary)
mary_total = mary_male + mary_female
print("The probability of Mary be a male is:", mary_male/mary_total)
print("The probability of Mary be a female is:", mary_female/mary_total)
john = person(7, 200, 14)
john_male, john_female = male(john), female(john)
john_total = john_male + john_female
print("The probability of John be a male is:", john_male/john_total)
print("The probability of John be a female is:", john_female/john_total)
Explanation: Predicting
End of explanation |
4,565 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
logistic Regression using sklearn
| Python Code::
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
model.fit(X_train, y_train)
|
4,566 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate Asym vs. Emin from bhm_e
Rewriting calc_Asym_vs_emin_energies for bhm_e.
Generate Asym_df for a specific dataset.
P. Schuster
July 18, 2018
Step1: Load data
Step2: Functionalize | Python Code:
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='ticks')
import sys
import os
import os.path
import scipy.io as sio
import time
import numpy as np
np.set_printoptions(threshold=np.nan) # print entire matrices
import pandas as pd
from tqdm import *
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_math as bicorr_math
import bicorr_plot as bicorr_plot
import bicorr_e as bicorr_e
import bicorr_sums as bicorr_sums
%load_ext autoreload
%autoreload 2
Explanation: Calculate Asym vs. Emin from bhm_e
Rewriting calc_Asym_vs_emin_energies for bhm_e.
Generate Asym_df for a specific dataset.
P. Schuster
July 18, 2018
End of explanation
det_df = bicorr.load_det_df()
chList, fcList, detList, num_dets, num_det_pairs = bicorr.build_ch_lists()
dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df)
singles_hist_e_n, e_bin_edges, dict_det_to_index, dict_index_to_det = bicorr_e.load_singles_hist_both(filepath = '../analysis/Cf072115_to_Cf072215b/datap/',plot_flag=True, save_flag=True)
bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap')
bhp_e = np.zeros((len(det_df),len(e_bin_edges)-1,len(e_bin_edges)-1))
for index in det_df.index.values: # index is same as in `bhm`
bhp_e[index,:,:] = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,pair_is=[index])[0]
emins = np.arange(0.5,5,.2)
emax = 12
print(emins)
angle_bin_edges = np.arange(8,190,10)
print(angle_bin_edges)
Explanation: Load data
End of explanation
Asym_df = bicorr_sums.calc_Asym_vs_emin_energies(det_df, dict_index_to_det, singles_hist_e_n, e_bin_edges, bhp_e, e_bin_edges, emins, emax, angle_bin_edges, plot_flag=True, show_flag = True, save_flag=False)
Asym_df.head()
Explanation: Functionalize
End of explanation |
4,567 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
downloading small dataset of less than 100MB from tensorflow_datasets
| Python Code::
import tensorflow_datasets as tfds
ds, meta = tfds.load('citrus_leaves', with_info=True, split='train', shuffle_files=True)
|
4,568 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
NiftyNet provides a "CRF-RNN" layer for image segmentation, following the idea proposed in
Zheng et al., Conditional Random Fields as Recurrent Neural Networks, ICCV 2015.
Different from many open-source implementations of the method, NiftyNet's version implements the core algorithm Fast High‐Dimensional Gaussian Filtering with Numpy and TensorFlow APIs. One of the advantages is that the layer is ready-to-use -- once these common Python packages are (pip-)installed.
This tutorial demonstrates the basic usage of this layer.
This demo requires two files, a CT image 100_CT.nii and a 'noisy' logits map 100__niftynet_out.nii.gz, to be placed in demo/crf_as_rnn folder.
The files (~100Mb) can be downloaded from https
Step1: The CT volume has 144x144x144 voxels,
the predicted logits has nine channels corresponding
to eight types of organs (plus a channel for background).
As a demo, let's only study a slice of the volume
Step2: Visualisation of the slice
Step3: Build the graph and initialise a CRFAsRNNLayer
Step4: Let's visualise the outputs
Step5: Effects of weighting of the kernels
The pairwise potential of Dense CRF consists of two kernels
Step6: Visualise the effects of using spatial kernels only
Step7: Visualise the effects of using bilateral kernels only | Python Code:
import sys
niftynet_path = '/Users/demo/Documents/NiftyNet/'
sys.path.insert(0, niftynet_path)
from niftynet.layer.crf import CRFAsRNNLayer
import nibabel as nib
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
ct_image = nib.load('100_CT.nii').get_data()
logits = nib.load('100__niftynet_out.nii.gz').get_data()
print('CT image shape: {}'.format(ct_image.shape))
print('Predicted logits shape: {}'.format(logits.shape))
Explanation: NiftyNet provides a "CRF-RNN" layer for image segmentation, following the idea proposed in
Zheng et al., Conditional Random Fields as Recurrent Neural Networks, ICCV 2015.
Different from many open-source implementations of the method, NiftyNet's version implements the core algorithm Fast High‐Dimensional Gaussian Filtering with Numpy and TensorFlow APIs. One of the advantages is that the layer is ready-to-use -- once these common Python packages are (pip-)installed.
This tutorial demonstrates the basic usage of this layer.
This demo requires two files, a CT image 100_CT.nii and a 'noisy' logits map 100__niftynet_out.nii.gz, to be placed in demo/crf_as_rnn folder.
The files (~100Mb) can be downloaded from https://www.dropbox.com/s/lf1hvfyvuo9lsc1/demo_prob.tar.gz?dl=1
CRF inferences
Given a CRF model and some noisy segmentation outputs, model inference is to find the best underlying 'true' segmentation label that minimises CRF's energy.
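In the dense CRF used here, the energy of a labelling $x$ is typically written as $E(x) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j)$, where the unary potentials $\psi_u$ come from the noisy logits and the pairwise potentials $\psi_p$ encourage consistent labels for similar, nearby voxels; inference then finds the labelling that (approximately) minimises $E(x)$. (This is the standard dense-CRF energy, added here for reference rather than taken from the NiftyNet documentation.)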
Load and visualise the data
End of explanation
slice_idx = 73
ct_slice = np.transpose(ct_image[::-1, ::-1, slice_idx])
ct_logits = np.transpose(logits[::-1, ::-1, slice_idx, 0, :], axes=(1, 0, 2))
print('CT slice shape: {}'.format(ct_slice.shape))
print('Predicted logits shape: {}'.format(ct_logits.shape))
Explanation: The CT volume has 144x144x144 voxels,
the predicted logits has nine channels corresponding
to eight types of organs (plus a channel for background).
As a demo, let's only study a slice of the volume:
End of explanation
f, axes = plt.subplots(1, 2, figsize=(10,5))
axes[0].imshow(ct_slice, cmap='gray')
axes[1].imshow(np.argmax(ct_logits, -1), cmap='Accent')
Explanation: Visualisation of the slice:
End of explanation
# make a tensor with batch size 1 and channel 9
tf_logits = tf.constant(ct_logits, dtype=tf.float32)
tf_logits = tf.expand_dims(tf_logits, axis=0)
print(tf_logits)
# make a tensor of the CT intensity
tf_features = tf.constant(ct_slice, dtype=tf.float32)
tf_features = tf.expand_dims(tf_features, axis=0)
tf_features = tf.expand_dims(tf_features, axis=-1)
print(tf_features)
crf_layer = CRFAsRNNLayer(alpha=160., beta=3., gamma=3., T=5,
w_init=[1.0 * np.ones(9), 3.0 * np.ones(9)])
smoothed_logits = crf_layer(tf_features, tf_logits)
smoothed_label = tf.cast(tf.argmax(smoothed_logits, -1), tf.int32)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
output_label = sess.run(smoothed_label)
Explanation: Build the graph and initialise a CRFAsRNNLayer:
CRFAsRNNLayer requires two inputs:
-- image features [batch x spatial_dims x n_features]
-- initial segmentation logits[batch x spatial_dims x n_classes].
End of explanation
plt.imshow(output_label[0,...], cmap='Accent')
Explanation: Let's visualise the outputs:
End of explanation
def varying_w_init(w_bilateral=1.0, w_spatial=3.0):
crf_layer = CRFAsRNNLayer(alpha=5., beta=5., gamma=3., T=5,
w_init=[w_bilateral * np.ones(9), w_spatial * np.ones(9)])
smoothed_logits = crf_layer(tf_features, tf_logits)
smoothed_label = tf.cast(tf.argmax(smoothed_logits, -1), tf.int32)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
output_label = sess.run(smoothed_label)
return output_label[0]
Explanation: Effects of weighting of the kernels
The pairwise potential of Dense CRF consists of two kernels:
-- bilateral kernel, encourages nearby pixels with similar features/colours to have the same label.
-- spatial kernel, encourages nearby pixels to have the same label.
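For reference, in the dense-CRF formulation that CRF-as-RNN builds on (Krähenbühl and Koltun), the pairwise kernel is usually written as
$$k(\mathbf{f}_i, \mathbf{f}_j) = w^{(1)} \exp\left(-\frac{|p_i - p_j|^2}{2\theta_\alpha^2} - \frac{|I_i - I_j|^2}{2\theta_\beta^2}\right) + w^{(2)} \exp\left(-\frac{|p_i - p_j|^2}{2\theta_\gamma^2}\right)$$
where $p$ are voxel positions and $I$ are intensities. The alpha, beta and gamma arguments of CRFAsRNNLayer presumably correspond to $\theta_\alpha$, $\theta_\beta$ and $\theta_\gamma$, and w_init to the kernel weights $w^{(1)}$ and $w^{(2)}$ — this mapping is an assumption made for illustration, not a statement from the NiftyNet documentation.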
To see how they change the final segmentation output, we first define a wrapper to compute the output given specific weights, and then run the layer with different weight combinations.
End of explanation
labels = []
w_spatials = [0.0, 1.0, 5.0, 10.0, 100.0, 200.0]
for w_spatial in w_spatials:
labels.append(varying_w_init(0.0, w_spatial))
f, axes = plt.subplots(2, 4, figsize=(15,7))
axes[0][0].imshow(ct_slice, cmap='gray');
axes[0][0].set_title('CT'); axes[0][0].set_axis_off()
axes[0][1].imshow(np.argmax(ct_logits, -1), cmap='Accent');
axes[0][1].set_title('Initial seg'); axes[0][1].set_axis_off()
for idx, label in enumerate(labels):
i = idx + 2
c_axes = axes[i//4][i%4]
c_axes.imshow(label, cmap='Accent');
c_axes.set_title('spatial_w = {}'.format(w_spatials[idx]))
c_axes.set_axis_off()
Explanation: Visualise the effects of using spatial kernels only
End of explanation
labels = []
w_bilaterals = [0.0, 1.0, 5.0, 10.0, 100.0, 200.0]
for w_bilateral in w_bilaterals:
labels.append(varying_w_init(w_bilateral, 0.0))
f, axes = plt.subplots(2, 4, figsize=(15,7))
axes[0][0].imshow(ct_slice, cmap='gray');
axes[0][0].set_title('CT'); axes[0][0].set_axis_off()
axes[0][1].imshow(np.argmax(ct_logits, -1), cmap='Accent');
axes[0][1].set_title('Initial seg'); axes[0][1].set_axis_off()
for idx, label in enumerate(labels):
i = idx + 2
c_axes = axes[i//4][i%4]
c_axes.imshow(label, cmap='Accent');
c_axes.set_title('bilateral_w = {}'.format(w_bilaterals[idx]))
c_axes.set_axis_off()
Explanation: Visualise the effects of using bilateral kernels only
End of explanation |
4,569 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
Fire up GraphLab Create
Step1: Read some product review data
Loading reviews for a set of baby products.
Step2: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
Step3: Build the word count vector for each review
Step4: Examining the reviews for most-sold product
Step5: Build a sentiment classifier
Step6: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
Step7: Let's train the sentiment classifier
Step8: Evaluate the sentiment model
Step9: Applying the learned model to understand sentiment for Giraffe
Step10: Sort the reviews based on the predicted sentiment and explore
Step11: Most positive reviews for the giraffe
Step12: Show most negative reviews for giraffe | Python Code:
import graphlab;
Explanation: Predicting sentiment from product reviews
Fire up GraphLab Create
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
products.head()
Explanation: Let's explore this data together
Data includes the product name, the review text and the rating of the review.
End of explanation
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
Explanation: Build the word count vector for each review
End of explanation
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
Explanation: Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
products['rating'].show(view='Categorical')
Explanation: Build a sentiment classifier
End of explanation
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
End of explanation
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
Explanation: Let's train the sentiment classifier
End of explanation
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
Explanation: Evaluate the sentiment model
End of explanation
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
Explanation: Most positive reviews for the giraffe
End of explanation
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
def selected_word_count(word):
    # Return a function that counts how many times `word` appears in a review's word_count dict.
    def count(word_counts):
        return word_counts.get(word, 0)
    return count
# Create one count column per selected word (the original cell only handled 'hate').
for word in selected_words:
    products[word] = products['word_count'].apply(selected_word_count(word))
products.head()
train_data,test_data = products.random_split(.8, seed=0)
selected_words = ['awesome', 'great', 'fantastic', 'amazing', 'love', 'horrible', 'bad', 'terrible', 'awful', 'wow', 'hate']
selected_words_model = graphlab.logistic_classifier.create(train_data,target='sentiment',features=selected_words,validation_set=test_data, )
selected_words_model['coefficients'].sort('value', ascending = True)
selected_words_model.evaluate(test_data)
sentiment_model.evaluate(test_data)
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
Explanation: Show most negative reviews for giraffe
End of explanation |
4,570 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Notebook
Step1: BQPlot
Examples here are shamelessly stolen from the amazing
Step2: ipyvolume | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from pandas.tools.plotting import scatter_matrix
from sklearn.datasets import load_boston
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('poster')
sns.set_style('whitegrid')
plt.rcParams['figure.figsize'] = 12, 8 # plotsize
import warnings
warnings.filterwarnings('ignore')
Explanation: Advanced Notebook
End of explanation
# mixed feelings about this import
import bqplot.pyplot as plt
import numpy as np
x = np.linspace(0, 2, 50)
y = x**2
fig = plt.figure()
scatter = plt.scatter(x, y)
plt.show()
fig.animation_duration = 500
scatter.y = 2 * x**.5
scatter.selected_style = {'stroke':'red', 'fill': 'orange'}
plt.brush_selector();
scatter.selected
scatter.selected = [1,2,10,40]
Explanation: BQPlot
Examples here are shamelessly stolen from the amazing: https://github.com/maartenbreddels/jupytercon-2017/blob/master/jupytercon2017-widgets.ipynb
End of explanation
import ipyvolume as ipv
N = 1000
x, y, z = np.random.random((3, N))
fig = ipv.figure()
scatter = ipv.scatter(x, y, z, marker='box')
ipv.show()
scatter.x = scatter.x - 0.5
scatter.x = x
scatter.color = "green"
scatter.size = 5
scatter.color = np.random.random((N,3))
scatter.size = 2
ex = ipv.datasets.animated_stream.fetch().data
ex.shape
ex[:, ::, ::4].shape
ipv.figure()
ipv.style.use('dark')
quiver = ipv.quiver(*ipv.datasets.animated_stream.fetch().data[:,::,::4], size=5)
ipv.animation_control(quiver, interval=200)
ipv.show()
ipv.style.use('light')
ipv.style.use('light')
quiver.geo = "cat"
N = 1000*1000
x, y, z = np.random.random((3, N)).astype('f4')
ipv.figure()
s = ipv.scatter(x, y, z, size=0.2)
ipv.show()
ipv.save("3d-example-plot.html")
!open 3d-example-plot.html
Explanation: ipyvolume
End of explanation |
4,571 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6 Jupyter Notebook</div>
Geotagging WiFi access points
<div class="alert alert-warning">
**This notebook contains advanced exercises that are only applicable to students who wish to deepen their understanding and qualify for bonus marks on this course.**
You will be able to achieve 100% for this notebook by successfully completing exercise 1. An optional, additional exercise can be completed to qualify for bonus marks.
</div>
Your completion of the notebook exercises will be graded based on your ability to do the following
Step1: 1. Single user review
For simplicity, review a single user's data, examine the properties of that data, and try to see if analysis yields any results of value. You should be familiar with both WiFi and location data, from previous exercises. As a result, they will be loaded and presented without extensive clarification.
1.1 Data exploration
Step2: 1.2 Remove the columns that you do not require
Step3: 1.3 Remove location records with poor accuracy
The accuracy reported in location records is interpreted counterintuitively. The higher the value, the less accurate the measurement. It denotes the radius of a circle within which 68% of the measurements (or one standard deviation) of the reported coordinates are present. Since the radius of an outdoor access point can reach 250 metres (Sapiezynski et al. 2015), it is safe to assume a more conservative measure of 200 metres (at an elevated risk of classifying routers as non-stationary).
The accuracy of location measurements is a major source of noise, hence the need for additional consideration. To do that, you need to plot the cumulative distribution of the accuracy.
Step4: It looks like the data set contains quite accurate location measurements, as a visual inspection of the histogram suggests that almost 90% of the observations have relatively good accuracy. It is therefore safe to select only the most accurate observations.
Using the Pandas "describe" function, you can get a quick view of the data set.
Step5: Next, determine how many observations to keep. The impact of using an accuracy value of 40 is demonstrated in the cell below.
Step6: 73% of the records meet your criteria, and will be used as a filter in subsequent steps.
Step7: Drop the accuracy column from the DataFrame, as it is no longer required.
Step8: Note
Step9: Having two DataFrames with time as an index, you can simply "join" them on the index columns by assigning the value “None” to the argument “on” as demonstrated below.
A JOIN clause is used to merge DataFrames by combining rows from two or more tables, based on a common field between them. The most common type of join is an "inner join". An "inner join" between two tables (A and B) returns all rows from A and B, where the join condition is met. That is, the intersection of the two tables.
<img src="innerjoin.png" alt="Drawing" style="width
Step10: It is time to account for possible noise, and remove the routers with sparse data (i.e., less than five observations, as in the referenced paper). Pandas "df.groupby()" will be used to do this.
Step11: 1.4.2 Compute the median location of each AP
Define stationary routers as ones for which 95% of observations fall inside a radius of 200 metres from the geometric median of all of the observations. In order to compute the median and calculate the distances, you will need to import the custom function from the "utils” directory.
In order to compute the geometric medians with the tools at your disposal, the "getmedian()" method needs properly-formatted data. That means a list of points, where each point is an array of "longitude", "latitude", and "altitude". The algorithm accepts input in degrees as units.
Step12: After completing the above, you will have your geomedians, and will be ready to move on to the last step, which is to filter out the non-stationary access points.
Step13: 1.4.3 Filter out the non-stationary routers
Identify stationary routers with 95% confidence, and a distance threshold of 200 metres. Start by computing the distances using the "haversine()" function.
Step14: Now, check how many of the routers pass the threshold. Iterate over the access points, and count the ratio of measurements outside the threshold to all measurements. They are assigned to "static" or "others" based on your confidence level.
Step15: The tagged routers (access points) can now be visualized on a map.
Step16: Note
Step17: You can now compare this to your computed values.
Step18: The results are acceptable. You can compute the actual distance between the points with the "haversine" function.
Step19: 2. Review of all users
Next, repeat the analysis from the previous section for all users. This analysis will be used in the next exercise.
<br>
<div class="alert alert-warning">
<b>Important
Step20: 2.2 Drop APs with sparse records
Remove access points with less than five observations.
Step21: 2.3 Compute medians
Compute the medians for each router in the combined data set, as per Section 1.4.2.
Step22: 2.4 Compute distances of observations to the calculated median
Compute the distance from the medians for each router in the combined dataset.
Step23: 2.5 Label APs as static or non-static
Step24: 2.6 Plot the static APs
Step25: <br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Review the visual output produced for all users. Try to find a static router located outside North America. Remember that you can scroll and zoom on the map.
Question | Python Code:
# Load relevant libraries.
from os import path
import pandas as pd
import numpy as np
import folium
import glob
from tqdm import tqdm
import random
%matplotlib inline
# Load custom modules.
import sys
sys.path.append('..')
from utils import getmedian, haversine
from utils import llaToECEF as coords_to_geomedian
from utils import ECEFTolla as geomedian_to_coords
from IPython.display import Image
# Define variable definitions.
wifi_path = '../data/dartmouth/wifi'
location_path = '../data/dartmouth/location/'
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Geotagging WiFi access points
<div class="alert alert-warning">
**This notebook contains advanced exercises that are only applicable to students who wish to deepen their understanding and qualify for bonus marks on this course.**
You will be able to achieve 100% for this notebook by successfully completing exercise 1. An optional, additional exercise can be completed to qualify for bonus marks.
</div>
Your completion of the notebook exercises will be graded based on your ability to do the following:
All students:
Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data?
Advanced students:
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Analyze: Are you able to pick the relevant method or library to resolve specific stated questions?
Create: Are you able to produce notebooks that serve as computational record of a session, and can be used to share your insights with others?
Notebook objectives
By the end of this notebook you will be expected to understand and apply the steps involved in geotagging, which are the following:
Match the records in time.
Compute the median location of each access point (AP).
Filter out the non-stationary routers.
List of exercises
Exercise 1: Identification of stationary WiFi routers.
Exercise 2 [Advanced]: Identification of non-stationary WiFi routers.
Notebook introduction
This notebook will use the same Dartmouth StudentLife data set, as in previous exercises. In this exercise, you will combine WiFi scans with location information to create a small database of WiFi access point (AP) locations, using Google's location services. You will be replicating the work of Piotr Sapieżyński et al. (2015).
You will start by importing the necessary modules and variable definitions.
<div class="alert alert-warning">
<b>Note</b>:<br>
It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears.
</div>
Load libraries and set options
In order to compute the median and calculate the distances in Section 1.4.2, you will need to import the custom function from the "utils” directory.
End of explanation
# Load WiFi data.
u00_wifi = pd.read_csv(path.join(wifi_path, 'wifi_u00.csv'))
u00_wifi.head(3)
# Load location data.
u00_loc = pd.read_csv(path.join(location_path, 'gps_u00.csv'))
u00_loc.head(3)
Explanation: 1. Single user review
For simplicity, review a single user's data, examine the properties of that data, and try to see if analysis yields any results of value. You should be familiar with both WiFi and location data, from previous exercises. As a result, they will be loaded and presented without extensive clarification.
1.1 Data exploration
End of explanation
# Remove columns from WiFi dataset.
u00_wifi.drop(['freq', 'level'], axis=1, inplace=True)
u00_wifi.head(3)
# Remove irrelevant columns from location dataset.
u00_loc.drop(['provider', 'network_type', 'bearing', 'speed', 'travelstate'], axis=1, inplace=True)
u00_loc.head(3)
Explanation: 1.2 Remove the columns that you do not require
End of explanation
# Plot histogram of accuracy observations.
u00_loc.accuracy.hist(cumulative=True, density=1, histtype='step', bins=100)
Explanation: 1.3 Remove location records with poor accuracy
The accuracy reported in location records is interpreted counterintuitively. The higher the value, the less accurate the measurement. It denotes the radius of a circle within which 68% of the measurements (or one standard deviation) of the reported coordinates are present. Since the radius of an outdoor access point can reach 250 metres (Sapiezynski et al. 2015), it is safe to assume a more conservative measure of 200 metres (at an elevated risk of classifying routers as non-stationary).
The accuracy of location measurements is a major source of noise, hence the need for additional consideration. To do that, you need to plot the cumulative distribution of the accuracy.
End of explanation
# Review the dataset with Pandas decribe function.
u00_loc.accuracy.describe()
Explanation: It looks like the data set contains quite accurate location measurements, as a visual inspection of the histogram suggests that almost 90% of the observations have relatively good accuracy. It is therefore safe to select only the most accurate observations.
Using the Pandas "describe" function, you can get a quick view of the data set.
End of explanation
# Determine the number of records meeting our threshold of 40 for accuracy.
result = len(u00_loc[u00_loc.accuracy <= 40]) / float(len(u00_loc))
print('Proportion of records that meet the criteria is {:.1f}%'.format(100*result))
Explanation: Next, determine how many observations to keep. The impact of using an accuracy value of 40 is demonstrated in the cell below.
End of explanation
# Make a copy of the original dataset before applying the filter.
u00_loc_raw = u00_loc.copy()
# Apply the filter.
u00_loc = u00_loc[u00_loc['accuracy'] <= 40]
# Get the lenghts of each of the data objects.
original_location_count = len(u00_loc_raw)
filtered_location_count = len(u00_loc)
print("Number of location observations before filtering: {}".format(original_location_count))
print("Number of observations remaining after filtering: {}".format(filtered_location_count))
Explanation: 73% of the records meet your criteria, and will be used as a filter in subsequent steps.
End of explanation
# Update the object to remove accuracy.
u00_loc.drop('accuracy', axis=1, inplace=True)
# Display the head of the new dataset.
u00_loc.head(3)
Explanation: Drop the accuracy column from the DataFrame, as it is no longer required.
End of explanation
# Set the index for WiFi.
u00_wifi = u00_wifi.set_index('time')
u00_wifi.head(3)
# Set the index for location.
u00_loc = u00_loc.set_index('time')
u00_loc.head(3)
Explanation: Note:
For certain methods, Pandas has the option of applying changes to data sets "inplace". While convenient, this feature should be used with care as you will no longer be able to re-execute earlier cells. The guiding principle is that you can use this feature in data cleaning and wrangling steps, where you no longer need to go back and revisit earlier steps.
Should you need to revisit earlier steps, you can either restart the notebook and execute all the cells up to that point, or only execute the cells needed to get the object in the required form to continue your analysis.
1.4 Geotagging
In order to geotag, location and WiFi readouts need to be matched based on the time of the observations. As in the paper by Sapiezynski et al. (2015), readouts will be constrained to those happening at exactly the same second, to reduce impact of readouts from moving vehicles.
There are three steps involved in geotagging:
1. Match the records in time.
2. Compute the median location of each AP.
3. Filter out the non-stationary routers.
These three steps will be explored in further detail in the following sections of this notebook.
1.4.1 Match the records
This requires the use of Pandas magic to join (much like SQL's join) the DataFrames based on time. First, use the time as the index with the "df.set_index()" method, and then join them with the "df.join()" method.
End of explanation
# Join the two data sets, print the number of records found and display the head of the new dataset.
u00_raw_geotags = u00_wifi.join(u00_loc, how='inner',on=None)
print('{} WiFi records found time matching location records.'.format(len(u00_raw_geotags)))
u00_raw_geotags.head(3)
Explanation: Having two DataFrames with time as an index, you can simply "join" them on the index columns by assigning the value “None” to the argument “on” as demonstrated below.
A JOIN clause is used to merge DataFrames by combining rows from two or more tables, based on a common field between them. The most common type of join is an "inner join". An "inner join" between two tables (A and B) returns all rows from A and B, where the join condition is met. That is, the intersection of the two tables.
<img src="innerjoin.png" alt="Drawing" style="width: 800px;"/>
End of explanation
# Create object u00_groups.
u00_groups = u00_raw_geotags.groupby('BSSID')
# Create a new object where filter criteria is met.
u00_geotags = u00_groups.filter(lambda gr: len(gr)>=5)
print("{} geotagged records remained after trimming for sparse data.".format(len(u00_geotags)))
print("They correspond to {} unique router APs".format(len(u00_groups)))
Explanation: It is time to account for possible noise, and remove the routers with sparse data (i.e., less than five observations, as in the referenced paper). Pandas "df.groupby()" will be used to do this.
End of explanation
# Create a new DataFrame with latitude and longitude.
u00_geo_medians = pd.DataFrame(columns=[u'latitude', u'longitude'])
# Transform the data set using the provided set of utilities.
for (BSSID, geotags) in u00_groups:
geotags = [row for row in np.array(geotags[['latitude', 'longitude', 'altitude']])]
geotags = [coords_to_geomedian(row) for row in geotags]
median = getmedian(geotags)
median = geomedian_to_coords(median)[:2]
u00_geo_medians.loc[BSSID] = median
Explanation: 1.4.2 Compute the median location of each AP
Define stationary routers as ones for which 95% of observations fall inside a radius of 200 metres from the geometric median of all of the observations. In order to compute the median and calculate the distances, you will need to import the custom function from the "utils” directory.
In order to compute the geometric medians with the tools at your disposal, the "getmedian()" method needs properly-formatted data. That means a list of points, where each point is an array of "longitude", "latitude", and "altitude". The algorithm accepts input in degrees as units.
End of explanation
# Display the head of the geomedians object.
u00_geo_medians.head(3)
Explanation: After completing the above, you will have your geomedians, and will be ready to move on to the last step, which is to filter out the non-stationary access points.
End of explanation
# Calculate the distances from the median.
u00_distances = {}
for BSSID, geotags in u00_groups:
u00_distances[BSSID] = []
(lat_median, lon_median) = u00_geo_medians.loc[BSSID]
for (lat, lon) in np.array(geotags[['latitude','longitude']]):
u00_distances[BSSID].append(haversine(lon, lat, lon_median, lat_median)*1000) # haversine() returns distance in [km]
Explanation: 1.4.3 Filter out the non-stationary routers
Identify stationary routers with 95% confidence, and a distance threshold of 200 metres. Start by computing the distances using the "haversine()" function.
End of explanation
# Group access points as static or non-static.
# Set the thresholds.
distance_threshold = 200
confidence_level = 0.95
# Create empty lists.
static = []
others = []
for BSSID, distances in u00_distances.items():
all_count = len(distances)
near_count = len(list(filter(lambda distance: distance <= distance_threshold, distances)))
if( near_count / all_count >= confidence_level ):
static.append(BSSID)
else:
others.append(BSSID)
# Print summary results.
print("We identified {} static routers and {} non-static (moved or mobile).".format(len(static), len(others)))
Explanation: Now, check how many of the routers pass the threshold. Iterate over the access points, and count the ratio of measurements outside the threshold to all measurements. They are assigned to "static" or "others" based on your confidence level.
End of explanation
# Plot the access points on a map.
map_center = list(u00_geo_medians.median())
routers_map = folium.Map(location=map_center, zoom_start=14)
# Add points to the map for each of the locations.
for router in static:
folium.CircleMarker(u00_geo_medians.loc[router], fill_color='red', radius=15, fill_opacity=0.5).add_to(routers_map)
#Display the map.
routers_map
Explanation: The tagged routers (access points) can now be visualized on a map.
End of explanation
# Set the provided location.
lat = 43.7068263
lon = -72.2868704
bssid1 = '00:01:36:57:be:88'
bssid2 = '00:01:36:57:be:87'
Explanation: Note:
In order to validate your results, you can compare the location with the known location (lat: 43.7068263, long:-72.2868704).
End of explanation
u00_geo_medians.loc[[bssid1, bssid2]]
Explanation: You can now compare this to your computed values.
End of explanation
# Calculate and display the difference between calculated and Google API provided locations.
lat_m1, lon_m1 = u00_geo_medians.loc[bssid1]
lat_m2, lon_m2 = u00_geo_medians.loc[bssid2]
print('Distance from the Google API provided location to our first router ' \
'estimation is {:2g}m'.format(haversine(lon,lat,lon_m1,lat_m1)*1000))
print('Distance from the Google API provided location to our second router ' \
'estimation is {:2g}m'.format(haversine(lon,lat,lon_m2,lat_m2)*1000))
Explanation: The results are acceptable. You can compute the actual distance between the points with the "haversine" function.
End of explanation
# Set variables.
all_geotags = pd.DataFrame(columns=['time','BSSID','latitude','longitude','altitude'])
all_geotags = all_geotags.set_index('time')
pcounter = 0
# Define function to build the dataset, all_geotags, using the input files supplied.
def build_ds(file_in, all_geotags):
# Get the user id.
user_id = path.basename(file_in)[5:-4]
# Read the WiFi and location data for the user.
wifi = pd.read_csv(file_in)
loc = pd.read_csv(path.join(location_path, 'gps_'+user_id+'.csv'))
# Filter location data not meeting the accuracy threshold.
loc = loc[loc.accuracy <= 40]
# Drop the columns not required.
wifi.drop(['freq', 'level'], axis=1, inplace=True)
loc.drop(['accuracy', 'provider', 'network_type', 'bearing', 'speed', 'travelstate'], axis=1, inplace=True)
# Index the datasets based on time.
loc = loc.set_index('time')
wifi = wifi.set_index('time')
# Join the datasets based on time index.
raw_tags = wifi.join(loc, how='inner')
# Return the dataset for the user.
return [raw_tags]
# Iterate through the files in the specified directory and append the results of the function to the all_geotags variable.
for f in tqdm(glob.glob(wifi_path + '/*.csv')):
# Append result from our function to all_geotags for each input file supplied.
all_geotags = all_geotags.append(build_ds(f, all_geotags))
Explanation: 2. Review of all users
Next, repeat the analysis from the previous section for all users. This analysis will be used in the next exercise.
<br>
<div class="alert alert-warning">
<b>Important:</b>
Please ensure that this is the only running notebook when performing this section, because you will require as many resources as possible to complete the next section. In the Orientation Module, you were introduced to the process required to shut down notebooks. That being said, you can shut down running notebooks by viewing the "Running" tab on your Jupyter Notebook directory view.
</div>
Note:
There will be less contextual information provided in this section, as the details have already been provided in Section 1 of this notebook
2.1 Load data for all users
This section utilizes two new libraries that do not fall within the scope of this course. Interested students can read more about glob, which is used to read files in the specified directory, and tqdm, which is used to render a progress bar. Set your variables, then create the required function to process the input files, and finally, execute this function to process all the files.
Note:
Processing a large amount of files or records can be time consuming. It is good practice to include progress bars to provide visual feedback where applicable.
End of explanation
print("{} all geotags found".format(len(all_geotags)))
all_groups = all_geotags.groupby('BSSID')
print("{} unique routers found".format(len(all_groups)))
# Drop sparsely populated access points.
all_geotags = all_groups.filter(lambda gr: len(gr)>=5)
all_groups = all_geotags.groupby('BSSID')
print("{} unique router APs remaining after dropping routers with sparse data".format(len(all_groups)))
Explanation: 2.2 Drop APs with sparse records
Remove access points with less than five observations.
End of explanation
# Create a new variable containing all the coordinates.
all_geo_medians = pd.DataFrame(columns=[u'latitude', u'longitude'])
# Compute the geomedians and add to all_geo_medians.
# Initiate progress bar.
with tqdm(total=len(all_groups)) as pbar:
# Iterate through data in all_groups as per single user example.
for i, data in enumerate(all_groups):
(BSSID, geotags) = data
geotags = [row for row in np.array(geotags[['latitude', 'longitude', 'altitude']])]
geotags = [coords_to_geomedian(row) for row in geotags]
median = getmedian(geotags)
median = geomedian_to_coords(median)[:2]
all_geo_medians.loc[BSSID] = median
pbar.update()
pbar.close()
Explanation: 2.3 Compute medians
Compute the medians for each router in the combined data set, as per Section 1.4.2.
End of explanation
# Calculate the distances from the median.
all_distances = {}
# Initiate progress bar.
with tqdm(total=len(all_groups)) as pbar:
# Iterate through data in all_groups as per single user example.
for i, data in enumerate(all_groups):
(BSSID, geotags) = data
all_distances[BSSID] = []
(lat_median, lon_median) = all_geo_medians.loc[BSSID]
for (lat, lon) in np.array(geotags[['latitude','longitude']]):
all_distances[BSSID].append(haversine(lon, lat, lon_median, lat_median)*1000)
pbar.update()
pbar.close()
Explanation: 2.4 Compute distances of observations to the calculated median
Compute the distance from the medians for each router in the combined dataset.
End of explanation
# Group access points as static or non-static.
# Set the thresholds.
distance_threshold = 200
confidence_level = 0.95
# Create empty lists.
all_static = []
all_others = []
for BSSID, distances in all_distances.items():
all_count = len(distances)
near_count = len(list(filter(lambda distance: distance <= distance_threshold, distances)))
if( near_count / all_count >= confidence_level ):
all_static.append(BSSID)
else:
all_others.append(BSSID)
# Print summary results.
print("We identified {} static routers and {} non-static (moved or mobile).".format(len(all_static), len(all_others)))
Explanation: 2.5 Label APs as static or non-static
End of explanation
# Plot the access points on a map.
all_map_center = list(all_geo_medians.median())
all_routers_map = folium.Map(location=all_map_center, zoom_start=10)
# Add 1000 randomly sampled points to a new variable.
random.seed(3)
rn_static = random.sample(all_static,1000)
# Add the points in rn_static to the map. A random seed value is used for reproducibility of results.
for router in rn_static:
folium.CircleMarker(all_geo_medians.loc[router], fill_color='red', radius=15, fill_opacity=0.5).add_to(all_routers_map)
# Display the map.
all_routers_map
Explanation: 2.6 Plot the static APs
End of explanation
# Your answer here.
# Please add as many cells as you require in this section.
# Your plot here.
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Review the visual output produced for all users. Try to find a static router located outside North America. Remember that you can scroll and zoom on the map.
Question: Are you able to locate a static router located outside North America? If so, where?
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
<br>
<div class="alert alert-info">
<b>Exercise 2 [Advanced] Start.</b>
</div>
<div class="alert alert-warning">
<b>Note</b>:<br>
This activity is for advanced students only and extra credit will be allocated. Students will not be penalized for not completing this activity.
</div>
Instructions
Can you identify moving BSSIDs and plot the observed locations on a map?
This is not a trivial task (compared to geotagging stationary routers) and there are many possible ways to perform the analysis.
Input : All data points for an access point.
Output: Boolean mobility status.
Perform agglomerative clustering. (This is not covered in the scope of this course.)
Discard clusters with n<5 observations as noise.
Compare pairwise timeframes in which each of the clusters were observed.
If at least two clusters overlap (in time), consider the AP as mobile.
Note:
Keep in mind that other, possibly better, solutions exist, and you are encouraged to provide your input.
Hints:
You can start by reviewing SciPy.
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster
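One possible rough sketch of this approach is shown below. It is not a reference solution — the thresholds and the degree-based distance conversion are assumptions made only for illustration:
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

def is_mobile(geotags, radius_m=200, min_obs=5):
    # geotags: DataFrame for one BSSID with a time index and latitude/longitude columns.
    points = geotags[['latitude', 'longitude']].values
    if len(points) < 2 * min_obs:
        return False
    # Single-linkage agglomerative clustering; ~111320 m per degree is a rough conversion.
    labels = fcluster(linkage(pdist(points), method='single'),
                      t=radius_m / 111320.0, criterion='distance')
    intervals = []
    for c in np.unique(labels):
        mask = labels == c
        if mask.sum() < min_obs:   # discard sparse clusters as noise
            continue
        times = geotags.index.values[mask]
        intervals.append((times.min(), times.max()))
    # The AP is considered mobile if two dense clusters overlap in time.
    for i in range(len(intervals)):
        for j in range(i + 1, len(intervals)):
            if intervals[i][0] <= intervals[j][1] and intervals[j][0] <= intervals[i][1]:
                return True
    return False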
End of explanation |
4,572 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SciPy - Library of scientific algorithms for Python
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http
Step1: Introduction
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are
Step2: If we only need to use part of the SciPy framework we can selectively include only those modules we are interested in. For example, to include the linear algebra package under the name la, we can do
Step3: Special functions
A large number of mathematical special functions are important for many computational physics problems. SciPy provides implementations of a very extensive set of special functions. For details, see the list of functions in the reference documentation at http
Step4: Integration
Numerical integration
Step5: The quad function takes a large number of optional arguments, which can be used to fine-tune the behaviour of the function (try help(quad) for details).
The basic usage is as follows
Step7: If we need to pass extra arguments to the integrand function we can use the args keyword argument
Step8: For simple functions we can use a lambda function (name-less function) instead of explicitly defining a function for the integrand
Step9: As shown in the example above, we can also use 'Inf' or '-Inf' as integral limits.
Higher-dimensional integration works in the same way
Step10: Note how we had to pass lambda functions for the limits for the y integration, since these in general can be functions of x.
Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs
Step11: A system of ODEs is usually formulated in standard form before it is attacked numerically. The standard form is
Step13: The equations of motion of the pendulum are given on the wiki page
Step14: Simple animation of the pendulum motion. We will see how to make better animations in Lecture 4.
Step16: Example
Step17: Fourier transform
Fourier transforms are one of the universal tools in computational physics, which appear over and over again in different contexts. SciPy provides functions for accessing the classic FFTPACK library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.
To use the fftpack module in a python program, include it using
Step18: To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section
Step19: Since the signal is real, the spectrum is symmetric. We therefore only need to plot the part that corresponds to the positive frequencies. To extract that part of the w and F we can use some of the indexing tricks for NumPy arrays that we saw in Lecture 2
Step20: As expected, we now see a peak in the spectrum that is centered around 1, which is the frequency we used in the damped oscillator example.
Linear algebra
The linear algebra module contains a lot of matrix related functions, including linear equation solving, eigenvalue solvers, matrix functions (for example matrix-exponentiation), a number of different decompositions (SVD, LU, cholesky), etc.
Detailed documentation is available at
Step21: We can also do the same with
$A X = B$
where $A, B, X$ are matrices
Step22: Eigenvalues and eigenvectors
The eigenvalue problem for a matrix $A$
Step23: The eigenvector corresponding to the $n$th eigenvalue (stored in evals[n]) is the $n$th column in evecs, i.e., evecs[
Step24: There are also more specialized eigensolvers, like the eigh for Hermitian matrices.
Matrix operations
Step25: Sparse matrices
Sparse matrices are often useful in numerical simulations dealing with large systems, if the problem can be described in matrix form where the matrices or vectors mostly contain zeros. SciPy has good support for sparse matrices, with basic linear algebra operations (such as equation solving, eigenvalue calculations, etc).
There are many possible strategies for storing sparse matrices in an efficient way. Some of the most common are the so-called coordinate form (COO), list of list (LIL) form, and compressed-sparse column CSC (and row, CSR). Each format has some advantages and disadvantages. Most computational algorithms (equation solving, matrix-matrix multiplication, etc) can be efficiently implemented using CSR or CSC formats, but they are not so intuitive and not so easy to initialize. So often a sparse matrix is initially created in COO or LIL format (where we can efficiently add elements to the sparse matrix data), and then converted to CSC or CSR before being used in real calculations.
For more information about these sparse formats, see e.g. http
Step26: More efficient way to create sparse matrices
Step27: Converting between different sparse matrix formats
Step28: We can compute with sparse matrices like with dense matrices
Step29: Optimization
Optimization (finding minima or maxima of a function) is a large field in mathematics, and optimization of complicated functions or in many variables can be rather involved. Here we will only look at a few very simple cases. For a more detailed introduction to optimization with SciPy see
Step30: Finding a minima
Let's first look at how to find the minima of a simple function of a single variable
Step31: We can use the fmin_bfgs function to find the minima of a function
Step32: We can also use the brent or fminbound functions. They have a bit different syntax and use different algorithms.
Step33: Finding a solution to a function
To find the root for a function of the form $f(x) = 0$ we can use the fsolve function. It requires an initial guess
Step34: Interpolation
Interpolation is simple and convenient in scipy
Step35: Statistics
The scipy.stats module contains a large number of statistical distributions, statistical functions and tests. For a complete documentation of its features, see http
Step36: Statistics
Step37: Statistical tests
Test if two sets of (independent) random data come from the same distribution
Step38: Since the p value is very large we cannot reject the null hypothesis that the two sets of random data have the same mean.
To test if the mean of a single sample of data has mean 0.1 (the true mean is 0.0)
Step39: Low p-value means that we can reject the hypothesis that the mean of Y is 0.1.
Step40: Further reading
http | Python Code:
# what is this line all about? Answer in lecture 4
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import Image
Explanation: SciPy - Library of scientific algorithms for Python
J.R. Johansson (jrjohansson at gmail.com)
The latest version of this IPython notebook lecture is available at http://github.com/jrjohansson/scientific-python-lectures.
The other notebooks in this lecture series are indexed at http://jrjohansson.github.io.
End of explanation
from scipy import *
Explanation: Introduction
The SciPy framework builds on top of the low-level NumPy framework for multidimensional arrays, and provides a large number of higher-level scientific algorithms. Some of the topics that SciPy covers are:
Special functions (scipy.special)
Integration (scipy.integrate)
Optimization (scipy.optimize)
Interpolation (scipy.interpolate)
Fourier Transforms (scipy.fftpack)
Signal Processing (scipy.signal)
Linear Algebra (scipy.linalg)
Sparse Eigenvalue Problems (scipy.sparse)
Statistics (scipy.stats)
Multi-dimensional image processing (scipy.ndimage)
File IO (scipy.io)
Each of these submodules provides a number of functions and classes that can be used to solve problems in their respective topics.
In this lecture we will look at how to use some of these subpackages.
To access the SciPy package in a Python program, we start by importing everything from the scipy module.
End of explanation
import scipy.linalg as la
Explanation: If we only need to use part of the SciPy framework we can selectively include only those modules we are interested in. For example, to include the linear algebra package under the name la, we can do:
End of explanation
#
# The scipy.special module includes a large number of Bessel-functions
# Here we will use the functions jn and yn, which are the Bessel functions
# of the first and second kind and real-valued order. We also include the
# function jn_zeros and yn_zeros that gives the zeroes of the functions jn
# and yn.
#
from scipy.special import jn, yn, jn_zeros, yn_zeros
n = 0 # order
x = 0.0
# Bessel function of first kind
print "J_%d(%f) = %f" % (n, x, jn(n, x))
x = 1.0
# Bessel function of second kind
print "Y_%d(%f) = %f" % (n, x, yn(n, x))
x = linspace(0, 10, 100)
fig, ax = plt.subplots()
for n in range(4):
ax.plot(x, jn(n, x), label=r"$J_%d(x)$" % n)
ax.legend();
# zeros of Bessel functions
n = 0 # order
m = 4 # number of roots to compute
jn_zeros(n, m)
Explanation: Special functions
A large number of mathematical special functions are important for many computational physics problems. SciPy provides implementations of a very extensive set of special functions. For details, see the list of functions in the reference documentation at http://docs.scipy.org/doc/scipy/reference/special.html#module-scipy.special.
To demonstrate the typical usage of special functions we will look in more detail at the Bessel functions:
End of explanation
from scipy.integrate import quad, dblquad, tplquad
Explanation: Integration
Numerical integration: quadrature
Numerical evaluation of a function of the type
$\displaystyle \int_a^b f(x) dx$
is called numerical quadrature, or simply quadrature. SciPy provides a series of functions for different kinds of quadrature, for example the quad, dblquad and tplquad for single, double and triple integrals, respectively.
End of explanation
# define a simple function for the integrand
def f(x):
return x
x_lower = 0 # the lower limit of x
x_upper = 1 # the upper limit of x
val, abserr = quad(f, x_lower, x_upper)
print "integral value =", val, ", absolute error =", abserr
Explanation: The quad function takes a large number of optional arguments, which can be used to fine-tune the behaviour of the function (try help(quad) for details).
The basic usage is as follows:
End of explanation
def integrand(x, n):
Bessel function of first kind and order n.
return jn(n, x)
x_lower = 0 # the lower limit of x
x_upper = 10 # the upper limit of x
val, abserr = quad(integrand, x_lower, x_upper, args=(3,))
print val, abserr
Explanation: If we need to pass extra arguments to integrand function we can use the args keyword argument:
End of explanation
val, abserr = quad(lambda x: exp(-x ** 2), -Inf, Inf)
print "numerical =", val, abserr
analytical = sqrt(pi)
print "analytical =", analytical
Explanation: For simple functions we can use a lambda function (name-less function) instead of explicitly defining a function for the integrand:
End of explanation
def integrand(x, y):
return exp(-x**2-y**2)
x_lower = 0
x_upper = 10
y_lower = 0
y_upper = 10
val, abserr = dblquad(integrand, x_lower, x_upper, lambda x : y_lower, lambda x: y_upper)
print val, abserr
Explanation: As shown in the example above, we can also use 'Inf' or '-Inf' as integral limits.
Higher-dimensional integration works in the same way:
End of explanation
from scipy.integrate import odeint, ode
Explanation: Note how we had to pass lambda functions for the limits for the y integration, since these in general can be functions of x.
Ordinary differential equations (ODEs)
SciPy provides two different ways to solve ODEs: an API based on the function odeint, and an object-oriented API based on the class ode. Usually odeint is easier to get started with, but the ode class offers some finer level of control.
Here we will use the odeint functions. For more information about the class ode, try help(ode). It does pretty much the same thing as odeint, but in an object-oriented fashion.
To use odeint, first import it from the scipy.integrate module
End of explanation
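As a small aside (not part of the original lecture), here is a minimal sketch of the object-oriented ode class mentioned above, applied to the toy problem y' = -2y. The integrator name 'dopri5' and the step size are illustrative choices.
# --- aside: the object-oriented `ode` interface on a toy problem ---
from scipy.integrate import ode

def g(t, y):                           # note: ode expects f(t, y), while odeint expects f(y, t)
    return -2.0 * y

r = ode(g).set_integrator('dopri5')    # an explicit Runge-Kutta integrator
r.set_initial_value(1.0, 0.0)          # y(0) = 1
while r.successful() and r.t < 1.0:
    r.integrate(r.t + 0.1)             # step the solution forward in increments of 0.1
r.t, r.y                               # y(1) should be close to exp(-2) ~ 0.135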
Image(url='http://upload.wikimedia.org/wikipedia/commons/c/c9/Double-compound-pendulum-dimensioned.svg')
Explanation: A system of ODEs is usually formulated in standard form before it is attacked numerically. The standard form is:
$y' = f(y, t)$
where
$y = [y_1(t), y_2(t), ..., y_n(t)]$
and $f$ is some function that gives the derivatives of the function $y_i(t)$. To solve an ODE we need to know the function $f$ and an initial condition, $y(0)$.
Note that higher-order ODEs can always be written in this form by introducing new variables for the intermediate derivatives.
Once we have defined the Python function f and array y_0 (that is $f$ and $y(0)$ in the mathematical formulation), we can use the odeint function as:
y_t = odeint(f, y_0, t)
where t is an array with time-coordinates for which to solve the ODE problem. y_t is an array with one row for each point in time in t, where each column corresponds to a solution y_i(t) at that point in time.
We will see how we can implement f and y_0 in Python code in the examples below.
Example: double pendulum
Let's consider a physical example: The double compound pendulum, described in some detail here: http://en.wikipedia.org/wiki/Double_pendulum
End of explanation
g = 9.82
L = 0.5
m = 0.1
def dx(x, t):
The right-hand side of the pendulum ODE
x1, x2, x3, x4 = x[0], x[1], x[2], x[3]
dx1 = 6.0/(m*L**2) * (2 * x3 - 3 * cos(x1-x2) * x4)/(16 - 9 * cos(x1-x2)**2)
dx2 = 6.0/(m*L**2) * (8 * x4 - 3 * cos(x1-x2) * x3)/(16 - 9 * cos(x1-x2)**2)
dx3 = -0.5 * m * L**2 * ( dx1 * dx2 * sin(x1-x2) + 3 * (g/L) * sin(x1))
dx4 = -0.5 * m * L**2 * (-dx1 * dx2 * sin(x1-x2) + (g/L) * sin(x2))
return [dx1, dx2, dx3, dx4]
# choose an initial state
x0 = [pi/4, pi/2, 0, 0]
# time coordinate to solve the ODE for: from 0 to 10 seconds
t = linspace(0, 10, 250)
# solve the ODE problem
x = odeint(dx, x0, t)
# plot the angles as a function of time
fig, axes = plt.subplots(1,2, figsize=(12,4))
axes[0].plot(t, x[:, 0], 'r', label="theta1")
axes[0].plot(t, x[:, 1], 'b', label="theta2")
x1 = + L * sin(x[:, 0])
y1 = - L * cos(x[:, 0])
x2 = x1 + L * sin(x[:, 1])
y2 = y1 - L * cos(x[:, 1])
axes[1].plot(x1, y1, 'r', label="pendulum1")
axes[1].plot(x2, y2, 'b', label="pendulum2")
axes[1].set_ylim([-1, 0])
axes[1].set_xlim([1, -1]);
Explanation: The equations of motion of the pendulum are given on the wiki page:
${\dot \theta_1} = \frac{6}{m\ell^2} \frac{ 2 p_{\theta_1} - 3 \cos(\theta_1-\theta_2) p_{\theta_2}}{16 - 9 \cos^2(\theta_1-\theta_2)}$
${\dot \theta_2} = \frac{6}{m\ell^2} \frac{ 8 p_{\theta_2} - 3 \cos(\theta_1-\theta_2) p_{\theta_1}}{16 - 9 \cos^2(\theta_1-\theta_2)}.$
${\dot p_{\theta_1}} = -\frac{1}{2} m \ell^2 \left [ {\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + 3 \frac{g}{\ell} \sin \theta_1 \right ]$
${\dot p_{\theta_2}} = -\frac{1}{2} m \ell^2 \left [ -{\dot \theta_1} {\dot \theta_2} \sin (\theta_1-\theta_2) + \frac{g}{\ell} \sin \theta_2 \right]$
To make the Python code simpler to follow, let's introduce new variable names and the vector notation: $x = [\theta_1, \theta_2, p_{\theta_1}, p_{\theta_2}]$
${\dot x_1} = \frac{6}{m\ell^2} \frac{ 2 x_3 - 3 \cos(x_1-x_2) x_4}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_2} = \frac{6}{m\ell^2} \frac{ 8 x_4 - 3 \cos(x_1-x_2) x_3}{16 - 9 \cos^2(x_1-x_2)}$
${\dot x_3} = -\frac{1}{2} m \ell^2 \left [ {\dot x_1} {\dot x_2} \sin (x_1-x_2) + 3 \frac{g}{\ell} \sin x_1 \right ]$
${\dot x_4} = -\frac{1}{2} m \ell^2 \left [ -{\dot x_1} {\dot x_2} \sin (x_1-x_2) + \frac{g}{\ell} \sin x_2 \right]$
End of explanation
from IPython.display import display, clear_output
import time
fig, ax = plt.subplots(figsize=(4,4))
for t_idx, tt in enumerate(t[:200]):
x1 = + L * sin(x[t_idx, 0])
y1 = - L * cos(x[t_idx, 0])
x2 = x1 + L * sin(x[t_idx, 1])
y2 = y1 - L * cos(x[t_idx, 1])
ax.cla()
ax.plot([0, x1], [0, y1], 'r.-')
ax.plot([x1, x2], [y1, y2], 'b.-')
ax.set_ylim([-1.5, 0.5])
ax.set_xlim([1, -1])
clear_output()
display(fig)
time.sleep(0.1)
Explanation: Simple animation of the pendulum motion. We will see how to make better animation in Lecture 4.
End of explanation
def dy(y, t, zeta, w0):
The right-hand side of the damped oscillator ODE
x, p = y[0], y[1]
dx = p
dp = -2 * zeta * w0 * p - w0**2 * x
return [dx, dp]
# initial state:
y0 = [1.0, 0.0]
# time coordinate to solve the ODE for
t = linspace(0, 10, 1000)
w0 = 2*pi*1.0
# solve the ODE problem for three different values of the damping ratio
y1 = odeint(dy, y0, t, args=(0.0, w0)) # undamped
y2 = odeint(dy, y0, t, args=(0.2, w0)) # under damped
y3 = odeint(dy, y0, t, args=(1.0, w0)) # critial damping
y4 = odeint(dy, y0, t, args=(5.0, w0)) # over damped
fig, ax = plt.subplots()
ax.plot(t, y1[:,0], 'k', label="undamped", linewidth=0.25)
ax.plot(t, y2[:,0], 'r', label="under damped")
ax.plot(t, y3[:,0], 'b', label=r"critical damping")
ax.plot(t, y4[:,0], 'g', label="over damped")
ax.legend();
Explanation: Example: Damped harmonic oscillator
ODE problems are important in computational physics, so we will look at one more example: the damped harmonic oscillation. This problem is well described on the wiki page: http://en.wikipedia.org/wiki/Damping
The equation of motion for the damped oscillator is:
$\displaystyle \frac{\mathrm{d}^2x}{\mathrm{d}t^2} + 2\zeta\omega_0\frac{\mathrm{d}x}{\mathrm{d}t} + \omega^2_0 x = 0$
where $x$ is the position of the oscillator, $\omega_0$ is the frequency, and $\zeta$ is the damping ratio. To write this second-order ODE on standard form we introduce $p = \frac{\mathrm{d}x}{\mathrm{d}t}$:
$\displaystyle \frac{\mathrm{d}p}{\mathrm{d}t} = - 2\zeta\omega_0 p - \omega^2_0 x$
$\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = p$
In the implementation of this example we will add extra arguments to the RHS function for the ODE, rather than using global variables as we did in the previous example. As a consequence of the extra arguments to the RHS, we need to pass a keyword argument args to the odeint function:
End of explanation
from numpy.fft import fftfreq
from scipy.fftpack import *
Explanation: Fourier transform
Fourier transforms are one of the universal tools in computational physics, which appear over and over again in different contexts. SciPy provides functions for accessing the classic FFTPACK library from NetLib, which is an efficient and well tested FFT library written in FORTRAN. The SciPy API has a few additional convenience functions, but overall the API is closely related to the original FORTRAN library.
To use the fftpack module in a python program, include it using:
End of explanation
N = len(t)
dt = t[1]-t[0]
# calculate the fast fourier transform
# y2 is the solution to the under-damped oscillator from the previous section
F = fft(y2[:,0])
# calculate the frequencies for the components in F
w = fftfreq(N, dt)
fig, ax = plt.subplots(figsize=(9,3))
ax.plot(w, abs(F));
Explanation: To demonstrate how to do a fast Fourier transform with SciPy, let's look at the FFT of the solution to the damped oscillator from the previous section:
End of explanation
indices = where(w > 0) # select only indices for elements that correspond to positive frequencies
w_pos = w[indices]
F_pos = F[indices]
fig, ax = plt.subplots(figsize=(9,3))
ax.plot(w_pos, abs(F_pos))
ax.set_xlim(0, 5);
Explanation: Since the signal is real, the spectrum is symmetric. We therefore only need to plot the part that corresponds to the positive frequencies. To extract that part of the w and F we can use some of the indexing tricks for NumPy arrays that we saw in Lecture 2:
End of explanation
from scipy.linalg import *
A = array([[1,2,3], [4,5,6], [7,8,9]])
b = array([1,2,3])
x = solve(A, b)
x
# check
dot(A, x) - b
Explanation: As expected, we now see a peak in the spectrum that is centered around 1, which is the frequency we used in the damped oscillator example.
Linear algebra
The linear algebra module contains a lot of matrix related functions, including linear equation solving, eigenvalue solvers, matrix functions (for example matrix-exponentiation), a number of different decompositions (SVD, LU, cholesky), etc.
Detailed documentation is available at: http://docs.scipy.org/doc/scipy/reference/linalg.html
Here we will look at how to use some of these functions:
Linear equation systems
Linear equation systems on the matrix form
$A x = b$
where $A$ is a matrix and $x,b$ are vectors can be solved like:
End of explanation
A = rand(3,3)
B = rand(3,3)
X = solve(A, B)
X
# check
norm(dot(A, X) - B)
Explanation: We can also do the same with
$A X = B$
where $A, B, X$ are matrices:
End of explanation
evals = eigvals(A)
evals
evals, evecs = eig(A)
evals
evecs
Explanation: Eigenvalues and eigenvectors
The eigenvalue problem for a matrix $A$:
$\displaystyle A v_n = \lambda_n v_n$
where $v_n$ is the $n$th eigenvector and $\lambda_n$ is the $n$th eigenvalue.
To calculate eigenvalues of a matrix, use the eigvals and for calculating both eigenvalues and eigenvectors, use the function eig:
End of explanation
n = 1
norm(dot(A, evecs[:,n]) - evals[n] * evecs[:,n])
Explanation: The eigenvector corresponding to the $n$th eigenvalue (stored in evals[n]) is the $n$th column in evecs, i.e., evecs[:,n]. To verify this, let's try multiplying the eigenvector with the matrix and comparing the result to the product of the eigenvector and the eigenvalue:
End of explanation
# the matrix inverse
inv(A)
# determinant
det(A)
# norms of various orders
norm(A, ord=2), norm(A, ord=Inf)
Explanation: There are also more specialized eigensolvers, like the eigh for Hermitian matrices.
Matrix operations
End of explanation
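As a brief illustration of the eigh solver mentioned above, here is a sketch on a small made-up real symmetric matrix (the values are arbitrary, not from the lecture). It reuses the star imports from earlier cells.
# --- aside: eigh for a real symmetric (Hermitian) matrix ---
from scipy.linalg import eigh
S = array([[2.0, 1.0], [1.0, 3.0]])   # small symmetric matrix, so eigh applies
evals_s, evecs_s = eigh(S)            # eigenvalues come back real and in ascending order
evals_s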
from scipy.sparse import *
# dense matrix
M = array([[1,0,0,0], [0,3,0,0], [0,1,1,0], [1,0,0,1]]); M
# convert from dense to sparse
A = csr_matrix(M); A
# convert from sparse to dense
A.todense()
Explanation: Sparse matrices
Sparse matrices are often useful in numerical simulations dealing with large systems, if the problem can be described in matrix form where the matrices or vectors mostly contains zeros. Scipy has a good support for sparse matrices, with basic linear algebra operations (such as equation solving, eigenvalue calculations, etc).
There are many possible strategies for storing sparse matrices in an efficient way. Some of the most common are the so-called coordinate form (COO), list of list (LIL) form, and compressed-sparse column CSC (and row, CSR). Each format has some advantages and disadvantages. Most computational algorithms (equation solving, matrix-matrix multiplication, etc) can be efficiently implemented using CSR or CSC formats, but they are not so intuitive and not so easy to initialize. So often a sparse matrix is initially created in COO or LIL format (where we can efficiently add elements to the sparse matrix data), and then converted to CSC or CSR before being used in real calculations.
For more information about these sparse formats, see e.g. http://en.wikipedia.org/wiki/Sparse_matrix
When we create a sparse matrix we have to choose which format it should be stored in. For example,
End of explanation
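The coordinate (COO) format mentioned above does not appear in the examples below, so here is a small sketch (with assumed row/column/value triplets that reconstruct the same matrix M) of building a sparse matrix directly in COO form:
# --- aside: constructing a sparse matrix in COO form from triplets ---
row = array([0, 1, 2, 2, 3, 3])
col = array([0, 1, 1, 2, 0, 3])
vals = array([1, 3, 1, 1, 1, 1])
A_coo = coo_matrix((vals, (row, col)), shape=(4, 4))
A_coo.todense()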
A = lil_matrix((4,4)) # empty 4x4 sparse matrix
A[0,0] = 1
A[1,1] = 3
A[2,2] = A[2,1] = 1
A[3,3] = A[3,0] = 1
A
A.todense()
Explanation: A more efficient way to create sparse matrices: create an empty matrix and populate it using matrix indexing (this avoids creating a potentially large dense matrix)
End of explanation
A
A = csr_matrix(A); A
A = csc_matrix(A); A
Explanation: Converting between different sparse matrix formats:
End of explanation
A.todense()
(A * A).todense()
A.todense()
A.dot(A).todense()
v = array([1,2,3,4])[:,newaxis]; v
# sparse matrix - dense vector multiplication
A * v
# same result with dense matrix - dense vector multiplication
A.todense() * v
Explanation: We can compute with sparse matrices like with dense matrices:
End of explanation
from scipy import optimize
Explanation: Optimization
Optimization (finding minima or maxima of a function) is a large field in mathematics, and optimization of complicated functions or in many variables can be rather involved. Here we will only look at a few very simple cases. For a more detailed introduction to optimization with SciPy see: http://scipy-lectures.github.com/advanced/mathematical_optimization/index.html
To use the optimization module in scipy first include the optimize module:
End of explanation
def f(x):
return 4*x**3 + (x-2)**2 + x**4
fig, ax = plt.subplots()
x = linspace(-5, 3, 100)
ax.plot(x, f(x));
Explanation: Finding a minima
Let's first look at how to find the minima of a simple function of a single variable:
End of explanation
x_min = optimize.fmin_bfgs(f, -2)
x_min
optimize.fmin_bfgs(f, 0.5)
Explanation: We can use the fmin_bfgs function to find the minima of a function:
End of explanation
optimize.brent(f)
optimize.fminbound(f, -4, 2)
Explanation: We can also use the brent or fminbound functions. They have a bit different syntax and use different algorithms.
End of explanation
omega_c = 3.0
def f(omega):
# a transcendental equation: resonance frequencies of a low-Q SQUID terminated microwave resonator
return tan(2*pi*omega) - omega_c/omega
fig, ax = plt.subplots(figsize=(10,4))
x = linspace(0, 3, 1000)
y = f(x)
mask = where(abs(y) > 50)
x[mask] = y[mask] = NaN # get rid of vertical lines where the function flips sign
ax.plot(x, y)
ax.plot([0, 3], [0, 0], 'k')
ax.set_ylim(-5,5);
optimize.fsolve(f, 0.1)
optimize.fsolve(f, 0.6)
optimize.fsolve(f, 1.1)
Explanation: Finding a solution to a function
To find the root for a function of the form $f(x) = 0$ we can use the fsolve function. It requires an initial guess:
End of explanation
from scipy.interpolate import *
def f(x):
return sin(x)
n = arange(0, 10)
x = linspace(0, 9, 100)
y_meas = f(n) + 0.1 * randn(len(n)) # simulate measurement with noise
y_real = f(x)
linear_interpolation = interp1d(n, y_meas)
y_interp1 = linear_interpolation(x)
cubic_interpolation = interp1d(n, y_meas, kind='cubic')
y_interp2 = cubic_interpolation(x)
fig, ax = plt.subplots(figsize=(10,4))
ax.plot(n, y_meas, 'bs', label='noisy data')
ax.plot(x, y_real, 'k', lw=2, label='true function')
ax.plot(x, y_interp1, 'r', label='linear interp')
ax.plot(x, y_interp2, 'g', label='cubic interp')
ax.legend(loc=3);
Explanation: Interpolation
Interpolation is simple and convenient in scipy: The interp1d function, when given arrays describing X and Y data, returns an object that behaves like a function that can be called for an arbitrary value of x (in the range covered by X), and it returns the corresponding interpolated y value:
End of explanation
from scipy import stats
# create a (discrete) random variable with a Poissonian distribution
X = stats.poisson(3.5) # photon distribution for a coherent state with n=3.5 photons
n = arange(0,15)
fig, axes = plt.subplots(3,1, sharex=True)
# plot the probability mass function (PMF)
axes[0].step(n, X.pmf(n))
# plot the cumulative distribution function (CDF)
axes[1].step(n, X.cdf(n))
# plot histogram of 1000 random realizations of the stochastic variable X
axes[2].hist(X.rvs(size=1000));
# create a (continuous) random variable with a normal distribution
Y = stats.norm()
x = linspace(-5,5,100)
fig, axes = plt.subplots(3,1, sharex=True)
# plot the probability distribution function (PDF)
axes[0].plot(x, Y.pdf(x))
# plot the cumulative distribution function (CDF)
axes[1].plot(x, Y.cdf(x));
# plot histogram of 1000 random realizations of the stochastic variable Y
axes[2].hist(Y.rvs(size=1000), bins=50);
Explanation: Statistics
The scipy.stats module contains a large number of statistical distributions, statistical functions and tests. For a complete documentation of its features, see http://docs.scipy.org/doc/scipy/reference/stats.html.
There is also a very powerful python package for statistical modelling called statsmodels. See http://statsmodels.sourceforge.net for more details.
End of explanation
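As a very small taste of statsmodels mentioned above (a sketch with made-up data, not part of the original lecture), an ordinary least squares fit of a noisy line:
# --- aside: a tiny statsmodels OLS example ---
import numpy as np
import statsmodels.api as sm
x_dat = np.linspace(0, 10, 50)
y_dat = 2.0 * x_dat + 1.0 + np.random.randn(50)   # noisy line with slope 2, intercept 1
X_design = sm.add_constant(x_dat)                 # add the intercept column
ols_res = sm.OLS(y_dat, X_design).fit()
ols_res.params                                    # should come out close to [1.0, 2.0]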
X.mean(), X.std(), X.var() # Poisson distribution
Y.mean(), Y.std(), Y.var() # normal distribution
Explanation: Statistics:
End of explanation
t_statistic, p_value = stats.ttest_ind(X.rvs(size=1000), X.rvs(size=1000))
print "t-statistic =", t_statistic
print "p-value =", p_value
Explanation: Statistical tests
Test if two sets of (independent) random data come from the same distribution:
End of explanation
stats.ttest_1samp(Y.rvs(size=1000), 0.1)
Explanation: Since the p value is very large we cannot reject the null hypothesis that the two sets of random data have the same mean.
To test whether a single sample of data has mean 0.1 (the true mean is 0.0):
End of explanation
Y.mean()
stats.ttest_1samp(Y.rvs(size=1000), Y.mean())
Explanation: Low p-value means that we can reject the hypothesis that the mean of Y is 0.1.
End of explanation
%reload_ext version_information
%version_information numpy, matplotlib, scipy
Explanation: Further reading
http://www.scipy.org - The official web page for the SciPy project.
http://docs.scipy.org/doc/scipy/reference/tutorial/index.html - A tutorial on how to get started using SciPy.
https://github.com/scipy/scipy/ - The SciPy source code.
Versions
End of explanation |
4,573 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
计算传播与机器学习
王成军
[email protected]
计算传播网 http
Step1: 训练集和测试集
Step2: 交叉验证
cross-validation
k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”
Step3: 使用天涯bbs数据
Step4: 使用sklearn做logistic回归
王成军
[email protected]
计算传播网 http
Step5: 使用sklearn实现贝叶斯预测
王成军
[email protected]
计算传播网 http
Step6: naive_bayes.GaussianNB Gaussian Naive Bayes (GaussianNB)
naive_bayes.MultinomialNB([alpha, ...]) Naive Bayes classifier for multinomial models
naive_bayes.BernoulliNB([alpha, binarize, ...]) Naive Bayes classifier for multivariate Bernoulli models.
Step7: cross-validation
k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”
Step8: 使用sklearn实现决策树
王成军
[email protected]
计算传播网 http
Step9: 使用sklearn实现SVM支持向量机
王成军
[email protected]
计算传播网 http
Step10: 泰坦尼克号数据分析
王成军
[email protected]
计算传播网 http | Python Code:
%matplotlib inline
import sklearn
from sklearn import datasets
from sklearn import linear_model
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.preprocessing import scale
# boston data
boston = datasets.load_boston()
y = boston.target
X = boston.data
' '.join(dir(boston))
boston['feature_names']
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf
# Fit regression model (using the natural log of one of the regressors)
results = smf.ols('boston.target ~ boston.data', data=boston).fit()
print(results.summary())
regr = linear_model.LinearRegression()
lm = regr.fit(boston.data, y)
lm.intercept_, lm.coef_, lm.score(boston.data, y)
predicted = regr.predict(boston.data)
fig, ax = plt.subplots()
ax.scatter(y, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
Explanation: Computational Communication and Machine Learning
王成军
[email protected]
计算传播网 http://computational-communication.com
1. Supervised learning
How it works:
- The algorithm involves a target or outcome variable (the dependent variable).
- This variable is predicted from a known set of predictor variables (the independent variables).
- Using these variables, we generate a function that maps inputs to the desired outputs.
- The training process continues until the model achieves the desired accuracy on the training data.
- Examples of supervised learning: regression, decision trees, random forests, K-nearest neighbors, logistic regression, etc.
2. Unsupervised learning
How it works:
- There is no target or outcome variable to predict or estimate.
- The algorithm is used to cluster observations into groups.
- It is widely used to segment customers into different user groups for different interventions.
- Examples of unsupervised learning: association rules and the K-means algorithm.
3. Reinforcement learning
How it works:
- The algorithm trains the machine to make decisions.
- It works like this: the machine is placed in an environment where it can train itself by repeated trial and error.
- The machine learns from past experience and tries to use the best available knowledge to make accurate decisions.
- An example of reinforcement learning is the Markov decision process, e.g. AlphaGo or
Chess. Here, the agent decides upon a series of moves depending on the state of the board (the environment), and the
reward can be defined as win or lose at the end of the game:
<img src = './img/mlprocess.png' width = 800>
Linear regression
Logistic regression
Decision trees
SVM
Naive Bayes
K-nearest neighbors
K-means
Random forests
Dimensionality reduction
Gradient Boosting and AdaBoost
Linear regression with sklearn
王成军
[email protected]
计算传播网 http://computational-communication.com
Linear regression
Usually used to estimate real values of a continuous variable (house prices, number of calls, total sales, etc.).
The relationship between the independent variable X and the dependent variable Y is established by fitting a best-fit line.
This best-fit line is called the regression line and is represented by the linear equation $Y= \beta *X + C$.
The coefficients $\beta$ and C can be obtained by the method of least squares.
End of explanation
boston.data
from sklearn.cross_validation import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(boston.data,
boston.target,
test_size=0.2,
random_state=42)
regr = linear_model.LinearRegression()
lm = regr.fit(Xs_train, y_train)
lm.intercept_, lm.coef_, lm.score(Xs_train, y_train)
predicted = regr.predict(Xs_test)
fig, ax = plt.subplots()
ax.scatter(y_test, predicted)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('$Measured$', fontsize = 20)
ax.set_ylabel('$Predicted$', fontsize = 20)
plt.show()
Explanation: Training and test sets
End of explanation
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, boston.data , boston.target, cv = 3)
scores.mean()
help(cross_val_score)
data_X_scale = scale(boston.data)  # scale the features first; data_X_scale is used in the loop below
scores = [cross_val_score(regr, data_X_scale,\
boston.target,\
cv = int(i)).mean() \
for i in range(3, 50)]
plt.plot(range(3, 50), scores,'r-o')
plt.show()
scores = cross_val_score(regr,data_X_scale, boston.target,\
cv = 7)
scores.mean()
Explanation: Cross-validation
k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:
- A model is trained using k-1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
End of explanation
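To make the k-fold procedure described above concrete, here is a minimal hand-rolled sketch on the Boston data loaded earlier. It assumes scikit-learn >= 0.18 for sklearn.model_selection; on the older version used elsewhere in this notebook the KFold class lives in sklearn.cross_validation with a slightly different constructor.
# --- aside: the k-fold loop spelled out explicitly ---
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

kf = KFold(n_splits=3)
fold_scores = []
for train_idx, test_idx in kf.split(boston.data):
    model_k = LinearRegression()
    model_k.fit(boston.data[train_idx], boston.target[train_idx])   # train on k-1 folds
    fold_scores.append(model_k.score(boston.data[test_idx],
                                     boston.target[test_idx]))      # validate on the held-out fold
fold_scores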
import pandas as pd
df = pd.read_csv('../data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
# 定义这个函数的目的是让读者感受到:
# 抽取不同的样本,得到的结果完全不同。
def randomSplit(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append([dataX[k]])
dataY_test.append(dataY[k])
else:
dataX_train.append([dataX[k]])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
import numpy as np
# Use only one feature
data_X = df.reply
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(np.log(df.click+1),
np.log(df.reply+1), 20)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(data_X_test, data_y_test))
data_X_train
y_true, y_pred = data_y_test, regr.predict(data_X_test)
plt.scatter(y_pred, y_true, color='black')
plt.show()
# Plot outputs
plt.scatter(data_X_test, data_y_test, color='black')
plt.plot(data_X_test, regr.predict(data_X_test), color='blue', linewidth=3)
plt.show()
# The coefficients
'Coefficients: \n', regr.coef_
# The mean square error
"Residual sum of squares: %.2f" % np.mean((regr.predict(data_X_test) - data_y_test) ** 2)
df.click_log = [[np.log(df.click[i]+1)] for i in range(len(df))]
df.reply_log = [[np.log(df.reply[i]+1)] for i in range(len(df))]
from sklearn.cross_validation import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(df.click_log, df.reply_log,test_size=0.2, random_state=0)
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % regr.score(Xs_test, y_test)
# Plot outputs
plt.scatter(Xs_test, y_test, color='black')
plt.plot(Xs_test, regr.predict(Xs_test), color='blue', linewidth=3)
plt.show()
from sklearn.cross_validation import cross_val_score
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log, \
df.reply_log, cv = 3)
scores.mean()
regr = linear_model.LinearRegression()
scores = cross_val_score(regr, df.click_log,
df.reply_log, cv =5)
scores.mean()
Explanation: Using the Tianya BBS data
End of explanation
repost = []
for i in df.title:
if u'转载' in i:
repost.append(1)
else:
repost.append(0)
data_X = [[df.click[i], df.reply[i]] for i in range(len(df))]
data_X[:3]
from sklearn.linear_model import LogisticRegression
df['repost'] = repost
model = LogisticRegression()
model.fit(data_X,df.repost)
model.score(data_X,df.repost)
def randomSplitLogistic(dataX, dataY, num):
dataX_train = []
dataX_test = []
dataY_train = []
dataY_test = []
import random
test_index = random.sample(range(len(df)), num)
for k in range(len(dataX)):
if k in test_index:
dataX_test.append(dataX[k])
dataY_test.append(dataY[k])
else:
dataX_train.append(dataX[k])
dataY_train.append(dataY[k])
return dataX_train, dataX_test, dataY_train, dataY_test,
# Split the data into training/testing sets
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(data_X_train, data_y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % log_regr.score(data_X_test, data_y_test)
y_true, y_pred = data_y_test, log_regr.predict(data_X_test)
y_true, y_pred
print(classification_report(y_true, y_pred))
from sklearn.cross_validation import train_test_split
Xs_train, Xs_test, y_train, y_test = train_test_split(data_X, df.repost, test_size=0.2, random_state=42)
# Create logistic regression object
log_regr = LogisticRegression()
# Train the model using the training sets
log_regr.fit(Xs_train, y_train)
# Explained variance score: 1 is perfect prediction
'Variance score: %.2f' % log_regr.score(Xs_test, y_test)
print('Logistic score for test set: %f' % log_regr.score(Xs_test, y_test))
print('Logistic score for training set: %f' % log_regr.score(Xs_train, y_train))
y_true, y_pred = y_test, log_regr.predict(Xs_test)
print(classification_report(y_true, y_pred))
logre = LogisticRegression()
scores = cross_val_score(logre, data_X, df.repost, cv = 3)
scores.mean()
logre = LogisticRegression()
data_X_scale = scale(data_X)
# The importance of preprocessing in data science and the machine learning pipeline I:
scores = cross_val_score(logre, data_X_scale, df.repost, cv = 3)
scores.mean()
Explanation: Logistic regression with sklearn
王成军
[email protected]
计算传播网 http://computational-communication.com
Logistic regression is a classification algorithm, not a regression algorithm.
It is used to estimate discrete values (e.g. binary values 0 or 1, yes or no, true or false) based on a given set of predictor variables.
Simply put, it predicts the probability of occurrence of an event by fitting the data to a logistic function.
Hence it is also known as logit regression. Since it predicts a probability, its output values lie between 0 and 1 (as expected).
$$odds= \frac{p}{1-p} = \frac{probability\: of\: event\: occurrence} {probability \:of \:not\: event\: occurrence}$$
$$ln(odds)= ln(\frac{p}{1-p})$$
$$logit(x) = ln(\frac{p}{1-p}) = b_0+b_1X_1+b_2X_2+b_3X_3....+b_kX_k$$
End of explanation
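A tiny numerical check of the relationship above (the probability 0.8 is an illustrative value only): the logistic (sigmoid) function inverts the logit.
# --- aside: logit and its inverse, the logistic function ---
import numpy as np
p = 0.8
log_odds = np.log(p / (1 - p))             # logit(p) = ln(p / (1 - p))
p_back = 1.0 / (1.0 + np.exp(-log_odds))   # the sigmoid recovers the probability
log_odds, p_back                           # p_back comes back as 0.8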
from sklearn import naive_bayes
' '.join(dir(naive_bayes))
Explanation: Naive Bayes prediction with sklearn
王成军
[email protected]
计算传播网 http://computational-communication.com
Naive Bayes algorithm
It is a classification technique based on Bayes’ Theorem with an assumption of independence among predictors.
In simple terms, a Naive Bayes classifier assumes that the presence of a particular feature in a class is unrelated to the presence of any other feature.
Why is it known as 'Naive'? For example, a fruit may be considered to be an apple if it is red, round, and about 3 inches in diameter. Even if these features depend on each other or upon the existence of the other features, all of these properties independently contribute to the probability that this fruit is an apple.
Bayes' theorem provides a way of calculating the posterior probability $P(c|x)$ from $p(c)$, $p(x)$ and $p(x|c)$:
$$
p(c|x) = \frac{p(x|c) p(c)}{p(x)}
$$
P(c|x) is the posterior probability of class (c, target) given predictor (x, attributes).
P(c) is the prior probability of class.
P(x|c) is the likelihood which is the probability of predictor given class.
P(x) is the prior probability of predictor.
Step 1: Convert the data set into a frequency table
Step 2: Create Likelihood table by finding the probabilities like:
- p(Overcast) = 0.29, p(rainy) = 0.36, p(sunny) = 0.36
- p(playing) = 0.64, p(rest) = 0.36
Step 3: Now, use Naive Bayesian equation to calculate the posterior probability for each class. The class with the highest posterior probability is the outcome of prediction.
Problem: Players will play if the weather is sunny. Is this statement correct?
We can solve it using above discussed method of posterior probability.
$P(Yes | Sunny) = \frac{P( Sunny | Yes) * P(Yes) } {P (Sunny)}$
Here we have P (Sunny |Yes) = 3/9 = 0.33, P(Sunny) = 5/14 = 0.36, P( Yes)= 9/14 = 0.64
Now, $P (Yes | Sunny) = \frac{0.33 * 0.64}{0.36} = 0.60$, which has higher probability.
End of explanation
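The posterior in the weather example above can be verified with a line of arithmetic (using the same numbers as in the text):
# --- aside: checking P(Yes | Sunny) from the frequency table ---
p_sunny_given_yes = 3.0 / 9    # P(Sunny | Yes)
p_yes = 9.0 / 14               # P(Yes)
p_sunny = 5.0 / 14             # P(Sunny)
p_sunny_given_yes * p_yes / p_sunny   # P(Yes | Sunny) = 0.6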
#Import Library of Gaussian Naive Bayes model
from sklearn.naive_bayes import GaussianNB
import numpy as np
#assigning predictor and target variables
x= np.array([[-3,7],[1,5], [1,2], [-2,0], [2,3], [-4,0], [-1,1], [1,1], [-2,2], [2,7], [-4,1], [-2,7]])
Y = np.array([3, 3, 3, 3, 4, 3, 3, 4, 3, 4, 4, 4])
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
model.fit(x[:8], Y[:8])
#Predict Output
predicted= model.predict([[1,2],[3,4]])
predicted
Explanation: naive_bayes.GaussianNB Gaussian Naive Bayes (GaussianNB)
naive_bayes.MultinomialNB([alpha, ...]) Naive Bayes classifier for multinomial models
naive_bayes.BernoulliNB([alpha, binarize, ...]) Naive Bayes classifier for multivariate Bernoulli models.
End of explanation
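A minimal sketch of the other two variants listed above, on made-up count/binary features (the toy arrays are assumptions, not data from this notebook):
# --- aside: MultinomialNB for counts, BernoulliNB for binary features ---
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
import numpy as np
X_counts = np.array([[2, 1, 0], [0, 1, 3], [1, 0, 0], [0, 2, 2]])   # e.g. word counts
X_binary = (X_counts > 0).astype(int)                               # presence / absence
y_cls = np.array([0, 1, 0, 1])
mnb_pred = MultinomialNB().fit(X_counts, y_cls).predict(X_counts)
bnb_pred = BernoulliNB().fit(X_binary, y_cls).predict(X_binary)
mnb_pred, bnb_pred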
data_X_train, data_X_test, data_y_train, data_y_test = randomSplit(df.click, df.reply, 20)
# Train the model using the training sets
model.fit(data_X_train, data_y_train)
#Predict Output
predicted= model.predict(data_X_test)
predicted
model.score(data_X_test, data_y_test)
from sklearn.cross_validation import cross_val_score
model = GaussianNB()
scores = cross_val_score(model, [[c] for c in df.click],\
df.reply, cv = 7)
scores.mean()
Explanation: cross-validation
k-fold CV, the training set is split into k smaller sets (other approaches are described below, but generally follow the same principles). The following procedure is followed for each of the k “folds”:
- A model is trained using k-1 of the folds as training data;
- the resulting model is validated on the remaining part of the data (i.e., it is used as a test set to compute a performance measure such as accuracy).
End of explanation
from sklearn import tree
model = tree.DecisionTreeClassifier(criterion='gini')
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = cross_val_score(model, data_X, df.repost, cv = 3)
scores.mean()
Explanation: Decision trees with sklearn
王成军
[email protected]
计算传播网 http://computational-communication.com
Decision trees
This supervised learning algorithm is usually used for classification problems.
It works for both categorical and continuous dependent variables.
In this algorithm, we split the population into two or more homogeneous sets.
The split is made on the most significant attributes / independent variables so that the groups are as distinct as possible.
In the figure above you can see that the population is split into four different groups based on multiple attributes, in order to judge "whether or not they will go out to play".
To split the population into different groups, a number of techniques are used, such as Gini, Information Gain, Chi-square and entropy.
End of explanation
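The criterion argument is where the split measures mentioned above come in; switching from Gini to entropy (information gain) is a one-keyword change. A small sketch reusing the data split from the cell above:
# --- aside: same tree, but splitting on entropy / information gain instead of Gini ---
model_entropy = tree.DecisionTreeClassifier(criterion='entropy')
model_entropy.fit(data_X_train, data_y_train)
model_entropy.score(data_X_train, data_y_train)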
from sklearn import svm
# Create SVM classification object
model=svm.SVC()
' '.join(dir(svm))
data_X_train, data_X_test, data_y_train, data_y_test = randomSplitLogistic(data_X, df.repost, 20)
model.fit(data_X_train,data_y_train)
model.score(data_X_train,data_y_train)
# Predict
model.predict(data_X_test)
# crossvalidation
scores = []
cvs = [3, 5, 10, 25, 50, 75, 100]
for i in cvs:
score = cross_val_score(model, data_X, df.repost,
cv = i)
scores.append(score.mean() ) # Try to tune cv
plt.plot(cvs, scores, 'b-o')
plt.xlabel('$cv$', fontsize = 20)
plt.ylabel('$Score$', fontsize = 20)
plt.show()
Explanation: SVM (support vector machines) with sklearn
王成军
[email protected]
计算传播网 http://computational-communication.com
Each data item is plotted as a point in N-dimensional space (where N is the total number of features), with the value of each feature being the value of a particular coordinate.
For example, if we only had two features, height and hair length, we would plot these two variables in two-dimensional space, where each point has two coordinates (these coordinates are known as support vectors).
Now, we find a straight line that separates the two differently-classified groups of data.
The line is chosen so that the distances from the closest point in each of the two groups to the line are jointly optimized.
In the example above, the black line splits the data into two optimized groups:
the closest points of the two groups (points A and B in the figure) are at the optimal distances from the black line.
This line is our separating line (the classifier). Then, depending on which side of the line a test data point falls, that is the class we assign it to.
End of explanation
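The straight separating line described above corresponds to a linear kernel; other kernels give curved boundaries. A sketch on the same data split as above, where the kernel choice is the only change (scaling data_X first, as done earlier with scale(), would speed this up):
# --- aside: an explicitly linear-kernel SVM on the same split ---
model_linear = svm.SVC(kernel='linear')
model_linear.fit(data_X_train, data_y_train)
model_linear.score(data_X_test, data_y_test)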
#Import the Numpy library
import numpy as np
#Import 'tree' from scikit-learn library
from sklearn import tree
import pandas as pd
train = pd.read_csv('../data/tatanic_train.csv', sep = ",")
from sklearn.naive_bayes import GaussianNB
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Fare"] = train["Fare"].fillna(train["Fare"].median())
x = [[i] for i in train['Age']]  # uncommented: x is needed below for model.fit(x[:80], y[:80])
y = train['Age']
y = train['Fare'].astype(int)
#y = [[i] for i in y]
#Create a Gaussian Classifier
model = GaussianNB()
# Train the model using the training sets
nb = model.fit(x[:80], y[:80])
# nb.score(x, y)
help(GaussianNB)
# model.fit(x)  # incomplete call (fit needs both x and y), left commented out
train.head()
train["Age"] = train["Age"].fillna(train["Age"].median())
#Convert the male and female groups to integer form
train["Sex"][train["Sex"] == "male"] = 0
train["Sex"][train["Sex"] == "female"] = 1
#Impute the Embarked variable
train["Embarked"] = train["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
train["Embarked"][train["Embarked"] == "S"] = 0
train["Embarked"][train["Embarked"] == "C"] = 1
train["Embarked"][train["Embarked"] == "Q"] = 2
#Create the target and features numpy arrays: target, features_one
target = train['Survived'].values
features_one = train[["Pclass", "Sex", "Age", "Fare"]].values
#Fit your first decision tree: my_tree_one
my_tree_one = tree.DecisionTreeClassifier()
my_tree_one = my_tree_one.fit(features_one, target)
#Look at the importance of the included features and print the score
print(my_tree_one.feature_importances_)
print(my_tree_one.score(features_one, target))
test = pd.read_csv('../data/tatanic_test.csv', sep = ",")
# Impute the missing value with the median
test.Fare[152] = test.Fare.median()
test["Age"] = test["Age"].fillna(test["Age"].median())
#Convert the male and female groups to integer form
test["Sex"][test["Sex"] == "male"] = 0
test["Sex"][test["Sex"] == "female"] = 1
#Impute the Embarked variable
test["Embarked"] = test["Embarked"].fillna('S')
#Convert the Embarked classes to integer form
test["Embarked"][test["Embarked"] == "S"] = 0
test["Embarked"][test["Embarked"] == "C"] = 1
test["Embarked"][test["Embarked"] == "Q"] = 2
# Extract the features from the test set: Pclass, Sex, Age, and Fare.
test_features = test[["Pclass","Sex", "Age", "Fare"]].values
# Make your prediction using the test set
my_prediction = my_tree_one.predict(test_features)
# Create a data frame with two columns: PassengerId & Survived. Survived contains your predictions
PassengerId =np.array(test['PassengerId']).astype(int)
my_solution = pd.DataFrame(my_prediction, PassengerId, columns = ["Survived"])
my_solution[:3]
# Check that your data frame has 418 entries
my_solution.shape
# Write your solution to a csv file with the name my_solution.csv
my_solution.to_csv("../data/tatanic_solution_one.csv", index_label = ["PassengerId"])
# Create a new array with the added features: features_two
features_two = train[["Pclass","Age","Sex","Fare",\
"SibSp", "Parch", "Embarked"]].values
#Control overfitting by setting "max_depth" to 10 and "min_samples_split" to 5 : my_tree_two
max_depth = 10
min_samples_split = 5
my_tree_two = tree.DecisionTreeClassifier(max_depth = max_depth,
min_samples_split = min_samples_split,
random_state = 1)
my_tree_two = my_tree_two.fit(features_two, target)
#Print the score of the new decision tree
print(my_tree_two.score(features_two, target))
# create a new train set with the new variable
train_two = train
train_two['family_size'] = train.SibSp + train.Parch + 1
# Create a new decision tree my_tree_three
features_three = train[["Pclass", "Sex", "Age", \
"Fare", "SibSp", "Parch", "family_size"]].values
my_tree_three = tree.DecisionTreeClassifier()
my_tree_three = my_tree_three.fit(features_three, target)
# Print the score of this decision tree
print(my_tree_three.score(features_three, target))
#Import the `RandomForestClassifier`
from sklearn.ensemble import RandomForestClassifier
#We want the Pclass, Age, Sex, Fare,SibSp, Parch, and Embarked variables
features_forest = train[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
#Building the Forest: my_forest
n_estimators = 100
forest = RandomForestClassifier(max_depth = 10, min_samples_split=2,
n_estimators = n_estimators, random_state = 1)
my_forest = forest.fit(features_forest, target)
#Print the score of the random forest
print(my_forest.score(features_forest, target))
#Compute predictions and print the length of the prediction vector:test_features, pred_forest
test_features = test[["Pclass", "Age", "Sex", "Fare", "SibSp", "Parch", "Embarked"]].values
pred_forest = my_forest.predict(test_features)
print(len(test_features))
print(pred_forest[:3])
#Request and print the `.feature_importances_` attribute
print(my_tree_two.feature_importances_)
print(my_forest.feature_importances_)
#Compute and print the mean accuracy score for both models
print(my_tree_two.score(features_two, target))
print(my_forest.score(features_two, target))
Explanation: Titanic data analysis
王成军
[email protected]
计算传播网 http://computational-communication.com
End of explanation |
4,574 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 上一节使用AbuFactorBuyBreak和AbuFactorSellBreak且混入基本止盈止损策略AbuFactorAtrNStop,
风险控制止损策略AbuFactorPreAtrNStop,利润保护止盈策略AbuFactorCloseAtrNStop来提高交易的盈利效果。
本节将继续在上一节回测的基础上示例择时策略其它使用方法,首先完成上一节的回测准备,如下所示:
Step4: 1 滑点买入卖出价格确定及策略实现
第一节中实现的买入策略和卖出策略的编写,买入策略中确定买入只是通过make_buy_order函数,确定买单生成,卖出策略确定卖出订单
也只是通过fit_sell_order来提交卖单,那么执行订单,应该使用的什么价格买入或者卖出呢,abupy在默认的策略都是使用当天的均价买入卖出,
当然你可以实现多种复杂的当日交易策略,设置限价单、市价单,获取当日的分时数据再次进行策略分析执行操作,但是如果你的回测数量足够多的情况下,比如全市场回测,按照大数定理,这个均值执行其实是最好的模拟,而且简单、运行速度快。
滑点买入卖出价格确定具体实现代码请阅读AbuSlippageBuyMean和AbuSlippageSellMean,它们的实现都很简单
在买入滑点AbuSlippageBuyMean中有一个小策略当当天开盘价格直接下探7%时,放弃买单,看上一节回测结果中如下图这次交易,从图上就可以发现虽然是突破买入,但明显第二天执行买单时的价格是直线下跌的,且下跌不少,但还是成交了这笔交易。因为开盘下跌幅度没有达到7%的阀值,下面我们就过拟合这次交易避免买入,只为示例
下面编写一个独立的Slippage策略,只简单修改g_open_down_rate的值为0.02
Step5: 上面编写的AbuSlippageBuyMean2类实现即为滑点买入类的实现:
滑点买入类需要继承自AbuSlippageBuyBase
滑点买入类需要实现fit_price来确定交易单执行当日的最终买入价格
slippage_limit_up装饰器是针对a股涨停板买入价格决策的装饰器,处理买入成功概率,根据概率决定是否能买入,及涨停下的买入价格决策,涨停下买入价格模型为,越靠近涨停价格买入成交概率越大,即在涨停下预期以靠近涨停价格买入,
备注:slippage_limit_up及slippage_limit_down具体实现可阅读源代码,后面的章节有示例演示使用
但是滑点类时什么时候被实例化使用的呢,怎么使用我们自己写的这个滑点类呢?首先看买入因子基类AbuFactorBuyBase,在每个买入因子初始化的时候即把默认的滑点类以及仓位管理类(稍后讲解)赋值,如下片段代码所示:
详情请查看AbuFactorBuyBas源代码
class AbuFactorBuyBase(six.with_metaclass(ABCMeta, ABuParamBaseClass))
Step6: 2. 交易手续费的计算以及自定义手续费
交易必然会产生手续费,手续费的计算在ABuCommission模块中,比如本例中使用的的美股交易回测,使用的手续费计算代码如下所示:
def calc_commission_us(trade_cnt, price)
Step9: 如果你想把自己的计算手续费的方法使用在回测中,只需要编写手续费函数,示例如下所示:
Step10: 如上编写的手续费函数统一每次买入卖出都是7美元手续费,手续费函数有两个参数一个trade_cnt代表买入(卖出)股数,
另一个参数是price,代表买入(卖出)价格,下面使用这个自定义的手续费方法做回测,代码如下所示:
Step11: 从上面回测交易手续费结果可以看到,买入的手续费都变成了7元,卖出手续费还是之前的算法,下面的回测将买入卖出手续费计算方法都变成使用自定义的方法,代码如下所示: | Python Code:
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import os
import sys
# 使用insert 0即只使用github,避免交叉使用了pip安装的abupy,导致的版本不一致问题
sys.path.insert(0, os.path.abspath('../'))
import abupy
# 使用沙盒数据,目的是和书中一样的数据环境
abupy.env.enable_example_env_ipython()
Explanation: ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>第3节 滑点策略与交易手续费</b></font>
</center>
作者: 阿布
阿布量化版权所有 未经允许 禁止转载
abu量化系统github地址 (欢迎+star)
本节ipython notebook
首先导入abupy中本节使用的模块:
End of explanation
from abupy import AbuFactorBuyBreak, AbuFactorSellBreak
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop
from abupy import ABuPickTimeExecute, AbuBenchmark, AbuCapital
# buy_factors 60日向上突破,42日向上突破两个因子
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# 四个卖出因子同时并行生效
sell_factors = [
{
'xd': 120,
'class': AbuFactorSellBreak
},
{
'stop_loss_n': 0.5,
'stop_win_n': 3.0,
'class': AbuFactorAtrNStop
},
{
'class': AbuFactorPreAtrNStop,
'pre_atr_n': 1.0
},
{
'class': AbuFactorCloseAtrNStop,
'close_atr_n': 1.5
}]
benchmark = AbuBenchmark()
capital = AbuCapital(1000000, benchmark)
Explanation: 上一节使用AbuFactorBuyBreak和AbuFactorSellBreak且混入基本止盈止损策略AbuFactorAtrNStop,
风险控制止损策略AbuFactorPreAtrNStop,利润保护止盈策略AbuFactorCloseAtrNStop来提高交易的盈利效果。
本节将继续在上一节回测的基础上示例择时策略其它使用方法,首先完成上一节的回测准备,如下所示:
End of explanation
from abupy import AbuSlippageBuyBase, slippage
# 修改买入下跌阀值为0.02
g_open_down_rate = 0.02
class AbuSlippageBuyMean2(AbuSlippageBuyBase):
示例日内滑点均价买入类
@slippage.sbb.slippage_limit_up
def fit_price(self):
取当天交易日的最高最低均价做为决策价格
:return: 最终决策的当前交易买入价格
# TODO 基类提取作为装饰器函数,子类根据需要选择是否装饰,并且添加上根据order的call,put明确细节逻辑
if self.kl_pd_buy.pre_close == 0 or (self.kl_pd_buy.open / self.kl_pd_buy.pre_close) < (1 - g_open_down_rate):
# 开盘就下跌一定比例阀值,放弃单子
return np.inf
# 买入价格为当天均价,即最高,最低的平均,也可使用高开低收平均等方式计算
self.buy_price = np.mean([self.kl_pd_buy['high'], self.kl_pd_buy['low']])
# 返回最终的决策价格
return self.buy_price
Explanation: 1 滑点买入卖出价格确定及策略实现
第一节中实现的买入策略和卖出策略的编写,买入策略中确定买入只是通过make_buy_order函数,确定买单生成,卖出策略确定卖出订单
也只是通过fit_sell_order来提交卖单,那么执行订单,应该使用的什么价格买入或者卖出呢,abupy在默认的策略都是使用当天的均价买入卖出,
当然你可以实现多种复杂的当日交易策略,设置限价单、市价单,获取当日的分时数据再次进行策略分析执行操作,但是如果你的回测数量足够多的情况下,比如全市场回测,按照大数定理,这个均值执行其实是最好的模拟,而且简单、运行速度快。
滑点买入卖出价格确定具体实现代码请阅读AbuSlippageBuyMean和AbuSlippageSellMean,它们的实现都很简单
在买入滑点AbuSlippageBuyMean中有一个小策略当当天开盘价格直接下探7%时,放弃买单,看上一节回测结果中如下图这次交易,从图上就可以发现虽然是突破买入,但明显第二天执行买单时的价格是直线下跌的,且下跌不少,但还是成交了这笔交易。因为开盘下跌幅度没有达到7%的阀值,下面我们就过拟合这次交易避免买入,只为示例
下面编写一个独立的Slippage策略,只简单修改g_open_down_rate的值为0.02
End of explanation
# 针对60使用AbuSlippageBuyMean2
buy_factors2 = [{'slippage': AbuSlippageBuyMean2, 'xd': 60,
'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
capital = AbuCapital(1000000, benchmark)
orders_pd, action_pd, _ = ABuPickTimeExecute.do_symbols_with_same_factors(['usTSLA'],
benchmark,
buy_factors2,
sell_factors,
capital,
show=True)
Explanation: 上面编写的AbuSlippageBuyMean2类实现即为滑点买入类的实现:
滑点买入类需要继承自AbuSlippageBuyBase
滑点买入类需要实现fit_price来确定交易单执行当日的最终买入价格
slippage_limit_up装饰器是针对a股涨停板买入价格决策的装饰器,处理买入成功概率,根据概率决定是否能买入,及涨停下的买入价格决策,涨停下买入价格模型为,越靠近涨停价格买入成交概率越大,即在涨停下预期以靠近涨停价格买入,
备注:slippage_limit_up及slippage_limit_down具体实现可阅读源代码,后面的章节有示例演示使用
但是滑点类时什么时候被实例化使用的呢,怎么使用我们自己写的这个滑点类呢?首先看买入因子基类AbuFactorBuyBase,在每个买入因子初始化的时候即把默认的滑点类以及仓位管理类(稍后讲解)赋值,如下片段代码所示:
详情请查看AbuFactorBuyBas源代码
class AbuFactorBuyBase(six.with_metaclass(ABCMeta, ABuParamBaseClass)):
def __init__(self, capital, kl_pd, **kwargs):
# 走势数据
self.kl_pd = kl_pd
# 资金情况数据
self.capital = capital
# 滑点类,默认AbuSlippageBuyMean
self.slippage_class = kwargs['slippage'] \
if 'slippage' in kwargs else AbuSlippageBuyMean
# 仓位管理,默认AbuAtrPosition
self.position_class = kwargs['position'] \
if 'position' in kwargs else AbuAtrPosition
if 'win_rate' in kwargs:
self.win_rate = kwargs['win_rate']
if 'gains_mean' in kwargs:
self.gains_mean = kwargs['gains_mean']
if 'losses_mean' in kwargs:
self.losses_mean = kwargs['losses_mean']
self._init_self(**kwargs)
之后因子在每次生效产生买单的时候会触发AbuOrder实例对象的fit_buy_order()函数,fit_buy_order()中将滑点类,仓位管理类实例化后,执行买入价格及数量确定,代码片段如下所示,详情请查看源代码。
def fit_buy_order(self, day_ind, factor_object):
kl_pd = factor_object.kl_pd
# 要执行买入当天的数据
kl_pd_buy = kl_pd.iloc[day_ind + 1]
# 买入因子名称
factor_name = factor_object.factor_name \
if hasattr(factor_object, 'factor_name') else 'unknown'
# 滑点类设置
slippage_class = factor_object.slippage_class
# 仓位管理类设置
position_class = factor_object.position_class
# 初始资金,也可修改策略使用剩余资金
read_cash = factor_object.capital.read_cash
# 实例化滑点类
fact = slippage_class(kl_pd_buy, factor_name)
# 执行fit_price(), 计算出买入价格
bp = fact.fit_price()
# 如果滑点类中决定不买入,撤单子,bp就返回正无穷
if bp < np.inf:
# 实例化仓位管理类
position = position_class(kl_pd_buy, factor_name, bp,
read_cash)
# 执行fit_position(),通过仓位管理计算买入的数量
buy_stock_cnt = int(position.fit_position(factor_object))
if buy_stock_cnt < 1:
return
卖出因子的滑点操作及仓位管理与买入类似,读者可以自行阅读源代码。
由以上代码我们可以发现通过buy_factors的字典对象中传入slippage便可以自行设置滑点类,由于上图显示的交易是60日突破产生的买单,所以我们只修改60日突破的字典对象,执行后可以看到如下图所示,过滤了两个60日突破的买单,即过滤了上图所示的交易,代码如下所示:
备注:实际上如果只是修改g_open_down_rate的值,可以通过模块全局变量直接修改,本节只为示例使用流程
End of explanation
capital.commission.commission_df
Explanation: 2. 交易手续费的计算以及自定义手续费
交易必然会产生手续费,手续费的计算在ABuCommission模块中,比如本例中使用的的美股交易回测,使用的手续费计算代码如下所示:
def calc_commission_us(trade_cnt, price):
美股计算交易费用:每股0.01,最低消费2.99
:param trade_cnt: 交易的股数(int)
:param price: 每股的价格(美元)(暂不使用,只是保持接口统一)
:return: 计算结果手续费
# 每股手续费0.01
commission = trade_cnt * 0.01
if commission < 2.99:
# 最低消费2.99
commission = 2.99
return commission
针对不同市场美股,a股,港股,比特币,期货有不同计算手续费的方法,更多详情请阅读ABuCommission模块源代码
下面先看看之前的回测交易中产生的手续费情况,查看代码如下所示:
End of explanation
def calc_commission_us2(trade_cnt, price):
手续费统一7美元
return 7
Explanation: 如果你想把自己的计算手续费的方法使用在回测中,只需要编写手续费函数,示例如下所示:
End of explanation
# 构造一个字典key='buy_commission_func', value=自定义的手续费方法函数
commission_dict = {'buy_commission_func': calc_commission_us2}
# 将commission_dict做为参数传入AbuCapital
capital = AbuCapital(1000000, benchmark, user_commission_dict=commission_dict)
# 除了手续费自定义外,回测其它设置不变,show=False不可视化回测交易
orders_pd, action_pd, _ = ABuPickTimeExecute.do_symbols_with_same_factors(['usTSLA'],
benchmark,
buy_factors2,
sell_factors,
capital,
show=False)
# 回测完成后查看手续费情况
capital.commission.commission_df
Explanation: 如上编写的手续费函数统一每次买入卖出都是7美元手续费,手续费函数有两个参数一个trade_cnt代表买入(卖出)股数,
另一个参数是price,代表买入(卖出)价格,下面使用这个自定义的手续费方法做回测,代码如下所示:
End of explanation
# 卖出字典key='sell_commission_func', 指向同一个手续费方法,当然也可以定义不同的方法
commission_dict = {'buy_commission_func': calc_commission_us2, 'sell_commission_func': calc_commission_us2}
# 将commission_dict做为参数传入AbuCapital
capital = AbuCapital(1000000, benchmark, user_commission_dict=commission_dict)
# 除了手续费自定义外,回测其它设置不变,show=False不可视化回测交易
orders_pd, action_pd, _ = ABuPickTimeExecute.do_symbols_with_same_factors(['usTSLA'],
benchmark,
buy_factors2,
sell_factors,
capital,
show=False)
# 回测完成后查看手续费情况
capital.commission.commission_df
Explanation: 从上面回测交易手续费结果可以看到,买入的手续费都变成了7元,卖出手续费还是之前的算法,下面的回测将买入卖出手续费计算方法都变成使用自定义的方法,代码如下所示:
End of explanation |
4,575 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to plot topomaps the way EEGLAB does
If you have previous EEGLAB experience you may have noticed that topomaps
(topoplots) generated using MNE-Python look a little different from those
created in EEGLAB. If you prefer the EEGLAB style this example will show you
how to calculate head sphere origin and radius to obtain EEGLAB-like channel
layout in MNE.
Step1: Create fake data
First we will create a simple evoked object with a single timepoint using
biosemi 10-20 channel layout.
Step2: Calculate sphere origin and radius
EEGLAB plots head outline at the level where the head circumference is
measured
in the 10-20 system (a line going through Fpz, T8/T4, Oz and T7/T3 channels).
MNE-Python places the head outline lower on the z dimension, at the level of
the anatomical landmarks
Step3: Compare MNE and EEGLAB channel layout
We already have the required x, y, z sphere center and its radius — we can
use these values passing them to the sphere argument of many
topo-plotting functions (by passing sphere=(x, y, z, radius)).
Step4: Topomaps (topoplots)
As the last step we do the same, but plotting the topomaps. These will not
be particularly interesting as they will show random data but hopefully you
will see the difference. | Python Code:
# Authors: Mikołaj Magnuski <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
from matplotlib import pyplot as plt
import mne
print(__doc__)
Explanation: How to plot topomaps the way EEGLAB does
If you have previous EEGLAB experience you may have noticed that topomaps
(topoplots) generated using MNE-Python look a little different from those
created in EEGLAB. If you prefer the EEGLAB style this example will show you
how to calculate head sphere origin and radius to obtain EEGLAB-like channel
layout in MNE.
End of explanation
biosemi_montage = mne.channels.make_standard_montage('biosemi64')
n_channels = len(biosemi_montage.ch_names)
fake_info = mne.create_info(ch_names=biosemi_montage.ch_names, sfreq=250.,
ch_types='eeg')
rng = np.random.RandomState(0)
data = rng.normal(size=(n_channels, 1)) * 1e-6
fake_evoked = mne.EvokedArray(data, fake_info)
fake_evoked.set_montage(biosemi_montage)
Explanation: Create fake data
First we will create a simple evoked object with a single timepoint using
biosemi 10-20 channel layout.
End of explanation
# first we obtain the 3d positions of selected channels
check_ch = ['Oz', 'Fpz', 'T7', 'T8']
ch_idx = [fake_evoked.ch_names.index(ch) for ch in check_ch]
pos = np.stack([fake_evoked.info['chs'][idx]['loc'][:3] for idx in ch_idx])
# now we calculate the radius from T7 and T8 x position
# (we could use Oz and Fpz y positions as well)
radius = np.abs(pos[[2, 3], 0]).mean()
# then we obtain the x, y, z sphere center this way:
# x: x position of the Oz channel (should be very close to 0)
# y: y position of the T8 channel (should be very close to 0 too)
# z: average z position of Oz, Fpz, T7 and T8 (their z position should be the
# the same, so we could also use just one of these channels), it should be
# positive and somewhere around `0.03` (3 cm)
x = pos[0, 0]
y = pos[-1, 1]
z = pos[:, -1].mean()
# lets print the values we got:
print([f'{v:0.5f}' for v in [x, y, z, radius]])
Explanation: Calculate sphere origin and radius
EEGLAB plots head outline at the level where the head circumference is
measured
in the 10-20 system (a line going through Fpz, T8/T4, Oz and T7/T3 channels).
MNE-Python places the head outline lower on the z dimension, at the level of
the anatomical landmarks: LPA, RPA, and NAS (the fiducial points).
Therefore to use the EEGLAB layout we
have to move the origin of the reference sphere (a sphere that is used as a
reference when projecting channel locations to a 2d plane) a few centimeters
up.
Instead of approximating this position by eye, as we did in the sensor
locations tutorial, here we will calculate it using
the position of Fpz, T8, Oz and T7 channels available in our montage.
End of explanation
# create a two-panel figure with some space for the titles at the top
fig, ax = plt.subplots(ncols=2, figsize=(8, 4), gridspec_kw=dict(top=0.9),
sharex=True, sharey=True)
# we plot the channel positions with default sphere - the mne way
fake_evoked.plot_sensors(axes=ax[0], show=False)
# in the second panel we plot the positions using the EEGLAB reference sphere
fake_evoked.plot_sensors(sphere=(x, y, z, radius), axes=ax[1], show=False)
# add titles
ax[0].set_title('MNE channel projection', fontweight='bold')
ax[1].set_title('EEGLAB channel projection', fontweight='bold')
Explanation: Compare MNE and EEGLAB channel layout
We already have the required x, y, z sphere center and its radius — we can
use these values by passing them to the sphere argument of many
topo-plotting functions (i.e. sphere=(x, y, z, radius)).
End of explanation
fig, ax = plt.subplots(ncols=2, figsize=(8, 4), gridspec_kw=dict(top=0.9),
sharex=True, sharey=True)
mne.viz.plot_topomap(fake_evoked.data[:, 0], fake_evoked.info, axes=ax[0],
show=False)
mne.viz.plot_topomap(fake_evoked.data[:, 0], fake_evoked.info, axes=ax[1],
show=False, sphere=(x, y, z, radius))
# add titles
ax[0].set_title('MNE', fontweight='bold')
ax[1].set_title('EEGLAB', fontweight='bold')
Explanation: Topomaps (topoplots)
As the last step we do the same, but plotting the topomaps. These will not
be particularly interesting as they will show random data but hopefully you
will see the difference.
End of explanation |
4,576 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading large datasets
Learning Objectives
- Understand difference between loading data entirely in-memory and loading in batches from disk
- Practice loading a .csv file from disk in batches using the tf.data module
Introduction
In the previous notebook, we read the whole taxifare .csv files into memory, specifically a Pandas dataframe, before invoking tf.data.from_tensor_slices from the tf.data API. We could get away with this because it was a small sample of the dataset, but on the full taxifare dataset this wouldn't be feasible.
In this notebook we demonstrate how to read .csv files directly from disk, one batch at a time, using tf.data.TextLineDataset
Run the following cell and restart the kernel if needed
Step1: Input function reading from CSV
We define read_dataset() which given a csv file path returns a tf.data.Dataset in which each row represents a (features,label) in the Estimator API required format
- features: A python dictionary. Each key is a feature column name and its value is the tensor containing the data for that feature
- label: A Tensor containing the labels
Step2: Run the following test to make sure your implementation is correct
Step3: We'll use the function parse_row we implemented above to
implement a read_dataset function that
- takes as input the path to a csv file
- returns a tf.data.Dataset object containing the features, labels
We can assume that the .csv file has a header, and that your read_dataset will skip it.
Step4: Tests
Let's create a test dataset to test our function.
Step5: You should be able to iterate over what's returned by read_dataset. We'll print the dropofflat and fare_amount for each entry in ./test.csv
Step6: Run the following test cell to make sure your function works properly
Step7: Next we can implement a train_input_fn function that
- takes as input a path to a csv file along with a batch_size
- returns a dataset object that shuffle the rows and returns them in batches of batch_size
We'll reuse the read_dataset function you implemented above.
Step8: Next, we implement an eval_input_fn similar to the train_input_fn you implemented above.
The only difference is that this function does not need to shuffle the rows.
Step9: Create feature columns
The features of our models are the following
Step10: In the cell below, create a variable feature_cols containing a
list of the appropriate tf.feature_column to be passed to a tf.estimator
Step11: Choose Estimator
Next, we create an instance of a tf.estimator.DNNRegressor such that
- it has two layers of 10 units each
- it uses the features defined in the previous exercise
- it saves the trained model into the directory ./taxi_trained
- it has a random seed set to 1 for replicability and debugging
Note that we can set the random seed by passing a tf.estimator.RunConfig object to the config parameter of the tf.estimator.
Step12: Train
With the model defined, we can now train the model on our data. In the cell below, we train the model you defined above using the train_input_fn on ./taxi-train.csv for 500 steps. How many epochs of our data does this represent?
Step13: Evaluate
Finally, we'll evaluate the performance of our model on the validation set. We evaluate the model using its .evaluate method and
the eval_input_fn function you implemented above on the ./taxi-valid.csv dataset. Note, we make sure to extract the average_loss for the dictionary returned by model.evaluate. It is the RMSE. | Python Code:
import tensorflow as tf
import shutil
print(tf.__version__)
tf.enable_eager_execution()
Explanation: Loading large datasets
Learning Objectives
- Understand difference between loading data entirely in-memory and loading in batches from disk
- Practice loading a .csv file from disk in batches using the tf.data module
Introduction
In the previous notebook, we read the whole taxifare .csv files into memory, specifically a Pandas dataframe, before invoking tf.data.from_tensor_slices from the tf.data API. We could get away with this because it was a small sample of the dataset, but on the full taxifare dataset this wouldn't be feasible.
In this notebook we demonstrate how to read .csv files directly from disk, one batch at a time, using tf.data.TextLineDataset
Run the following cell and restart the kernel if needed:
End of explanation
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0], [40.0], [-74.0], [40.7]]
def parse_row(row):
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
features = dict(zip(CSV_COLUMN_NAMES, fields))
label = features.pop("fare_amount") # remove label from features and store
return features, label
Explanation: Input function reading from CSV
We define read_dataset() which given a csv file path returns a tf.data.Dataset in which each row represents a (features,label) in the Estimator API required format
- features: A python dictionary. Each key is a feature column name and its value is the tensor containing the data for that feature
- label: A Tensor containing the labels
We then invoke read_dataset() function from within the train_input_fn() and eval_input_fn(). The remaining code is as before.
End of explanation
a_row = "0.0,1,0,-74.0,40.0,-74.0,40.7"
features, labels = parse_row(a_row)
assert labels.numpy() == 0.0
assert features["pickuplon"].numpy() == -74.0
print("You rock!")
Explanation: Run the following test to make sure your implementation is correct
End of explanation
def read_dataset(csv_path):
dataset = tf.data.TextLineDataset(filenames = csv_path).skip(count = 1) # skip header
dataset = dataset.map(map_func = parse_row)
return dataset
Explanation: We'll use the function parse_row we implemented above to
implement a read_dataset function that
- takes as input the path to a csv file
- returns a tf.data.Dataset object containing the features, labels
We can assume that the .csv file has a header, and that your read_dataset will skip it.
End of explanation
%%writefile test.csv
fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat
28,1,0,-73.0,41.0,-74.0,20.7
12.3,1,0,-72.0,44.0,-75.0,40.6
10,1,0,-71.0,41.0,-71.0,42.9
Explanation: Tests
Let's create a test dataset to test our function.
End of explanation
for feature, label in read_dataset("./test.csv"):
print("dropofflat:", feature["dropofflat"].numpy())
print("fare_amount:", label.numpy())
Explanation: You should be able to iterate over what's returned by read_dataset. We'll print the dropofflat and fare_amount for each entry in ./test.csv
End of explanation
dataset= read_dataset("./test.csv")
dataset_iterator = dataset.make_one_shot_iterator()
features, labels = dataset_iterator.get_next()
assert features["dayofweek"].numpy() == 1
assert labels.numpy() == 28
print("You rock!")
Explanation: Run the following test cell to make sure your function works properly:
End of explanation
def train_input_fn(csv_path, batch_size = 128):
dataset = read_dataset(csv_path)
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
return dataset
Explanation: Next we can implement a train_input_fn function that
- takes as input a path to a csv file along with a batch_size
- returns a dataset object that shuffle the rows and returns them in batches of batch_size
We'll reuse the read_dataset function you implemented above.
End of explanation
def eval_input_fn(csv_path, batch_size = 128):
dataset = read_dataset(csv_path)
dataset = dataset.batch(batch_size = batch_size)
return dataset
Explanation: Next, we implement an eval_input_fn similar to the train_input_fn you implemented above.
The only difference is that this function does not need to shuffle the rows.
End of explanation
FEATURE_NAMES = CSV_COLUMN_NAMES[1:] # all but first column
print(FEATURE_NAMES)
Explanation: Create feature columns
The features of our models are the following:
End of explanation
feature_cols = [tf.feature_column.numeric_column(key = k) for k in FEATURE_NAMES]
print(feature_cols)
Explanation: In the cell below, create a variable feature_cols containing a
list of the appropriate tf.feature_column to be passed to a tf.estimator:
End of explanation
OUTDIR = "taxi_trained"
model = tf.estimator.DNNRegressor(
hidden_units = [10,10], # specify neural architecture
feature_columns = feature_cols,
model_dir = OUTDIR,
config = tf.estimator.RunConfig(tf_random_seed = 1)
)
Explanation: Choose Estimator
Next, we create an instance of a tf.estimator.DNNRegressor such that
- it has two layers of 10 units each
- it uses the features defined in the previous exercise
- it saves the trained model into the directory ./taxi_trained
- it has a random seed set to 1 for replicability and debugging
Note that we can set the random seed by passing a tf.estimator.RunConfig object to the config parameter of the tf.estimator.
End of explanation
%%time
tf.logging.set_verbosity(tf.logging.INFO) # so loss is printed during training
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
model.train(
input_fn = lambda: train_input_fn(csv_path = "./taxi-train.csv"),
steps = 500
)
Explanation: Train
With the model defined, we can now train the model on our data. In the cell below, we train the model you defined above using the train_input_fn on ./taxi-train.csv for 500 steps. How many epochs of our data does this represent?
End of explanation
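A rough answer to that question: each step consumes one batch of 128 rows, so 500 steps read 500 * 128 = 64,000 examples; dividing by the row count of taxi-train.csv gives the number of epochs. The row count below is a placeholder, not the real file size:
n_train_rows = 10000  # placeholder; substitute the actual number of rows in taxi-train.csv
examples_seen = 500 * 128  # steps * batch_size
print("approximate epochs:", examples_seen / n_train_rows)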
metrics = model.evaluate(input_fn = lambda: eval_input_fn(csv_path = "./taxi-valid.csv"))
print("RMSE on dataset = {}".format(metrics["average_loss"]**.5))
Explanation: Evaluate
Finally, we'll evaluate the performance of our model on the validation set. We evaluate the model using its .evaluate method and
the eval_input_fn function you implemented above on the ./taxi-valid.csv dataset. Note that we extract the average_loss from the dictionary returned by model.evaluate; its square root is the RMSE.
End of explanation |
4,577 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Linear Regression using sklearn
| Python Code::
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25)
model = LinearRegression()
model.fit(X_train, y_train)
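# Optional follow-up showing how the fitted model is typically used on the
# held-out split; score returns the R^2 coefficient of determination.
y_pred = model.predict(X_test)
print(model.score(X_test, y_test))  # R^2 on the test split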
|
4,578 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Mixture Model with ADVI
Here, we describe how to use ADVI for inference of a Gaussian mixture model. First, we will show that inference with ADVI does not require modifying the stochastic model; we just call a function. Then, we will show how to use mini-batches, which are useful for large datasets. In that case, the model should be slightly changed.
First, create artificial data from a mixture of two Gaussian components.
Step1: Gaussian mixture models are usually constructed with categorical random variables. However, discrete rvs do not fit ADVI. Here, class assignment variables are marginalized out, giving a weighted sum of the probability for the Gaussian components. The log likelihood of the total probability is calculated using logsumexp, which is a standard technique for making this kind of calculation stable.
In the code below, the DensityDist class is used as the likelihood term. The second argument, logp_gmix(mus, pi, np.eye(2)), is a python function which receives observations (denoted by 'value') and returns the tensor representation of the log-likelihood.
Step2: For comparison with ADVI, run MCMC.
Step3: Check posterior of component means and weights. We can see that the MCMC samples of the component mean for the lower-left component varied more than the upper-right due to the difference of the sample size of these clusters.
Step4: We can use the same model with ADVI as follows.
Step5: The function returns three variables. 'means' and 'sds' are the mean and standart deviations of the variational posterior. Note that these values are in the transformed space, not in the original space. For random variables in the real line, e.g., means of the Gaussian components, no transformation is applied. Then we can see the variational posterior in the original space.
Step6: TODO
Step7: To demonstrate that ADVI works for large dataset with mini-batch, let's create 100,000 samples from the same mixture distribution.
Step8: MCMC took 55 seconds, 20 times longer than the small dataset.
Step9: Posterior samples are concentrated on the true means, so looks like single point for each component.
Step10: For ADVI with mini-batch, put theano tensor on the observed variable of the ObservedRV. The tensor will be replaced with mini-batches. Because of the difference of the size of mini-batch and whole samples, the log-likelihood term should be appropriately scaled. To tell the log-likelihood term, we need to give ObservedRV objects ('minibatch_RVs' below) where mini-batch is put. Also we should keep the tensor ('minibatch_tensors').
Step11: Make a generator for mini-batches of size 200. Here, we take random sampling strategy to make mini-batches.
Step12: Run ADVI. It's much faster than MCMC, though the problem here is simple and it's not a fair comparison.
Step13: The result is almost the same.
Step14: The variance of the trace of ELBO is larger than without mini-batch because of the subsampling from the whole samples. | Python Code:
%matplotlib inline
import theano
theano.config.floatX = 'float64'
import pymc3 as pm
from pymc3 import Normal, Metropolis, sample, MvNormal, Dirichlet, \
DensityDist, find_MAP, NUTS, Slice
import theano.tensor as tt
from theano.tensor.nlinalg import det
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
n_samples = 100
rng = np.random.RandomState(123)
ms = np.array([[-1, -1.5], [1, 1]])
ps = np.array([0.2, 0.8])
zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
Explanation: Gaussian Mixture Model with ADVI
Here, we describe how to use ADVI for inference of a Gaussian mixture model. First, we will show that inference with ADVI does not require modifying the stochastic model; we just call a function. Then, we will show how to use mini-batches, which are useful for large datasets. In that case, the model should be slightly changed.
First, create artificial data from a mixture of two Gaussian components.
End of explanation
from pymc3.math import logsumexp
# Log likelihood of normal distribution
def logp_normal(mu, tau, value):
# log probability of individual samples
k = tau.shape[0]
delta = lambda mu: value - mu
return (-1 / 2.) * (k * tt.log(2 * np.pi) + tt.log(1./det(tau)) +
(delta(mu).dot(tau) * delta(mu)).sum(axis=1))
# Log likelihood of Gaussian mixture distribution
def logp_gmix(mus, pi, tau):
def logp_(value):
logps = [tt.log(pi[i]) + logp_normal(mu, tau, value)
for i, mu in enumerate(mus)]
return tt.sum(logsumexp(tt.stacklists(logps)[:, :n_samples], axis=0))
return logp_
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
Explanation: Gaussian mixture models are usually constructed with categorical random variables. However, discrete rvs do not fit ADVI. Here, class assignment variables are marginalized out, giving a weighted sum of the probability for the Gaussian components. The log likelihood of the total probability is calculated using logsumexp, which is a standard technique for making this kind of calculation stable.
In the code below, the DensityDist class is used as the likelihood term. The second argument, logp_gmix(mus, pi, np.eye(2)), is a python function which receives observations (denoted by 'value') and returns the tensor representation of the log-likelihood.
End of explanation
with model:
start = find_MAP()
step = Metropolis()
trace = sample(1000, step, start=start)
Explanation: For comparison with ADVI, run MCMC.
End of explanation
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=10)
plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=10)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
sns.barplot([1, 2], np.mean(trace['pi'][-5000:], axis=0),
palette=['red', 'blue'])
Explanation: Check posterior of component means and weights. We can see that the MCMC samples of the component mean for the lower-left component varied more than the upper-right due to the difference of the sample size of these clusters.
End of explanation
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
%time means, sds, elbos = pm.variational.advi( \
model=model, n=1000, learning_rate=1e-1)
Explanation: We can use the same model with ADVI as follows.
End of explanation
from copy import deepcopy
mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']
def logp_normal_np(mu, tau, value):
# log probability of individual samples
k = tau.shape[0]
delta = lambda mu: value - mu
return (-1 / 2.) * (k * np.log(2 * np.pi) + np.log(1./np.linalg.det(tau)) +
(delta(mu).dot(tau) * delta(mu)).sum(axis=1))
def threshold(zz):
zz_ = deepcopy(zz)
zz_[zz < np.max(zz) * 1e-2] = None
return zz_
def plot_logp_normal(ax, mu, sd, cmap):
f = lambda value: np.exp(logp_normal_np(mu, np.diag(1 / sd**2), value))
g = lambda mu, sd: np.arange(mu - 3, mu + 3, .1)
xx, yy = np.meshgrid(g(mu[0], sd[0]), g(mu[1], sd[1]))
zz = f(np.vstack((xx.reshape(-1), yy.reshape(-1))).T).reshape(xx.shape)
ax.contourf(xx, yy, threshold(zz), cmap=cmap, alpha=0.9)
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plot_logp_normal(ax, mu_0, sd_0, cmap='Reds')
plot_logp_normal(ax, mu_1, sd_1, cmap='Blues')
plt.xlim(-6, 6)
plt.ylim(-6, 6)
Explanation: The function returns three variables. 'means' and 'sds' are the mean and standard deviations of the variational posterior. Note that these values are in the transformed space, not in the original space. For random variables on the real line, e.g., means of the Gaussian components, no transformation is applied. Then we can see the variational posterior in the original space.
End of explanation
plt.plot(elbos)
Explanation: TODO: We need to backward-transform 'pi', which is transformed by 'stick_breaking'.
'elbos' contains the trace of ELBO, showing stochastic convergence of the algorithm.
End of explanation
n_samples = 100000
zs = np.array([rng.multinomial(1, ps) for _ in range(n_samples)]).T
xs = [z[:, np.newaxis] * rng.multivariate_normal(m, np.eye(2), size=n_samples)
for z, m in zip(zs, ms)]
data = np.sum(np.dstack(xs), axis=2)
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], c='g', alpha=0.5)
plt.scatter(ms[0, 0], ms[0, 1], c='r', s=100)
plt.scatter(ms[1, 0], ms[1, 1], c='b', s=100)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
Explanation: To demonstrate that ADVI works for large dataset with mini-batch, let's create 100,000 samples from the same mixture distribution.
End of explanation
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data)
start = find_MAP()
step = Metropolis()
trace = sample(1000, step, start=start)
Explanation: MCMC took 55 seconds, 20 times longer than the small dataset.
End of explanation
plt.figure(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
mu_0, mu_1 = trace['mu_0'], trace['mu_1']
plt.scatter(mu_0[-500:, 0], mu_0[-500:, 1], c="r", s=50)
plt.scatter(mu_1[-500:, 0], mu_1[-500:, 1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
Explanation: Posterior samples are concentrated on the true means, so looks like single point for each component.
End of explanation
data_t = tt.matrix()
data_t.tag.test_value = np.zeros((1, 2)).astype(float)
with pm.Model() as model:
mus = [MvNormal('mu_%d' % i, mu=np.zeros(2), tau=0.1 * np.eye(2), shape=(2,))
for i in range(2)]
pi = Dirichlet('pi', a=0.1 * np.ones(2), shape=(2,))
xs = DensityDist('x', logp_gmix(mus, pi, np.eye(2)), observed=data_t)
minibatch_tensors = [data_t]
minibatch_RVs = [xs]
Explanation: For ADVI with mini-batch, put a theano tensor on the observed variable of the ObservedRV. The tensor will be replaced with mini-batches. Because of the difference between the size of a mini-batch and the whole sample, the log-likelihood term should be appropriately scaled. To scale the log-likelihood term, we need to pass the ObservedRV objects where mini-batches are placed ('minibatch_RVs' below). We should also keep the tensors ('minibatch_tensors').
End of explanation
def create_minibatch(data):
rng = np.random.RandomState(0)
while True:
ixs = rng.randint(len(data), size=200)
yield [data[ixs]]
minibatches = create_minibatch(data)
total_size = len(data)
Explanation: Make a generator for mini-batches of size 200. Here, we take a random sampling strategy to make mini-batches.
End of explanation
# Used only to write the function call in single line for using %time
# is there more smart way?
def f():
return pm.variational.advi_minibatch(
model=model, n=1000, minibatch_tensors=minibatch_tensors,
minibatch_RVs=minibatch_RVs, minibatches=minibatches,
total_size=total_size, learning_rate=1e-1)
%time means, sds, elbos = f()
Explanation: Run ADVI. It's much faster than MCMC, though the problem here is simple and it's not a fair comparison.
End of explanation
from copy import deepcopy
mu_0, sd_0 = means['mu_0'], sds['mu_0']
mu_1, sd_1 = means['mu_1'], sds['mu_1']
fig, ax = plt.subplots(figsize=(5, 5))
plt.scatter(data[:, 0], data[:, 1], alpha=0.5, c='g')
plt.scatter(mu_0[0], mu_0[1], c="r", s=50)
plt.scatter(mu_1[0], mu_1[1], c="b", s=50)
plt.xlim(-6, 6)
plt.ylim(-6, 6)
Explanation: The result is almost the same.
End of explanation
plt.plot(elbos);
Explanation: The variance of the trace of ELBO is larger than without mini-batch because of the subsampling from the whole samples.
End of explanation |
4,579 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src='rc_logo.png' style="height:75px">
Step1: Locally and Remote
Run locally
Connect to the cloud (e.g AWS)
Connect to supercomputer (e.g. XSEDE Resource)
Add compute power
Step2: Plot a Histogram of x
Step3: Customizable
Custom CSS
Custom javascript libraries
Create your own output format.
Tools and workflow
Magic Commands
Built-in useful functions
% line commands
%% cell commands
Step4: Other Languages
Step5: Keep it all together
Step6: NBconvert examples
HTML
PDF (print) - you have to have LaTex installed
Slides
Dynamic Slides
ReStructured Text (sphinx) | Python Code:
2+4
print("hello")
print("Hello world!")
Explanation: <img src='rc_logo.png' style="height:75px">
Efficient Data Analysis with the IPython Notebook
<img src='data_overview.png' style="height:500px">
Objectives
Become familiar with the IPython Notebook.
Introduce the IPython landscape.
Getting started with exploratory data analysis in Python
Conducting reproducible data analyis and computing experiments
How do you currently:
wrangle data?
visualize results?
Analysis: machine learning, stats
Parallel computing
Big data
What is Python?
<blockquote>
<p>
Python is a general-purpose programming language that blends procedural, functional, and object-oriented paradigms
<p>
Mark Lutz, <a href="http://www.amazon.com/Learning-Python-Edition-Mark-Lutz/dp/1449355730">Learning Python</a>
</blockquote>
Simple, clean syntax
Easy to learn
Interpreted
Strong, dynamically typed
Runs everywhere: Linux, Mac, and Windows
Free and open
Expressive: do more with fewer lines of code
Lean: modules
Options: Procedural, object-oriented, and functional.
Abstractions
Python provides high-level abstraction
Performance can be on par with compiled code if right approach is used
<img src="https://s3.amazonaws.com/research_computing_tutorials/matrix_multiply_compare.png" style="margin:5px auto; height:400px; display:block;">
IPython and the IPython Notebook
IPython
Platform for interactive computing
Shell or browser-based notebook
Project Jupyter
Language independent notebook
Can be used with R, Julia, bash ...
IPython Notebook
http://blog.fperez.org/2012/01/ipython-notebook-historical.html
Interactive web-based computing, data analysis, and documentation.
One document for code and output
Run locally and remote
Document process
Share results
<img src='traditional_python.png'>
<img src='ipython-notebook.png'>
Integrate Code and Documentation
Data structure output
Inline plots
Conversational style programming (Literate programming)
Telling a data story
Great for iterative programming.
Data analysis
Quick scripts
Prototyping
2 type of cells:
Markdown for documentation
Code for execution programs
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.random.randn(10000)
print(x)
Explanation: Locally and Remote
Run locally
Connect to the cloud (e.g AWS)
Connect to supercomputer (e.g. XSEDE Resource)
Add compute power:
mpi4py
IPython Parallel
spark big distributed data
Numbapro GPU
...
Documentation and Sharing
<img src='ipython-notebook-sharing.png'>
Keyboard Shortcuts
<img src='ipython-notebook-keyboard.png'>
Markdown and LaTeX
Markdown
Latex $y = \sqrt{a + b}$
Images
<img src='https://s3.amazonaws.com/research_computing_tutorials/monty-python.png' width="300">
This is an image:
<img src='https://s3.amazonaws.com/research_computing_tutorials/monty-python.png' width="300">
Embedded Plots
End of explanation
plt.hist(x, bins=50)
plt.show()
Explanation: Plot a Histogram of x
End of explanation
%lsmagic
%timeit y = np.random.randn(100000)
%ll
Explanation: Customizable
Custom CSS
Custom javascript libraries
Create your own output format.
Tools and workflow
Magic Commands
Built-in useful functions
% line commands
%% cell commands
End of explanation
%%bash
ls -l
files = !ls # But glob is a better way
print files[:5]
Explanation: Other Languages: Bash
End of explanation
%%writefile example.cpp
#include <iostream>
int main(){
std::cout << "hello from c++" << std::endl;
}
%ls
%%bash
g++ example.cpp -o example
./example
Explanation: Keep it all together
End of explanation
!ipython nbconvert --to 'PDF' 01_introduction-IPython-notebook.ipynb
!open 01_introduction-IPython-notebook.pdf
Explanation: NBconvert examples
HTML
PDF (print) - you have to have LaTex installed
Slides
Dynamic Slides
ReStructured Text (sphinx)
End of explanation |
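For the slide formats listed above, a similar one-liner works; treat the exact flag set as an assumption for this IPython-era nbconvert (newer versions use jupyter nbconvert instead):
!ipython nbconvert --to slides 01_introduction-IPython-notebook.ipynb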
4,580 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label
Step1: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
Step2: Using Sector is equivalent to Fundamentals.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
Step3: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like
Step4: Let's put each of our classifiers into a pipeline and run it to see what they look like. | Python Code:
from quantopian.pipeline.data import Fundamentals
# Since the underlying data of Fundamentals.exchange_id
# is of type string, .latest returns a Classifier
exchange = Fundamentals.exchange_id.latest
Explanation: Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label:
F(asset, timestamp) -> category
An example of a classifier producing a string output is the exchange ID of a security. To create this classifier, we'll have to import Fundamentals.exchange_id and use the latest attribute to instantiate our classifier:
End of explanation
from quantopian.pipeline.classifiers.fundamentals import Sector
morningstar_sector = Sector()
Explanation: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
End of explanation
nyse_filter = exchange.eq('NYS')
Explanation: Using Sector is equivalent to Fundamentals.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
End of explanation
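The other Filter-producing methods mentioned above work the same way. A small sketch, reusing the exchange classifier defined earlier (the 'NY' prefix used here is only illustrative):
# True for securities whose latest exchange_id starts with 'NY'
ny_exchanges = exchange.startswith('NY')

# True for securities with a missing latest exchange_id
missing_exchange = exchange.isnull()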
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
Explanation: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like:
End of explanation
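The quantile aliases work the same way. A sketch using quintiles of the same dollar-volume factor, keeping only the top bucket (labels run from 0 to 4):
dollar_volume_quintile = AverageDollarVolume(window_length=10).quintiles()
top_quintile = dollar_volume_quintile.eq(4)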
def make_pipeline():
exchange = Fundamentals.exchange_id.latest
nyse_filter = exchange.eq('NYS')
morningstar_sector = Sector()
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
return Pipeline(
columns={
'exchange': exchange,
'sector_code': morningstar_sector,
'dollar_volume_decile': dollar_volume_decile
},
screen=(nyse_filter & top_decile)
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print 'Number of securities that passed the filter: %d' % len(result)
result.head(5)
Explanation: Let's put each of our classifiers into a pipeline and run it to see what they look like.
End of explanation |
4,581 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To build an automaton, simply call translate() with a formula, and a list of options to characterize the automaton you want (those options have the same name as the long options name of the ltl2tgba tool, and they can be abbreviated).
Step1: The call the spot.setup() in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the show(style) method explicitely. For instance here is a vertical layout with the default font of GraphViz.
Step2: If you want to add some style options to the existing one, pass a dot to the show() function in addition to your own style options
Step3: The translate() function can also be called with a formula object. Either as a function, or as a method.
Step4: When used as a method, all the arguments are translation options. Here is a monitor
Step5: The following three cells show a formulas for which it makes a difference to select 'small' or 'deterministic'.
Step6: Here is how to build an unambiguous automaton
Step7: Compare with the standard translation
Step8: And here is the automaton above with state-based acceptance
Step9: Some example of running the self-loopization algorithm on an automaton
Step10: Reading from file (see automaton-io.ipynb for more examples).
Step11: Explicit determinization after translation
Step12: Determinization by translate(). The generic option allows any acceptance condition to be used instead of the default generalized Büchi.
Step13: Translation to co-Büchi automaton
Step14: Adding an atomic proposition to all edges
Step15: Adding an atomic proposition to the edge between 0 and 1 | Python Code:
a = spot.translate('(a U b) & GFc & GFd', 'BA', 'complete'); a
Explanation: To build an automaton, simply call translate() with a formula, and a list of options to characterize the automaton you want (those options have the same name as the long options name of the ltl2tgba tool, and they can be abbreviated).
End of explanation
a.show("v")
Explanation: The call to spot.setup() in the first cells has installed a default style for the graphviz output. If you want to change this style temporarily, you can call the show(style) method explicitly. For instance here is a vertical layout with the default font of GraphViz.
End of explanation
a.show(".ast")
Explanation: If you want to add some style options to the existing one, pass a dot to the show() function in addition to your own style options:
End of explanation
f = spot.formula('a U b'); f
spot.translate(f)
f.translate()
Explanation: The translate() function can also be called with a formula object. Either as a function, or as a method.
End of explanation
f.translate('mon')
Explanation: When used as a method, all the arguments are translation options. Here is a monitor:
End of explanation
f = spot.formula('Ga | Gb | Gc'); f
f.translate('ba', 'small').show('.v')
f.translate('ba', 'det').show('v.')
Explanation: The following three cells show a formula for which it makes a difference to select 'small' or 'deterministic'.
End of explanation
spot.translate('GFa -> GFb', 'unambig')
Explanation: Here is how to build an unambiguous automaton:
End of explanation
spot.translate('GFa -> GFb')
Explanation: Compare with the standard translation:
End of explanation
spot.translate('GFa -> GFb', 'sbacc')
Explanation: And here is the automaton above with state-based acceptance:
End of explanation
a = spot.translate('F(a & X(!a &Xb))', "any"); a
spot.sl(a)
a.is_empty()
Explanation: Some examples of running the self-loopization algorithm on an automaton:
End of explanation
%%file example1.aut
HOA: v1
States: 3
Start: 0
AP: 2 "a" "b"
acc-name: Buchi
Acceptance: 4 Inf(0)&Fin(1)&Fin(3) | Inf(2)&Inf(3) | Inf(1)
--BODY--
State: 0 {3}
[t] 0
[0] 1 {1}
[!0] 2 {0}
State: 1 {3}
[1] 0
[0&1] 1 {0}
[!0&1] 2 {2}
State: 2
[!1] 0
[0&!1] 1 {0}
[!0&!1] 2 {0}
--END--
a = spot.automaton('example1.aut')
display(a.show('.a'))
display(spot.remove_fin(a).show('.a'))
display(a.postprocess('TGBA', 'complete').show('.a'))
display(a.postprocess('BA'))
!rm example1.aut
spot.complete(a)
spot.complete(spot.translate('Ga'))
# Using +1 in the display options is a convenient way to shift the
# set numbers in the output, as an aid in reading the product.
a1 = spot.translate('a W c'); display(a1.show('.bat'))
a2 = spot.translate('a U b'); display(a2.show('.bat+1'))
# the product should display pairs of states, unless asked not to (using 1).
p = spot.product(a1, a2); display(p.show('.bat')); display(p.show('.bat1'))
Explanation: Reading from file (see automaton-io.ipynb for more examples).
End of explanation
a = spot.translate('FGa')
display(a)
display(a.is_deterministic())
spot.tgba_determinize(a).show('.ba')
Explanation: Explicit determinization after translation:
End of explanation
aut = spot.translate('FGa', 'generic', 'deterministic'); aut.show('.ba')
Explanation: Determinization by translate(). The generic option allows any acceptance condition to be used instead of the default generalized Büchi.
End of explanation
spot.translate('FGa', 'coBuchi').show('.ba')
spot.translate('FGa', 'coBuchi', 'deterministic').show('.ba')
Explanation: Translation to co-Büchi automaton
End of explanation
import buddy
b = buddy.bdd_ithvar(aut.register_ap('b'))
for e in aut.edges():
e.cond &= b
aut
Explanation: Adding an atomic proposition to all edges
End of explanation
c = buddy.bdd_ithvar(aut.register_ap('c'))
for e in aut.out(0):
if e.dst == 1:
e.cond &= c
aut
Explanation: Adding an atomic proposition to the edge between 0 and 1:
End of explanation |
4,582 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test with toy model
Step1: We'll define the modifier at two different temperatures, and we'll run each for 10000 snapshots. Note also that our two atoms have different masses.
Step2: Within each atom, all 3 DOFs will be part of the same distribution. We create a few lists with names of the form v_${BETA}_${ATOM_NUMBER}. These are the results we'll histogram and test.
Step3: We know what the distribution should look like, so we write it down explicitly
Step4: Now we take each total distribution, and compare it to the expected distribution. This is where we have to use our eyes to check the correctness.
Step5: If the red lines match the blue histograms, we're good. Otherwise, something has gone terribly wrong.
Test with OpenMM
Step6: That version randomized all velocities; we can also create a SnapshotModifier that only modifies certain velocities. For example, we might be interested in modifying the velocities of a solvent while ignoring the solute.
Next we create a little example that only modifies the velocities of the carbon atoms in alanine dipeptide.
Step7: Note that only the 6 carbon atoms, selected by the subset_mask, have changed velocities from the template's value of 0.0.
Finally, we'll check that the OpenMM version is giving the right statistics | Python Code:
topology = paths.engines.toy.Topology(n_spatial=3,
n_atoms=2,
masses=np.array([2.0, 8.0]),
pes=None)
initial_snapshot = paths.engines.toy.Snapshot(
coordinates=np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
velocities=np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]),
engine=paths.engines.toy.Engine({}, topology)
)
Explanation: Test with toy model
End of explanation
modifier_1 = paths.RandomVelocities(beta=1.0)
modifier_5 = paths.RandomVelocities(beta=1.0/5.0)
snapshots_1 = [modifier_1(initial_snapshot) for i in range(10000)]
snapshots_5 = [modifier_5(initial_snapshot) for i in range(10000)]
Explanation: We'll define the modifier at two different temperatures, and we'll run each for 10000 snapshots. Note also that our two atoms have different masses.
End of explanation
v_1_0 = sum([s.velocities[0].tolist() for s in snapshots_1], [])
v_1_1 = sum([s.velocities[1].tolist() for s in snapshots_1], [])
v_5_0 = sum([s.velocities[0].tolist() for s in snapshots_5], [])
v_5_1 = sum([s.velocities[1].tolist() for s in snapshots_5], [])
Explanation: Within each atom, all 3 DOFs will be part of the same distribution. We create a few lists with names of the form v_${BETA}_${ATOM_NUMBER}. These are the results we'll histogram and test.
End of explanation
def expected(beta, mass, v):
alpha = 0.5*beta*mass
return np.sqrt(alpha/np.pi)*np.exp(-alpha*v**2)
Explanation: We know what the distribution should look like, so we write it down explicitly:
End of explanation
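For reference, expected() encodes the one-dimensional Maxwell-Boltzmann density for a single degree of freedom, $p(v) = \sqrt{\frac{\beta m}{2\pi}}\, e^{-\beta m v^2/2}$; with $\alpha = \beta m / 2$ this is exactly the $\sqrt{\alpha/\pi}\, e^{-\alpha v^2}$ computed in the code.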
v = np.arange(-5.0, 5.0, 0.1)
bins = np.arange(-8.0, 8.0, 0.2)
plt.hist(v_1_0, bins=bins, normed=True)
plt.plot(v, expected(1.0, 2.0, v), 'r');
v = np.arange(-5.0, 5.0, 0.1)
bins = np.arange(-8.0, 8.0, 0.2)
plt.hist(v_1_1, bins=bins, normed=True)
plt.plot(v, expected(1.0, 8.0, v), 'r');
v = np.arange(-5.0, 5.0, 0.1)
bins = np.arange(-8.0, 8.0, 0.2)
plt.hist(v_5_0, bins=bins, normed=True)
plt.plot(v, expected(0.2, 2.0, v), 'r');
v = np.arange(-5.0, 5.0, 0.1)
bins = np.arange(-8.0, 8.0, 0.2)
plt.hist(v_5_1, bins=bins, normed=True)
plt.plot(v, expected(0.2, 8.0, v), 'r');
Explanation: Now we take each total distribution, and compare it to the expected distribution. This is where we have to use our eyes to check the correctness.
End of explanation
import openmmtools as omt
import openpathsampling.engines.openmm as omm_engine
import simtk.unit as u
test_system = omt.testsystems.AlanineDipeptideVacuum()
template = omm_engine.snapshot_from_testsystem(test_system)
# just to show that the initial velocities are all 0
template.velocities
temperature = 300.0 * u.kelvin
beta = 1.0 / (temperature * u.BOLTZMANN_CONSTANT_kB)
full_randomizer = paths.RandomVelocities(beta)
fully_randomized_snapshot = full_randomizer(template)
fully_randomized_snapshot.velocities
Explanation: If the red lines match the blue histograms, we're good. Otherwise, something has gone terribly wrong.
Test with OpenMM
End of explanation
carbon_atoms = template.topology.mdtraj.select("element C")
carbon_randomizer = paths.RandomVelocities(beta, subset_mask=carbon_atoms)
carbon_randomized_snapshot = carbon_randomizer(template)
carbon_randomized_snapshot.velocities
Explanation: That version randomized all velocities; we can also create a SnapshotModifier that only modifies certain velocities. For example, we might be interested in modifying the velocities of a solvent while ignoring the solute.
Next we create a little example that only modifies the velocities of the carbon atoms in alanine dipeptide.
End of explanation
carbon_velocities = [carbon_randomizer(template).velocities[carbon_atoms] for i in range(1000)]
all_dof_values = sum(np.concatenate(carbon_velocities).tolist(), [])
print(len(all_dof_values))
dalton_mass = 12.0
# manually doing conversions here
carbon_mass = dalton_mass / (6.02*10**23) * 10**-3 # kg
boltzmann = 1.38 * 10**-23 # J/K
m_s__to__nm_ps = 10**-3
temperature = 300.0 # K
kB_T = boltzmann * temperature * m_s__to__nm_ps**2
v = np.arange(-3.0, 3.0, 0.1)
bins = np.arange(-3.0, 3.0, 0.1)
plt.hist(all_dof_values, bins=bins, normed=True);
plt.plot(v, expected(1.0/kB_T, carbon_mass, v), 'r')
Explanation: Note that only the 6 carbon atoms, selected by the subset_mask, have changed velocities from the template's value of 0.0.
Finally, we'll check that the OpenMM version is giving the right statistics:
End of explanation |
4,583 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute the power spectral density of raw data
This script shows how to compute the power spectral density (PSD)
of measurements on a raw dataset. It also show the effect of applying SSP
to the data to reduce ECG and EOG artifacts.
Step1: Load data
We'll load a sample MEG dataset, along with SSP projections that will
allow us to reduce EOG and ECG artifacts. For more information about
reducing artifacts, see the preprocessing section in documentation.
Step2: Plot the raw PSD
First we'll visualize the raw PSD of our data. We'll do this on all of the
channels first. Note that there are several parameters to the
Step3: Plot a cleaned PSD
Next we'll focus the visualization on a subset of channels.
This can be useful for identifying particularly noisy channels or
investigating how the power spectrum changes across channels.
We'll visualize how this PSD changes after applying some standard
filtering techniques. We'll first apply the SSP projections, which is
accomplished with the proj=True kwarg. We'll then perform a notch filter
to remove particular frequency bands.
Step4: Alternative functions for PSDs
There are also several functions in MNE that create a PSD using a Raw
object. These are in the | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Martin Luessi <[email protected]>
# Eric Larson <[email protected]>
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io, read_proj, read_selection
from mne.datasets import sample
from mne.time_frequency import psd_multitaper
print(__doc__)
Explanation: Compute the power spectral density of raw data
This script shows how to compute the power spectral density (PSD)
of measurements on a raw dataset. It also shows the effect of applying SSP
to the data to reduce ECG and EOG artifacts.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_eog-proj.fif'
tmin, tmax = 0, 60 # use the first 60s of data
# Setup for reading the raw data (to save memory, crop before loading)
raw = io.read_raw_fif(raw_fname).crop(tmin, tmax).load_data()
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Add SSP projection vectors to reduce EOG and ECG artifacts
projs = read_proj(proj_fname)
raw.add_proj(projs, remove_existing=True)
fmin, fmax = 2, 300 # look at frequencies between 2 and 300Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
Explanation: Load data
We'll load a sample MEG dataset, along with SSP projections that will
allow us to reduce EOG and ECG artifacts. For more information about
reducing artifacts, see the preprocessing section in documentation.
End of explanation
raw.plot_psd(area_mode='range', tmax=10.0, show=False, average=True)
Explanation: Plot the raw PSD
First we'll visualize the raw PSD of our data. We'll do this on all of the
channels first. Note that there are several parameters to the
mne.io.Raw.plot_psd method, some of which will be explained below.
End of explanation
# Pick MEG magnetometers in the Left-temporal region
selection = read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads', selection=selection)
# Let's just look at the first few channels for demonstration purposes
picks = picks[:4]
plt.figure()
ax = plt.axes()
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=False, ax=ax, color=(0, 0, 1), picks=picks,
show=False, average=True)
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(0, 1, 0), picks=picks,
show=False, average=True)
# And now do the same with SSP + notch filtering
# Pick all channels for notch since the SSP projection mixes channels together
raw.notch_filter(np.arange(60, 241, 60), n_jobs=1, fir_design='firwin')
raw.plot_psd(tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax, n_fft=n_fft,
n_jobs=1, proj=True, ax=ax, color=(1, 0, 0), picks=picks,
show=False, average=True)
ax.set_title('Four left-temporal magnetometers')
plt.legend(ax.lines[::3], ['Without SSP', 'With SSP', 'SSP + Notch'])
Explanation: Plot a cleaned PSD
Next we'll focus the visualization on a subset of channels.
This can be useful for identifying particularly noisy channels or
investigating how the power spectrum changes across channels.
We'll visualize how this PSD changes after applying some standard
filtering techniques. We'll first apply the SSP projections, which is
accomplished with the proj=True kwarg. We'll then perform a notch filter
to remove particular frequency bands.
End of explanation
f, ax = plt.subplots()
psds, freqs = psd_multitaper(raw, low_bias=True, tmin=tmin, tmax=tmax,
fmin=fmin, fmax=fmax, proj=True, picks=picks,
n_jobs=1)
psds = 10 * np.log10(psds)
psds_mean = psds.mean(0)
psds_std = psds.std(0)
ax.plot(freqs, psds_mean, color='k')
ax.fill_between(freqs, psds_mean - psds_std, psds_mean + psds_std,
color='k', alpha=.5)
ax.set(title='Multitaper PSD', xlabel='Frequency',
ylabel='Power Spectral Density (dB)')
plt.show()
Explanation: Alternative functions for PSDs
There are also several functions in MNE that create a PSD using a Raw
object. These are in the mne.time_frequency module and begin with
psd_*. For example, we'll use a multitaper method to compute the PSD
below.
End of explanation |
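Another function in the same family is psd_welch, which takes essentially the same arguments; a minimal sketch (keyword names may differ slightly between MNE versions):
from mne.time_frequency import psd_welch

psds_w, freqs_w = psd_welch(raw, fmin=fmin, fmax=fmax, tmin=tmin, tmax=tmax,
                            proj=True, picks=picks, n_jobs=1)
psds_w = 10 * np.log10(psds_w)
print(psds_w.shape, freqs_w.shape)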
4,584 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running Tune experiments with Dragonfly
In this tutorial we introduce Dragonfly, while running a simple Ray Tune experiment.
Tune’s Search Algorithms integrate with Dragonfly and, as a result,
allow you to seamlessly scale up a Dragonfly optimization process - without
sacrificing performance.
Dragonfly is an open source python library for scalable Bayesian optimization.
Bayesian optimization is used optimizing black-box functions whose evaluations
are usually expensive. Beyond vanilla optimization techniques,
Dragonfly provides an array of tools to scale up Bayesian optimization to expensive
large scale problems. These include features that are especially suited for high
dimensional spaces (optimizing with a large number of variables), parallel evaluations
in synchronous or asynchronous settings (conducting multiple evaluations in parallel),
multi-fidelity optimization (using cheap approximations to speed up the optimization
process), and multi-objective optimization (optimizing multiple functions
simultaneously).
Bayesian optimization does not rely on the gradient of the objective function,
but instead, learns from samples of the search space. It is suitable for optimizing
functions that are non-differentiable, with many local minima, or even unknown but only
testable. Therefore, it belongs to the domain of "derivative-free optimization" and
"black-box optimization". In this example we minimize a simple objective to briefly
demonstrate the usage of Dragonfly with Ray Tune via DragonflySearch. It's useful
to keep in mind that despite the emphasis on machine learning experiments,
Ray Tune optimizes any implicit or explicit objective. Here we assume
dragonfly-opt==0.1.6 library is installed. To learn more, please refer to
the Dragonfly website.
Step1: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
Step3: Let's start by defining an optimization problem.
Suppose we want to figure out the proportions of water and several salts to add to an
ionic solution with the goal of maximizing its ability to conduct electricity.
The objective here is explicit for demonstration, yet in practice they often come
out of a black-box (e.g. a physical device measuring conductivity, or reporting the
results of a long-running ML experiment). We artificially sleep for a bit
(0.02 seconds) to simulate a more typical experiment. This setup assumes that we're
running multiple steps of an experiment and try to tune relative proportions of
4 ingredients-- these proportions should be considered as hyperparameters.
Our objective function will take a Tune config, evaluate the conductivity of
our experiment in a training loop,
and use tune.report to report the conductivity back to Tune.
Step4: Next we define a search space. The critical assumption is that the optimal
hyperparameters live within this space. Yet, if the space is very large, then those
hyperparameters may be difficult to find in a short amount of time.
Step5: Now we define the search algorithm from DragonflySearch with optimizer and
domain arguments specified in a common way. We also use ConcurrencyLimiter
to constrain to 4 concurrent trials.
Step6: The number of samples is the number of hyperparameter combinations that will be
tried out. This Tune run is set to 100 samples
(you can decrease this if it takes too long on your machine).
Step7: Finally, we run the experiment to minimize the mean_loss of the objective by
searching search_config via algo, num_samples times. This previous sentence is
fully characterizes the search problem we aim to solve. With this in mind,
notice how efficient it is to execute tune.run().
Step8: Below are the recommended relative proportions of water and each salt found to
maximize conductivity in the ionic solution (according to the simple model) | Python Code:
# !pip install ray[tune]
!pip install dragonfly-opt==0.1.6
Explanation: Running Tune experiments with Dragonfly
In this tutorial we introduce Dragonfly, while running a simple Ray Tune experiment.
Tune’s Search Algorithms integrate with Dragonfly and, as a result,
allow you to seamlessly scale up a Dragonfly optimization process - without
sacrificing performance.
Dragonfly is an open source python library for scalable Bayesian optimization.
Bayesian optimization is used for optimizing black-box functions whose evaluations
are usually expensive. Beyond vanilla optimization techniques,
Dragonfly provides an array of tools to scale up Bayesian optimization to expensive
large scale problems. These include features that are especially suited for high
dimensional spaces (optimizing with a large number of variables), parallel evaluations
in synchronous or asynchronous settings (conducting multiple evaluations in parallel),
multi-fidelity optimization (using cheap approximations to speed up the optimization
process), and multi-objective optimization (optimizing multiple functions
simultaneously).
Bayesian optimization does not rely on the gradient of the objective function,
but instead, learns from samples of the search space. It is suitable for optimizing
functions that are non-differentiable, with many local minima, or even unknown but only
testable. Therefore, it belongs to the domain of "derivative-free optimization" and
"black-box optimization". In this example we minimize a simple objective to briefly
demonstrate the usage of Dragonfly with Ray Tune via DragonflySearch. It's useful
to keep in mind that despite the emphasis on machine learning experiments,
Ray Tune optimizes any implicit or explicit objective. Here we assume
the dragonfly-opt==0.1.6 library is installed. To learn more, please refer to
the Dragonfly website.
End of explanation
import numpy as np
import time
import ray
from ray import tune
from ray.tune.suggest import ConcurrencyLimiter
from ray.tune.suggest.dragonfly import DragonflySearch
Explanation: Click below to see all the imports we need for this example.
You can also launch directly into a Binder instance to run this notebook yourself.
Just click on the rocket symbol at the top of the navigation.
End of explanation
def objective(config):
    """Simplistic model of electrical conductivity with added Gaussian
    noise to simulate experimental noise."""
for i in range(config["iterations"]):
vol1 = config["LiNO3_vol"] # LiNO3
vol2 = config["Li2SO4_vol"] # Li2SO4
vol3 = config["NaClO4_vol"] # NaClO4
vol4 = 10 - (vol1 + vol2 + vol3) # Water
conductivity = vol1 + 0.1 * (vol2 + vol3) ** 2 + 2.3 * vol4 * (vol1 ** 1.5)
conductivity += np.random.normal() * 0.01
tune.report(timesteps_total=i, objective=conductivity)
time.sleep(0.02)
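# Purely illustrative addition (not in the original example): evaluate the
# conductivity model once by hand to see the deterministic part of what a
# single trial reports. With vol1=1, vol2=2, vol3=3, the water volume is 10 - 6 = 4.
print(1.0 + 0.1 * (2.0 + 3.0) ** 2 + 2.3 * 4.0 * (1.0 ** 1.5))  # noise term omitted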
Explanation: Let's start by defining an optimization problem.
Suppose we want to figure out the proportions of water and several salts to add to an
ionic solution with the goal of maximizing its ability to conduct electricity.
The objective here is explicit for demonstration, yet in practice they often come
out of a black-box (e.g. a physical device measuring conductivity, or reporting the
results of a long-running ML experiment). We artificially sleep for a bit
(0.02 seconds) to simulate a more typical experiment. This setup assumes that we're
running multiple steps of an experiment and trying to tune the relative proportions of
4 ingredients-- these proportions should be considered as hyperparameters.
Our objective function will take a Tune config, evaluate the conductivity of
our experiment in a training loop,
and use tune.report to report the conductivity back to Tune.
End of explanation
search_space = {
"iterations": 100,
"LiNO3_vol": tune.uniform(0, 7),
"Li2SO4_vol": tune.uniform(0, 7),
"NaClO4_vol": tune.uniform(0, 7)
}
ray.init(configure_logging=False)
Explanation: Next we define a search space. The critical assumption is that the optimal
hyperparameters live within this space. Yet, if the space is very large, then those
hyperparameters may be difficult to find in a short amount of time.
End of explanation
algo = DragonflySearch(
optimizer="bandit",
domain="euclidean",
)
algo = ConcurrencyLimiter(algo, max_concurrent=4)
Explanation: Now we define the search algorithm from DragonflySearch with optimizer and
domain arguments specified in a common way. We also use ConcurrencyLimiter
to constrain to 4 concurrent trials.
End of explanation
num_samples = 100
# Reducing samples for smoke tests
num_samples = 10
Explanation: The number of samples is the number of hyperparameter combinations that will be
tried out. This Tune run is set to 100 samples.
(you can decrease this if it takes too long on your machine).
End of explanation
analysis = tune.run(
objective,
metric="objective",
mode="max",
name="dragonfly_search",
search_alg=algo,
num_samples=num_samples,
config=search_space
)
Explanation: Finally, we run the experiment to maximize the objective by
searching search_space via algo, num_samples times. The previous sentence
fully characterizes the search problem we aim to solve. With this in mind,
notice how efficient it is to execute tune.run().
End of explanation
print("Best hyperparameters found: ", analysis.best_config)
ray.shutdown()
Explanation: Below are the recommended relative proportions of water and each salt found to
maximize conductivity in the ionic solution (according to the simple model):
End of explanation |
4,585 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Networks with TensorFlow and Keras
Step1: First Step
Step2: Second Step
Step3: What is the intuition here
Step4: No overfitting, probably as good as it gets
Does this look like your manual solution?
The machine isn't better than you?
Your brain is a great pattern matcher, but only in 2-d.
Third Step
Step5: This is so much more, look at all the different shapes for different kilometers per year
Step6: Fourth Step | Python Code:
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
import pandas as pd
print(pd.__version__)
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
print(tf.__version__)
import keras
print(keras.__version__)
Explanation: Neural Networks with TensorFlow and Keras
End of explanation
# df = pd.read_csv('./insurance-customers-300.csv', sep=';')
df = pd.read_csv('./insurance-customers-1500.csv', sep=';')
y=df['group']
df.drop('group', axis='columns', inplace=True)
X = df.as_matrix()
df.describe()
Explanation: First Step: Load Data and disassemble for our purposes
We need a few more data point samples for this approach
End of explanation
# ignore this, it is just technical code
# should come from a lib, consider it to appear magically
# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
cmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])
cmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])
font_size=25
def meshGrid(x_data, y_data):
h = 1 # step size in the mesh
x_min, x_max = x_data.min() - 1, x_data.max() + 1
y_min, y_max = y_data.min() - 1, y_data.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return (xx,yy)
def plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title="", mesh=True, fixed=None, fname=None, print=False):
xx,yy = meshGrid(x_data, y_data)
plt.figure(figsize=(20,10))
if clf and mesh:
grid_X = np.array(np.c_[yy.ravel(), xx.ravel()])
if fixed:
fill_values = np.full((len(grid_X), 1), fixed)
grid_X = np.append(grid_X, fill_values, axis=1)
Z = clf.predict(grid_X)
Z = np.argmax(Z, axis=1)
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
if print:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')
else:
plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')
plt.xlabel(x_label, fontsize=font_size)
plt.ylabel(y_label, fontsize=font_size)
plt.title(title, fontsize=font_size)
if fname:
plt.savefig(fname)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42, stratify=y)
X_train.shape, y_train.shape, X_test.shape, y_test.shape
X_train_kmh_age = X_train[:, :2]
X_test_kmh_age = X_test[:, :2]
X_train_2_dim = X_train_kmh_age
X_test_2_dim = X_test_kmh_age
# tiny little pieces of feature engineering
from keras.utils.np_utils import to_categorical
num_categories = 3
y_train_categorical = to_categorical(y_train, num_categories)
y_test_categorical = to_categorical(y_test, num_categories)
from keras.layers import Input
from keras.layers import Dense
from keras.models import Model
from keras.layers import Dropout
inputs = Input(name='input', shape=(2, ))
x = Dense(100, name='hidden1', activation='relu')(inputs)
x = Dense(100, name='hidden2', activation='relu')(x)
x = Dense(100, name='hidden3', activation='relu')(x)
predictions = Dense(3, name='softmax', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
# loss function: http://cs231n.github.io/linear-classify/#softmax
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
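# Added sketch (not from the original notebook): the categorical cross-entropy loss
# chosen above is just the negative log of the softmax probability assigned to the
# true class. In plain numpy, for a single example with three raw scores:
example_scores = np.array([2.0, 1.0, 0.1])
example_probs = np.exp(example_scores) / np.sum(np.exp(example_scores))
print(example_probs, -np.log(example_probs[0]))  # loss if class 0 is the true label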
Explanation: Second Step: Deep Learning as Alchemy
End of explanation
%time model.fit(X_train_2_dim, y_train_categorical, epochs=1000, verbose=0, batch_size=100)
# %time model.fit(X_train_2_dim, y_train_categorical, epochs=1000, validation_split=0.2, verbose=0, batch_size=100)
plotPrediction(model, X_train_2_dim[:, 1], X_train_2_dim[:, 0],
'Age', 'Max Speed', y_train,
title="Train Data Max Speed vs Age with Classification")
train_loss, train_accuracy = model.evaluate(X_train_2_dim, y_train_categorical, batch_size=100)
train_accuracy
plotPrediction(model, X_test_2_dim[:, 1], X_test_2_dim[:, 0],
'Age', 'Max Speed', y_test,
title="Test Data Max Speed vs Age with Prediction")
test_loss, test_accuracy = model.evaluate(X_test_2_dim, y_test_categorical, batch_size=100)
test_accuracy
Explanation: What is the intuition here: none, just fiddle around like everyone
This might be frustrating or even seem wrong, but trust me: this is more or less what everyone does
End of explanation
drop_out = 0.15
inputs = Input(name='input', shape=(3, ))
x = Dense(100, name='hidden1', activation='relu')(inputs)
x = Dropout(drop_out)(x)
x = Dense(100, name='hidden2', activation='relu')(x)
x = Dropout(drop_out)(x)
x = Dense(100, name='hidden3', activation='relu')(x)
x = Dropout(drop_out)(x)
# x = Dense(100, name='hidden4', activation='sigmoid')(x)
# x = Dropout(drop_out)(x)
# x = Dense(100, name='hidden5', activation='sigmoid')(x)
# x = Dropout(drop_out)(x)
predictions = Dense(3, name='softmax', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
# loss function: http://cs231n.github.io/linear-classify/#softmax
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
%time model.fit(X_train, y_train_categorical, epochs=1000, verbose=0, batch_size=100)
# %time model.fit(X_train, y_train_categorical, epochs=1000, validation_split=0.2, verbose=0, batch_size=100)
train_loss, train_accuracy = model.evaluate(X_train, y_train_categorical, batch_size=100)
train_accuracy
test_loss, test_accuracy = model.evaluate(X_test, y_test_categorical, batch_size=100)
test_accuracy
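# Added illustration (not in the original notebook): besides dropout, overfitting can
# be watched automatically with a validation split plus early stopping, for example:
#   from keras.callbacks import EarlyStopping
#   early_stop = EarlyStopping(monitor='val_loss', patience=20, verbose=1)
#   model.fit(X_train, y_train_categorical, epochs=1000, batch_size=100,
#             validation_split=0.2, verbose=0, callbacks=[early_stop])
# (the patience value here is an assumption, not a tuned setting)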
Explanation: No overfitting, probably as good as it gets
Does this look like your manual solution?
The machine isn't better than you?
Your brain is a great pattern matcher, but only in 2-d.
Third Step: Use all dimensions
And possibly some drop out to avoid overfitting
End of explanation
kms_per_year = 15
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 15 km/year",
fname='cnn.png')
kms_per_year = 50
plotPrediction(model, X_test[:, 1], X_test[:, 0],
'Age', 'Max Speed', y_test,
fixed = kms_per_year,
title="Test Data Max Speed vs Age with Prediction, 50 km/year")
prediction = model.predict(X)
y_pred = np.argmax(prediction, axis=1)
y_true = y
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_true, y_pred)
import seaborn as sns
sns.heatmap(cm, annot=True, cmap="YlGnBu")
figure = plt.gcf()
figure.set_size_inches(10, 10)
ax = figure.add_subplot(111)
ax.set_xlabel('Prediction')
ax.set_ylabel('Ground Truth')
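# Added for illustration (not part of the original notebook): a per-class
# precision/recall summary complements the confusion matrix above.
from sklearn.metrics import classification_report
print(classification_report(y_true, y_pred))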
Explanation: This is so much more, look at all the different shapes for different kilometers per year
End of explanation
inputs = Input(name='input', shape=(3, ))
x = Dense(80, name='hidden1', activation='relu')(inputs)
x = Dense(80, name='hidden2', activation='relu')(x)
x = Dense(80, name='hidden3', activation='relu')(x)
predictions = Dense(3, name='softmax', activation='softmax')(x)
model = Model(input=inputs, output=predictions)
# loss function: http://cs231n.github.io/linear-classify/#softmax
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.summary()
%time model.fit(X_train, y_train_categorical, epochs=1000, verbose=0, batch_size=100)
# %time model.fit(X_train, y_train_categorical, epochs=1000, validation_split=0.2, verbose=0, batch_size=100)
train_loss, train_accuracy = model.evaluate(X_train, y_train_categorical, batch_size=100)
print(train_accuracy)
test_loss, test_accuracy = model.evaluate(X_test, y_test_categorical, batch_size=100)
print(test_accuracy)
!rm -rf tf
import os
from keras import backend as K
# K.clear_session()
K.set_learning_phase(0)
export_path_base = 'tf'
export_path = os.path.join(
tf.compat.as_bytes(export_path_base),
tf.compat.as_bytes("1"))
sess = K.get_session()
classification_inputs = tf.saved_model.utils.build_tensor_info(model.input)
classification_outputs_scores = tf.saved_model.utils.build_tensor_info(model.output)
from tensorflow.python.saved_model.signature_def_utils_impl import build_signature_def, predict_signature_def
signature = predict_signature_def(inputs={'inputs': model.input},
outputs={'scores': model.output})
builder = tf.saved_model.builder.SavedModelBuilder(export_path)
builder.add_meta_graph_and_variables(
sess,
tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={
tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature
})
builder.save()
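# Optional sanity check (added; assumes the TF 1.x SavedModel loader API): the export
# can be loaded back into a fresh session before deploying it anywhere, e.g.:
#   with tf.Session(graph=tf.Graph()) as load_sess:
#       tf.saved_model.loader.load(load_sess, [tf.saved_model.tag_constants.SERVING], 'tf/1')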
!ls -lhR tf
!saved_model_cli show --dir tf/1 --tag_set serve --signature_def serving_default
# 0: red
# 1: green
# 2: yellow
!saved_model_cli run --dir tf/1 --tag_set serve --signature_def serving_default --input_exprs 'inputs=[[160.0,47.0,15.0]]'
!cat sample_insurance.json
# https://cloud.google.com/ml-engine/docs/deploying-models
# Copy model to bucket
# gsutil cp -R tf/1 gs://booster_bucket
# create model and version at https://console.cloud.google.com/mlengine
# try out the deployed model
# gcloud ml-engine predict --model=booster --version=v1 --json-instances=./sample_insurance.json
# SCORES
# [0.003163766348734498, 0.9321494698524475, 0.06468681246042252]
# [2.467862714183866e-08, 1.2279541668431052e-14, 1.0]
Explanation: Fourth Step: Publish Model to Google Cloud ML and try it out
Unfortunately, for technical limitations, we need to get rid of the dropouts for that
End of explanation |
4,586 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Brewing Logistic Regression then Going Deeper
While Caffe is made for deep networks it can likewise represent "shallow" models like logistic regression for classification. We'll do simple logistic regression on synthetic data that we'll generate and save to HDF5 to feed vectors to Caffe. Once that model is done, we'll add layers to improve accuracy. That's what Caffe is about
Step1: Synthesize a dataset of 10,000 4-vectors for binary classification with 2 informative features and 2 noise features.
Step2: Learn and evaluate scikit-learn's logistic regression with stochastic gradient descent (SGD) training. Time and check the classifier's accuracy.
Step3: Save the dataset to HDF5 for loading in Caffe.
Step4: Let's define logistic regression in Caffe through Python net specification. This is a quick and natural way to define nets that sidesteps manually editing the protobuf model.
Step5: Now, we'll define our "solver" which trains the network by specifying the locations of the train and test nets we defined above, as well as setting values for various parameters used for learning, display, and "snapshotting".
Step6: Time to learn and evaluate our Caffeinated logistic regression in Python.
Step7: Do the same through the command line interface for detailed output on the model and solving.
Step8: If you look at output or the logreg_auto_train.prototxt, you'll see that the model is simple logistic regression.
We can make it a little more advanced by introducing a non-linearity between weights that take the input and weights that give the output -- now we have a two-layer network.
That network is given in nonlinear_auto_train.prototxt, and that's the only change made in nonlinear_logreg_solver.prototxt which we will now use.
The final accuracy of the new network should be higher than logistic regression!
Step9: Do the same through the command line interface for detailed output on the model and solving. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
os.chdir('..')
import sys
sys.path.insert(0, './python')
import caffe
import os
import h5py
import shutil
import tempfile
import sklearn
import sklearn.datasets
import sklearn.linear_model
import pandas as pd
Explanation: Brewing Logistic Regression then Going Deeper
While Caffe is made for deep networks it can likewise represent "shallow" models like logistic regression for classification. We'll do simple logistic regression on synthetic data that we'll generate and save to HDF5 to feed vectors to Caffe. Once that model is done, we'll add layers to improve accuracy. That's what Caffe is about: define a model, experiment, and then deploy.
End of explanation
X, y = sklearn.datasets.make_classification(
n_samples=10000, n_features=4, n_redundant=0, n_informative=2,
n_clusters_per_class=2, hypercube=False, random_state=0
)
# Split into train and test
X, Xt, y, yt = sklearn.cross_validation.train_test_split(X, y)
# Visualize sample of the data
ind = np.random.permutation(X.shape[0])[:1000]
df = pd.DataFrame(X[ind])
_ = pd.scatter_matrix(df, figsize=(9, 9), diagonal='kde', marker='o', s=40, alpha=.4, c=y[ind])
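# Added illustration (not in the original example): check how balanced the two
# classes are before training anything.
print('Train class counts: {}'.format(np.bincount(y)))
print('Test class counts: {}'.format(np.bincount(yt)))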
Explanation: Synthesize a dataset of 10,000 4-vectors for binary classification with 2 informative features and 2 noise features.
End of explanation
%%timeit
# Train and test the scikit-learn SGD logistic regression.
clf = sklearn.linear_model.SGDClassifier(
loss='log', n_iter=1000, penalty='l2', alpha=5e-4, class_weight='auto')
clf.fit(X, y)
yt_pred = clf.predict(Xt)
print('Accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(yt, yt_pred)))
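# Added context (not in the original example): the trivial majority-class baseline
# makes the accuracy number above easier to interpret.
print('Baseline accuracy: {:.3f}'.format(max(np.mean(yt), 1 - np.mean(yt))))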
Explanation: Learn and evaluate scikit-learn's logistic regression with stochastic gradient descent (SGD) training. Time and check the classifier's accuracy.
End of explanation
# Write out the data to HDF5 files in a temp directory.
# This file is assumed to be caffe_root/examples/hdf5_classification.ipynb
dirname = os.path.abspath('./examples/hdf5_classification/data')
if not os.path.exists(dirname):
os.makedirs(dirname)
train_filename = os.path.join(dirname, 'train.h5')
test_filename = os.path.join(dirname, 'test.h5')
# HDF5DataLayer source should be a file containing a list of HDF5 filenames.
# To show this off, we'll list the same data file twice.
with h5py.File(train_filename, 'w') as f:
f['data'] = X
f['label'] = y.astype(np.float32)
with open(os.path.join(dirname, 'train.txt'), 'w') as f:
f.write(train_filename + '\n')
f.write(train_filename + '\n')
# HDF5 is pretty efficient, but can be further compressed.
comp_kwargs = {'compression': 'gzip', 'compression_opts': 1}
with h5py.File(test_filename, 'w') as f:
f.create_dataset('data', data=Xt, **comp_kwargs)
f.create_dataset('label', data=yt.astype(np.float32), **comp_kwargs)
with open(os.path.join(dirname, 'test.txt'), 'w') as f:
f.write(test_filename + '\n')
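# Added sanity check (not in the original example): read the files back and confirm
# the dataset shapes match what was written.
with h5py.File(train_filename, 'r') as f:
    print('train: {} {}'.format(f['data'].shape, f['label'].shape))
with h5py.File(test_filename, 'r') as f:
    print('test:  {} {}'.format(f['data'].shape, f['label'].shape))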
Explanation: Save the dataset to HDF5 for loading in Caffe.
End of explanation
from caffe import layers as L
from caffe import params as P
def logreg(hdf5, batch_size):
# logistic regression: data, matrix multiplication, and 2-class softmax loss
n = caffe.NetSpec()
n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
n.ip1 = L.InnerProduct(n.data, num_output=2, weight_filler=dict(type='xavier'))
n.accuracy = L.Accuracy(n.ip1, n.label)
n.loss = L.SoftmaxWithLoss(n.ip1, n.label)
return n.to_proto()
train_net_path = 'examples/hdf5_classification/logreg_auto_train.prototxt'
with open(train_net_path, 'w') as f:
f.write(str(logreg('examples/hdf5_classification/data/train.txt', 10)))
test_net_path = 'examples/hdf5_classification/logreg_auto_test.prototxt'
with open(test_net_path, 'w') as f:
f.write(str(logreg('examples/hdf5_classification/data/test.txt', 10)))
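# Added illustration (not in the original example): peek at the generated protobuf
# text to see exactly what the Python net specification produced.
print(open(train_net_path).read())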
Explanation: Let's define logistic regression in Caffe through Python net specification. This is a quick and natural way to define nets that sidesteps manually editing the protobuf model.
End of explanation
from caffe.proto import caffe_pb2
def solver(train_net_path, test_net_path):
s = caffe_pb2.SolverParameter()
# Specify locations of the train and test networks.
s.train_net = train_net_path
s.test_net.append(test_net_path)
s.test_interval = 1000 # Test after every 1000 training iterations.
s.test_iter.append(250) # Test 250 "batches" each time we test.
s.max_iter = 10000 # # of times to update the net (training iterations)
# Set the initial learning rate for stochastic gradient descent (SGD).
s.base_lr = 0.01
# Set `lr_policy` to define how the learning rate changes during training.
# Here, we 'step' the learning rate by multiplying it by a factor `gamma`
# every `stepsize` iterations.
s.lr_policy = 'step'
s.gamma = 0.1
s.stepsize = 5000
# Set other optimization parameters. Setting a non-zero `momentum` takes a
# weighted average of the current gradient and previous gradients to make
# learning more stable. L2 weight decay regularizes learning, to help prevent
# the model from overfitting.
s.momentum = 0.9
s.weight_decay = 5e-4
# Display the current training loss and accuracy every 1000 iterations.
s.display = 1000
# Snapshots are files used to store networks we've trained. Here, we'll
# snapshot every 10K iterations -- just once at the end of training.
# For larger networks that take longer to train, you may want to set
# snapshot < max_iter to save the network and training state to disk during
# optimization, preventing disaster in case of machine crashes, etc.
s.snapshot = 10000
s.snapshot_prefix = 'examples/hdf5_classification/data/train'
# We'll train on the CPU for fair benchmarking against scikit-learn.
# Changing to GPU should result in much faster training!
s.solver_mode = caffe_pb2.SolverParameter.CPU
return s
solver_path = 'examples/hdf5_classification/logreg_solver.prototxt'
with open(solver_path, 'w') as f:
f.write(str(solver(train_net_path, test_net_path)))
Explanation: Now, we'll define our "solver" which trains the network by specifying the locations of the train and test nets we defined above, as well as setting values for various parameters used for learning, display, and "snapshotting".
End of explanation
%%timeit
caffe.set_mode_cpu()
solver = caffe.get_solver(solver_path)
solver.solve()
accuracy = 0
batch_size = solver.test_nets[0].blobs['data'].num
test_iters = int(len(Xt) / batch_size)
for i in range(test_iters):
solver.test_nets[0].forward()
accuracy += solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters
print("Accuracy: {:.3f}".format(accuracy))
Explanation: Time to learn and evaluate our Caffeinated logistic regression in Python.
End of explanation
!./build/tools/caffe train -solver examples/hdf5_classification/logreg_solver.prototxt
Explanation: Do the same through the command line interface for detailed output on the model and solving.
End of explanation
from caffe import layers as L
from caffe import params as P
def nonlinear_net(hdf5, batch_size):
# one small nonlinearity, one leap for model kind
n = caffe.NetSpec()
n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
# define a hidden layer of dimension 40
n.ip1 = L.InnerProduct(n.data, num_output=40, weight_filler=dict(type='xavier'))
# transform the output through the ReLU (rectified linear) non-linearity
n.relu1 = L.ReLU(n.ip1, in_place=True)
# score the (now non-linear) features
n.ip2 = L.InnerProduct(n.ip1, num_output=2, weight_filler=dict(type='xavier'))
# same accuracy and loss as before
n.accuracy = L.Accuracy(n.ip2, n.label)
n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
return n.to_proto()
train_net_path = 'examples/hdf5_classification/nonlinear_auto_train.prototxt'
with open(train_net_path, 'w') as f:
f.write(str(nonlinear_net('examples/hdf5_classification/data/train.txt', 10)))
test_net_path = 'examples/hdf5_classification/nonlinear_auto_test.prototxt'
with open(test_net_path, 'w') as f:
f.write(str(nonlinear_net('examples/hdf5_classification/data/test.txt', 10)))
solver_path = 'examples/hdf5_classification/nonlinear_logreg_solver.prototxt'
with open(solver_path, 'w') as f:
f.write(str(solver(train_net_path, test_net_path)))
%%timeit
caffe.set_mode_cpu()
solver = caffe.get_solver(solver_path)
solver.solve()
accuracy = 0
batch_size = solver.test_nets[0].blobs['data'].num
test_iters = int(len(Xt) / batch_size)
for i in range(test_iters):
solver.test_nets[0].forward()
accuracy += solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters
print("Accuracy: {:.3f}".format(accuracy))
Explanation: If you look at output or the logreg_auto_train.prototxt, you'll see that the model is simple logistic regression.
We can make it a little more advanced by introducing a non-linearity between weights that take the input and weights that give the output -- now we have a two-layer network.
That network is given in nonlinear_auto_train.prototxt, and that's the only change made in nonlinear_logreg_solver.prototxt which we will now use.
The final accuracy of the new network should be higher than logistic regression!
End of explanation
!./build/tools/caffe train -solver examples/hdf5_classification/nonlinear_logreg_solver.prototxt
# Clean up (comment this out if you want to examine the hdf5_classification/data directory).
shutil.rmtree(dirname)
Explanation: Do the same through the command line interface for detailed output on the model and solving.
End of explanation |
4,587 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science Tutorial
Now that we've covered some Python basics, we will begin a tutorial going through many tasks a data scientist may perform. We will obtain real world data and go through the process of auditing, analyzing, visualizing, and building classifiers from the data.
We will use a database of breast cancer data obtained from the University of Wisconsin Hospitals, Madison from Dr. William H. Wolberg. The data is a collection of samples from Dr. Wolberg's clinical cases with attributes pertaining to tumors and a class labeling the sample as benign or malignant.
| Attribute | Domain |
|--------------------------------|---------------------------------|
| 1. Sample code number | id number |
| 2. Clump Thickness | 1 - 10 |
| 3. Uniformity of Cell Size | 1 - 10 |
| 4. Uniformity of Cell Shape | 1 - 10 |
| 5. Marginal Adhesion | 1 - 10 |
| 6. Single Epithelial Cell Size | 1 - 10 |
| 7. Bare Nuclei | 1 - 10 |
| 8. Bland Chromatin | 1 - 10 |
| 9. Normal Nucleoli | 1 - 10 |
| 10. Mitoses | 1 - 10 |
| 11. Class | (2 for benign, 4 for malignant) |
For more information on this data set
Step1: Now we'll specify the url of the file and the file name we will save to
Step2: And make a call to <code>download_file</code>
Step3: Now this might seem like overkill for downloading a single, small csv file, but we can use this same function to access countless APIs available on the World Wide Web by building an API request in the url.
Wrangling the Data
Now that we have some data, lets get it into a useful form. For this task we will use a package called pandas. pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python. The most fundamental data structure in pandas is the dataframe, which is similar to the data.frame data structure found in the R statistical programming language.
For more information
Step4: Whoops, looks like our csv file did not contain a header row. <code>read_csv</code> assumes the first row of the csv is the header by default.
Lets check out the file located here
Step5: Lets try the import again, this time specifying the names. When specifying names, the <code>read_csv</code> function requires us to set the <code>header</code> row number to <code>None</code>
Step6: Lets take a look at some simple statistics for the clump_thickness column
Step7: Referring to the documentation link above about the data, the count, range of values (min = 1, max = 10), and data type (dtype = float64) look correct.
Lets take a look at another column, this time bare_nuclei
Step8: Well at least the count is correct. We were expecting no more than 10 unique values and now the data type is an object.
Whats up with our data?
We have arrived at arguably the most important part of performing data science
Step9: Using <code>unique</code> we can see that '?' is one of the distinct values that appears in this series. Looking again at the documentation for this data set, we find the following
Step10: Here we have attempted to convert the bare_nuclei series to a numeric type. Lets see what the unique values are now.
Step11: The decimal point after each number means that it is an integer value being represented by a floating point number. Now instead of our pesky '?' we have <code>nan</code> (not a number). <code>nan</code> is a construct used by pandas to represent the absence of value. It is a data type that comes from the package numpy, used internally by pandas, and is not part of the standard Python library.
Now that we have <code>nan</code> values in place of '?', we can use some nice features in pandas to deal with these missing values.
What we are about to do is what is called "imputing" or providing a replacement for missing values so the data set becomes easier to work with. There are a number of strategies for imputing missing values, all with their own pitfalls. In general, imputation introduces some degree of bias to the data, so the imputation strategy taken should be in an attempt to minimize that bias.
Here, we will simply use the mean of all of the non-nan values in the series as a replacement. Since we already know that the data is integer in possible values, we will round the mean to the nearest whole number.
Step12: <code>fillna</code> is a dataframe function that replaces all nan values with either a scalar value, a series of values with the same indices as found in the dataframe, or a dataframe that is indexed by the columns of the target dataframe.
<code>cancer_data.mean().round()</code> will take the mean of each column (this computation ignores the currently present nan values), then round, and return a dataframe indexed by the columns of the original dataframe
Step13: <code>inplace=True</code> allows us to make this modification directly on the dataframe, without having to do any assignment.
Now that we have figured out how to impute these missing values in a single column, lets start over and quickly apply this technique to the entire dataframe.
Step14: Structurally, Pandas dataframes are a collection of Series objects sharing a common index. In general, the Series object and Dataframe object share a large number of functions with some behavioral differences. In other words, whatever computation you can do on a single column can generally be applied to the entire dataframe.
Now we can use the dataframe version of <code>describe</code> to get an overview of all of our data
Step15: Visualizing the Data
Another important tool in the data scientist's toolbox is the ability to create visualizations from data. Visualizing data is often the most logical place to start getting a deeper intuition of the data. This intuition will shape and drive your analysis.
Even more important than visualizing data for your own personal benefit, it is often the job of the data scientist to use the data to tell a story. Creating illustrative visuals that succinctly convey an idea are the best way to tell that story, especially to stakeholders with less technical skillsets.
Here we will be using a Python package called ggplot (https
Step16: So we enabled plotting in IPython and imported everything from the ggplot package. Now we'll create a plot and then break down the components
Step17: A plot begins with the <code>ggplot</code> function. Here, we pass in the cancer_data pandas dataframe and a special function called <code>aes</code> (short for aesthetic). The values provided to <code>aes</code> change depending on which type of plot is being used. Here we are going to make a histogram from the clump_thickness column in cancer_data, so that column name needs to be passed as the x parameter to <code>aes</code>.
The grammar of graphics is based off of a concept of "geoms" (short for geometric objects). These geoms provide granular control of the plot and are progressively added to the base call to <code>ggplot</code> with + syntax.
Lets say we wanted to show the mean clump_thickness on this plot. We could do something like the following
Step18: As you can see, each geom has its own set of parameters specific to the appearance of that geom (also called aesthetics).
Lets try a scatter plot to get some multi-variable action
Step19: Sometimes when working with integer data, or data that takes on a limited range of values, it is easier to visualize the plot with added jitter to the points. We can do that by adding an aesthetic to <code>geom_point</code>.
Step20: With a simple aesthetic addition, we can see how these two variables play into our cancer classification
Step21: By adding <code>color = 'class'</code> as a parameter to the aes function, we now give a color to each unique value found in that column and automatically get a legend. Remember, 2 is benign and 4 is malignant.
We can also do things such as add a title or change the axis labeling with geoms
Step22: There is definitely some patterning going on in that plot.
A slightly different way to convey this idea is to use faceting. Faceting is the creation of multiple related plots arranged by the values of a given faceted variable
Step23: Rather than set the color equal to the class, we have created two plots based off of the class. With a facet, we can get very detailed. Lets through some more variables into the mix
Step24: Unfortunately, legends for faceting are not yet implemented in the Python ggplot package. In this example we faceted on the x-axis with clump_thickness and along the y-axis with marginal_adhesion, then created 100 plots of uniformity_cell_shape vs. bare_nuclei effect on class.
I highly encourage you to check out https
Step25: Here we call <code>values</code> on the dataframe to extract the values stored in the dataframe as an array of numpy arrays with the same dimensions as our subsetted dataframe. Numpy is a powerful, high performance scientific computing package that implements arrays. It is used internally by pandas. We will use <code>labels</code> and <code>features</code> later on in our machine learning classifier
Step26: An important concept in machine learning is to split the data set into training data and testing data. The machine learning algorithm will use the subset of training data to build a classifier to predict labels. We then test the accuracy of this classifier on the subset of testing data. This is done in order to prevent overfitting the classifier to one given set of data.
Overfitting is a major concern in the design of machine learning algorithms. Conceptually, overfitting is when a classifier is really good at predicting the data used to build it, but isn't robust or general enough to predict new, yet unseen data all that well.
To perform machine learning, we will use a package called sci-kit learn (sklearn for short). The sklearn cross_validation module contains a function called <code>train_test_split</code> that will take in features and labels, and randomly select values into the training and testing subsets
Step27: For this example, we will build a Decision Tree Classifier. The goal of a decision tree is to create a prediction by outlining a simple tree of decision rules. These rules are built from the training data by slicing the data on simple boundaries and trying to minimize the prediction error of that boundary. More details on decision trees can be found here
Step28: Next, we create a variable to store the classifier
Step29: Then we have to fit the classifier to the training data. Both the training features (uniformity_cell_shape and bare_nuclei) and the labels (benign vs. malignant) are passed to the fit function
Step30: The classifier is now ready to make some predictions. We can use the score function to see how accurate the classifier is on the test data. The score function will take the data in <code>features_test</code>, make a prediction of benign or malignant based on the decision tree that was fit to the training data, and compare that prediction to the true values in <code>labels_test</code>
Step31: Nearly all classifiers, decision trees included, will have paremeters that can be tuned to build a more accurate model. Without any parameter tuning and using just two features we have made a pretty accurate prediction. Good job!
To get a better idea of what is going on, I have included a helper function to plot our test data along with the decision boundary | Python Code:
def download_file(url, local_filename):
import requests
# stream = True allows downloading of large files; prevents loading entire file into memory
r = requests.get(url, stream = True)
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
f.write(chunk)
f.flush()
Explanation: Data Science Tutorial
Now that we've covered some Python basics, we will begin a tutorial going through many tasks a data scientist may perform. We will obtain real world data and go through the process of auditing, analyzing, visualizing, and building classifiers from the data.
We will use a database of breast cancer data obtained from the University of Wisconsin Hospitals, Madison from Dr. William H. Wolberg. The data is a collection of samples from Dr. Wolberg's clinical cases with attributes pertaining to tumors and a class labeling the sample as benign or malignant.
| Attribute | Domain |
|--------------------------------|---------------------------------|
| 1. Sample code number | id number |
| 2. Clump Thickness | 1 - 10 |
| 3. Uniformity of Cell Size | 1 - 10 |
| 4. Uniformity of Cell Shape | 1 - 10 |
| 5. Marginal Adhesion | 1 - 10 |
| 6. Single Epithelial Cell Size | 1 - 10 |
| 7. Bare Nuclei | 1 - 10 |
| 8. Bland Chromatin | 1 - 10 |
| 9. Normal Nucleoli | 1 - 10 |
| 10. Mitoses | 1 - 10 |
| 11. Class | (2 for benign, 4 for malignant) |
For more information on this data set:
https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Obtaining the Data
Lets begin by programmatically obtaining the data. Here I'll define a function we can use to make HTTP requests and download the data
End of explanation
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data'
filename = 'breast-cancer-wisconsin.csv'
Explanation: Now we'll specify the url of the file and the file name we will save to
End of explanation
download_file(url, filename)
Explanation: And make a call to <code>download_file</code>
End of explanation
import pandas as pd # import the module and alias it as pd
cancer_data = pd.read_csv('breast-cancer-wisconsin.csv')
cancer_data.head() # show the first few rows of the data
Explanation: Now this might seem like overkill for downloading a single, small csv file, but we can use this same function to access countless APIs available on the World Wide Web by building an API request in the url.
Wrangling the Data
Now that we have some data, lets get it into a useful form. For this task we will use a package called pandas. pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for Python. The most fundamental data structure in pandas is the dataframe, which is similar to the data.frame data structure found in the R statistical programming language.
For more information: http://pandas.pydata.org
pandas dataframes are a 2-dimensional labeled data structures with columns of potentially different types. Dataframes can be thought of as similar to a spreadsheet or SQL table.
There are numerous ways to build a dataframe with pandas. Since we have already attained a csv file, we can use a parser built into pandas called <code>read_csv</code> which will read the contents of a csv file directly into a data frame.
For more information: http://pandas.pydata.org/pandas-docs/dev/generated/pandas.io.parsers.read_csv.html
End of explanation
# \ allows multi line wrapping
cancer_header = [ \
'sample_code_number', \
'clump_thickness', \
'uniformity_cell_size', \
'uniformity_cell_shape', \
'marginal_adhesion', \
'single_epithelial_cell_size', \
'bare_nuclei', \
'bland_chromatin', \
'normal_nucleoli', \
'mitoses', \
'class']
Explanation: Whoops, looks like our csv file did not contain a header row. <code>read_csv</code> assumes the first row of the csv is the header by default.
Lets check out the file located here: https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.names
This contains information about the data set including the names of the attributes.
Lets create a list of these attribute names to use when reading the csv file
End of explanation
cancer_data = pd.read_csv('breast-cancer-wisconsin.csv', header=None, names=cancer_header)
cancer_data.head()
Explanation: Lets try the import again, this time specifying the names. When specifying names, the <code>read_csv</code> function requires us to set the <code>header</code> row number to <code>None</code>
End of explanation
cancer_data["clump_thickness"].describe()
Explanation: Lets take a look at some simple statistics for the clump_thickness column
End of explanation
cancer_data["bare_nuclei"].describe()
Explanation: Referring to the documentation link above about the data, the count, range of values (min = 1, max = 10), and data type (dtype = float64) look correct.
Lets take a look at another column, this time bare_nuclei
End of explanation
cancer_data["bare_nuclei"].unique()
Explanation: Well at least the count is correct. We were expecting no more than 10 unique values and now the data type is an object.
What's up with our data?
We have arrived at arguably the most important part of performing data science: dealing with messy data. One of most important tools in a data scientist's toolbox is the ability to audit, clean, and reshape data. The real world is full of messy data and your sources may not always have data in the exact format you desire.
In this case we are working with csv data, which is a relatively straightforward format, but this will not always be the case when performing real world data science. Data comes in all varieties from csv all the way to something as unstructured as a collection of emails or documents. A data scientist must be versed in a wide variety of technologies and methodologies in order to be successful.
Now, let's do a little bit of digging into why we are not getting a numeric pandas column
End of explanation
cancer_data["bare_nuclei"] = cancer_data["bare_nuclei"].convert_objects(convert_numeric=True)
Explanation: Using <code>unique</code> we can see that '?' is one of the distinct values that appears in this series. Looking again at the documentation for this data set, we find the following:
Missing attribute values: 16
There are 16 instances in Groups 1 to 6 that contain a single missing
(i.e., unavailable) attribute value, now denoted by "?".
It was so nice of them to tell us to expect these missing values, but as a data scientist that will almost never be the case. Lets see what we can do with these missing values.
End of explanation
cancer_data["bare_nuclei"].unique()
Explanation: Here we have attempted to convert the bare_nuclei series to a numeric type. Lets see what the unique values are now.
End of explanation
cancer_data.fillna(cancer_data.mean().round(), inplace=True)
cancer_data["bare_nuclei"].unique()
Explanation: The decimal point after each number means that it is an integer value being represented by a floating point number. Now instead of our pesky '?' we have <code>nan</code> (not a number). <code>nan</code> is a construct used by pandas to represent the absence of value. It is a data type that comes from the package numpy, used internally by pandas, and is not part of the standard Python library.
Now that we have <code>nan</code> values in place of '?', we can use some nice features in pandas to deal with these missing values.
What we are about to do is what is called "imputing" or providing a replacement for missing values so the data set becomes easier to work with. There are a number of strategies for imputing missing values, all with their own pitfalls. In general, imputation introduces some degree of bias to the data, so the imputation strategy taken should be in an attempt to minimize that bias.
Here, we will simply use the mean of all of the non-nan values in the series as a replacement. Since we already know that the data is integer in possible values, we will round the mean to the nearest whole number.
End of explanation
cancer_data.mean().round()
Explanation: <code>fillna</code> is a dataframe function that replaces all nan values with either a scalar value, a series of values with the same indices as found in the dataframe, or a dataframe that is indexed by the columns of the target dataframe.
<code>cancer_data.mean().round()</code> will take the mean of each column (this computation ignores the currently present nan values), then round, and return a dataframe indexed by the columns of the original dataframe:
End of explanation
cancer_data = pd.read_csv('breast-cancer-wisconsin.csv', header=None, names=cancer_header)
cancer_data = cancer_data.convert_objects(convert_numeric=True)
cancer_data.fillna(cancer_data.mean().round(), inplace=True)
cancer_data["bare_nuclei"].describe()
cancer_data["bare_nuclei"].unique()
Explanation: <code>inplace=True</code> allows us to make this modification directly on the dataframe, without having to do any assignment.
Now that we have figured out how to impute these missing values in a single column, lets start over and quickly apply this technique to the entire dataframe.
End of explanation
cancer_data.describe()
Explanation: Structurally, Pandas dataframes are a collection of Series objects sharing a common index. In general, the Series object and Dataframe object share a large number of functions with some behavioral differences. In other words, whatever computation you can do on a single column can generally be applied to the entire dataframe.
Now we can use the dataframe version of <code>describe</code> to get an overview of all of our data
End of explanation
# The following line is NOT Python code, but a special syntax for enabling inline plotting in IPython
%matplotlib inline
from ggplot import *
import warnings
# ggplot usage of pandas throws a future warning
warnings.filterwarnings('ignore')
Explanation: Visualizing the Data
Another important tool in the data scientist's toolbox is the ability to create visualizations from data. Visualizing data is often the most logical place to start getting a deeper intuition of the data. This intuition will shape and drive your analysis.
Even more important than visualizing data for your own personal benefit, it is often the job of the data scientist to use the data to tell a story. Creating illustrative visuals that succinctly convey an idea are the best way to tell that story, especially to stakeholders with less technical skillsets.
Here we will be using a Python package called ggplot (https://ggplot.yhathq.com). The ggplot package is an attempt to bring visuals following the guidelines outlayed in the grammar of graphics (http://vita.had.co.nz/papers/layered-grammar.html) to Python. It is based off of and intended to mimic the features of the ggplot2 library found in R. Additionally, ggplot is designed to work with Pandas dataframes, making things nice and simple.
We'll start by doing a bit of setup
End of explanation
plt = ggplot(aes(x = 'clump_thickness'), data = cancer_data) + \
geom_histogram(binwidth = 1, fill = 'steelblue')
# using print gets the plot to show up here within the notebook.
# In normal Python environment without using print, the plot appears in a window
print plt
Explanation: So we enabled plotting in IPython and imported everything from the ggplot package. Now we'll create a plot and then break down the components
End of explanation
plt = ggplot(aes(x = 'clump_thickness'), data = cancer_data) + \
geom_histogram(binwidth = 1, fill = 'steelblue') + \
geom_vline(xintercept = [cancer_data['clump_thickness'].mean()], linetype='dashed')
print plt
Explanation: A plot begins with the <code>ggplot</code> function. Here, we pass in the cancer_data pandas dataframe and a special function called <code>aes</code> (short for aesthetic). The values provided to <code>aes</code> change depending on which type of plot is being used. Here we are going to make a histogram from the clump_thickness column in cancer_data, so that column name needs to be passed as the x parameter to <code>aes</code>.
The grammar of graphics is based off of a concept of "geoms" (short for geometric objects). These geoms provide granular control of the plot and are progressively added to the base call to <code>ggplot</code> with + syntax.
Lets say we wanted to show the mean clump_thickness on this plot. We could do something like the following
End of explanation
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \
geom_point()
print plt
Explanation: As you can see, each geom has its own set of parameters specific to the appearance of that geom (also called aesthetics).
Lets try a scatter plot to get some multi-variable action
End of explanation
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \
geom_point(position = 'jitter')
print plt
Explanation: Sometimes when working with integer data, or data that takes on a limited range of values, it is easier to visualize the plot with added jitter to the points. We can do that by adding an aesthetic to <code>geom_point</code>.
End of explanation
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \
geom_point(position = 'jitter')
print plt
Explanation: With a simple aesthetic addition, we can see how these two variables play into our cancer classification
End of explanation
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \
geom_point(position = 'jitter') + \
ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \
ylab("Amount of Bare Nuclei") + \
xlab("Uniformity in Cell shape")
print plt
Explanation: By adding <code>color = 'class'</code> as a parameter to the aes function, we now give a color to each unique value found in that column and automatically get a legend. Remember, 2 is benign and 4 is malignant.
We can also do things such as add a title or change the axis labeling with geoms
End of explanation
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei'), data = cancer_data) + \
geom_point(position = 'jitter') + \
ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \
facet_grid('class')
print plt
Explanation: There is definitely some patterning going on in that plot.
A slightly different way to convey this idea is to use faceting. Faceting is the creation of multiple related plots arranged by the values of a given faceted variable
End of explanation
plt = ggplot(aes(x = 'uniformity_cell_shape', y = 'bare_nuclei', color = 'class'), data = cancer_data) + \
geom_point(position = 'jitter') + \
ggtitle("The Effect of the Bare Nuclei and Cell Shape Uniformity on Classification") + \
facet_grid('clump_thickness', 'marginal_adhesion')
print plt
Explanation: Rather than set the color equal to the class, we have created two plots based off of the class. With a facet, we can get very detailed. Let's throw some more variables into the mix
End of explanation
cancer_features = ['uniformity_cell_shape', 'bare_nuclei']
Explanation: Unfortunately, legends for faceting are not yet implemented in the Python ggplot package. In this example we faceted on the x-axis with clump_thickness and along the y-axis with marginal_adhesion, then created 100 plots of uniformity_cell_shape vs. bare_nuclei effect on class.
I highly encourage you to check out https://ggplot.yhathq.com/docs/index.html to see all of the available geoms. The best way to learn is to play with and visualize the data with many different plots and aesthetics.
Machine Learning
So now that we've acquired, audited, cleaned, and visualized our data, we have arrived at machine learning. By formal definition from Tom Mitchell:
A computer program is set to learn from an experience E with
respect to some task T and some performance measure P if its performance
on T as measured by P improves with experience E.
Okay, thats a bit ridiculous. Essentially machine learning is the science of building algorithms that learn from data in order make predictions about the data. There are two main classes of machine learning: supervised and unsupervised.
In supervised learning, an algorithm will use the features of the data given to make a prediction about a known label. For example, we will use supervised learning here to take features such as bare_nuclei and uniformity_cell_shape and predict a tumor class (benign or malignant). This type of machine learning is called supervised because the class labels (benign or malignant) are a known quantity during learning, so we are supervising the algorithm with the "correct" answer.
In unsupervised learning, an algorithm will use the features of the data to discover what types of labels there could be. The "correct" answer is not known.
In this session we will be mostly focused on supervised learning as we attempt to predict whether a tumor is benign or malignant. We will also be focused on doing some practical machine learning, and will glaze over the algorithmic details.
The first thing we have to do is to extract the class labels and features from <code>cancer_data</code> and store them as separate arrays. In our first classifier we will only choose two features from <code>cancer_data</code> to keep things simple
End of explanation
labels = cancer_data['class'].values
features = cancer_data[cancer_features].values
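# Added aside (not part of the original tutorial): the unsupervised setting described
# above can be illustrated by clustering the same features while ignoring the labels.
# KMeans does not know which group is benign; it only groups similar samples.
from sklearn.cluster import KMeans
cluster_ids = KMeans(n_clusters=2, random_state=42).fit_predict(features)
print(cluster_ids[:20])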
Explanation: Here we call <code>values</code> on the dataframe to extract the values stored in the dataframe as an array of numpy arrays with the same dimensions as our subsetted dataframe. Numpy is a powerful, high performance scientific computing package that implements arrays. It is used internally by pandas. We will use <code>labels</code> and <code>features</code> later on in our machine learning classifier
End of explanation
from sklearn.cross_validation import train_test_split
features_train, features_test, labels_train, labels_test = train_test_split(features,
labels,
test_size = 0.3,
random_state = 42)
Explanation: An important concept in machine learning is to split the data set into training data and testing data. The machine learning algorithm will use the subset of training data to build a classifier to predict labels. We then test the accuracy of this classifier on the subset of testing data. This is done in order to prevent overfitting the classifier to one given set of data.
Overfitting is a major concern in the design of machine learning algorithms. Conceptually, overfitting is when a classifier is really good at predicting the data used to build it, but isn't robust or general enough to predict new, yet unseen data all that well.
To perform machine learning, we will use a package called sci-kit learn (sklearn for short). The sklearn cross_validation module contains a function called <code>train_test_split</code> that will take in features and labels, and randomly select values into the training and testing subsets
End of explanation
from sklearn.tree import DecisionTreeClassifier
Explanation: For this example, we will build a Decision Tree Classifier. The goal of a decision tree is to create a prediction by outlining a simple tree of decision rules. These rules are built from the training data by slicing the data on simple boundaries and trying to minimize the prediction error of that boundary. More details on decision trees can be found here: http://scikit-learn.org/stable/modules/tree.html
The first step is to import the classifier from the <code>sklearn.tree</code> module.
End of explanation
clf = DecisionTreeClassifier()
Explanation: Next, we create a variable to store the classifier
End of explanation
clf.fit(features_train, labels_train)
Explanation: Then we have to fit the classifier to the training data. Both the training features (uniformity_cell_shape and bare_nuclei) and the labels (benign vs. malignant) are passed to the fit function
End of explanation
print("Accuracy score:", clf.score(features_test,labels_test))
Explanation: The classifier is now ready to make some predictions. We can use the score function to see how accurate the classifier is on the test data. The score function will take the data in <code>features_test</code>, make a prediction of benign or malignant based on the decision tree that was fit to the training data, and compare that prediction to the true values in <code>labels_test</code>
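As a sanity check, the same number can be reproduced by hand. This is a minimal sketch (my own addition, assuming the clf, features_test and labels_test objects defined above) that makes the predict-then-compare step explicit:
from sklearn.metrics import accuracy_score
predictions = clf.predict(features_test)  # one benign/malignant prediction per test row
print("Manual accuracy:", accuracy_score(labels_test, predictions))
print("score() accuracy:", clf.score(features_test, labels_test))  # should match the line above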
End of explanation
from class_vis import prettyPicture # helper class
prettyPicture(clf, features_test, labels_test)
Explanation: Nearly all classifiers, decision trees included, will have parameters that can be tuned to build a more accurate model. Without any parameter tuning and using just two features we have made a pretty accurate prediction. Good job!
To get a better idea of what is going on, I have included a helper function to plot our test data along with the decision boundary
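If you want to experiment with that tuning, here is a hedged sketch using a grid search; the parameter grid below is illustrative only and not taken from the original lesson:
from sklearn.model_selection import GridSearchCV
param_grid = {"max_depth": [2, 4, 6, None], "min_samples_split": [2, 10, 50]}
search = GridSearchCV(DecisionTreeClassifier(), param_grid, cv=5)
search.fit(features_train, labels_train)
print("Best parameters:", search.best_params_)
print("Tuned test accuracy:", search.score(features_test, labels_test))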
End of explanation |
4,588 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, I'm going to define a function that will print the power (watts per square meter) that the earth would receive from the sun if there were no atmosphere.
Step1: Now I'm going to take the above function and do the same thing except make it print the number of kWh in one square meter for a year.
Step2: Loading Cloud Data
The cloud cover data I am using is International Satellite Cloud Climatology Project (ISCCP). To understand this data in its raw form, visualize a map of the world overlayed by a grid of squares. Each square is 2.5 degrees in width and height, so the grid is 144 x 72 (longitude x latitude) and has a total of 10368 squares. Each number in the data is the average annual cloud cover percentage for a single square. The first number represents average cloud cover in the -90 degrees latitude, -180 degrees longitude box. Longitude varies first, and begins at -180 degrees and proceeds eastward to +180 degrees. Latitude begins at -90 degrees and proceeds northward to +90 degrees.
Step3: Now, 'clouds' is a nice looking dataframe that includes lattitude, longitude, and average sun that gets through the clouds for every month and the entire world. Next I will make a function that takes lattitude and longitude as an input and returns sun_ratio for each month as an output
Step4: Now, I will make a function that combines watts and radiation to create the final solar generation prediction
Step5: Making a Plot of the Data | Python Code:
def Solar_Power_Calculator(Day_Of_Year,Lattitude,Hour_of_Day):
'''This function will tell you how much power the sun is radiating on one square meter
of the earth when it is sunny in any location in the world at any time.
Inputs: Day_Of_Year, Lattitude, Hour_of_Day
Output: Power (watts)'''
# Make some assertions about the inputs of the function.
assert 0 < Day_Of_Year <= 365, 'Day of year must be from 1 through 365.'
# Hour_of_Day input is from 0 through 24, but we need hours in a different form for our
# calculations.
if Hour_of_Day >= 12:
hour = Hour_of_Day - 12
elif Hour_of_Day < 12:
hour = 12 - Hour_of_Day
# Calculating Theta D
ThetaD = (2*np.pi*Day_Of_Year)/365
# Calculating distance
# Constants for calculating distance
Dis_n = [0,1,2]
Dis_an = [1.00011,0.034221,0.000719]
Dis_bn = [0,0.00128,0.000077]
Dis1 = Dis_an[0]*np.cos(Dis_n[0]*ThetaD)+Dis_bn[0]*np.sin(Dis_n[0]*ThetaD)
Dis2 = Dis_an[1]*np.cos(Dis_n[1]*ThetaD)+Dis_bn[1]*np.sin(Dis_n[1]*ThetaD)
Dis3 = Dis_an[2]*np.cos(Dis_n[2]*ThetaD)+Dis_bn[2]*np.sin(Dis_n[2]*ThetaD)
# Calculate Distance
Distance = Dis1+Dis2+Dis3
# Constants for calculating declination
Dec_n = [0,1,2,3]
Dec_an = [0.006918,-0.399912,-0.006758,-0.002697]
Dec_bn = [0,0.070257,0.000907,0.00148]
Dec1 = Dec_an[0]*np.cos(Dec_n[0]*ThetaD)+Dec_bn[0]*np.sin(Dec_n[0]*ThetaD)
Dec2 = Dec_an[1]*np.cos(Dec_n[1]*ThetaD)+Dec_bn[1]*np.sin(Dec_n[1]*ThetaD)
Dec3 = Dec_an[2]*np.cos(Dec_n[2]*ThetaD)+Dec_bn[2]*np.sin(Dec_n[2]*ThetaD)
Dec4 = Dec_an[3]*np.cos(Dec_n[3]*ThetaD)+Dec_bn[3]*np.sin(Dec_n[3]*ThetaD)
# Calculate Dec_radians
Dec_radians = Dec1+Dec2+Dec3+Dec4
Dec_degrees = np.degrees(Dec_radians)
# For Hour Angle
Hour_angle = np.radians(hour*15)
# For Radians and Cos Solar Zenith Angle
radians = (np.pi/180)*Lattitude
CSZA = np.sin(radians)*np.sin(Dec_radians)+np.cos(radians)*np.cos(Dec_radians)*np.cos(Hour_angle)# Cos Solar Zenith Angle
# When the sun is down, CSZA is negative, but we want it to be zero (because when the sun
# is down, it isn't radiating on that location.
if CSZA < 0:
CSZA = 0
# Calculate Energy/Area (W/m^2)
Watts_Per_SqMeter = S0*Distance*CSZA*Atm
return(Watts_Per_SqMeter)
# For example, this is how the above function works for Squamish's latitude at noon on
# January 1st
Solar_Power_Calculator(1,49.7,12)
Explanation: First, I'm going to define a function that will print the power (watts per square meter) that the earth would receive from the sun if there were no atmosphere.
End of explanation
# Making a list called of Theta D for every day of the year
def ThetaD():
year = list(range(1,366))
ThetaD_list = []
for i in year:
ThetaD_list.append((2*np.pi*i)/365)
return(ThetaD_list)
ThetaD_list = ThetaD()
def Solar_Energy_Calculator(latitude, panel_efficiency, area):
'''This function calculates the energy that can be generated in any given place in the
world over one year sans clouds or other shading such as buildings and trees.
Inputs: latitude, panel_efficiency (a number between 0 and 1), and area (of solar panels
in square meters).'''
# Make some assertions about the inputs of the function.
assert -90 <= latitude <= 90, 'Latitude must be between -90 and 90.'
assert 0 <= panel_efficiency <= 1, 'Panel efficiency must be between 0 and 1.'
assert area > 0, 'Area of solar panel array must be greater than 0.'
# Making Distance and Dec_radians lists for each day of the year
radians = np.pi/180*latitude
Hours = [12,11,10,9,8,7,6,5,4,3,2,1,0,1,2,3,4,5,6,7,8,9,10,11] # A list of all the hours of the day
Solar_Flux = 0 # Energy generated from given area of solar panels in one hour
Watts_Every_Hour = [] # A list that will become the Wh/m^2 every hour for a year
kWh = 0 # A number that will become the total kWh in one place in one year.
for i in ThetaD_list:
# Calculate the Distance
Dis1 = Dis_an[0]*np.cos(Dis_n[0]*i)+Dis_bn[0]*np.sin(Dis_n[0]*i)
Dis2 = Dis_an[1]*np.cos(Dis_n[1]*i)+Dis_bn[1]*np.sin(Dis_n[1]*i)
Dis3 = Dis_an[2]*np.cos(Dis_n[2]*i)+Dis_bn[2]*np.sin(Dis_n[2]*i)
Distance = Dis1+Dis2+Dis3
# Calculate the Declination
Dec1 = Dec_an[0]*np.cos(Dec_n[0]*i)+Dec_bn[0]*np.sin(Dec_n[0]*i)
Dec2 = Dec_an[1]*np.cos(Dec_n[1]*i)+Dec_bn[1]*np.sin(Dec_n[1]*i)
Dec3 = Dec_an[2]*np.cos(Dec_n[2]*i)+Dec_bn[2]*np.sin(Dec_n[2]*i)
Dec4 = Dec_an[3]*np.cos(Dec_n[3]*i)+Dec_bn[3]*np.sin(Dec_n[3]*i)
Dec_radians = Dec1+Dec2+Dec3+Dec4
Dec_degrees = (np.degrees(Dec_radians))
for i in Hours:
Hour_angle = np.radians(i*15)
CSZA = (np.sin(radians)*np.sin(Dec_radians)) + (np.cos(radians)*np.cos(Dec_radians)*np.cos(Hour_angle))
if CSZA < 0:
CSZA = 0
Solar_Flux = (S0)*Distance*CSZA*Atm*panel_efficiency*area
Watts_Every_Hour.append(Solar_Flux)
return(Watts_Every_Hour)
Watts = Solar_Energy_Calculator(49,.16,1.6)
Explanation: Now I'm going to build on the above function so that, for a given latitude, panel efficiency and panel area, it returns the energy generated for every hour of the year; the annual kWh total is obtained later by summing this list.
End of explanation
# First, I'm loading the raw cloud cover data.
cloud_dat = pd.read_table('../data/weather.txt',sep='\s+')
# Right now the data is in 1 row and 10368 columns, so it requires some
# cleaning up
cloud_dat.shape
# After transposing, the data is in 1 column and 10368 rows
cloud_dat = cloud_dat.transpose()
cloud_dat.shape
# Now I will change the name of the column of data and reset the index
cloud_dat = cloud_dat.reset_index()
cloud_dat.columns=['cloud_ratio']
# Here is a glimpse of what the data looks like now
cloud_dat
# Next, I load a dataframe that I created in excel with three columns
# (month, lattitude, and longitude) that have been filled in to line up
# with the 'data' object.
clouds = pd.read_excel('../data/blank_weather.xlsx',sep='\s+')
clouds.head(n=5)
# Now, we will add a fourth column to 'clouds' that is our data
clouds['cloud_ratio'] = cloud_dat['cloud_ratio']
clouds.head(n=5)
Explanation: Loading Cloud Data
The cloud cover data I am using is International Satellite Cloud Climatology Project (ISCCP). To understand this data in its raw form, visualize a map of the world overlayed by a grid of squares. Each square is 2.5 degrees in width and height, so the grid is 144 x 72 (longitude x latitude) and has a total of 10368 squares. Each number in the data is the average annual cloud cover percentage for a single square. The first number represents average cloud cover in the -90 degrees latitude, -180 degrees longitude box. Longitude varies first, and begins at -180 degrees and proceeds eastward to +180 degrees. Latitude begins at -90 degrees and proceeds northward to +90 degrees.
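To make that layout concrete, here is a small sketch (my own illustration, not part of the original analysis) of how a latitude/longitude pair maps to a position in the flattened sequence, given that longitude varies first:
def grid_position(lat, lon, cell=2.5):
    # latitude bands count up from -90, longitude bands from -180; longitude varies fastest
    lat_band = int((lat + 90) // cell)    # 0 .. 71
    lon_band = int((lon + 180) // cell)   # 0 .. 143
    return lat_band * 144 + lon_band      # index into the 10368-value sequence
grid_position(49, -123)  # roughly the grid cell used for Squamish later on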
End of explanation
def find_sun(lat,long):
'''This function finds the ratio of clouds for any lattitude and longitude and converts
it into the ratio of radiation that reaches the earth.
inputs: lattitude, longitude
output: radiation ratio'''
x = clouds.loc[(clouds['lattitude'] <= lat) & (clouds['lattitude'] > (lat-2.5)) & (clouds['longitude'] <= long) & (clouds['longitude'] > (long-2.5))]
radiation_ratio = 1-((float(x.iloc[0,2])*0.6)/100)
return(radiation_ratio)
# Now I will use the find_sun function to find the amount of sun in a specific location (Squamish)
radiation = find_sun(49,-123)
radiation
# I'm also going to make an object that is the list of watts from Solar_Energy_Calculator for
# Squamish
Watts = Solar_Energy_Calculator(49.7,.16,1.68)
Explanation: Now, 'clouds' is a nice looking dataframe that includes lattitude, longitude, and average sun that gets through the clouds for every month and the entire world. Next I will make a function that takes lattitude and longitude as an input and returns sun_ratio for each month as an output
End of explanation
def apply_clouds(watts,radiation):
'''This function takes a list of watts without clouds and radiation ratio due to clouds
and gives you a list of the real solar generation prediction.'''
energy = []
for i in Watts:
energy.append(i*radiation)
return(energy)
final = apply_clouds(Watts,radiation)
kWh = sum(final)/1000
kWh
# Cleaning up the final data
final = pd.DataFrame(final)
final = final.reset_index()
final.columns=['Day','Power']
final['Day'] = final['Day']/24
final.head(n=5)
Explanation: Now, I will make a function that combines watts and radiation to create the final solar generation prediction
End of explanation
# change figure size
plt.figure(figsize=(12,9))
# add data to plot (x-axis, y-axis, )
plt.plot(final['Day'],final['Power'],color='b',linestyle='-')
# add title
plt.title('Power Output',fontsize=24)
# modify axis limits
plt.xlim(0,365)
# add axis labels
plt.ylabel('Average Power Generation (Watts)',fontsize=16)
plt.xlabel('Day of Year',fontsize=16)
# save figure to graphs directory
plt.savefig('TEST.pdf')
pylab.savefig("../results/Power_Output.png")
# show plot
plt.show()
Explanation: Making a Plot of the Data
End of explanation |
4,589 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License
Step1: Motivating Random Forests
Step2: The binary splitting makes this extremely efficient.
As always, though, the trick is to ask the right questions.
This is where the algorithmic process comes in
Step3: We have some convenience functions in the repository that help
Step4: Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits
Step5: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
The result is a very fast non-parametric classification, and can be extremely useful in practice.
Question
Step6: The details of the classifications are completely different! That is an indication of over-fitting
Step7: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer
Step8: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
(Note
Step9: As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!
Example
Step10: To remind us what we're looking at, we'll visualize the first few data points
Step11: We can quickly classify the digits using a decision tree as follows
Step12: We can check the accuracy of this classifier
Step13: and for good measure, plot the confusion matrix | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: Dénes Csala
MCC, 2022
Based on Elements of Data Science (Allen B. Downey, 2021) and Python Data Science Handbook (Jake VanderPlas, 2018)
License: MIT
Supervised Learning In-Depth: Random Forests
Previously we saw a powerful discriminative classifier, Support Vector Machines.
Here we'll take a look at motivating another powerful algorithm. This one is a non-parametric algorithm called Random Forests.
End of explanation
import fig_code
fig_code.plot_example_decision_tree()
Explanation: Motivating Random Forests: Decision Trees
Random forests are an example of an ensemble learner built on decision trees.
For this reason we'll start by discussing decision trees themselves.
Decision trees are extremely intuitive ways to classify or label objects: you simply ask a series of questions designed to zero-in on the classification:
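To see those questions spelled out as text rather than as a diagram, here is a short self-contained sketch; it uses scikit-learn's export_text helper, which is my addition and is not used elsewhere in this notebook:
from sklearn.datasets import make_blobs
from sklearn.tree import DecisionTreeClassifier, export_text
Xd, yd = make_blobs(n_samples=300, centers=4, random_state=0, cluster_std=1.0)
tiny_tree = DecisionTreeClassifier(max_depth=2).fit(Xd, yd)
print(export_text(tiny_tree, feature_names=["x0", "x1"]))  # each printed split is one yes/no question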
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=1.0)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='rainbow');
Explanation: The binary splitting makes this extremely efficient.
As always, though, the trick is to ask the right questions.
This is where the algorithmic process comes in: in training a decision tree classifier, the algorithm looks at the features and decides which questions (or "splits") contain the most information.
Creating a Decision Tree
Here's an example of a decision tree classifier in scikit-learn. We'll start by defining some two-dimensional labeled data:
End of explanation
from fig_code import visualize_tree, plot_tree_interactive
Explanation: We have some convenience functions in the repository that help with visualizing the fitted trees and their decision boundaries.
End of explanation
plot_tree_interactive(X, y);
Explanation: Now using IPython's interact (available in IPython 2.0+, and requires a live kernel) we can view the decision tree splits:
End of explanation
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
plt.figure()
visualize_tree(clf, X[:200], y[:200], boundaries=False)
plt.figure()
visualize_tree(clf, X[-200:], y[-200:], boundaries=False)
Explanation: Notice that at each increase in depth, every node is split in two except those nodes which contain only a single class.
The result is a very fast non-parametric classification, and can be extremely useful in practice.
Question: Do you see any problems with this?
Decision Trees and over-fitting
One issue with decision trees is that it is very easy to create trees which over-fit the data. That is, they are flexible enough that they can learn the structure of the noise in the data rather than the signal! For example, take a look at two trees built on two subsets of this dataset:
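The same problem can be seen numerically. As a sketch (my own split, not part of the original notebook), compare training and held-out accuracy for a fully grown tree on these blobs:
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
Xa, Xb, ya, yb = train_test_split(X, y, test_size=0.5, random_state=0)
deep_tree = DecisionTreeClassifier().fit(Xa, ya)
print("train accuracy:", deep_tree.score(Xa, ya))  # typically close to 1.0
print("test accuracy:", deep_tree.score(Xb, yb))   # noticeably lower, i.e. over-fitting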
End of explanation
def fit_randomized_tree(random_state=0):
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=2.0)
clf = DecisionTreeClassifier(max_depth=15)
rng = np.random.RandomState(random_state)
i = np.arange(len(y))
rng.shuffle(i)
visualize_tree(clf, X[i[:250]], y[i[:250]], boundaries=False,
xlim=(X[:, 0].min(), X[:, 0].max()),
ylim=(X[:, 1].min(), X[:, 1].max()))
from ipywidgets import interact
interact(fit_randomized_tree, random_state=(0, 100));
Explanation: The details of the classifications are completely different! That is an indication of over-fitting: when you predict the value for a new point, the result is more reflective of the noise in the model than of the signal.
Ensembles of Estimators: Random Forests
One possible way to address over-fitting is to use an Ensemble Method: this is a meta-estimator which essentially averages the results of many individual estimators which over-fit the data. Somewhat surprisingly, the resulting estimates are much more robust and accurate than the individual estimates which make them up!
One of the most common ensemble methods is the Random Forest, in which the ensemble is made up of many decision trees which are in some way perturbed.
There are volumes of theory and precedent about how to randomize these trees, but as an example, let's imagine an ensemble of estimators fit on subsets of the data. We can get an idea of what these might look like as follows:
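The same intuition can be written down directly with scikit-learn's BaggingClassifier, which fits many trees on random subsets and lets them vote. This is an illustrative sketch, not part of the original lesson:
from sklearn.ensemble import BaggingClassifier
bag = BaggingClassifier(n_estimators=100, max_samples=0.8, random_state=0)  # default base learner is a decision tree
bag.fit(X, y)
print("bagging accuracy on the blobs it was fit on:", bag.score(X, y))  # illustration only, not a held-out score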
End of explanation
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=100, random_state=0)
visualize_tree(clf, X, y, boundaries=False);
Explanation: See how the details of the model change as a function of the sample, while the larger characteristics remain the same!
The random forest classifier will do something similar to this, but use a combined version of all these trees to arrive at a final answer:
End of explanation
from sklearn.ensemble import RandomForestRegressor
x = 10 * np.random.rand(100)
def model(x, sigma=0.3):
fast_oscillation = np.sin(5 * x)
slow_oscillation = np.sin(0.5 * x)
noise = sigma * np.random.randn(len(x))
return slow_oscillation + fast_oscillation + noise
y = model(x)
plt.errorbar(x, y, 0.3, fmt='o');
xfit = np.linspace(0, 10, 1000)
yfit = RandomForestRegressor(100).fit(x[:, None], y).predict(xfit[:, None])
ytrue = model(xfit, 0)
plt.errorbar(x, y, 0.3, fmt='o')
plt.plot(xfit, yfit, '-r');
plt.plot(xfit, ytrue, '-k', alpha=0.5);
Explanation: By averaging over 100 randomly perturbed models, we end up with an overall model which is a much better fit to our data!
(Note: above we randomized the model through sub-sampling... Random Forests use more sophisticated means of randomization, which you can read about in, e.g. the scikit-learn documentation)
Quick Example: Moving to Regression
Above we were considering random forests within the context of classification.
Random forests can also be made to work in the case of regression (that is, continuous rather than categorical variables). The estimator to use for this is sklearn.ensemble.RandomForestRegressor.
Let's quickly demonstrate how this can be used:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
digits.keys()
X = digits.data
y = digits.target
print(X.shape)
print(y.shape)
Explanation: As you can see, the non-parametric random forest model is flexible enough to fit the multi-period data, without us even specifying a multi-period model!
Example: Random Forest for Classifying Digits
We previously saw the hand-written digits data. Let's use that here to test the efficacy of the SVM and Random Forest classifiers.
End of explanation
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
Explanation: To remind us what we're looking at, we'll visualize the first few data points:
End of explanation
from sklearn.model_selection import train_test_split
from sklearn import metrics
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, random_state=0)
clf = DecisionTreeClassifier(max_depth=11)
clf.fit(Xtrain, ytrain)
ypred = clf.predict(Xtest)
Explanation: We can quickly classify the digits using a decision tree as follows:
End of explanation
metrics.accuracy_score(ypred, ytest)
Explanation: We can check the accuracy of this classifier:
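Since this section promises a random forest, a natural follow-up (a sketch reusing the same train/test split) is to swap one in and compare:
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtrain, ytrain)
print("decision tree accuracy:", metrics.accuracy_score(ytest, ypred))
print("random forest accuracy:", metrics.accuracy_score(ytest, rf.predict(Xtest)))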
End of explanation
plt.imshow(metrics.confusion_matrix(ytest, ypred),
           interpolation='nearest', cmap=plt.cm.binary)
plt.grid(False)
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label");
Explanation: and for good measure, plot the confusion matrix:
End of explanation |
4,590 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Les déterminants du choix de contraception en Indonésie
Présentation littéraire
Selon de récentes estimations, l'Indonésie a une population d'environ 255 millions d'habitants. Petit à petit elle voit grâce à l'intervention de son gouvernement son taux de fertilité diminuait jusqu'à atteindre en 2013 2.3 naissances par femme. Il est donc intéressant de voir les effets en pratique de sa politique et ses traductions dans le choix de contraception des femmes.
Présentation technique
Il s'agit d'un problème de classification multiclasse puisque la variable à prédire a trois modalités. Une spécificté plus particulière de notre jeu de données est de ne contenir quasiment que des variables catégorielles (7/9) ce qui modifie sensiblement notre approche, notamment dans la première partie de l'étude. Le déroulement de l'analyse est organisé en deux grandes parties, l'une reposant sur une description statistiques des données et l'autre sur un
<h1 id="tocheading">Table des matières</h1>
<div id="toc"></div>
Step1: Récupération des données
Je cherche dans cette partie à avoir les instruments pour pouvoir commencer l'analyse.
Ainsi je récupère les données ainsi que leur présentation afin de ne pas avoir à recopier les indications.
Dans un deuxième temps je les partitionne en trois tables
Step2: Importation des données
Step3: Première approche
Step4: La variable à expliquer
Step5: Nous cherchons par ailleurs pour la suite à savoir si une des modalités est bien moins fréquente que les autres ce qui pourrait fausser l'analyse de classification.
Step6: Étude des interactions entre les variables
Nous sommmes en présence d'un jeu de données contenant majoritairement des variables catégorielles (8/10) ce qui nous empêche de mener une étude des corrélations, pourtant très commode pour avoir une vue synthétique des relations entre nos variables. Nous allons donc procéder ainsi
Step19: Entre toutes les variables
Step20: Je reconnais que l'ACM n'est pas facilement visible, le seul changement que j'ai réussi à faire
Step21: Entre notre variable d'intérêt et les autres
Step22: Ainsi on peut en déduire que le potentiel explicatif de Exp Media est moins important que celui de Nbr Enf car la distribution selon le mode de contraception a la même structure selon chacune des modalités de Exp médi alors qu'elle différe beaucoup suivant le nbr d'enfants.
Seconde approche
Step23: AdaBoost
Step24: Remarque
Step25: Amélioration 1
Step26: Decision Tree
Step27: AdaBoost
Step28: Les prévisions ne sont pas excellentes
Step29: Ainsi on remarque que ce n'est pas un certain groupe de femme qui a tendance à être moins bien prédit
Amélioration 3
Step30: KNeighbors
Step31: Amélioration 4
Step32: Test sur la nouvelle variable de prédiction
Nous avons choisi de faire tourner les modèles déjà envisagés
Step33: Comparaison des résultats
On met dans un tableau les scores obtenus en classant suivant la variable y (Contraception) à trois modalités (Non-Use, Short-Term, Long-Term) et celle à deux modalités (Use, Non-Use)
Step34: Ainsi cela corrobore notre intuition, tous les modèles sont plus performants lorsqu'on leur soumet la variable binaire d'utilisation ou non d'un moyen de contraception, quel qu'il soit. Cependant ce qui est plus particulièrement étonnant est le changement dans le classement de la performance relative des méthodes. En effet si on avait, en prenant comme indicateur les f1_scores, les préférences suivantes
Step35: Nous avons confirmation que le DecisionTree et l'AdaBoost ont des meilleurs prédictions. En effet la probabilité que le score d’une bonne réponse soit supérieure au score d’une mauvaise réponse est dans les deux strictement supérieure à 0.76 (cf auc)
Limite des deux modèles les plus performants
Step36: Le score des deux modèles ne sont pas des plus discriminants puisqu’il existe une aire commune entre les bonnes et les mauvaises réponses plutôt importante.
Nous allons donc introduire un nouveau modèle, le GradientBoost, en sélectionnant toutefois pour la comparaison le DecisionTree qui a la meilleur performance de prédiction
Amélioration 5
Step37: Ainsi on observe une nette amélioration de la prédiction. En effet la probabilité que le score d’une bonne réponse soit supérieure au score d’une mauvaise réponse est avec le GradientBoosting quasiment de 80% (cf.AUC). Par ailleurs sa discrimination est plus importante que les deux classifieurs précédents. Cela nous encourage à utiliser cette méthode pour appréhender l'importance relative des composantes, plutôt que les RandomForest par exemple.
Importance des features
Step38: Ce graphique met en évidence l'importance de quatre variables dans la prédiction
Step39: Nous pouvons voir donc qu'il y a clairement une tendance comportementale qui varie avec l'âge
Step40: DecTree sur modèle réduit
Step41: GradientBoost sur modèle réduit | Python Code:
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
Explanation: Les déterminants du choix de contraception en Indonésie
Présentation littéraire
Selon de récentes estimations, l'Indonésie a une population d'environ 255 millions d'habitants. Petit à petit elle voit grâce à l'intervention de son gouvernement son taux de fertilité diminuait jusqu'à atteindre en 2013 2.3 naissances par femme. Il est donc intéressant de voir les effets en pratique de sa politique et ses traductions dans le choix de contraception des femmes.
Présentation technique
Il s'agit d'un problème de classification multiclasse puisque la variable à prédire a trois modalités. Une spécificté plus particulière de notre jeu de données est de ne contenir quasiment que des variables catégorielles (7/9) ce qui modifie sensiblement notre approche, notamment dans la première partie de l'étude. Le déroulement de l'analyse est organisé en deux grandes parties, l'une reposant sur une description statistiques des données et l'autre sur un
<h1 id="tocheading">Table des matières</h1>
<div id="toc"></div>
End of explanation
import requests
presentation = "https://archive.ics.uci.edu/ml/machine-learning-databases/cmc/cmc.names"
f = requests.get(presentation)
print (f.text)
Explanation: Récupération des données
Je cherche dans cette partie à avoir les instruments pour pouvoir commencer l'analyse.
Ainsi je récupère les données ainsi que leur présentation afin de ne pas avoir à recopier les indications.
Dans un deuxième temps je les partitionne en trois tables : celle contenant et les données et la variable à prédire (df), celle ne contenant que les données (data), et celle contenant que la variable à prédire (y).
Par ailleurs pour pouvoir documenter au mieux mes résultats, je crée un vecteur avec les intitulés des modalités de la variable à prédire afin de les ajouter sur les graphiques (y_names).
Importation de la présentation des données sur UCI
End of explanation
import pandas, urllib.request
furl = urllib.request.urlopen("https://archive.ics.uci.edu/ml/machine-learning-databases/cmc/cmc.data")
df = pandas.read_csv(furl, names=["Age", "Educ", "Educ mari","Nbr Enfant","Religion","Statut trav","Statut mari","Niveau de vie","Exp Media","Contraception"])
print(df.head())
print(df.shape)
# nous avons donc 1473 observations et 10 variables comme indiqué en introduction
#Création de trois tables et un vecteur : data y df y_names
copy= df.copy() #je ne veux pas écraser ma table d'entrée puisque je pourrais encore en avoir besoin
y= copy.pop('Contraception')# je ne récupère que la variable d'intérêt
print(y.shape)
data = copy# je récupère les autres colonnes qui restent soit les variables de prédiction
print(data.head())
print(data.shape)
y_names=["","No-Use","Long-Term","Short-Term"]# je garde en mémoire la signification des modalités de y pour pouvoir lire aisément la siginification des tableaux en sortie
print(y_names)
Explanation: Importation des données
End of explanation
data.hist(figsize=(12,12), alpha=0.40 )
#comme la plupart des variables sont des variables catégorielles
# il est d'autant plus pertinent de ne choisir que la représentation en histogramme
Explanation: Première approche: les statistiques descriptives
On cherche dans cette partie à avoir une première appréhension des données grâce à des outils standards de statistiques.
Représentation de la répartition des variables
Les variables explicatives: traitement du dataframe "data"
End of explanation
plt.hist(y, 3, facecolor='red',alpha=0.20, )
plt.xlabel('Contraception')
plt.ylabel('Effectif')
plt.annotate('No-use', xy=(1.5, 200), xytext=(1.15, 200))
plt.annotate('Long-Term', xy=(1.5,200), xytext=(1.9, 200))
plt.annotate('Short-Term', xy=(3,200), xytext=(2.6, 200))
plt.figure()
Explanation: La variable à expliquer : traitement de y
End of explanation
print(df.Contraception.value_counts())
# Nous sommes rassurés d'autant plus que la modalité la plus importante est le non-use.
# Or d'un point de vue sociétal c'est sur celle-là qu'une politique publique peut vouloir jouer
Explanation: Nous cherchons par ailleurs pour la suite à savoir si une des modalités est bien moins fréquente que les autres ce qui pourrait fausser l'analyse de classification.
End of explanation
# Une vision en une multitude de graphe représentant les variables deux à deux
import seaborn
seaborn.pairplot(df)
# Cependant le fait que nos variables explicatives sont surtout des données catégorielles
# réduit le potentiel explicatif d'un tel graphique
Explanation: Étude des interactions entre les variables
Nous sommmes en présence d'un jeu de données contenant majoritairement des variables catégorielles (8/10) ce qui nous empêche de mener une étude des corrélations, pourtant très commode pour avoir une vue synthétique des relations entre nos variables. Nous allons donc procéder ainsi: d'abord visualiser toutes les variables deux à deux pour comprendre leur covariation puis nous allons chercher un subsitut d'une analyse en composante prinicpales et enfin nous allons nous concentrer plus spécifiquement sur notre variable d'intérêt soit la contraception.
Entre toutes les variables : Pairplot
Nous utilisons dans cette partie délibérement des graphes qui permettent de présenter les résultats de façon synthétique vu le grand nombre de variables à comparer.
End of explanation
#de l'art du copier/coller
from scipy.linalg import diagsvd
import numpy as np
import pandas as pd
import functools
def process_df(DF, cols, ncols):
if cols: # if you want us to do the dummy coding
K = len(cols) # the number of categories
X = dummy(DF, cols)
else: # if you want to dummy code it yourself or do all the cols
K = ncols
if ncols is None: # be sure to pass K if you didn't multi-index
K = len(DF.columns) # ... it with mca.dummy()
if not K:
raise ValueError("Your DataFrame has no columns.")
elif not isinstance(ncols, int) or ncols <= 0 or \
ncols > len(DF.columns): # if you dummy coded it yourself
raise ValueError("You must pass a valid number of columns.")
X = DF
J = X.shape[1]
return X, K, J
def dummy(DF, cols=None):
Dummy code select columns of a DataFrame.
return pd.concat((pd.get_dummies(DF[col])
for col in (DF.columns if cols is None else cols)),
axis=1, keys=DF.columns)
def _mul(*args):
An internal method to multiply matrices.
return functools.reduce(np.dot, args)
class MCA(object):
Run MCA on selected columns of a pd DataFrame.
If the column are specified, assume that they hold
categorical variables that need to be replaced with
dummy indicators, otherwise process the DataFrame as is.
'cols': The columns of the DataFrame to process.
'ncols': The number of columns before dummy coding. To be passed if cols isn't.
'benzecri': Perform Benzécri correction (default: True)
'TOL': value below which to round eigenvalues to zero (default: 1e-4)
def __init__(self, DF, cols=None, ncols=None, benzecri=True, TOL=1e-4):
X, self.K, self.J = process_df(DF, cols, ncols)
S = X.sum().sum()
Z = X / S # correspondence matrix
self.r = Z.sum(axis=1)
self.c = Z.sum()
self._numitems = len(DF)
self.cor = benzecri
self.D_r = np.diag(1/np.sqrt(self.r))
Z_c = Z - np.outer(self.r, self.c) # standardized residuals matrix
self.D_c = np.diag(1/np.sqrt(self.c))
# another option, not pursued here, is sklearn.decomposition.TruncatedSVD
self.P, self.s, self.Q = np.linalg.svd(_mul(self.D_r, Z_c, self.D_c))
self.E = None
E = self._benzecri() if self.cor else self.s**2
self.inertia = sum(E)
self.rank = np.argmax(E < TOL)
self.L = E[:self.rank]
def _benzecri(self):
if self.E is None:
self.E = np.array([(self.K/(self.K-1.)*(_ - 1./self.K))**2
if _ > 1./self.K else 0 for _ in self.s**2])
return self.E
def fs_r(self, percent=0.9, N=None):
Get the row factor scores (dimensionality-reduced representation),
choosing how many factors to retain, directly or based on the explained
variance.
'percent': The minimum variance that the retained factors are required
to explain (default: 90% = 0.9)
'N': The number of factors to retain. Overrides 'percent'.
If the rank is less than N, N is ignored.
if not 0 <= percent <= 1:
raise ValueError("Percent should be a real number between 0 and 1.")
if N:
if not isinstance(N, (int, np.int64)) or N <= 0:
raise ValueError("N should be a positive integer.")
N = min(N, self.rank)
# S = np.zeros((self._numitems, N))
# else:
self.k = 1 + np.flatnonzero(np.cumsum(self.L) >= sum(self.L)*percent)[0]
# S = np.zeros((self._numitems, self.k))
# the sign of the square root can be either way; singular value vs. eigenvalue
# np.fill_diagonal(S, -np.sqrt(self.E) if self.cor else self.s)
num2ret = N if N else self.k
s = -np.sqrt(self.L) if self.cor else self.s
S = diagsvd(s[:num2ret], self._numitems, num2ret)
self.F = _mul(self.D_r, self.P, S)
return self.F
def fs_c(self, percent=0.9, N=None):
Get the column factor scores (dimensionality-reduced representation),
choosing how many factors to retain, directly or based on the explained
variance.
'percent': The minimum variance that the retained factors are required
to explain (default: 90% = 0.9)
'N': The number of factors to retain. Overrides 'percent'.
If the rank is less than N, N is ignored.
if not 0 <= percent <= 1:
raise ValueError("Percent should be a real number between 0 and 1.")
if N:
if not isinstance(N, (int, np.int64)) or N <= 0:
raise ValueError("N should be a positive integer.")
N = min(N, self.rank) # maybe we should notify the user?
# S = np.zeros((self._numitems, N))
# else:
self.k = 1 + np.flatnonzero(np.cumsum(self.L) >= sum(self.L)*percent)[0]
# S = np.zeros((self._numitems, self.k))
# the sign of the square root can be either way; singular value vs. eigenvalue
# np.fill_diagonal(S, -np.sqrt(self.E) if self.cor else self.s)
num2ret = N if N else self.k
s = -np.sqrt(self.L) if self.cor else self.s
S = diagsvd(s[:num2ret], len(self.Q), num2ret)
self.G = _mul(self.D_c, self.Q.T, S) # important! note the transpose on Q
return self.G
def cos_r(self, N=None): # percent=0.9
Return the squared cosines for each row.
if not hasattr(self, 'F') or self.F.shape[1] < self.rank:
self.fs_r(N=self.rank) # generate F
self.dr = np.linalg.norm(self.F, axis=1)**2
# cheaper than np.diag(self.F.dot(self.F.T))?
return np.apply_along_axis(lambda _: _/self.dr, 0, self.F[:, :N]**2)
def cos_c(self, N=None): # percent=0.9,
Return the squared cosines for each column.
if not hasattr(self, 'G') or self.G.shape[1] < self.rank:
self.fs_c(N=self.rank) # generate
self.dc = np.linalg.norm(self.G, axis=1)**2
# cheaper than np.diag(self.G.dot(self.G.T))?
return np.apply_along_axis(lambda _: _/self.dc, 0, self.G[:, :N]**2)
def cont_r(self, percent=0.9, N=None):
Return the contribution of each row.
if not hasattr(self, 'F'):
self.fs_r(N=self.rank) # generate F
return np.apply_along_axis(lambda _: _/self.L[:N], 1,
np.apply_along_axis(lambda _: _*self.r, 0, self.F[:, :N]**2))
def cont_c(self, percent=0.9, N=None): # bug? check axis number 0 vs 1 here
Return the contribution of each column.
if not hasattr(self, 'G'):
self.fs_c(N=self.rank) # generate G
return np.apply_along_axis(lambda _: _/self.L[:N], 1,
np.apply_along_axis(lambda _: _*self.c, 0, self.G[:, :N]**2))
def expl_var(self, greenacre=True, N=None):
Return proportion of explained inertia (variance) for each factor.
:param greenacre: Perform Greenacre correction (default: True)
if greenacre:
greenacre_inertia = (self.K / (self.K - 1.) * (sum(self.s**4)
- (self.J - self.K) / self.K**2.))
return (self._benzecri() / greenacre_inertia)[:N]
else:
E = self._benzecri() if self.cor else self.s**2
return (E / sum(E))[:N]
def fs_r_sup(self, DF, N=None):
Find the supplementary row factor scores.
ncols: The number of singular vectors to retain.
If both are passed, cols is given preference.
if not hasattr(self, 'G'):
self.fs_c(N=self.rank) # generate G
if N and (not isinstance(N, int) or N <= 0):
raise ValueError("ncols should be a positive integer.")
s = -np.sqrt(self.E) if self.cor else self.s
N = min(N, self.rank) if N else self.rank
S_inv = diagsvd(-1/s[:N], len(self.G.T), N)
# S = scipy.linalg.diagsvd(s[:N], len(self.tau), N)
return _mul(DF.div(DF.sum(axis=1), axis=0), self.G, S_inv)[:, :N]
def fs_c_sup(self, DF, N=None):
Find the supplementary column factor scores.
ncols: The number of singular vectors to retain.
If both are passed, cols is given preference.
if not hasattr(self, 'F'):
self.fs_r(N=self.rank) # generate F
if N and (not isinstance(N, int) or N <= 0):
raise ValueError("ncols should be a positive integer.")
s = -np.sqrt(self.E) if self.cor else self.s
N = min(N, self.rank) if N else self.rank
S_inv = diagsvd(-1/s[:N], len(self.F.T), N)
# S = scipy.linalg.diagsvd(s[:N], len(self.tau), N)
return _mul((DF/DF.sum()).T, self.F, S_inv)[:, :N]
# Discrétisation de la table df : création de dfdis
dfdis = df.copy()
dfdis["Nbr Enfant"]=pandas.qcut(dfdis["Nbr Enfant"],3,labels=["Enf1","Enf2","Enf3"])
dfdis["Age"]= pandas.qcut(dfdis["Age"],3,labels=["Age1","Age2","Age3"])
dfdis.head()
# Transformation des variables en dummies : création de la table dc
totdis=dfdis.copy()
totdis["Educ"]=pandas.Categorical(totdis["Educ"],ordered=False)
totdis["Educ"]=totdis["Educ"].cat.rename_categories(["Ed1","Ed2","Ed3","Ed4"])
totdis["Educ mari"]=pandas.Categorical(totdis["Educ mari"],ordered=False)
totdis["Educ mari"]=totdis["Educ mari"].cat.rename_categories(["Edm1","Edm2","Edm3","Edm4"])
totdis["Statut mari"]=pandas.Categorical(totdis["Statut mari"],ordered=False)
totdis["Statut mari"]=totdis["Statut mari"].cat.rename_categories(["St1","St2","St3","St4"])
totdis["Niveau de vie"]=pandas.Categorical(totdis["Niveau de vie"],ordered=False)
totdis["Niveau de vie"]=totdis["Niveau de vie"].cat.rename_categories(["Nv1","Nv2","Nv3","Nv4"])
totdis["Contraception"]=pandas.Categorical(totdis["Contraception"],ordered=False)
totdis["Contraception"]=totdis["Contraception"].cat.rename_categories(["Ctr1","Ctr2","Ctr3"])
dc=pandas.DataFrame(pandas.get_dummies(totdis[["Age","Nbr Enfant", "Educ","Educ mari","Statut mari","Niveau de vie","Exp Media","Religion","Statut trav","Contraception"]]))
dc.head()
# Création de la table des variations explicatives discrétisées : datadis
copy= dc.copy() #je ne veux pas écraser ma table d'entrée puisque je pourrais encore en avoir besoin
datadis=copy.drop(["Contraception_Ctr1","Contraception_Ctr2", "Contraception_Ctr3"], axis=1)
print(datadis.shape)
print(dc.shape)
# Exécution de l'ACM
mca_dc=MCA(dc,benzecri=False)
col=[1,2,3,4,4,4,5,5,5,6,6,6,6,7,7,7,7,8,8,8,8,9,9,9,9,10,10,10]
plt.scatter(mca_dc.fs_c()[:, 0],mca_dc.fs_c()[:, 1],c=col, cmap=plt.cm.Reds)
for i, j, nom in zip(mca_dc.fs_c()[:, 0],
mca_dc.fs_c()[:, 1], dc.columns):
plt.text(i, j, nom)
show()
Explanation: Entre toutes les variables : ACM
Nous cherchons un graphique qui serait plus parlant. Vu le nombre de variable catégorielles, nous pensons à effectuer une analsye des correspondances multiples. Malheureusement il n'y a pas de modules évidents pour pouvoir en effectuer une. Nous sommes donc aller prendre le code source de Emre Safak sur Github. Il faut dans un second temps aussi transformer toutes nos variables en variables binaires pour pouvoir appliquer une acm
End of explanation
import scipy # on ne l'étudie que sur les variables catégorielles
for i in (list(data.T.index[1:3,])):
u=pandas.crosstab(df.Contraception, df[i])
print(u)
print(scipy.stats.chi2_contingency(u))
for i in (list(data.T.index[4::,])):
u=pandas.crosstab(df.Contraception, df[i])
print(u)
print(scipy.stats.chi2_contingency(u))
Explanation: Je reconnais que l'ACM n'est pas facilement visible, le seul changement que j'ai réussi à faire : la couleur des points. Je n'ai pas trouvé comment l'agrandir. On remarque surtout les instances qui n'ont pas beaucoup d'impact sur la contraception quelle qu'elle soit : le grand nbr d'enfant, un âge important, éducation faible du mari et de la femme, une exposition nulle au média.
Cela nous encourage à aller regarder de plus près l'intéraction de notre variable d'intérêt avec les autres pour voir quels sont les facteurs d'influence.
Entre notre variable d'intérêt et les autres : test du chi2
On cherche dans un premier temps grâce au test du chi2, à savoir si toutes les variables socio-démographiques à notre disposition ont un impact sur le choix de contraception. On teste donc à partir des tableaux de contingence l'hypothèse nulle que chacune des catégories prises séparément n'influence pas la contraception, i.e on teste l'indépendance entre les variables
End of explanation
#L'exposition au média
seaborn.countplot(x="Exp Media", hue="Contraception", data=df, palette='Greens_d')
# Le nombre d'enfant
u = df.copy() # Il faut d'abord la discrétiser
u["Nbr Enfant"]= pandas.qcut(u["Nbr Enfant"],6)# le choix du nombre de catégories s'est effectuée selon l'allure de l'histogramme préenté au début
# Nous choisission délibérément qcut parce que proche de la distribution initiale du fait la prise en compte des quantiles
seaborn.countplot(x="Nbr Enfant", hue="Contraception", data=u, palette='Greens_d')
u.head()
Explanation: Entre notre variable d'intérêt et les autres : histogramme conditionnel
Nous utilisons une fonction de seaborn qui permet de représenter sous forme d'histogramme les tableaux de contingence. On peut donc comparer la structure de l'histogramme de chaque modalité des variables explicatives organisé selon la contraception.
Détail technique: J'aurais souhaiter obtenir cette représentation selon chaque variable plotée dans un même gaphe pour avoir une vue panoramique. Cependant que ce soit avec la commande subplot de matplotlib ou Facetgrid de seaborn j'ai pas réussi. Je présente donc ici juste deux variables: l'exposition au média et le nombre d'enfants.
End of explanation
from sklearn.cross_validation import train_test_split
data_train, data_test, y_train, y_test = train_test_split(data, y, test_size=0.33, random_state=42)
df.shape, data_test.shape, y_test.shape
import timeit
tic=timeit.default_timer()
from sklearn import tree
DecTree = tree.DecisionTreeClassifier(min_samples_leaf=10, min_samples_split=10)
DecTree.fit(data_train, y_train)
y_predDec = DecTree.fit(data_train, y_train).predict(data_test)
toc=timeit.default_timer()
toc - tic
from sklearn.metrics import confusion_matrix, f1_score
def plot_confusion_matrix(cm, title='Matrice de confusion', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
plt.tight_layout()
plt.ylabel('Vraie contraception')
plt.xlabel('Contraception prédite')
# Compute confusion matrix
cm = confusion_matrix(y_test, y_predDec)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
plt.show()
f1Dec = f1_score(y_test, y_predDec)
print('f1Dec',f1Dec)
Explanation: Ainsi on peut en déduire que le potentiel explicatif de Exp Media est moins important que celui de Nbr Enf car la distribution selon le mode de contraception a la même structure selon chacune des modalités de Exp médi alors qu'elle différe beaucoup suivant le nbr d'enfants.
Seconde approche : machine learning et supervised classification
Grand balayage : Decision Tree, AdaBoost et GaussianNB
Decisition Tree
End of explanation
#Application avec 800 estimateurs et temps de calcul
#Il est intéressant pour AdaBoost de voir le temps mis car c'est un modèle qui met du temps à tourner sur mon jeu de données
import timeit
tic=timeit.default_timer()
from sklearn.ensemble import AdaBoostClassifier
AdaBoost = AdaBoostClassifier(DecTree,
algorithm='SAMME',
n_estimators=800,
learning_rate=0.5)
y_predAda= AdaBoost.fit(data_train, y_train).predict(data_test)
toc=timeit.default_timer()
toc - tic
# AdaBoost : performance de prédiction
cm = confusion_matrix(y_test, y_predAda)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
f1Ada = f1_score(y_test, y_predAda)
print('f1Ada',f1Ada)
Explanation: AdaBoost
End of explanation
tic=timeit.default_timer()
from sklearn.naive_bayes import GaussianNB
Gaussian = GaussianNB()
y_predGaus= Gaussian.fit(data_train, y_train).predict(data_test)
toc=timeit.default_timer()
toc - tic
cm = confusion_matrix(y_test, y_predGaus)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
plt.figure()
plot_confusion_matrix(cm)
f1Gaus = f1_score(y_test, y_predGaus)
print('f1Gaus',f1Gaus)
l = [ {"Méthode":"DécisionTree", "f1":f1Dec},
{"Méthode":"AdaBoost", "f1":f1Ada},
{"Méthode":"Gaussian", "f1":f1Gaus}]
f1=pd.DataFrame(l)
f1
Explanation: Remarque: l'Adaboost est un modèle qui est vraiment plus lent à faire tourner
GaussianNB
End of explanation
datadis_train, datadis_test, y_train, y_test = train_test_split(datadis, y, test_size=0.33, random_state=42)
df.shape, datadis_test.shape, y_test.shape
Explanation: Amélioration 1 : discrétiser les variables prédictives
Pour aller rapidement au résultat nous supprimons la représentation de la matrice de confusion et nous nous limitons à deux classifier, ceux au meilleur score: AdaBoost et DecisionTree
End of explanation
from sklearn import tree
DecTree = tree.DecisionTreeClassifier(min_samples_leaf=10, min_samples_split=10)
DecTree.fit(datadis_train, y_train)
y_predDecdis = DecTree.fit(datadis_train, y_train).predict(datadis_test)
cm = confusion_matrix(y_test, y_predDecdis)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1disDec = f1_score(y_test, y_predDecdis)
print("f1disDec", f1disDec)
Explanation: Decision Tree
End of explanation
y_preddisAda= AdaBoost.fit(datadis_train, y_train).predict(datadis_test)
cm = confusion_matrix(y_test, y_preddisAda)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1disAda = f1_score(y_test, y_preddisAda)
print('f1disAda',f1disAda)
l = [ {"Données":"Normales", "f1Dec":f1Dec, "f1Ada":f1Ada},
{"Données":"Discrétisées", "f1Dec":f1disDec, "f1Ada":f1disAda},]
f1=pd.DataFrame(l)
f1 = f1.set_index("Données")
f1
Explanation: AdaBoost
End of explanation
#visulalisation de la contraception mal prédite avec DecisionTree
mismatchesd= (y_predDec != y_test) #liste de booléen avec True si y_pred différent de y_test
mismatchesTd= mismatchesd.copy()
u = mismatchesTd[mismatchesTd==True] #je ne retiens que les y_pred différent
usort = u.index.sort_values() #je prends leurs indices
print(usort[2::])# j'exclue au hasard les deux premiers pour pouvoir avoir des arrays de même taille afin de comparer
# Visualisation de la contraception malprédite avec Adaboost
mismatchesa= (y_predAda != y_test)
mismatchesTa= mismatchesa.copy()
v=mismatchesTa[mismatchesTa==True]
vsort= v.index.sort_values()
vsort
#On compare les deux sorties
comp = (usort[2::] == vsort)
comp
Explanation: Les prévisions ne sont pas excellentes : j'essaie donc de voir si ce sont les mêmes femmes qui sont toujours mal regroupées
Amélioration 2 : spécificité des individus mal prédits
End of explanation
#Étude sur le nombre d'estimateurs
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import mean_squared_error
curves = []
for n_estimators in range(1,10) :
clf = RandomForestClassifier(n_estimators=n_estimators, min_samples_leaf=1)
clf = clf.fit(data_train, y_train)
erra = mean_squared_error( clf.predict(data_train), y_train)**0.5
errb = mean_squared_error( clf.predict(data_test), y_test)**0.5
print("mn_estimators",n_estimators, "erreur",erra,errb)
curves.append((n_estimators, erra,errb, clf) )
plt.plot ( [c[0] for c in curves], [c[1] for c in curves], label="train")
plt.plot ( [c[0] for c in curves], [c[2] for c in curves], label="test")
plt.legend()
tic=timeit.default_timer()
# Application avec 5 estimateurs
RF= RandomForestClassifier(n_estimators=5,min_samples_leaf=1)
y_predRF= RF.fit(data_train, y_train).predict(data_test)
cm = confusion_matrix(y_test, y_predRF)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1RF = f1_score(y_test, y_predRF)
print('f1RF',f1RF)
toc=timeit.default_timer()
toc - tic
Explanation: Ainsi on remarque que ce n'est pas un certain groupe de femme qui a tendance à être moins bien prédit
Amélioration 3 : Utilisation d'autres modèles et hyperparamètres
Nous utilisons à présent des modèles avec des hyperparamètres qu'il faut tester en premier pour choisir le bon nombre
Random forest
Le paramètre à ajuster en premier dans les méthodes de RandomForest est le nombre d'estimateurs soit le nombre d'arbres dans la forêt. Ici nous n'avons pas a priori un problème de temps de résolution puisque nous n'avons que peu d'observations et de variables. Cependant l'arbitrage se fait tout de même puisque à partir d'un nombre suffisant d'estimateurs, le modèle ne fait que du surajustement. Nous cherchons donc dans un premier temps le nombre optimal d'arbres.
End of explanation
#Étude sur le nombre de neighbors
from sklearn.neighbors import KNeighborsClassifier
for n_neighbors in range(1, 11):
clf = KNeighborsClassifier(n_neighbors=n_neighbors).fit(data_train, y_train)
y_predNKeigh = clf.predict(data_test)
print ("KNeighbors(n_neighbors={0})".format(n_neighbors), f1_score(y_test, y_predNKeigh))
tic=timeit.default_timer()
#Application avec n_neighbors = 7
KN = KNeighborsClassifier(n_neighbors=7).fit(data_train, y_train)
y_predKNeigh = KN.predict(data_test)
cm = confusion_matrix(y_test, y_predKNeigh)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1KNeigh = f1_score(y_test, y_predKNeigh)
print('f1KNeigh',f1KNeigh)
toc=timeit.default_timer()
toc - tic
#Comparaison des résultats
l = [ {"Méthode":"DécisionTree", "f1":f1Dec},
{"Méthode":"AdaBoost", "f1":f1Ada},
{"Méthode":"RandomForest", "f1":f1RF},
{"Méthode":"KNeighbors", "f1":f1KNeigh}
]
f1=pd.DataFrame(l)
f1
Explanation: KNeighbors
End of explanation
# nous créons un nouveau dataframe y_nonuse en regroupant les classes short-term et long-term
y_nonuse = y.copy()
y_nonuse[y_nonuse!=1] = 0 # Utilise une contraception
y_nonuse[y_nonuse==1] = 1 # N'utilise pas de contraception
Explanation: Amélioration 4 : simplifier les catégories de la variable d'intérêt
On remarque que dans les matrices de confusion le fait de ne pas avoir de contraception est à chaque fois mieux prédit. Par ailleurs pour des raisons de politiques publiques on peut penser que le décideur est plus intéressé par le fait de ne pas avoir de contraception afin de mettre des politiques en place. En effet l'enjeu sociétal doit être de maximiser l'utilisation d'un moyen de contraception vs ne pas se protéger.
On se propose donc de regarder la classification suivant deux critères uniquement : avoir une contraception ou ne pas en avoir et donc transformer un problème de classification multi-classe en un de classification binaire. Cela nous permettra donc de pouvoir introduire la notion de courbe ROC valable uniquement pour une variable de prédiction binaire.
End of explanation
from sklearn.cross_validation import train_test_split
data_train, data_test, y_nutrain, y_nutest = train_test_split(data, y_nonuse, test_size=0.33, random_state=42)
df.shape, data_test.shape, y_nutest.shape
# DecisionTree
y_nupredDec = DecTree.fit(data_train, y_nutrain).predict(data_test)
cm = confusion_matrix(y_nutest, y_nupredDec)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1nuDec = f1_score(y_nutest, y_nupredDec)
print("f1nuDec", f1nuDec)
# AdaBoost
y_nupredAda = AdaBoost.fit(data_train, y_nutrain).predict(data_test)
cm = confusion_matrix(y_nutest, y_nupredAda)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1nuAda = f1_score(y_nutest, y_nupredAda)
print("f1nuAda", f1nuAda)
#KNeighbors
y_prednuKN= KN.fit(data_train, y_nutrain).predict(data_test)
cm = confusion_matrix(y_nutest, y_prednuKN)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1nuKN = f1_score(y_nutest, y_prednuKN)
print('f1nuKN',f1nuKN)
# RandomForest
y_nupredRF = RF.fit(data_train, y_nutrain).predict(data_test)
cm = confusion_matrix(y_nutest, y_nupredRF)
np.set_printoptions(precision=2)
print('Matrice de confusion')
print(cm)
f1nuRF = f1_score(y_nutest, y_nupredRF)
print("f1nuRF", f1nuRF)
Explanation: Test sur la nouvelle variable de prédiction
Nous avons choisi de faire tourner les modèles déjà envisagés : DecisionTree, RandomForest, KNeighbors et AdaBoost.
End of explanation
#Comparaison des scores
l = [ {"y":"Multiclasse", "DecTree":f1Dec, "AdaBoost":f1Ada, "Kneighbors":f1KNeigh, "RandomForest": f1RF},
{"y":"Binaire", "DecTree":f1nuDec, "AdaBoost":f1nuAda, "Kneighbors":f1nuKN, "RandomForest":f1nuRF}
]
f1=pd.DataFrame(l)
f1 = f1.set_index("y")
f1
Explanation: Comparaison des résultats
On met dans un tableau les scores obtenus en classant suivant la variable y (Contraception) à trois modalités (Non-Use, Short-Term, Long-Term) et celle à deux modalités (Use, Non-Use)
End of explanation
#Comparaison des courbes ROC et AUC
from sklearn.metrics import roc_curve, auc
probasRF = RF.predict_proba(data_test)
probasDec = DecTree.predict_proba(data_test)
probasAda = AdaBoost.predict_proba(data_test)
fpr, tpr, thresholds = roc_curve(y_nutest, probasRF[:, 1])
fpr1, tpr1, thresholds1 = roc_curve(y_nutest, probasDec[:, 1])
fpr2, tpr2, thresholds2 = roc_curve(y_nutest, probasAda[:, 1])
roc_auc = auc(fpr, tpr)
roc_auc1 = auc(fpr1, tpr1)
roc_auc2 = auc(fpr2, tpr2)
plt.plot(fpr, tpr, label=' Courbe ROC RF (auc = %0.2f)' % roc_auc)
plt.plot(fpr1, tpr1, label=' Courbe ROC DecTree (auc = %0.2f)' % roc_auc1)
plt.plot(fpr2, tpr2, label=' Courbe ROC Ada (auc = %0.2f)' % roc_auc2)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.xlabel("Taux d'erreur (faux positif)")
plt.ylabel("Taux de bonne réponse (Vrai positif)")
plt.title('Comparaison de la performance des classifiers')
plt.legend(loc="lower right")
Explanation: Ainsi cela corrobore notre intuition, tous les modèles sont plus performants lorsqu'on leur soumet la variable binaire d'utilisation ou non d'un moyen de contraception, quel qu'il soit. Cependant ce qui est plus particulièrement étonnant est le changement dans le classement de la performance relative des méthodes. En effet si on avait, en prenant comme indicateur les f1_scores, les préférences suivantes : KN>Rf>DecTRee>AdaBoost, nous avons maintenant cette nette préférence : DecTree>RF=AdaBoost>KN.
Nous allons maintenant nous concentrer sur une représentation plus compète de la performance et par la même occasion éliminer le KNeighborClassifier parce qu'il y a un score nettement inférieur aux autres. Pour cela nous allons traiter les modèles avec la variable binaire mais plutôt que de considérer leur score, nous allons regarder les courbes ROC.
End of explanation
#Distribution des scores du DecisionTree
from sklearn.metrics import auc, precision_recall_curve
y_testn = y_nutest.as_matrix()
y_minD = y_nupredDec.min()
y_scoreD = array( [probasDec[i,p-y_minD] for i,p in enumerate(y_nupredDec)] )
y_scoreD[:5]
positive_scores = y_scoreD[y_testn == y_nupredDec]
negative_scores = y_scoreD[y_testn != y_nupredDec]
fpr = dict()
tpr = dict()
roc_auc = dict()
nb_obs = dict()
for i in DecTree.classes_:
fpr[i], tpr[i], _ = roc_curve(y_nutest == i, y_scoreD)
roc_auc[i] = auc(fpr[i], tpr[i])
nb_obs[i] = (y_nutest == i).sum()
roc_auc, nb_obs
ax = seaborn.distplot(positive_scores, rug=True, hist=True, label="+")
seaborn.distplot(negative_scores, rug=True, hist=True, ax=ax, label="-")
ax.legend()
# Score distribution for AdaBoost
y_testn = y_nutest.as_matrix()
y_minD = y_nupredAda.min()
y_scoreD = np.array([probasAda[i, p - y_minD] for i, p in enumerate(y_nupredAda)])
y_scoreD[:5]
positive_scores = y_scoreD[y_testn == y_nupredAda]
negative_scores = y_scoreD[y_testn != y_nupredAda]
fpr = dict()
tpr = dict()
roc_auc = dict()
nb_obs = dict()
for i in AdaBoost.classes_:
fpr[i], tpr[i], _ = roc_curve(y_nutest == i, y_scoreD)
roc_auc[i] = auc(fpr[i], tpr[i])
nb_obs[i] = (y_nutest == i).sum()
roc_auc, nb_obs
ax = seaborn.distplot(positive_scores, rug=True, hist=True, label="+")
seaborn.distplot(negative_scores, rug=True, hist=True, ax=ax, label="-")
ax.legend()
Explanation: This confirms that the DecisionTree and AdaBoost give the better predictions. For both of them, the probability that the score of a correct answer exceeds the score of a wrong answer is strictly above 0.76 (see the AUC).
Limitation of the two best-performing models: lack of discrimination
Having established by comparison that AdaBoost and the DecisionTree are the best classifiers, we now want to look more closely at the score distribution.
End of explanation
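As a side note (added here, not in the original), the per-sample score used in the cells above can also be extracted without the list comprehension; this assumes the probasDec / y_nupredDec arrays computed earlier.
# Vectorised equivalent of the score extraction above
idx = (y_nupredDec - y_nupredDec.min()).astype(int)
y_scoreD_vec = probasDec[np.arange(len(y_nupredDec)), idx]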
tic=timeit.default_timer()
from sklearn.ensemble import GradientBoostingClassifier
GB = GradientBoostingClassifier()
y_prednuGB = GB.fit(data_train, y_nutrain).predict(data_test)
toc=timeit.default_timer()
toc - tic
cm = confusion_matrix(y_nutest, y_prednuGB)
plot_confusion_matrix(cm)
f1nuGB = f1_score(y_nutest, y_prednuGB)
print('f1nuGB',f1nuGB)
print('f1nuDec',f1nuDec)
probasnuGB = GB.predict_proba(data_test)
fpr, tpr, thresholds = roc_curve(y_nutest, probasnuGB[:, 1])
roc_auc = auc(fpr, tpr)
print ("Area under the ROC curve GradientBoost : %f" % roc_auc)
plt.plot(fpr, tpr, label='ROC curve GradientBoost (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
fpr1, tpr1, thresholds1 = roc_curve(y_nutest, probasDec[:, 1])
# added to compare with the results obtained with a DecisionTree
roc_auc1 = auc(fpr1, tpr1)
print ("Area under the ROC curve DecTree: %f" % roc_auc1)
plt.plot(fpr1, tpr1, label=' Courbe ROC DecTree (auc = %0.2f)' % roc_auc1)
plt.xlabel("Taux d'erreur (faux positif)")
plt.ylabel("Taux de bonne réponse (Vrai positif)")
plt.title('Performance comparée du GradientBoosting')
plt.legend(loc="lower right")
# Score distribution for the GradientBoost
y_testn = y_nutest.as_matrix()
y_minD = y_prednuGB.min()
y_scoreD = np.array([probasnuGB[i, p - y_minD] for i, p in enumerate(y_prednuGB)])
y_scoreD[:5]
positive_scores = y_scoreD[y_testn == y_prednuGB]
negative_scores = y_scoreD[y_testn != y_prednuGB]
fpr = dict()
tpr = dict()
roc_auc = dict()
nb_obs = dict()
for i in GB.classes_:
fpr[i], tpr[i], _ = roc_curve(y_nutest == i, y_scoreD)
roc_auc[i] = auc(fpr[i], tpr[i])
nb_obs[i] = (y_nutest == i).sum()
roc_auc, nb_obs
ax = seaborn.distplot(positive_scores, rug=True, hist=True, label="+")
seaborn.distplot(negative_scores, rug=True, hist=True, ax=ax, label="-")
ax.legend()
Explanation: The scores of these two models are not very discriminating, since the overlap between correct and incorrect answers is rather large.
We therefore introduce a new model, the GradientBoost, keeping for comparison the DecisionTree, which has the best predictive performance so far.
Improvement 5: adding a new model and analysing the features
The GradientBoostingClassifier
End of explanation
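A possible refinement (an added sketch, not in the original notebook) would be to tune the GradientBoosting hyper-parameters before comparing models; it assumes the data_train and y_nutrain variables defined above.
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier
# Small grid, scored with f1 on the binary target
param_grid = {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]}
search = GridSearchCV(GradientBoostingClassifier(), param_grid, cv=3, scoring="f1")
search.fit(data_train, y_nutrain)
print(search.best_params_, round(search.best_score_, 3))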
feature_name = list(data.T.index)
limit = 30
feature_importance = GB.feature_importances_[:limit]
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)  # sort so the chart actually shows a ranking
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, np.array(feature_name)[sorted_idx])
plt.xlabel('Importance relative')
plt.title("Classement des variables selon leur importance")
Explanation: We observe a clear improvement in the prediction: with GradientBoosting the probability that the score of a correct answer exceeds the score of a wrong answer is nearly 80% (see the AUC). Its discrimination is also better than that of the two previous classifiers. This encourages us to use this method, rather than RandomForest for example, to assess the relative importance of the features.
Feature importance
End of explanation
# Discretising age
u = df.copy()
u["Age"]= pandas.qcut(u.Age,10)# le choix du nombre de catégories s'est effectuée selon l'allure de l'histogramme préenté au début
# Nous choisission délibérément qcut parce que proche de la distribution initiale du fait la prise en compte des quantiles
u.head()
# Applying mosaic
from statsmodels.graphics.mosaicplot import mosaic
temp1 = pandas.crosstab([u.Contraception],[u.Age])
props = lambda key: {'color':'grey' if '1' in key else 'white'}  # we specifically highlight using
# contraception vs not using it, since we already noted that this carries more weight in our data
mosaic(temp1.unstack(),properties=props, title="Répartition de la non-contraception en fonction de l'âge" )
# Limitations:
# 1) I cannot get rid of the text written inside the tiles, even with a lambda function as labelizer.
# 2) We also get the full set of coordinates, which is interesting but makes the plot harder to read.
Explanation: This chart highlights the importance of four variables for the prediction: the number of children, the husband's education, the wife's education and her age, the most influential variable.
Focus on the relationship between age and contraception
We can therefore ask how women using contraception are distributed across ages. To do so we want to draw a "mosaic plot". We first need to discretise the age variable to keep the chart readable, and then build the contingency table of these variables in order to apply the "mosaic" object.
End of explanation
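Given the labelling issues noted in the comments above, a simpler alternative view (an added sketch, not in the original notebook) is the share of each contraception category within each age bin, using the binned frame u built above.
# Stacked shares of Contraception categories per age bin
shares = pandas.crosstab(u.Age, u.Contraception, normalize="index")
shares.plot.bar(stacked=True, figsize=(10, 4), title="Contraception categories by age bin")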
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
data_new = SelectKBest(chi2, k=2).fit_transform(data, y)
data_new.shape
# Create datared and the corresponding training and test sets
u= data.copy()
datared = u[["Age","Nbr Enfant", "Educ", "Educ mari", "Niveau de vie"]]
datared.head()
datared_train, datared_test, y_nutrain, y_nutest = train_test_split(datared, y_nonuse, test_size=0.33, random_state=42)
print(datared_train.shape)
print(datared_test.shape)
datared.head()
Explanation: We can see a clear behavioural trend with age: after 37, women appear to report less contraception use.
Improvement 6: reducing the number of predictive variables
The idea is to adapt the approach of using a PCA to combine factors and boost performance to a model with categorical variables.
We use a much simpler method: reducing the number of predictive variables by dropping those with the least importance, as highlighted by the GradientBoost.
Five variables thus carry more predictive weight than the others: age, number of children, the wife's education, the standard of living and the husband's education.
We choose to test this reduced dataset with both the DecisionTree and the GradientBoost to get an idea of the effect of this change on the prediction.
Selecting the variables with the highest predictive value
End of explanation
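Note that the SelectKBest call above computes data_new but never uses it; the reduction actually applied below is the manual column selection. A small added sketch (not in the original) of how to see which columns a chi2 test would keep, here with k=5 and the binary target to mirror the manual choice:
# Which 5 columns does chi2 rank highest against the binary target?
selector = SelectKBest(chi2, k=5).fit(data, y_nonuse)
print(list(data.columns[selector.get_support()]))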
y_prednuredDT = DecTree.fit(datared_train, y_nutrain).predict(datared_test)
cm = confusion_matrix(y_nutest, y_prednuredDT)
plot_confusion_matrix(cm)
f1nuredDT = f1_score(y_nutest, y_prednuredDT)
print('f1nuredDec',f1nuredDT)
probasnuredDT = DecTree.predict_proba(datared_test)
fpr, tpr, thresholds = roc_curve(y_nutest, probasnuredDT[:, 1])
roc_auc = auc(fpr, tpr)
print ("Area under the ROC curve DecTReered : %f" % roc_auc)
plt.plot(fpr, tpr, label='ROC curve DecTreered(area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
# Comparison with the non-reduced model
print('f1nuDec',f1nuDec)
fpr1, tpr1, thresholds1 = roc_curve(y_nutest, probasDec[:, 1])
roc_auc1 = auc(fpr1, tpr1)
print ("Area under the ROC curve DecTree : %f" % roc_auc1)
plt.plot(fpr1, tpr1, label='ROC curve DecTree (area = %0.2f)' % roc_auc1)
plt.xlabel("Taux d'erreur (faux positif)")
plt.ylabel("Taux de bonne réponse (Vrai positif)")
plt.title('Performance comparée du modèle réduit avec DecisionTree')
plt.legend(loc="lower right")
Explanation: DecTree on the reduced model
End of explanation
y_prednuredGB = GB.fit(datared_train, y_nutrain).predict(datared_test)
cm = confusion_matrix(y_nutest, y_prednuredGB)
plot_confusion_matrix(cm)
f1nuredGB = f1_score(y_nutest, y_prednuredGB)
print('f1nuredGB',f1nuredGB)
print('f1nuGB',f1nuGB)
probasnuredGB = GB.predict_proba(datared_test)
fpr, tpr, thresholds = roc_curve(y_nutest, probasnuredGB[:, 1])
roc_auc2 = auc(fpr, tpr)  # use the ROC just computed for the reduced model, not the earlier fpr2/tpr2
print ("Area under the ROC curve GBred : %f" % roc_auc2)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc2)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
# Comparison with the non-reduced model
fpr3, tpr3, thresholds3 = roc_curve(y_nutest, probasnuGB[:, 1])
roc_auc3 = auc(fpr3, tpr3)
print ("Area under the ROC curve GB : %f" % roc_auc3)
plt.plot(fpr3, tpr3, label='ROC curve GradientBoost (area = %0.2f)' % roc_auc3)
plt.xlabel("Taux d'erreur (faux positif)")
plt.ylabel("Taux de bonne réponse (Vrai positif)")
plt.title('Performance comparée du modèle réduit avec GradientBoost')
plt.legend(loc="lower right")
Explanation: GradientBoost on the reduced model
End of explanation |
4,591 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'hadgem3-gc31-ll', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MOHC
Source ID: HADGEM3-GC31-LL
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:14
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
4,592 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example for bulk function management
Shows
Step1: Demonstration model
Step2: NOTE
Step3: The result of the function call is very important. It tells us what was created and the names.
The names will be based on the target variable (Baseflow store) and the names (plural) of the target object, in this case, catchment and FU.
Step4: Result of create_functions includes a list of created functions
Step5: Note You can see all these in Edit | Functions
But the dockable 'Function Manager' doesn't tend to update (at least as of 4.3)
We apply the function against a particular target (eg v.model.catchment.runoff).
Because we've done all this against one target (v.model.catchment.runoff) we can assume that everything is in the same order, so the following bulk application can work. | Python Code:
import veneer
v = veneer.Veneer()
%matplotlib inline
Explanation: Example for bulk function management
Shows:
Creating multiple modelled variables
Creating multiple functions of the same form, each using one of the newly created modelled variables
Applying multiple functions
End of explanation
v.network().plot()
set(v.model.catchment.runoff.get_models())
v.model.find_states('TIME.Models.RainfallRunoff.AWBM.AWBM')
v.model.catchment.runoff.create_modelled_variable?
Explanation: Demonstration model
End of explanation
# Save the result!
variables = v.model.catchment.runoff.create_modelled_variable('Baseflow store')
Explanation: NOTE: When creating modelled variables we need to use the names that appear in the Project Explorer.
Also note that not everything will be available. If it's not in the Project Explorer, you probably can't use it for a modelled variable.
End of explanation
variables
# variables['created'] are the variable names that we want to insert into the functions
variables['created']
name_params = list(v.model.catchment.runoff.enumerate_names())
name_params
v.model.functions.create_functions?
# Again, save the result...
functions = v.model.functions.create_functions('$funky_%s_%s','1.1 * %s',variables['created'],name_params)
Explanation: The result of the function call is very important. It tells us what was created and the names.
The names will be based on the target variable (Baseflow store) and the names (plural) of the target object, in this case, catchment and FU.
End of explanation
functions
functions['created']
Explanation: Result of create_functions includes a list of created functions
End of explanation
# Applying functions in some nonsensical manner...
v.model.catchment.runoff.apply_function('A2',functions['created'])
Explanation: Note: You can see all these in Edit | Functions.
But the dockable 'Function Manager' doesn't tend to update (at least as of 4.3).
We apply the function against a particular target (e.g. v.model.catchment.runoff).
Because we've done all this against one target (v.model.catchment.runoff) we can assume that everything is in the same order, so the following bulk application can work.
End of explanation |
4,593 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross Country Production Data
This program extracts particular series from the Penn World Tables (PWT). Data and documentation for the PWT are available at https
Step1: Construct data sets
Step2: Individual time series
Step3: Multiple series for last year available
Step4: Plot for website | Python Code:
# Set the current value of the PWT data file
current_pwt_file = 'pwt100.xlsx'
# Import data from local source or download if not present
if os.path.exists('../xslx/pwt100.xlsx'):
info = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Info',header=None)
legend = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Legend',index_col=0)
pwt = pd.read_excel('../xslx/'+current_pwt_file,sheet_name='Data',index_col=3,parse_dates=True)
else:
info = pd.read_excel('https://www.rug.nl/ggdc/docs/'+current_pwt_file,sheet_name='Info',header=None)
legend = pd.read_excel('https://www.rug.nl/ggdc/docs/'+current_pwt_file,sheet_name='Legend',index_col=0)
pwt = pd.read_excel('https://www.rug.nl/ggdc/docs/'+current_pwt_file,sheet_name='Data',index_col=3,parse_dates=True)
# Find PWT version
version = info.iloc[0][0].split(' ')[-1]
# Find base year for real variables
base_year = legend.loc['rgdpe']['Variable definition'].split(' ')[-1].split('US')[0]
# Most recent year
final_year = pwt[pwt['countrycode']=='USA'].sort_index().index[-1].year
metadata = pd.Series(dtype=str,name='Values')
metadata['version'] = version
metadata['base_year'] = base_year
metadata['final_year'] = final_year
metadata['gdp_per_capita_units'] = base_year+' dollars per person'
metadata.to_csv(csv_export_path+'/pwt_metadata.csv')
# Replace Côte d'Ivoire with Cote d'Ivoire
pwt['country'] = pwt['country'].str.replace(u"Côte d'Ivoire",u"Cote d'Ivoire")
# Merge country name and code
pwt['country'] = pwt['country']+' - '+pwt['countrycode']
# Create hierarchical index
pwt = pwt.set_index(['country',pwt.index])
# Display new DataFrame
pwt
Explanation: Cross Country Production Data
This program extracts particular series from the Penn World Tables (PWT). Data and documentation for the PWT are available at https://pwt.sas.upenn.edu/. For additional reference see the article "The Next Generation of the Penn World Table" by Feenstra, Inklaar, and Timmer in the October 2015 issue of the American Economic Review (https://www.aeaweb.org/articles?id=10.1257/aer.20130954)
Import data and manage
End of explanation
# Define a function that constructs data sets
def create_data_set(year0,pwtCode,per_capita,per_worker):
year0 = str(year0)
if per_capita:
data = pwt[pwtCode]/pwt['pop']
elif per_worker:
data = pwt[pwtCode]/pwt['emp']
else:
data = pwt[pwtCode]
data = data.unstack(level='country').loc[year0:].dropna(axis=1)
return data
Explanation: Construct data sets
End of explanation
# Create data sets
gdp_pc = create_data_set(year0=1960,pwtCode='rgdpo',per_capita=True,per_worker=False)
consumption_pc = create_data_set(year0=1960,pwtCode='ccon',per_capita=True,per_worker=False)
physical_capital_pc = create_data_set(year0=1960,pwtCode='cn',per_capita=True,per_worker=False)
human_capital_pc = create_data_set(year0=1960,pwtCode='hc',per_capita=False,per_worker=False)
# Find intsection of countries with data from 1960
intersection = gdp_pc.columns.intersection(consumption_pc.columns).intersection(physical_capital_pc.columns).intersection(human_capital_pc.columns)
# Adjust data
gdp_pc = gdp_pc[intersection]
consumption_pc = consumption_pc[intersection]
physical_capital_pc = physical_capital_pc[intersection]
human_capital_pc = human_capital_pc[intersection]
# Export to csv
gdp_pc.to_csv(csv_export_path+'/cross_country_gdp_per_capita.csv')
consumption_pc.to_csv(csv_export_path+'/cross_country_consumption_per_capita.csv')
physical_capital_pc.to_csv(csv_export_path+'/cross_country_physical_capital_per_capita.csv')
human_capital_pc.to_csv(csv_export_path+'/cross_country_human_capital_per_capita.csv')
Explanation: Individual time series
End of explanation
# Restrict data to final year
df = pwt.swaplevel(0, 1).sort_index().loc[(str(final_year),slice(None))].reset_index()
# Select columns: 'countrycode','country','rgdpo','emp','hc','cn'
df = df[['countrycode','country','rgdpo','emp','hc','cn']]
# Rename columns
df.columns = ['country_code','country','gdp','labor','human_capital','physical_capital']
# Remove country codes from country column
df['country'] = df['country'].str.split(' - ',expand=True)[0]
# Drop countries with missing observations
df = df.dropna()
# 3. Export data
df[['country_code','country','gdp','labor','human_capital','physical_capital']].to_csv(csv_export_path+'/cross_country_production.csv',index=False)
Explanation: Multiple series for last year available
End of explanation
# Load data
df = pd.read_csv('../csv/cross_country_gdp_per_capita.csv',index_col='year',parse_dates=True)
income60 = df.iloc[0]/1000
growth = 100*((df.iloc[-1]/df.iloc[0])**(1/(len(df.index)-1))-1)
# Construct plot
fig = plt.figure(figsize=(10, 6))
ax = fig.add_subplot(1,1,1)
colors = ['red','blue','magenta','green']
plt.scatter(income60,growth,s=0.0001)
for i, txt in enumerate(df.columns):
ax.annotate(txt[-3:], (income60[i],growth[i]),fontsize=10,color = colors[np.mod(i,4)])
ax.grid()
ax.set_xlabel('GDP per capita in 1960\n (thousands of 2011 $ PPP)')
ax.set_ylabel('Real GDP per capita growth\nfrom '+str(df.index[0].year)+' to '+str(df.index[-1].year)+' (%)')
xlim = ax.get_xlim()
ax.set_xlim([0,xlim[1]])
fig.tight_layout()
# Save image
plt.savefig('../png/fig_GDP_GDP_Growth_site.png',bbox_inches='tight')
# Export notebook to python script
runProcs.exportNb('cross_country_income_data')
Explanation: Plot for website
End of explanation |
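As an aside, the growth expression used above is a compound annual growth rate; a small arithmetic check in plain Python (not part of the original analysis pipeline):
# A series that doubles over 10 annual steps grows at roughly 7.2% per year.
cagr = 100*((200/100)**(1/10) - 1)
print(round(cagr, 2))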
4,594 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Short demo of psydata functions
Step1: I'll demo some functions here using a dataset I simulated earlier.
Step2: Bin bernoulli trials, compute binomial statistics
Compute binomial trials for each combination of contrast and sf, averaging over subjects
Step3: Notice that you can't compute binomial confidence intervals if the proportion success is 0 or 1. We can fix this using Laplace's Rule of Succession -- add one success and one failure to each observation (basically a prior that says that both successes and failures are possible).
Step4: Fit and plot a psychometric function to each subject, sf
Step5: Some kind of wonky fits (unrealistic slopes), but hey, that's what you get with a simple ML fit with no pooling / shrinkage / priors. | Python Code:
import seaborn as sns
import psyutils as pu
%load_ext autoreload
%autoreload 2
%matplotlib inline
sns.set_style("white")
sns.set_style("ticks")
Explanation: Short demo of psydata functions
End of explanation
# load data:
dat = pu.psydata.load_psy_data()
dat.info()
Explanation: I'll demo some functions here using a dataset I simulated earlier.
End of explanation
pu.psydata.binomial_binning(dat, y='correct',
grouping_variables=['contrast', 'sf'])
Explanation: Bin bernoulli trials, compute binomial statistics
Compute binomial trials for each combination of contrast and sf, averaging over subjects:
End of explanation
pu.psydata.binomial_binning(dat, y='correct',
grouping_variables=['contrast', 'sf'],
rule_of_succession=True)
Explanation: Notice that you can't compute binomial confidence intervals if the proportion success is 0 or 1. We can fix this using Laplace's Rule of Succession -- add one success and one failure to each observation (basically a prior that says that both successes and failures are possible).
End of explanation
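The rule-of-succession adjustment described above amounts to replacing k successes out of n trials with (k + 1)/(n + 2); a tiny illustration in plain Python, independent of psyutils:
k, n = 10, 10
print((k + 1) / float(n + 2))  # ~0.92 instead of 1.0, so the binomial CI stays finite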
g = pu.psydata.plot_psy(dat, 'contrast', 'correct',
function='weibull',
hue='sf',
col='subject',
log_x=True,
col_wrap=3,
errors=False,
fixed={'gam': .5, 'lam':.02},
inits={'m': 0.01, 'w': 3})
g.add_legend()
g.set(xlabel='Log Contrast', ylabel='Prop correct')
g.fig.subplots_adjust(wspace=.8, hspace=.8);
Explanation: Fit and plot a psychometric function to each subject, sf:
End of explanation
g = pu.psydata.plot_psy_params(dat, 'contrast', 'correct',
x="sf", y="m",
function='weibull',
hue='subject',
fixed={'gam': .5, 'lam':.02})
g.set(xlabel='Spatial Frequency', ylabel='Contrast threshold');
Explanation: Some kind of wonky fits (unrealistic slopes), but hey, that's what you get with a simple ML fit with no pooling / shrinkage / priors.
End of explanation |
4,595 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ejemplo práctico de uso de diccionarios y for loops ( y gráficas).
Comportamiento de neuronas.
Utilizando un modelo matemático (modelo de electrodifusión).
El modelo describe la actividad eléctrica de una neurona, al variar los distintos parámetros. <br />
Yo tengo un sistema de dos ecuaciones con varios parámetros, lo que quiero es tomar un parámetro y darle distintos valores y graficar la respuesta del sistema para cada valor del parámetro.<br />
<br />
Primero, la lista de los distintos parámetros que tengo y sus valores respectivos ¿A qué suena? <br />
Podría ser un diccionario, en el que las llaves son los nombres de mis parámetros y los valores, los valores asociados a cada parámetro.<br />
<br />
Ahora, ir cambiando el valor de cada parámetro en un rango escogido ¿A qué suena?<br />
<br />
...
<br />
Suena a un For loop!!!<br />
Step1: Para construir un programa la primera parte es importar las librerias que se vayan a utlizar (en caso de que se necesite) <br />
<br />
Importamos matplotlib y pylab para las gráficas.<br />
Scipy para algunas funciones como la exponencial y los arreglos y para hacer la integral (odeint).
Step2: Las ecuaciones diferenciales que vamos a utilizar.
$C \frac{dV}{dt} = I - I_{Na} - I_{K} - I_{L} $ <br />
<br />
$\frac{dW}{dt} = \frac{W_{Infty} - W}{\tau} $ <br />
<br />
Donde
Step3: Creamos un diccionario vacío que se llama p <br />
Le agregamos los diferentes parámetros que necesitamos usando la siguiente sintaxis
Step4: Escribimos la función solve, esta función integra las dos ecuaciones diferenciales que describimos arriba ('rhs'), utilizando como condiciones iniciales los valores del diccionario w0 y V0 y el tiempo descrito en el diccionario como SampTimes
Step5: Hasta ahora lo que hicimos fue definir las funciones que queremos resolver con valores de parámetros únicos (los dados en el diccionario) y resolver las ecuaciones utilizando una herramienta de scipy para integrar.<br />
<br />
Pero lo que yo quería hacer era resolver la ecuación para distintos valores de un mismo parámetros y luego graficar el comportamiento del sistema para poder visualizar las diferencias.<br />
<br />
Para esto creo una función que varíe los parámetros (paramVar) | Python Code:
%matplotlib inline
Explanation: A practical example of using dictionaries and for loops (and plots).
Behaviour of neurons.
Using a mathematical model (an electrodiffusion model).
The model describes the electrical activity of a neuron as the different parameters are varied. <br />
I have a system of two equations with several parameters; what I want is to take one parameter, give it different values, and plot the response of the system for each value of that parameter.<br />
<br />
First, the list of the different parameters I have and their respective values -- what does that sound like? <br />
It could be a dictionary, in which the keys are the names of my parameters and the values are the values associated with each parameter.<br />
<br />
Now, changing the value of each parameter over a chosen range -- what does that sound like?<br />
<br />
...
<br />
It sounds like a for loop!!!<br />
End of explanation
import matplotlib.pyplot as plt
import pylab as py
import scipy as sc
from scipy.integrate import odeint
Explanation: To build a program, the first step is to import the libraries that will be used (if any are needed) <br />
<br />
We import matplotlib and pylab for the plots.<br />
Scipy for some functions such as the exponential and the arrays, and to do the integration (odeint).
End of explanation
def rhs(z,t,p):
v, w = z
winf = 1.0/(1.0+ sc.exp(-2*p['aw']*(v-p['V12w'])))
minf = 1.0/(1.0+ sc.exp(-2*p['am']*(v-p['V12m'])))
tauw = 1.0/(p['lambda']*sc.exp(p['aw']*(v-p['V12w']))+ p['lambda']*sc.exp(-p['aw']*(v-p['V12w'])))
INa = p['gNa']*minf**p['mp']*(1-w)*(v-p['ENa'])
IK = p['gK']*(w/p['s'])**p['wp']*(v-p['EK'])
IL = p['gL']*(v-p['EL'])
dvdt = (p['Istim'] - INa - IK - IL)/p['Cm']
dwdt = (winf - w) / tauw
return dvdt,dwdt
Explanation: The differential equations we are going to use.
$C \frac{dV}{dt} = I - I_{Na} - I_{K} - I_{L} $ <br />
<br />
$\frac{dW}{dt} = \frac{W_{Infty} - W}{\tau} $ <br />
<br />
Where: <br />
<br />
$I_{Na} = \overline{g_{Na}}m_{Infty}^3(1-W)(V-V_{Na})$<br />
$I_{K} = \overline{g_{K}}(\frac{W}{S})^4(V-V_{K})$ <br />
$I_L = \overline{g_L}(V-V_L)$ <br />
<br />
Here I show the equations that describe the system (the neuron) I want to model. <br />
The rhs (Right Hand Side) function uses the values of the dictionary (which is not defined yet) through the syntax<br />
<br />
p['key']<br />
<br />
Note that what this function returns (its return value) is:<br />
<br />
$\frac{dV}{dt}$ and $\frac{dW}{dt}$<br />
Two differential equations, NOT YET SOLVED.
End of explanation
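A quick numerical check of the steady-state activation used in rhs (illustrative, using the same aw and V12w values that are entered in the parameter dictionary below): at the half-activation voltage the sigmoid should evaluate to 0.5.
print(1.0/(1.0 + sc.exp(-2*0.055*(-46.0 - (-46.0)))))  # -> 0.5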
p = {}
p['EK'] = -72.0
p['ENa']= 55.0
p['EL']= -50.0
p['gK']= 15.0
p['gNa']= 120.0
p['gL']= 0.3
p['V12m']= -31.0
p['V12w']=-46.0
p['mp']= 3
p['wp']= 4
p['am']=0.065
p['aw']=0.055
p['lambda']= 0.08
p['Istim']= 0
p['s']= 1.0
p['Cm']=1.0
p['step'] = 0.001
p['tmin']= 0.0
p['tmax']= 100.0
p['sampTimes'] = sc.arange(p['tmin'],p['tmax'],p['step'])
p['w0']=0.005
p['v0']=-60
p['z0']= (p['v0'],p['w0'])
p['rhs']=rhs
Explanation: We create an empty dictionary called p <br />
We add the different parameters we need using the following syntax: <br />
<br />
p ['Key'] = value<br />
<br />
The name of the dictionary, the name of the key in square brackets (in my case the name of the parameter), an equals sign, and the value (the value could also be a str; it just needs to be put in quotes).<br />
End of explanation
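With both rhs and the parameter dictionary in place, a single evaluation at the initial condition confirms that the function returns the pair of derivatives (dV/dt, dW/dt) as plain numbers; an illustrative check:
print(rhs(p['z0'], 0.0, p))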
def solve (p):
orbit = sc.integrate.odeint(p['rhs'],p['z0'], p['sampTimes'],args=(p,)).transpose()
vorbit = orbit[0]
worbit = orbit[1]
xx = {'v':vorbit, 'w':worbit, 'sampTimes':p['sampTimes']}
return xx
Explanation: We write the solve function; this function integrates the two differential equations described above ('rhs'), using as initial conditions the values w0 and V0 from the dictionary and the time defined in the dictionary as SampTimes
End of explanation
def paramVars(pa, key = 'V12w', outString = r'$V12w$'):
    vals = sc.arange(-50,-20,3) # an array (like a list) with the values that will be used for the chosen parameter
    simulations = list() # an empty list to which the solution for each parameter value will be appended
    nsims = len(vals) # nsims tells us how many values we will use, i.e. how many times we will solve the equations
    for n in sc.arange(nsims): # loop over each of the values in the array built above
        p = pa.copy() # make a copy of the dictionary p, on which the parameter value will be varied
        p[key] = vals[n] # replace the parameter value with the n-th value of the array
        #print('Performing simulations with %s=%g'%(key,p[key]))
        xx = solve(p) # solve the equations with the solve function for the n-th parameter value
        simulations.append(xx) # append the solution to the (previously empty) list
    # Now we have a list called 'simulations' holding the solutions for each of the n values
    # taken by the chosen parameter.
    #print simulations
    if 1:
        cols = 3; rows = sc.ceil(nsims/sc.float32(cols))
        fig = py.figure(figsize = (14,8)) # create the figure that will hold the plots and give it a size
        py.ioff()
        ax = [] # create an empty list
        for n in sc.arange(nsims): # loop that makes one plot per parameter value n
            ax.append(fig.add_subplot(rows,cols,n+1)) # append to the list a subplot, i.e. a plot inside the figure
            ax[n].plot(simulations[n]['sampTimes'], simulations[n]['v']) # plot the voltage trace in the subplot just created
            str1 = r'%s=%g'%(outString, vals[n])
            ax[n].set_ylim(-90,70) # set the limits of the y axis
            ax[n].set_xlabel('tiempo (ms)') # x-axis label (time, ms)
            ax[n].set_ylabel('voltaje (mV)') # y-axis label (voltage, mV)
            ax[n].text(0.6*p['tmax'], 50 , str1)
        py.ion(); py.draw() # outside the loop, show what was plotted in the figure
return simulations
gKvar = paramVars(pa=p, key = 'V12w', outString = r'$V12w$')
Explanation: So far what we did was define the functions we want to solve, with single parameter values (those given in the dictionary), and solve the equations using a scipy integration tool.<br />
<br />
But what I wanted to do was solve the equation for different values of the same parameter and then plot the behaviour of the system so the differences can be visualised.<br />
<br />
For this I create a function that varies the parameters (paramVar)
End of explanation |
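The same sweep can be reused for any other key in the dictionary whose plausible range matches the values hard-coded inside paramVars; for example, the half-activation voltage of the m gate (illustrative):
gV12m = paramVars(pa=p, key='V12m', outString=r'$V12m$')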
4,596 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GA360 Segmentology
GA360 funnel analysis using Census data.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project
Step2: 3. Enter GA360 Segmentology Recipe Parameters
Wait for BigQuery->->->Census_Join to be created.
Join the to access the following assets
Copy . Leave the Data Source as is, you will change it in the next step.
Click Edit Connection, and change to BigQuery->->->Census_Join.
Or give these intructions to the client.
Modify the values below for your use case, can be done multiple times, then click play.
Step3: 4. Execute GA360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: GA360 Segmentology
GA360 funnel analysis using Census data.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
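For reference, a service-account variant of the same constructor looks like the sketch below; the project id and JSON path are placeholders, not values supplied by this recipe:
CONFIG_SERVICE = Configuration(
    project='my-project-id',            # placeholder project identifier
    service='/content/service.json',    # placeholder path to downloaded service credentials
    verbose=True
)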
FIELDS = {
'auth_write':'service', # Authorization used for writing data.
'auth_read':'service', # Authorization for reading GA360.
'view':'service', # View Id
'recipe_slug':'', # Name of Google BigQuery dataset to create.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 3. Enter GA360 Segmentology Recipe Parameters
Wait for BigQuery->->->Census_Join to be created.
Join the to access the following assets
Copy . Leave the Data Source as is, you will change it in the next step.
Click Edit Connection, and change to BigQuery->->->Census_Join.
Or give these instructions to the client.
Modify the values below for your use case, can be done multiple times, then click play.
End of explanation
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'dataset':{
'description':'Create a dataset for bigquery tables.',
'hour':[
4
],
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing function.'}},
'function':'Pearson Significance Test',
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
}
},
{
'ga':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':1,'default':'service','description':'Authorization for reading GA360.'}},
'kwargs':{
'reportRequests':[
{
'viewId':{'field':{'name':'view','kind':'string','order':2,'default':'service','description':'View Id'}},
'dateRanges':[
{
'startDate':'90daysAgo',
'endDate':'today'
}
],
'dimensions':[
{
'name':'ga:userType'
},
{
'name':'ga:userDefinedValue'
},
{
'name':'ga:latitude'
},
{
'name':'ga:longitude'
}
],
'metrics':[
{
'expression':'ga:users'
},
{
'expression':'ga:sessionsPerUser'
},
{
'expression':'ga:bounces'
},
{
'expression':'ga:timeOnPage'
},
{
'expression':'ga:pageviews'
}
]
}
],
'useResourceQuotas':False
},
'out':{
'bigquery':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'GA360_KPI'
}
}
}
},
{
'bigquery':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'from':{
'query':'WITH GA360_SUM AS ( SELECT A.Dimensions.userType AS User_Type, A.Dimensions.userDefinedValue AS User_Value, B.zip_code AS Zip, SUM(Metrics.users) AS Users, SUM(Metrics.sessionsPerUser) AS Sessions, SUM(Metrics.timeOnPage) AS Time_On_Site, SUM(Metrics.bounces) AS Bounces, SUM(Metrics.pageviews) AS Page_Views FROM `{dataset}.GA360_KPI` AS A JOIN `bigquery-public-data.geo_us_boundaries.zip_codes` AS B ON ST_WITHIN(ST_GEOGPOINT(A.Dimensions.longitude, A.Dimensions.latitude), B.zip_code_geom) GROUP BY 1,2,3 ) SELECT User_Type, User_Value, Zip, Users, SAFE_DIVIDE(Users, SUM(Users) OVER()) AS User_Percent, SAFE_DIVIDE(Sessions, SUM(Sessions) OVER()) AS Impression_Percent, SAFE_DIVIDE(Time_On_Site, SUM(Time_On_Site) OVER()) AS Time_On_Site_Percent, SAFE_DIVIDE(Bounces, SUM(Bounces) OVER()) AS Bounce_Percent, SAFE_DIVIDE(Page_Views, SUM(Page_Views) OVER()) AS Page_View_Percent FROM GA360_SUM ',
'parameters':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be created in BigQuery.'}}
},
'legacy':False
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','description':'Place where tables will be written in BigQuery.'}},
'view':'GA360_KPI_Normalized'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'normalize':{
'census_geography':'zip_codes',
'census_year':'2018',
'census_span':'5yr'
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
},
{
'census':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Authorization used for writing data.'}},
'correlate':{
'join':'Zip',
'pass':[
'User_Type',
'User_Value'
],
'sum':[
'Users'
],
'correlate':[
'User_Percent',
'Impression_Percent',
'Time_On_Site_Percent',
'Bounce_Percent',
'Page_View_Percent'
],
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'table':'GA360_KPI_Normalized',
'significance':80
},
'to':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':'','description':'Name of Google BigQuery dataset to create.'}},
'type':'view'
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
Explanation: 4. Execute GA360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
4,597 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DKRZ CMIP6 submission form for ESGF data publication
General Information (to be completed based on official CMIP6 references)
Data to be submitted for ESGF data publication must follow the rules outlined in the CMIP6 Archive Design <br /> (https
Step1: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate.
Evaluation of cells is done by selecting the cell and then press the keys "Shift" + "Enter"
<br /> Please evaluate the following cell to initialize your form based on the information provided as part of the form generation (name, email, etc.)
Step2: please provide information on the contact person for this CORDEX data submission request
Type of submission
please specify the type of this data submission
Step3: Requested general information
... to be finalized as soon as CMIP6 specification is finalized ....
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
Step4: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
Step5: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed by the 'institute_id' follwed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
Step6: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
Step7: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
Step8: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
Step9: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
Step10: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http
Step11: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http
Step12: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
Step13: Give the path where the data reside, for example
Step14: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
Step15: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that that this is not the case ?
Reply 'yes'; otherwise adapt the new file versions.
Step16: Variable list
list of variables submitted -- please remove the ones you do not provide
Step17: Check your submission before submission
Step18: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
Step19: officially submit your form
the form will be submitted to the DKRZ team to process
you also receive a confirmation email with a reference to your online form for future modifications | Python Code:
from dkrz_forms import form_widgets
form_widgets.show_status('form-submission')
Explanation: DKRZ CMIP6 submission form for ESGF data publication
General Information (to be completed based on official CMIP6 references)
Data to be submitted for ESGF data publication must follow the rules outlined in the CMIP6 Archive Design <br /> (https://...)
Thus file names have to follow the pattern:<br />
VariableName_Domain_GCMModelName_CMIP6ExperimentName_CMIP5EnsembleMember_RCMModelName_RCMVersionID_Frequency[_StartTime-EndTime].nc <br />
Example: tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
The directory structure in which these files are stored follow the pattern:<br />
activity/product/Domain/Institution/
GCMModelName/CMIP5ExperimentName/CMIP5EnsembleMember/
RCMModelName/RCMVersionID/Frequency/VariableName <br />
Example: CORDEX/output/AFR-44/MPI-CSC/MPI-M-MPI-ESM-LR/rcp26/r1i1p1/MPI-CSC-REMO2009/v1/mon/tas/tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc
Notice: If your model is not yet registered, please contact ....
This 'data submission form' is used to improve the initial information exchange between data providers and the DKRZ data managers. The form has to be filled in before the publication process can be started. In case you have questions please contact [email protected]
End of explanation
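As a plain-Python illustration of the naming pattern above (independent of the form itself), splitting the example file name on underscores recovers its fields in order:
parts = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc".split("_")
print(parts)  # variable, domain, GCM model, experiment, ensemble member, RCM model, version, frequency, time range (+ .nc)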
MY_LAST_NAME = "...." # e.gl MY_LAST_NAME = "schulz"
#-------------------------------------------------
from dkrz_forms import form_handler, form_widgets, checks
form_info = form_widgets.check_pwd(MY_LAST_NAME)
sf = form_handler.init_form(form_info)
form = sf.sub.entity_out.form_info
Explanation: Start submission procedure
The submission is based on this interactive document consisting of "cells" you can modify and then evaluate.
Evaluation of cells is done by selecting the cell and then pressing the keys "Shift" + "Enter"
<br /> Please evaluate the following cell to initialize your form based on the information provided as part of the form generation (name, email, etc.)
End of explanation
sf.submission_type = "..." # example: sf.submission_type = "initial_version"
Explanation: please provide information on the contact person for this CORDEX data submission request
Type of submission
please specify the type of this data submission:
- "initial_version" for first submission of data
- "new _version" for a re-submission of previousliy submitted data
- "retract" for the request to retract previously submitted data
End of explanation
sf.institution = "..." # example: sf.institution = "Alfred Wegener Institute"
Explanation: Requested general information
... to be finalized as soon as CMIP6 specification is finalized ....
Please provide model and institution info as well as an example of a file name
institution
The value of this field has to equal the value of the optional NetCDF attribute 'institution'
(long version) in the data files if the latter is used.
End of explanation
sf.institute_id = "..." # example: sf.institute_id = "AWI"
Explanation: institute_id
The value of this field has to equal the value of the global NetCDF attribute 'institute_id'
in the data files and must equal the 4th directory level. It is needed before the publication
process is started in order that the value can be added to the relevant CORDEX list of CV1
if not yet there. Note that 'institute_id' has to be the first part of 'model_id'
End of explanation
sf.model_id = "..." # example: sf.model_id = "AWI-HIRHAM5"
Explanation: model_id
The value of this field has to be the value of the global NetCDF attribute 'model_id'
in the data files. It is needed before the publication process is started in order that
the value can be added to the relevant CORDEX list of CV1 if not yet there.
Note that it must be composed of the 'institute_id' followed by the RCM CORDEX model name,
separated by a dash. It is part of the file name and the directory structure.
End of explanation
sf.experiment_id = "..." # example: sf.experiment_id = "evaluation"
# ["value_a","value_b"] in case of multiple experiments
sf.time_period = "..." # example: sf.time_period = "197901-201412"
# ["time_period_a","time_period_b"] in case of multiple values
Explanation: experiment_id and time_period
Experiment has to equal the value of the global NetCDF attribute 'experiment_id'
in the data files. Time_period gives the period of data for which the publication
request is submitted. If you intend to submit data from multiple experiments you may
add one line for each additional experiment or send in additional publication request sheets.
End of explanation
sf.example_file_name = "..." # example: sf.example_file_name = "tas_AFR-44_MPI-M-MPI-ESM-LR_rcp26_r1i1p1_MPI-CSC-REMO2009_v1_mon_yyyymm-yyyymm.nc"
# Please run this cell as it is to check your example file name structure
# to_do: implement submission_form_check_file function - output result (attributes + check_result)
form_handler.cordex_file_info(sf,sf.example_file_name)
Explanation: Example file name
Please provide an example file name of a file in your data collection,
this name will be used to derive the other
End of explanation
sf.grid_mapping_name = "..." # example: sf.grid_mapping_name = "rotated_latitude_longitude"
Explanation: information on the grid_mapping
the NetCDF/CF name of the data grid ('rotated_latitude_longitude', 'lambert_conformal_conic', etc.),
i.e. either that of the native model grid, or 'latitude_longitude' for the regular -XXi grids
End of explanation
sf.grid_as_specified_if_rotated_pole = "..." # example: sf.grid_as_specified_if_rotated_pole = "yes"
Explanation: Does the grid configuration exactly follow the specifications in ADD2 (Table 1)
in case the native grid is 'rotated_pole'? If not, comment on the differences; otherwise write 'yes' or 'N/A'. If the data is not delivered on the computational grid it has to be noted here as well.
End of explanation
sf.data_qc_status = "..." # example: sf.data_qc_status = "QC2-CORDEX"
sf.data_qc_comment = "..." # any comment of quality status of the files
Explanation: Please provide information on quality check performed on the data you plan to submit
Please answer 'no', 'QC1', 'QC2-all', 'QC2-CORDEX', or 'other'.
'QC1' refers to the compliancy checker that can be downloaded at http://cordex.dmi.dk.
'QC2' refers to the quality checker developed at DKRZ.
If your answer is 'other', give some information.
End of explanation
sf.terms_of_use = "..." # example: sf.terms_of_use = "unrestricted"
Explanation: Terms of use
Please give the terms of use that shall be asigned to the data.
The options are 'unrestricted' and 'non-commercial only'.
For the full text 'Terms of Use' of CORDEX data refer to
http://cordex.dmi.dk/joomla/images/CORDEX/cordex_terms_of_use.pdf
End of explanation
sf.directory_structure = "..." # example: sf.directory_structure = "compliant"
Explanation: Information on directory structure and data access path
(and other information needed for data transport and data publication)
If there is any directory structure deviation from the CORDEX standard please specify here.
Otherwise enter 'compliant'. Please note that deviations MAY imply that data can not be accepted.
End of explanation
sf.data_path = "..." # example: sf.data_path = "mistral.dkrz.de:/mnt/lustre01/work/bm0021/k204016/CORDEX/archive/"
sf.data_information = "..." # ...any info where data can be accessed and transferred to the data center ... "
Explanation: Give the path where the data reside, for example:
blizzard.dkrz.de:/scratch/b/b364034/. If not applicable write N/A and give data access information in the data_information string
End of explanation
sf.exclude_variables_list = "..." # example: sf.exclude_variables_list=["bnds", "vertices"]
Explanation: Exclude variable list
In each CORDEX file there may be only one variable which shall be published and searchable at the ESGF portal (target variable). In order to facilitate publication, all non-target variables are included in a list used by the publisher to avoid publication. A list of known non-target variables is [time, time_bnds, lon, lat, rlon ,rlat ,x ,y ,z ,height, plev, Lambert_Conformal, rotated_pole]. Please enter other variables into the left field if applicable (e.g. grid description variables), otherwise write 'N/A'.
End of explanation
sf.uniqueness_of_tracking_id = "..." # example: sf.uniqueness_of_tracking_id = "yes"
Explanation: Uniqueness of tracking_id and creation_date
In case any of your files is replacing a file already published, it must not have the same tracking_id nor
the same creation_date as the file it replaces.
Did you make sure that this is not the case?
Reply 'yes'; otherwise adapt the new file versions.
End of explanation
sf.variable_list_day = [
"clh","clivi","cll","clm","clt","clwvi",
"evspsbl","evspsblpot",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","prc","prhmax","prsn","prw","ps","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","tauu","tauv","ta200","ta500","ta850","ts",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850","wsgsmax",
"zg200","zg500","zmla"
]
sf.variable_list_mon = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200",
"ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_sem = [
"clt",
"evspsbl",
"hfls","hfss","hurs","huss","hus850",
"mrfso","mrro","mrros","mrso",
"pr","psl",
"rlds","rlus","rlut","rsds","rsdt","rsus","rsut",
"sfcWind","sfcWindmax","sic","snc","snd","snm","snw","sund",
"tas","tasmax","tasmin","ta200","ta500","ta850",
"uas","ua200","ua500","ua850",
"vas","va200","va500","va850",
"zg200","zg500"
]
sf.variable_list_fx = [
"areacella",
"mrsofc",
"orog",
"rootd",
"sftgif","sftlf"
]
Explanation: Variable list
list of variables submitted -- please remove the ones you do not provide:
End of explanation
# simple consistency check report for your submission form
res = form_handler.check_submission(sf)
sf.sub['status_flag_validity'] = res['valid_submission']
form_handler.DictTable(res)
Explanation: Check your submission before submission
End of explanation
form_handler.form_save(sf)
#evaluate this cell if you want a reference to the saved form emailed to you
# (only available if you access this form via the DKRZ form hosting service)
form_handler.email_form_info()
# evaluate this cell if you want a reference (provided by email)
# (only available if you access this form via the DKRZ hosting service)
form_handler.email_form_info(sf)
Explanation: Save your form
your form will be stored (the form name consists of your last name plus your keyword)
End of explanation
form_handler.email_form_info(sf)
form_handler.form_submission(sf)
Explanation: officially submit your form
the form will be submitted to the DKRZ team to process
you also receive a confirmation email with a reference to your online form for future modifications
End of explanation |
4,598 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Linear Regression
Step1: First let's add the new features.
Step2:
Step3: 1.1 Training the Model
1.1.1 Correlation Matrix
Step4: Should I have multiplied those features by num_answers?? I fear I might have induced multicollinearity.
1.1.2 Split to train and test sets
Step5: 1.1.3 Pipeline
Step6: 1.1.4 Correlation Heatmap on Scaled Data
Step7: 1.1.4 The Model
Step8: And on the test set
Step9: 1.2 Cross Validation
Step10: 1.3 Conclusion
Since most of the target values lie between 0.458 and 4.02, our model, with a cross-validated RMSE of 35.42, is clearly underfitting. So our next option is either to select a more powerful model or to reduce the constraints (i.e. the hyperparameters) on the model. But this linear regression model is not regularized, so either we haven't explored the features well enough, or we have to go with a more powerful model.
On this challenge, I don't wish to spend more time on the feature engineering than I already have. So, I will resort to more powerful models.
2. Decision Tree Regression
Step11: This is quite impressive! But it could also be that the model is overfitting, so let's do cross validation.
Step12: Yes, our fears are real. It overfit.
Step13: It does worse on the test data as the CV score had told us.
3. Random Forest Tree Regression
Step14: 3.1 CV
Step15: Forest tree also overfit on the training set.
We will have to regularize it.
3.2 Grid Search
Let's find the best combination of hyperparameter values for the RandomForestRegressor. The grid search will explore 18 combinations of hyperparameter values and will train each model 5 times (cv=5).
Step16: 3.3 Reevaluating Features
Step17: It appears that num_ans>=29 is not really important, while on the other hand, topics_followers is the most important feature of all. Let's see if dropping it will improve things.
Step18: Sure, it improved by 1 but not really a big difference. How about on the test set? | Python Code:
import pandas as pd
import json
json_data = open('/home/yohna/Documents/quora_challenges/views/sample/input00.in') # Edit this to where you have put the input00.in file
data = []
for line in json_data:
data.append(json.loads(line))
data.remove(9000)
data.remove(1000)
df = pd.DataFrame(data)
df['anonymous'] = df['anonymous'].map({False: 0, True:1}).astype(int)
cleaned_df=df[:9000]
# to make reading the question_text cells easier, remove the maximum column width
pd.set_option('display.max_colwidth', -1)
# Don't care about warnings in this notebook
import warnings
warnings.filterwarnings("ignore")
Explanation: 1. Linear Regression
End of explanation
data_df = cleaned_df[:9000]
#---------------------------------------------------------------
# The product of num_answers and context_present
# first define context_present
data_df['context_present'] = data_df['context_topic'].apply(lambda x: 0 if x==None else 1)
# then multiply and make it into a column
data_df['context_xnum_ans'] = data_df['context_present'] * data_df['num_answers']
#---------------------------------------------------------------
#---------------------------------------------------------------
# Questions with at least 29 Answers
data_df['num_ans>= 29'] = data_df['num_answers'].apply(lambda x: 1 if x>=29 else 0)
#---------------------------------------------------------------
#---------------------------------------------------------------
# The product of a boolean column of questions ...
# ... containing "top3" (see the preceeding notebook) words | corr_coef = 0.348
data_df['qcontains_comb_1'] = data_df['question_text'].apply(lambda x: 1 if any(pd.Series(x).str.contains('university|list|With|physics?|shown|have|single|finding|around|mind|Indian|come|interesting|most|inspired|guy|movies|value|instead|most?|movies?|indian|girlfriend?|advice|across?|physical|cross|not|hate|Apple|actually|found|modern|technology|there|biggest|India?|each|but|within|physics|tell|intelligence|girlfriend|changes|sound|Glass?|iOS|mind-blowing|universe?|India|cases|right|Why|boyfriend|true|efforts|facts|girl|some')) else 0)
data_df['qcontains_comb_1_xnum_ans'] = data_df['qcontains_comb_1'] * data_df['num_answers']
#---------------------------------------------------------------
#---------------------------------------------------------------
# The product of a boolean column of the number of characters of questions ...
# ... and the number of answers. | corr_coef = 0.392
data_df['len_chars'] = data_df.question_text.apply(lambda x: len(x))
data_df['qnumchars_lessThan_182_xnum_ans'] = data_df['num_answers'] * data_df['len_chars'].apply(lambda x: 1 if x <= 182 else 0)
#---------------------------------------------------------------
#---------------------------------------------------------------
# The sum of number of followers of the topics the question is tagged for | corr_coef = 0.121
def funn(x): #where x will be a row when running `apply`
return sum(x[i]['followers'] for i in range(len(x)))
data_df['topics_followers'] = data_df['topics'].apply(funn)
data_df.drop(['topics'], axis =1, inplace=True)
#---------------------------------------------------------------
data_df.drop(['context_present', 'qcontains_comb_1'], axis=1, inplace=True) #To avoid multilinearity
Explanation: First let's add the new features.
End of explanation
data_df.head()
# numerical values only and 'question_key'
df_n = data_df[['__ans__', 'question_key', 'anonymous', 'num_answers', 'context_xnum_ans', 'num_ans>= 29', 'qcontains_comb_1_xnum_ans', 'len_chars', 'qnumchars_lessThan_182_xnum_ans', 'topics_followers']]
#set the index to question_key
df_n.set_index(df_n['question_key'],
inplace=True
)
df_n.drop(['question_key'], axis=1, inplace=True)
# Since the new index has a name ("question_key"), pandas will add an extra row with
# the index name in the first entry with 0 in the rest. Fix this by setting the index
# name to None
df_n.index.name = None
df_n.head()
Explanation:
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(context="paper")
corrmat = df_n.corr()
f, ax = plt.subplots(figsize=(14, 10))
f.text(0.45, 0.93, "Correlation coefficients", ha='center')
sns.heatmap(corrmat, square=True, linewidths=0.01, cmap='coolwarm', annot=True)
Explanation: 1.1 Training the Model
1.1.1 Correlation Matrix
End of explanation
from sklearn.model_selection import train_test_split
y = df_n.__ans__ # the target column
x = df_n.drop('__ans__', axis=1)
x_train, x_test, y_train, y_test = train_test_split(x, y,
test_size = 0.2,
random_state = 123)
x_train.head()
Explanation: Should I have multiplied those features by num_answers?? I fear I might have induced multicollinearity.
1.1.2 Split to train and test sets
End of explanation
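One way to probe the multicollinearity worry raised above is to look at variance inflation factors, which can be read off the (pseudo-)inverse of the feature correlation matrix; a numpy-only sketch using the training features defined above:
import numpy as np
corr = np.corrcoef(x_train.values.astype(float), rowvar=False)
vif = np.diag(np.linalg.pinv(corr))  # VIF_i is the i-th diagonal entry of the inverse correlation matrix
print(dict(zip(x_train.columns, np.round(vif, 1))))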
# RobustScaler
from sklearn.preprocessing import RobustScaler
scaler = RobustScaler(quantile_range=(5,95)).fit(x_train)
X_train = scaler.transform(x_train)
X_test = scaler.transform(x_test) #it's okay not to do it now
Explanation: 1.1.3 Pipeline
End of explanation
xt_columns = list(df_n.columns)
xt_columns.remove('__ans__') # columns of X_train
xt_columns
X_train_df = pd.DataFrame(X_train, columns=xt_columns, index=x_train.index) # X_train and y_train are not dataframes initially
y_train_df = pd.DataFrame(y_train, columns=['__ans__'])
df_for_corr = pd.concat([y_train_df, X_train_df], axis=1)
df_for_corr.head()
sns.set(context="paper")
corrmat_1 = df_for_corr.corr()
f, ax = plt.subplots(figsize=(14, 10))
f.text(0.45, 0.93, "Correlation coefficients", ha='center')
sns.heatmap(corrmat_1, square=True, linewidths=0.01, cmap='coolwarm', annot=True)
Explanation: 1.1.4 Correlation Heatmap on Scaled Data
End of explanation
from sklearn.linear_model import LinearRegression
import numpy as np
from sklearn.metrics import mean_squared_error
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
y_pred_train_lin = lin_reg.predict(X_train)
lin_rmse_train = np.sqrt(mean_squared_error(y_pred_train_lin, y_train))
lin_rmse_train
Explanation: 1.1.4 The Model: Linear Regressor
End of explanation
y_pred_test_lin = lin_reg.predict(X_test)
lin_rmse_test = np.sqrt(mean_squared_error(y_pred_test_lin, y_test))
lin_rmse_test
df_n['__ans__'].describe()
y_train.values
from plotnine import *
(ggplot(x_test, aes(x=y_test.values, y=y_pred_test_lin)) + theme_bw() + geom_point() + geom_smooth())
min(abs((y_train.values - y_pred_train_lin)))
Explanation: And on the test set:
End of explanation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(lin_reg, X_train, y_train,
scoring = 'neg_mean_squared_error', cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores.mean() #mean
rmse_scores.std()
df_n.__ans__.describe()
Explanation: 1.2 Cross Validation
End of explanation
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(X_train, y_train)
y_train_tree_pred = tree_reg.predict(X_train)
tree_mse = mean_squared_error(y_train, y_train_tree_pred)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
Explanation: 1.3 Conclusion
Since most of the target values lie between 0.458 and 4.02, our model, with a cross-validated RMSE of 35.42, is clearly underfitting. So our next option is either to select a more powerful model or to reduce the constraints (i.e. the hyperparameters) on the model. But this linear regression model is not regularized, so either we haven't explored the features well enough, or we have to go with a more powerful model.
On this challenge, I don't wish to spend more time on the feature engineering than I already have. So, I will resort to more powerful models.
2. Decision Tree Regression
End of explanation
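For completeness, a regularized linear baseline can be cross-validated in exactly the same way; as the text notes, regularization only adds constraints, so it is not expected to fix the underfitting seen here (a sketch):
from sklearn.linear_model import Ridge
ridge_scores = cross_val_score(Ridge(alpha=1.0), X_train, y_train,
                               scoring='neg_mean_squared_error', cv=10)
print(np.sqrt(-ridge_scores).mean())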
scores_tree = cross_val_score(tree_reg, X_train, y_train,
scoring='neg_mean_squared_error', cv=10)
rmse_scores_tree = np.sqrt(-scores_tree)
print('mean:', rmse_scores_tree.mean(), 'and', 'std:', rmse_scores_tree.std())
rmse_scores_tree
Explanation: This is quite impressive! But it could also be that the model is overfitting, so let's do cross validation.
End of explanation
y_test_tree_pred = tree_reg.predict(X_test)
tree_mse_test = mean_squared_error(y_test, y_test_tree_pred)
tree_rmse_test = np.sqrt(tree_mse_test)
tree_rmse_test
Explanation: Yes, our fears are real. It overfit.
End of explanation
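A common first response to the overfitting seen above is to constrain the tree before abandoning it; a minimal sketch re-using the same cross-validation setup:
shallow_tree = DecisionTreeRegressor(max_depth=5)
shallow_scores = cross_val_score(shallow_tree, X_train, y_train,
                                 scoring='neg_mean_squared_error', cv=10)
print(np.sqrt(-shallow_scores).mean())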
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(X_train, y_train)
y_train_forest_pred = forest_reg.predict(X_train)
forest_mse = mean_squared_error(y_train, y_train_forest_pred)
forest_rmse = np.sqrt(forest_mse)
forest_rmse
Explanation: It does worse on the test data as the CV score had told us.
3. Random Forest Tree Regression
End of explanation
scores_forest = cross_val_score(forest_reg, X_train, y_train,
scoring='neg_mean_squared_error', cv=10)
rmse_scores_forest = np.sqrt(-scores_forest)
print('mean:', rmse_scores_forest.mean(), 'and', 'std:', rmse_scores_forest.std())
Explanation: 3.1 CV
End of explanation
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2, 4, 6, 8]},
{'bootstrap': [False], 'n_estimators': [3, 10], 'max_features': [2, 3, 4]}
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5,
scoring ='neg_mean_squared_error')
grid_search.fit(X_train, y_train)
grid_search.best_params_
Explanation: Forest tree also overfit on the training set.
We will have to regularize it.
3.2 Grid Search
Let's find the best combination of hyperparameter values for the RandomForestRegressor. The grid search will explore 18 combinations of hyperparameter values and will train each model 5 times (cv=5).
End of explanation
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres['mean_test_score'], cvres['params']):
print(np.sqrt(-mean_score), params)
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
sorted(zip(feature_importances, X_train_df.columns), reverse=True)
Explanation: 3.3 Reevaluating Features
End of explanation
X_train_df_last = X_train_df.drop(['num_ans>= 29'], axis=1)
X_train_last = X_train_df_last.values
forest_reg.fit(X_train_last, y_train)
y_train_forest_pred_last = forest_reg.predict(X_train_last)
forest_mse_last = mean_squared_error(y_train, y_train_forest_pred_last)
forest_rmse_last = np.sqrt(forest_mse_last)
forest_rmse_last
scores_forest_last = cross_val_score(forest_reg, X_train_last, y_train,
scoring='neg_mean_squared_error', cv=10)
rmse_scores_forest_last = np.sqrt(-scores_forest_last)
print('mean:', rmse_scores_forest_last.mean(), 'and', 'std:', rmse_scores_forest_last.std())
Explanation: It appears that num_ans>=29 is not really important, while on the other hand, topics_followers is the most important feature of all. Let's see if dropping it will improve things.
End of explanation
X_test_df_last = pd.DataFrame(X_test) #num>=29 is the fourth column
X_test_df_last.drop([3], axis=1, inplace=True)
X_test_last = X_test_df_last.values
forest_reg.fit(X_test_last, y_test)
y_test_forest_pred_last = forest_reg.predict(X_test_last)
forest_mse_last = mean_squared_error(y_test, y_test_forest_pred_last)
forest_rmse_last = np.sqrt(forest_mse_last)
forest_rmse_last
scores_forest_last = cross_val_score(forest_reg, X_test_last, y_test,
scoring='neg_mean_squared_error', cv=10)
rmse_scores_forest_last = np.sqrt(-scores_forest_last)
print('mean:', rmse_scores_forest_last.mean(), 'and', 'std:', rmse_scores_forest_last.std())
Explanation: Sure, it improved by 1 but not really a big difference. How about on the test set?
End of explanation |
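Note that the cell above refits the forest on the test split before scoring it; a stricter check keeps the model fitted on the training data and only predicts on the held-out features (sketch):
forest_reg.fit(X_train_last, y_train)
test_pred = forest_reg.predict(X_test_last)
print(np.sqrt(mean_squared_error(y_test, test_pred)))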
4,599 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label
Step1: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
Step2: Using Sector is equivalent to morningstar.asset_classification.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
Step3: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like
Step4: Let's put each of our classifiers into a pipeline and run it to see what they look like. | Python Code:
# Pipeline imports used in the cells below (standard in the Quantopian research environment)
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data import morningstar
from quantopian.pipeline.factors import AverageDollarVolume
from quantopian.research import run_pipeline
# Since the underlying data of morningstar.share_class_reference.exchange_id
# is of type string, .latest returns a Classifier
exchange = morningstar.share_class_reference.exchange_id.latest
Explanation: Classifiers
A classifier is a function from an asset and a moment in time to a categorical output such as a string or integer label:
F(asset, timestamp) -> category
An example of a classifier producing a string output is the exchange ID of a security. To create this classifier, we'll have to import morningstar.share_class_reference.exchange_id and use the latest attribute to instantiate our classifier:
End of explanation
from quantopian.pipeline.classifiers.morningstar import Sector
morningstar_sector = Sector()
Explanation: Previously, we saw that the latest attribute produced an instance of a Factor. In this case, since the underlying data is of type string, latest produces a Classifier.
Similarly, a computation producing the latest Morningstar sector code of a security is a Classifier. In this case, the underlying type is an int, but the integer doesn't represent a numerical value (it's a category) so it produces a classifier. To get the latest sector code, we can use the built-in Sector classifier.
End of explanation
nyse_filter = exchange.eq('NYS')
Explanation: Using Sector is equivalent to morningstar.asset_classification.morningstar_sector_code.latest.
Building Filters from Classifiers
Classifiers can also be used to produce filters with methods like isnull, eq, and startswith. The full list of Classifier methods producing Filters can be found here.
As an example, if we wanted a filter to select for securities trading on the New York Stock Exchange, we can use the eq method of our exchange classifier.
End of explanation
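The other Classifier-to-Filter methods named above work the same way; a short sketch based on the methods listed in the text (argument forms assumed):
has_exchange = ~exchange.isnull()           # True where an exchange_id is known
nyse_prefixed = exchange.startswith('NYS')  # True where exchange_id begins with 'NYS'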
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
Explanation: This filter will return True for securities having 'NYS' as their most recent exchange_id.
Quantiles
Classifiers can also be produced from various Factor methods. The most general of these is the quantiles method which accepts a bin count as an argument. The quantiles method assigns a label from 0 to (bins - 1) to every non-NaN data point in the factor output and returns a Classifier with these labels. NaNs are labeled with -1. Aliases are available for quartiles (quantiles(4)), quintiles (quantiles(5)), and deciles (quantiles(10)). As an example, this is what a filter for the top decile of a factor might look like:
End of explanation
def make_pipeline():
exchange = morningstar.share_class_reference.exchange_id.latest
nyse_filter = exchange.eq('NYS')
morningstar_sector = Sector()
dollar_volume_decile = AverageDollarVolume(window_length=10).deciles()
top_decile = (dollar_volume_decile.eq(9))
return Pipeline(
columns={
'exchange': exchange,
'sector_code': morningstar_sector,
'dollar_volume_decile': dollar_volume_decile
},
screen=(nyse_filter & top_decile)
)
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print 'Number of securities that passed the filter: %d' % len(result)
result.head(5)
Explanation: Let's put each of our classifiers into a pipeline and run it to see what they look like.
End of explanation |