Unnamed: 0 (int64) | text_prompt (string) | code_prompt (string)
---|---|---|
2,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
STELLAB and OMEGA (Stellar yields + Faint Supernovae)
Documented by Jacob Brazier.
Note: This notebook requires an experimental yields table, which is not part of NuPyCEE.
See Côté et al. (2016) and the OMEGA/STELLAB/SYGMA notebooks for further information.
Faint supernovae are low-luminosity core-collapse supernovae characterised by their relatively small production of Nickel-56 (Nomoto et al. 2013). Nuclei that originate from the silicon-burning regions of a star undergo fallback if the explosion energy is too low to overcome gravitational attraction. The mass of the material that undergoes fallback is called the 'mass cut'. Anything above this mass cut is pushed outwards by the explosion, while the material underneath undergoes neutronisation/destruction by the formation of a black hole. The material that is ejected tends to include oxygen and carbon nuclei. Mixing may also occur at the mass cut, so that near-iron nuclei may still be ejected. The mixing and fallback model is described in more detail by Umeda & Nomoto (2002).
This project uses OMEGA (One-zone Model for the Evolution of Galaxies) to simulate the chemical evolution of galaxies and to plot the expected abundance ratios of titanium, oxygen, iron, hydrogen and carbon. The main plots show how the abundance of these elements (with respect to hydrogen) changes with both time and stellar metallicity. Fe/H is assumed to be a measure of metallicity, as higher Fe/H values indicate a higher proportion of metal relative to hydrogen. Fe/H is also a rough indicator of the age of stellar populations, since metallicity tends to increase with time and the age of stars can be difficult to determine. We plot the predictions of OMEGA with and without the yields of faint supernovae. Therefore, we can examine the impact of faint supernovae on the abundance of certain elements. We have made the approximation that the yields of 20 MSun pre-supernova stars are representative of faint supernovae in the 15-40 MSun mass range.
STELLAB (Stellar abundances) plots real observational data for different galaxies, and is plotted with OMEGA for comparison. SYGMA (Stellar Yields for Galactic Modelling Applications) calculates the chemical composition of the ejecta of Simple Stellar Populations (SSP) as a function of metallicity and time (Côté et al. 2016b). Chem_evol couples the stellar yields with the distribution of initial stellar masses (the Initial Mass Function [IMF]).
Step1: The Initial Mass Function
The Initial Mass Function describes the initial mass distribution of stars in a stellar population. Applying limits to this function allows us to control the mass range of the data it couples with. In this case it limits the range of the extra yields data. We have chosen the mass range 15-40 MSun, the mass range of the 1997D and 1999br faint supernovae (Nomoto et al. 2013). Every star within this mass range is assumed to release the same ejecta as the 20 MSun model. In reality, stars with different masses will create ejecta of different masses and composition. The 20 MSun star model is, however, an average representative of the 15-40 MSun mass range.
Z (iniZ) refers to the metallicity of the stellar population (one starburst).
Step2: The model assumes that all the faint SNe occur between 7.581 x 10^6 and 9.588 x 10^6 years. These are the respective lifetimes of the 20 MSun and 25 MSun star models at Z = 0.02. The number of faint SNe per unit solar mass is constant throughout this time period (~0.00368 / MSun). The t and R arrays define the extra-source time-distribution rate for a
simple stellar population. DTD stands for delay-time distribution. The extra pre-supernova yields are contained within 'yield_tables/extra_source_20Msun_O.txt'. Yield tables for pre-SN stars of different mass ranges will be needed in future simulations for a more precise analysis of faint SN ejecta.
Step3: Milky Way Commands
Omega incorporates the star formation history and data of different galaxies. The Milky Way is our chosen galaxy, although this can be changed using the galaxy parameter. The initial Milky Way mass is assumed to be 1e11 solar masses.
The transition mass refers to the transition between AGB (Asymptotic Giant Branch) stars and massive stars. A transition mass of 8 Msun indicates that all stars below 8 Msun in the IMF will have AGB star yields attached to them, while the stars above 8 Msun will have massive star yields attached to them.
Step4: Plotting the stellar abundance ratios
We can now plot the stellar abundance ratios. The element(s) of interest can be changed by altering the xaxis/yaxis identities below. In this case, O/Fe is plotted against Fe/H.
The abundance ratios are logarithmic as shown below.
$$[A/B]=\log(n_A/n_B)-\log(n_A/n_B)_{\odot}$$
where A and B are the relevant elements. This equation links $n_A/n_B$ (the abundance ratio for a star) and $(n_A/n_B)_{\odot}$ (the solar normalisation). This ensures the resulting ratios are expressed relative to the Sun's own abundance ratios. If [A/B] = 0, the star has the same A/B abundance ratio as the Sun; more generally, a star has an abundance ratio $10^{[A/B]}$ times that of the Sun. The solar normalisation can be extracted from a reference paper, using the norm parameter, which in this case is 'Grevesse_Noels_1993'.
Step5: The observational data included by STELLAB can be changed by altering the list of references in obs. It is recommended that this list is kept relatively compact. Adding too much data may dominate the graph, making it difficult to observe important relationships. We have removed APOGEE, as it clouds particular areas of the graph, and does not currently feature notable error data. APOGEE can of course be returned if necessary. A list of references can be obtained using the command below.
Step6: The abundance ratios that are plotted can be changed by swapping the element symbols in the x/y axis brackets. A list of included data is incorporated in obs, which STELLAB will plot. References can be removed from this list to reduce data dispersion and clouding. Error bars are included for detailed and accurate analysis of the ratios. These can be turned on by including 'show_err=True, show_mean_err=True' within the plot_spectro function above. However, the inclusion of errors may lead to OMEGA being plotted underneath STELLAB, so it is important to use an overplotting procedure as shown above. OMEGA plots in white underneath the coloured lines in this instance. This makes it easier to differentiate between OMEGA plots and STELLAB data. The colours are changed by changing the 'color' parameter. See Dale et al. for colour examples.
Not all STELLAB data currently contain error bars, as many of these papers do not include them in their tables and instead state an average error. These error bars can be plotted separately on the graph. The error markers correspond to the STELLAB data that match their shape and colour. | Python Code:
#import modules
#sygma and omega share the same chem_evol class
from NuPyCEE import chem_evol
from NuPyCEE import sygma
from NuPyCEE import omega
from NuPyCEE import stellab
#import Python plotting packages
import matplotlib
import matplotlib.pyplot as plt
#Define Stellab
stellab = stellab.stellab()
%matplotlib nbagg
Explanation: STELLAB and OMEGA (Stellar yields + Faint Supernovae)
Documented by Jacob Brazier.
Note: This notebook requires an experimental yields table, which is not part of NuPyCEE.
See Côté et al. (2016) and the OMEGA/STELLAB/SYGMA notebooks for further information.
Faint supernovae are low-luminosity core-collapse supernovae characterised by their relatively small production of Nickel-56 (Nomoto et al. 2013). Nuclei that originate from the silicon-burning regions of a star undergo fallback if the explosion energy is too low to overcome gravitational attraction. The mass of the material that undergoes fallback is called the 'mass cut'. Anything above this mass cut is pushed outwards by the explosion, while the material underneath undergoes neutronisation/destruction by the formation of a black hole. The material that is ejected tends to include oxygen and carbon nuclei. Mixing may also occur at the mass cut, so that near-iron nuclei may still be ejected. The mixing and fallback model is described in more detail by Umeda & Nomoto (2002).
This project uses OMEGA (One-zone Model for the Evolution of Galaxies) to simulate the chemical evolution of galaxies and to plot the expected abundance ratios of titanium, oxygen, iron, hydrogen and carbon. The main plots show how the abundance of these elements (with respect to hydrogen) changes with both time and stellar metallicity. Fe/H is assumed to be a measure of metallicity, as higher Fe/H values indicate a higher proportion of metal relative to hydrogen. Fe/H is also a rough indicator of the age of stellar populations, since metallicity tends to increase with time and the age of stars can be difficult to determine. We plot the predictions of OMEGA with and without the yields of faint supernovae. Therefore, we can examine the impact of faint supernovae on the abundance of certain elements. We have made the approximation that the yields of 20 MSun pre-supernova stars are representative of faint supernovae in the 15-40 MSun mass range.
STELLAB (Stellar abundances) plots real observational data for different galaxies, and is plotted with OMEGA for comparison. SYGMA (Stellar Yields for Galactic Modelling Applications) calculates the chemical composition of the ejecta of Simple Stellar Populations (SSP) as a function of metallicity and time (Côté et al. 2016b). Chem_evol couples the stellar yields with the distribution of initial stellar masses (the Initial Mass Function [IMF]).
End of explanation
# Run a SYGMA instance to access the Initial Mass Function (IMF)
s_imf = sygma.sygma(iniZ=0.02, mgal=1.0)
# Define the mass range in which the extra yields will be applied
m_low = 15.0
m_up = 40.0
# Calculate the number of stars in that mass range per units of Msun formed
A = IMF = 1.0 / s_imf._imf(s_imf.imf_bdys[0], s_imf.imf_bdys[1], 2)
nb_extra_star_per_m = A * s_imf._imf(m_low, m_up, 1)
print (nb_extra_star_per_m)
Explanation: The Initial Mass Function
The Initial Mass Function describes the initial mass distribution of stars in a stellar population. Applying limits to this function allows us to control the mass range of the data it couples with. In this case it limits the range of the extra yields data. We have chosen the mass range 15-40 MSun, the mass range of the 1997D and 1999br faint supernovae (Nomoto et al. 2013). Every star within this mass range is assumed to release the same ejecta as the 20 MSun model. In reality, stars with different masses will create ejecta of different masses and composition. The 20 MSun star model is, however, an average representative of the 15-40 MSun mass range.
Z (iniZ) refers to the metallicity of the stellar population (one starburst).
End of explanation
# Create the DTD and yields information for the extra source
# ==========================================================
# Event rate [yr^-1] as a function of time [yr].
# This assumes that all extra yields will be ejected
# between 7.581E+06 and 9.588E+06 years (the lifetimes
# of the 20 and 25 Msun models at Z = 0.02).
t = [7.580E+06, 7.581E+06, 9.588E+06, 9.589E+06]
R = [0.0, 1.0, 1.0, 0.0]
# Build the input DTD array
dtd = []
for i in range(0,len(t)):
dtd.append([t[i], R[i]])
# Add the DTD array in the delayed_extra_dtd array.
delayed_extra_dtd = [[dtd]]
# Define the total number of event per unit of Msun formed.
delayed_extra_dtd_norm = [[nb_extra_star_per_m]]
# Define the total mass ejected by an extra source
# Here, it would be best to find a correction factor
# to account for the different total mass ejected by
# stars having different masses. For now, each star
# in the mass range eject the same ejecta as the 20Msun
# model.
delayed_extra_yields_norm = [[1.0]]
# Define the yields path for the extra source
extra_yields = ['yield_tables/extra_source_20Msun_O.txt']
delayed_extra_yields = extra_yields
Explanation: The model assumes that all the faint SNe occur between 7.581 x 10^6 and 9.588 x 10^6 years. These are the respective lifetimes of the 20 MSun and 25 MSun star models at Z = 0.02. The number of faint SNe per unit solar mass is constant throughout this time period (~0.00368 / MSun). The t and R arrays define the extra-source time-distribution rate for a
simple stellar population. DTD stands for delay-time distribution. The extra pre-supernova yields are contained within 'yield_tables/extra_source_20Msun_O.txt'. Yield tables for pre-SN stars of different mass ranges will be needed in future simulations for a more precise analysis of faint SN ejecta. (A quick visual check of this box-car distribution is sketched in the optional cell below.)
End of explanation
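# --- Optional sanity check (added for illustration; not part of the original notebook) ---
# A quick plot of the box-car delay-time distribution defined above, using the 't' and
# 'R' arrays from the previous cell. It simply confirms that the extra-source rate is
# constant between the 20 Msun and 25 Msun lifetimes and zero elsewhere.
import matplotlib.pyplot as plt
plt.figure()
plt.plot(t, R, marker='o')
plt.xlabel('Time [yr]')
plt.ylabel('Relative event rate R')
plt.title('Extra-source (faint SN) DTD shape')
plt.show()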
# Use a different iPython notebook cell.
# OMEGA simulation with pre-SN yields and transition mass = 8 Msun
o = omega.omega(galaxy='milky_way', mgal=1e11, delayed_extra_dtd=delayed_extra_dtd, \
delayed_extra_dtd_norm=delayed_extra_dtd_norm, delayed_extra_yields=delayed_extra_yields, \
delayed_extra_yields_norm=delayed_extra_yields_norm, transitionmass=8.0)
# OMEGA simulation with pre-SN yields and transition mass = 10 Msun
o_trans = omega.omega(galaxy='milky_way', mgal=1e11, delayed_extra_dtd=delayed_extra_dtd, \
delayed_extra_dtd_norm=delayed_extra_dtd_norm, delayed_extra_yields=delayed_extra_yields, \
delayed_extra_yields_norm=delayed_extra_yields_norm, transitionmass=10.0)
# OMEGA simulation without pre-SN yields and transition mass = 8 Msun
o_no = omega.omega(galaxy='milky_way', mgal=1e11, transitionmass=8.0)
# OMEGA simulation without pre-SN yields and transition mass = 10 Msun
o_no_trans = omega.omega(galaxy='milky_way', mgal=1e11, transitionmass=10.0)
Explanation: Milky Way Commands
Omega incorporates the star formation history and data of different galaxies. The Milky Way is our chosen galaxy, although this can be changed using the galaxy parameter. The initial Milky Way mass is assumed to be 1e11 solar masses.
The transition mass refers to the transition between AGB (Asymptotic Giant Branch) stars and massive stars. A transition mass of 8 Msun indicates that all stars below 8 Msun in the IMF will have AGB star yields attached to them, while the stars above 8 Msun will have massive star yields attached to them.
End of explanation
#Obtain a list of normalisation reference papers
stellab.list_solar_norm()
Explanation: Plotting the stellar abundance ratios
We can now plot the stellar abundance ratios. The element(s) of interest can be changed by altering the xaxis/yaxis identities below. In this case, O/Fe is plotted against Fe/H.
The abundance ratios are logarithmic as shown below.
$$[A/B]=\log(n_A/n_B)-\log(n_A/n_B)_{\odot}$$
where A and B are the relevant elements. This equation links $n_A/n_B$ (the abundance ratio for a star) and $(n_A/n_B)_{\odot}$ (the solar normalisation). This ensures the resulting ratios are expressed relative to the Sun's own abundance ratios. If [A/B] = 0, the star has the same A/B abundance ratio as the Sun; more generally, a star has an abundance ratio $10^{[A/B]}$ times that of the Sun. The solar normalisation can be extracted from a reference paper, using the norm parameter, which in this case is 'Grevesse_Noels_1993'. (A small numerical illustration of this conversion is sketched in the cell below.)
End of explanation
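# --- Illustrative example (added for clarity; not from the original notebook) ---
# How a spectroscopic ratio such as [O/Fe] is formed from number densities. The numbers
# below are made up purely to show the arithmetic; they are not taken from any survey,
# yield table, or the Grevesse & Noels (1993) normalisation.
import numpy as np
n_O_over_n_Fe_star = 1.2e3   # hypothetical number-density ratio measured in a star
n_O_over_n_Fe_sun = 1.0e3    # hypothetical solar normalisation
O_Fe = np.log10(n_O_over_n_Fe_star) - np.log10(n_O_over_n_Fe_sun)
print (O_Fe)        # ~0.08 dex
print (10**O_Fe)    # the star's O/Fe ratio is 10**[O/Fe] ~ 1.2 times the solar one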
stellab.list_ref_papers()
Explanation: The observational data included by STELLAB can be changed by altering the list of references in obs. It is recommended that this list is kept relatively compact. Adding too much data may dominate the graph, making it difficult to observe important relationships. We have removed APOGEE, as it clouds particular areas of the graph, and does not currently feature notable error data. APOGEE can of course be returned if necessary. A list of references can be obtained using the command below.
End of explanation
#Define your X and Y axis
xaxis = '[Fe/H]'
yaxis = '[O/Fe]'
# Plot observational data
obs = [
'stellab_data/milky_way_data/Venn_et_al_2004_stellab',
'stellab_data/milky_way_data/Akerman_et_al_2004_stellab',
'stellab_data/milky_way_data/Andrievsky_et_al_2007_stellab',
'stellab_data/milky_way_data/Andrievsky_et_al_2008_stellab',
'stellab_data/milky_way_data/Andrievsky_et_al_2010_stellab',
'stellab_data/milky_way_data/Bensby_et_al_2005_stellab',
'stellab_data/milky_way_data/Bihain_et_al_2004_stellab',
'stellab_data/milky_way_data/Bonifacio_et_al_2009_stellab',
'stellab_data/milky_way_data/Caffau_et_al_2005_stellab',
'stellab_data/milky_way_data/Gratton_et_al_2003_stellab',
'stellab_data/milky_way_data/Israelian_et_al_2004_stellab',
'stellab_data/milky_way_data/Nissen_et_al_2007_stellab',
'stellab_data/milky_way_data/Reddy_et_al_2003_stellab',
'stellab_data/milky_way_data/Spite_et_al_2005_stellab',
'stellab_data/milky_way_data/Battistini_Bensby_2016_stellab',
'stellab_data/milky_way_data/Nissen_et_al_2014_stellab',
'stellab_data/milky_way_data/Ramirez_et_al_2013_stellab',
'stellab_data/milky_way_data/Bensby_et_al_2014_stellab',
'stellab_data/milky_way_data/Battistini_Bensby_2015_stellab',
'stellab_data/milky_way_data/Adibekyan_et_al_2012_stellab',
'stellab_data/milky_way_data/Aoki_Honda_2008_stellab',
'stellab_data/milky_way_data/Hansen_et_al_2012_pecu_excluded_stellab',
'stellab_data/milky_way_data/Ishigaki_et_al_2012_2013_stellab',
'stellab_data/milky_way_data/Roederer_et_al_2009_stellab',
'stellab_data/milky_way_data/Roederer_et_al_2014_pecu_excluded_stellab']
stellab.plot_spectro(xaxis=xaxis, yaxis=yaxis, galaxy= 'milky way', norm='Grevesse_Noels_1993', show_err=True, show_mean_err=True, obs=obs, ms=4)
# Extract numerical predictions
xy = o.plot_spectro(xaxis=xaxis, yaxis=yaxis, return_x_y=True)
xy_trans = o_trans.plot_spectro(xaxis=xaxis, yaxis=yaxis, return_x_y=True)
xy_no = o_no.plot_spectro(xaxis=xaxis, yaxis=yaxis, return_x_y=True)
xy_no_trans = o_no_trans.plot_spectro(xaxis=xaxis, yaxis=yaxis, return_x_y=True)
# Plot white lines - these make it easier to differentiate OMEGA plots from STELLAB data
plt.plot(xy[0], xy[1], color='w', linewidth=3, zorder=999)
plt.plot(xy_no[0], xy_no[1], color='w', linewidth=3,zorder=999)
plt.plot(xy_trans[0], xy_trans[1], color='w', linewidth=3, alpha=0.9, zorder= 999)
plt.plot(xy_no_trans[0], xy_no_trans[1], color='w', linewidth=3, alpha=0.9, zorder=999)
# Plot coloured lines
plt.plot(xy[0], xy[1], color='g', linewidth=1.5, label='PreSN for M=[15-40], M_trans=8',zorder=1000)
plt.plot(xy_trans[0], xy_trans[1], color='r', linewidth=1.5, linestyle='--', label='PreSN for M=[15-40], M_trans=10', zorder=1000)
plt.plot(xy_no[0], xy_no[1], color='midnightblue', linewidth=1.5, label='Original, M_trans=8', zorder=1000)
plt.plot(xy_no_trans[0], xy_no_trans[1], color='yellow', linewidth=1.5, linestyle='--', label='Original, M_trans=10', zorder=1000)
# Update the legend
plt.legend(loc='center left', bbox_to_anchor=(1.01, 0.5), markerscale=1, fontsize=10)
# x and y positions of the added error bars
xlow = -4.
xhigh = 0.5
ylow = -1.4
yhigh = 1.8
plt.xlim(xlow,xhigh)
plt.ylim(ylow,yhigh)
#The size,shape and colour of the error bars and their markers
yerror=[0.16,0.05,0.0,0.23,0.2, 0.01] #dex
xerror=[0.07,0.03,0.03,0.05,0.24, 0.2] #size of error
marker=['bs','cx','bo','cs', 'co', 'rs'] #marker shape
markers=[4,4,4,4,4, 4] #marker size
colorbar=['b','c','b','c', 'c', 'r']
shift_x=[0.1,0.25, 0.37, 0.48, 0.8, 1.3] # these shift the position of the error bars
j=0
for i in yerror:
plt.errorbar(xhigh-shift_x[j],yhigh-0.26,xerr=xerror[j],yerr=i,fmt=marker[j],markersize=markers[j],ecolor=colorbar[j],capsize=2)
j=j+1
Explanation: The abundance ratios that are plotted can be changed by swapping the element symbols in the x/y axis brackets. A list of included data is incorporated in obs, which STELLAB will plot. References can be removed from this list to reduce data dispersion and clouding. Error bars are included for detailed and accurate analysis of the ratios. These can be turned on by including 'show_err=True, show_mean_err=True' within the plot_spectro function above. However, the inclusion of errors may lead to OMEGA being plotted underneath STELLAB, so it is important to use an overplotting procedure as shown above. OMEGA plots in white underneath the coloured lines in this instance. This makes it easier to differentiate between OMEGA plots and STELLAB data. The colours are changed by changing the 'color' parameter. See Dale et al. for colour examples.
Not all STELLAB data currently contain error bars, as many of these papers do not include them in their tables and instead state an average error. These error bars can be plotted separately on the graph. The error markers correspond to the STELLAB data that match their shape and colour.
End of explanation |
2,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Step1: Create Folder Structure
Step7: Things to keep in mind (Troubleshooting)
Always choose verbosity=2 when training; otherwise the notebook will crash.
Monitor the RAM while running the cells. You might find out that it gets filled quite fast. If this is the case, please wait until it's freed up.
In theory by running import gc; gc.collect() the Garbage collector is called. In practice it doesn't make much difference.
When it says "Memory Error" it means that you have filled the Computer RAM. Try restarting Jupyter.
When it says "It cannot allocate..." it means that you have filled the GPU VRAM. Try restarting Jupyter.
Don't go with a batch_size bigger than 4 when you have 8 GB RAM.
If you set shuffle=True in gen.flow_from_directory while getting the training batch, you might get weird results in the "Removing Dropout" section.
If you disable all optimizations in Theano in order to get memory, you might have some exceptions like
Step8: Simple Model (VGG16)
This version has the most basic possible configuration using the VGG16 pre-trained network. Please try to understand everything before moving forward.
What do we want to do?
Create simple model
Load batches (train, valid and test)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train the model
Step9: Data Augmentation (VGG16)
"Data Augmentation" is a technique to reduce "over-fitting", where a generator slightly modifies the images we load "on-the-fly" so the model cannot adapt too much to our training data.
What do we want to do?
Create simple model
Load batches (train, valid and test) with Data Augmentation (random changes to the images we load)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train the model
Step10: Backpropagation - Only Dense Layers (VGG16)
"Backpropagation" is the method of iteratively changing the weights of previous layers, not only the last one. When doing that for "Convolutional layers" we need to be very careful as it takes A LOT of memory.
What do we want to do?
Create simple model
Load batches (train, valid and test)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train first the last layer of the model. This way we are going to improve the overall accuracy.
Set a "trainable" ONLY the dense layers.
Train all the dense layers. Keep in mind that here the learning rate MUST be really small, as we assume that the pre-trained model is relatively good.
Step11: Data Augmentation + Backpropagation (VGG16)
Here we try the two methods together. Let's see if this improves the accuracy.
What do we want to do?
Create simple model
Load batches (train, valid and test) with Data Augmentation (random changes to the images we load)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train first the last layer of the model. This way we are going to improve the overall accuracy.
Set a "trainable" ONLY the dense layers.
Train all the dense layers. Keep in mind that here the learning rate MUST be really small, as we assume that the pre-trained model is relatively good.
Step12: Remove Dropout (VGG16)
"Dropout" is a regularization method that randomly removes a certain percentage of activations from the previous layer. Removing it is useful for networks that are "under-fitting", meaning that the dropout is still throwing away useful information.
Why do we calculate the "features" beforehand? Because we don't want to train the convolutional layers (it takes too long). By using the output of the convolutional layers we are using a simple linear model, which is extremely fast.
What do we want to do?
Create model, finetune it and load good weights that we calculated before.
And load batches (train, valid and test)
Split the layers into two groups
Step13: Add Data Augmentation to Dropout 0.
Now that we are over-fitting, let's add Data Augmentation to the previous method.
What do we want to do?
Load batches (train, valid and test)
Get previous Fully connected model (linear model)
Add this fully connected model to the convolutional model we created before
Check that the new model is correct
Train the model
Step14: Batch Normalization.
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. It's a must-have, as it can improve training speed by up to 10x.
What do we want to do?
Check the current shape of the convolutional layers
Create model with Batch Normalization
Finetune it and adjust weights based on the new Dropout number
Train the Batch Normalization model
Create a final model based on the "convolutional layers"
Set as "non-trainable" to all the layers of this last model
Add all the new layers from the Batch Normalization model to this last model.
Set weights of the new added layers. Apparently, when you add a layer to a model, the weights are not copied through, so this step is required!
Train the last model
Step15: Viewing model prediction examples
A few correct labels at random
A few incorrect labels at random
The most correct labels of each class (ie those with highest probability that are correct)
The most incorrect labels of each class (ie those with highest probability that are incorrect)
The most uncertain labels (ie those with probability closest to 0.5).
Step16: Viewing Data Augmentation
Step17: Confusion Matrix
Step18: Predict Test set + create Kaggle submission file (taught in the Course)
Step20: Alternative way to generate Submission file (it has a better score!) | Python Code:
%matplotlib inline
import os
import sys
import math
import zipfile
from glob import glob
import numpy as np
import utils; reload(utils)
from utils import *
from keras.models import Sequential
from keras.layers import Lambda, Dense
from keras import backend as K
from matplotlib import pyplot as plt
Explanation: Deep Learning: Dogs vs Cats Analysis
End of explanation
%pwd
#Allow relative imports to directories above this directory
sys.path.insert(1, os.path.join(sys.path[0], '..'))
zip_ref = zipfile.ZipFile('train.zip', 'r')
zip_ref.extractall('.')
zip_ref.close()
zip_ref = zipfile.ZipFile('test.zip', 'r')
zip_ref.extractall('.')
zip_ref.close()
#Create references to important directories we will use over and over
current_dir = os.getcwd()
DATA_HOME_DIR = current_dir
%cd $DATA_HOME_DIR
#Create directories
os.mkdir('valid')
os.mkdir('files')
os.mkdir('models')
os.mkdir('sample')
os.mkdir('sample/train')
os.mkdir('sample/valid')
os.mkdir('sample/files')
os.mkdir('sample/models')
os.mkdir('sample/test')
os.mkdir('sample/test/unknown')
%cd $DATA_HOME_DIR/train
# We move a certain number of files from the train to the valid directory.
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(2000): os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])
from shutil import copyfile
# We copy a certain number of files from the train to the sample/train directory.
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i])
%cd $DATA_HOME_DIR/valid
# We copy a certain number of files from the valid to the sample/valid directory.
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(50): copyfile(shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i])
%cd $DATA_HOME_DIR/test
# We copy a certain number of files from the test to the sample/test directory.
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], DATA_HOME_DIR+'/sample/test/' + shuf[i])
#Divide cat/dog images into separate directories
%cd $DATA_HOME_DIR/sample/train
os.mkdir('cats')
os.mkdir('dogs')
# os.rename does not expand wildcards, so move the matching files one by one
for f in glob('cat.*.jpg'): os.rename(f, 'cats/' + f)
for f in glob('dog.*.jpg'): os.rename(f, 'dogs/' + f)
%cd $DATA_HOME_DIR/sample/valid
os.mkdir('cats')
os.mkdir('dogs')
for f in glob('cat.*.jpg'): os.rename(f, 'cats/' + f)
for f in glob('dog.*.jpg'): os.rename(f, 'dogs/' + f)
%cd $DATA_HOME_DIR/valid
os.mkdir('cats')
os.mkdir('dogs')
for f in glob('cat.*.jpg'): os.rename(f, 'cats/' + f)
for f in glob('dog.*.jpg'): os.rename(f, 'dogs/' + f)
%cd $DATA_HOME_DIR/train
os.mkdir('cats')
os.mkdir('dogs')
for f in glob('cat.*.jpg'): os.rename(f, 'cats/' + f)
for f in glob('dog.*.jpg'): os.rename(f, 'dogs/' + f)
# Create single 'unknown' class for test set
%cd $DATA_HOME_DIR/test
for f in glob('*.jpg'): os.rename(f, 'unknown/' + f)
# Create single 'unknown' class for test set
%cd $DATA_HOME_DIR/sample/test
for f in glob('*.jpg'): os.rename(f, 'unknown/' + f)
%cd $DATA_HOME_DIR
Explanation: Create Folder Structure
End of explanation
# We set the "seed" so we make the results a bit more predictable.
np.random.seed(1)
# Type 'sample/' if you want to work on a smaller dataset.
path = ''
# Depending on your GPU you should change this. For a GTX 970 this is a good value.
batch_size = 4
# This is the timestamp that we are going to use when saving files.
timestamp = '102714012017'
# Define some useful paths to save files (e.g weights)
files_path = path + 'files/'
models_path = path + 'models/'
def load_batches(path, shuffle=[False, False, True], augmentation=False):
    """Load different batches that we'll use in our calculations."""
gen = image.ImageDataGenerator()
val_batches = gen.flow_from_directory(path + 'valid', target_size=(224,224),
class_mode='categorical', shuffle=shuffle[0], batch_size=batch_size)
test_batches = gen.flow_from_directory(path + 'test', target_size=(224,224),
class_mode='categorical', shuffle=shuffle[1], batch_size=batch_size)
# We only want Data augmentation for the training set.
if augmentation:
gen = image.ImageDataGenerator(rotation_range=20, width_shift_range=0.1, shear_range=0.05,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
train_batches = gen.flow_from_directory(path + 'train', target_size=(224,224),
class_mode='categorical', shuffle=shuffle[2], batch_size=batch_size)
return train_batches, val_batches, test_batches
def finetune(model):
    """Removes the last layer (usually Dense) and replaces it by another one more fitting.
    This is useful when using a pre-trained model like VGG."""
model.pop()
for layer in model.layers: layer.trainable=False
model.add(Dense(train_batches.nb_class, activation='softmax'))
model.compile(optimizer=RMSprop(lr=0.01, rho=0.7),
loss='categorical_crossentropy', metrics=['accuracy'])
def backpropagation(model):
    """Now we do Backpropagation. Backpropagation is when we want to train not only the last
    Dense layer, but also some previous ones. Note that we don't train Convolutional layers."""
layers = model.layers
for layer in layers: layer.trainable=False
# Get the index of the first dense layer...
first_dense_idx = [index for index,layer in enumerate(layers) if type(layer) is Dense][0]
# ...and set this and all subsequent layers to trainable
for layer in layers[first_dense_idx:]: layer.trainable=True
def save_weights(model, path, name, timestamp):
print 'Saving weights: {}.h5'.format(path + name + '_' + timestamp)
model.save_weights(path + '{}_{}.h5'.format(name, timestamp))
def load_weights(model, filepath):
print 'Loading weights: {}'.format(filepath)
model.load_weights(filepath)
def train_model(model, train_batches, val_batches, rules, name, timestamp):
    """Rules will be something like:
    (
        (0.01, 3),
        (0.1, 2),
        ...
    )
    """
for lr, epochs in rules:
model.compile(optimizer=RMSprop(lr=lr, rho=0.7),
loss='categorical_crossentropy', metrics=['accuracy'])
for i in range(epochs):
print 'Lr: {}, Epoch: {}'.format(lr, i + 1)
model.fit_generator(train_batches, samples_per_epoch=train_batches.nb_sample, verbose=2,
nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
#sys.stdout = open('keras_output.txt', 'w')
#history = model.fit_generator(train_batches, samples_per_epoch=train_batches.nb_sample, verbose=2,
# nb_epoch=1, validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
#sys.stdout = sys.__stdout__
#with open('keras_output.txt') as f:
# content = f.readlines()
save_weights(model, files_path, '{}_lr{}_epoch{}'.format(
name, lr, i+1), timestamp)
def split_conv_fc(model):
    """Split Convolutional and Dense Layers."""
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
conv_layers = layers[:last_conv_idx+1]
fc_layers = layers[last_conv_idx+1:]
return conv_layers, fc_layers
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to halve the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
def get_fc_model(conv_layers, fc_layers):
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=RMSprop(lr=0.00001, rho=0.7), loss='categorical_crossentropy', metrics=['accuracy'])
return model
Explanation: Things to keep in mind (Troubleshooting)
Choose always verbosity=2 when training. Otherwise the notebook will crash.
Monitor the RAM while running the cells. You might find out that it gets filled quite fast. If this is the case, please wait until it's freed up.
In theory by running import gc; gc.collect() the Garbage collector is called. In practice it doesn't make much difference.
When it says "Memory Error" it means that you have filled the Computer RAM. Try restarting Jupyter.
When it says "It cannot allocate..." it means that you have filled the GPU VRAM. Try restarting Jupyter.
Don't go with a batch_size bigger than 4 when you have 8 GB RAM.
If you set shuffle=True in gen.flow_from_directory while getting the training batch, you might get weird results in the "Removing Dropout" section.
If you disable all optimizations in Theano in order to get memory, you might have some exceptions like: Cuda error 'unspecified launch failure'
If you mix up optimizers like Adam or RMSprop, you might have weird results. Always use the same one.
IMPORTANT: If you get an accuracy near 0.500 in both training and validation set, try to reduce the learning rate to 0.00001 for example.
Run the following lines in order to set up the Environment
End of explanation
name = 'default_parameter_vgg16'
# 0. Create simple model
vgg = Vgg16()
# 1. Load batches (train, valid and test)
train_batches, val_batches, test_batches = load_batches(path)
# 2. Finetune the model (replace the last dense layer by one that has only two outputs in this case)
finetune(vgg.model)
# 3. Train the model
train_model(vgg.model,
train_batches,
val_batches,
((0.01, 1),),
name + '_lastlayer',
timestamp)
save_weights(vgg.model, files_path, name, timestamp)
Explanation: Simple Model (VGG16)
This version has the most basic possible configuration using the VGG16 pre-trained network. Please try to understand everything before moving forward.
What do we want to do?
Create simple model
Load batches (train, valid and test)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train the model
End of explanation
name = 'data_augmentation_vgg16'
# 0. Create simple model
vgg = Vgg16()
# 1. Load batches (train, valid and test) with Data Augmentation (random changes to the images we load)
train_batches, val_batches, test_batches = load_batches(path, augmentation=True)
# 2. Finetune the model (replace the last dense layer by one that has only two outputs in this case)
finetune(vgg.model)
# 3. Train the model
train_model(vgg.model, train_batches, val_batches, ((0.01, 1), (0.1, 1), (0.001, 1), (0.0001, 1)), name + '_lastlayer', timestamp)
save_weights(vgg.model, files_path, name, timestamp)
Explanation: Data Augmentation (VGG16)
"Data Augmentation" is a technique to reduce "over-fitting", where a generator slightly modifies the images we load "on-the-fly" so the model cannot adapt too much to our training data.
What do we want to do?
Create simple model
Load batches (train, valid and test) with Data Augmentation (random changes to the images we load)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train the model
End of explanation
name = 'backpropagation_vgg16'
# 0. Create simple model
vgg = Vgg16()
# 1. Load batches (train, valid and test)
train_batches, val_batches, test_batches = load_batches(path)
# 2. Finetune the model (replace the last dense layer by one that has only two outputs in this case)
finetune(vgg.model)
# 3. Train first the last layer of the model. This way we are going to improve the overall accuracy.
train_model(vgg.model,
train_batches,
val_batches,
    ((0.01, 1),),
name + '_lastlayer',
timestamp)
# 4. Set a "trainable" ALL the dense layers.
backpropagation(vgg.model)
# 5. Train all the dense layers. Keep in mind that here the learning rate MUST be really small, as we assume that the pre-trained model is relatively good.
train_model(vgg.model, train_batches, val_batches, ((0.0001, 1), (0.00001, 1)), name + '_denselayers', timestamp)
save_weights(vgg.model, files_path, name, timestamp)
Explanation: Backpropagation - Only Dense Layers (VGG16)
"Backpropagation" is the method of iteratively changing the weights of previous layers, not only the last one. When doing that for "Convolutional layers" we need to be very careful as it takes A LOT of memory.
What do we want to do?
Create simple model
Load batches (train, valid and test)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train first the last layer of the model. This way we are going to improve the overall accuracy.
Set a "trainable" ONLY the dense layers.
Train all the dense layers. Keep in mind that here the learning rate MUST be really small, as we assume that the pre-trained model is relatively good.
End of explanation
name = 'data_augmentation_backpropagation_vgg16'
# 0. Create simple model
vgg = Vgg16()
# 1. Load batches (train, valid and test) with Data Augmentation (random changes to the images we load)
train_batches, val_batches, test_batches = load_batches(path, augmentation=True)
# 2. Finetune the model (replace the last dense layer by one that has only two outputs in this case)
finetune(vgg.model)
# 3. Train first the last layer of the model. This way we are going to improve the overall accuracy.
train_model(vgg.model,
train_batches,
val_batches,
((0.01, 6), (0.001, 3), (0.0001, 3)),
name + '_lastlayer', timestamp)
# 4. Set a "trainable" ONLY the dense layers.
backpropagation(vgg.model)
# 5. Train all the dense layers. Keep in mind that here the learning rate MUST be really small, as we assume that the pre-trained model is relatively good.
train_model(vgg.model,
train_batches,
val_batches,
((0.0001, 1), (0.00001, 1)),
name + '_denselayers',
timestamp)
save_weights(vgg.model, files_path, name, timestamp)
Explanation: Data Augmentation + Backpropagation (VGG16)
Here we try the two methods together. Let's see if this improves the accuracy.
What do we want to do?
Create simple model
Load batches (train, valid and test) with Data Augmentation (random changes to the images we load)
Finetune the model (replace the last dense layer by one that has only two outputs in this case)
Train first the last layer of the model. This way we are going to improve the overall accuracy.
Set a "trainable" ONLY the dense layers.
Train all the dense layers. Keep in mind that here the learning rate MUST be really small, as we assume that the pre-trained model is relatively good.
End of explanation
name = 'remove_dropout_vgg16'
# 1) Create model
vgg = Vgg16()
model = vgg.model
# 1b) And load batches (train, valid and test)
train_batches, val_batches, test_batches = load_batches(path, shuffle=[False, False, False])
# 1c) finetune it!
finetune(model)
# 1d) Load good weights that we calculated before [This is an example, please change the path]
load_weights(model, 'files/data_augmentation_backpropagation_vgg16_lastlayer_lr0.0001_epoch2_144813012017.h5')
# 2) Split the layers into two groups: Convolutional layers and Dense layers (or fully connected).
layers = model.layers
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
print last_conv_idx
conv_layers = layers[:last_conv_idx+1]
fc_layers = layers[last_conv_idx+1:]
# 3) Create a model with only the Convolutional layers.
conv_model = Sequential(conv_layers)
conv_model.summary()
# 4) Calculate the predictions.
# The shape of the resulting array will be: (nb_samples, 512, 14, 14).
# This is a list of filters, so if we have 2000 images, we'll have for
# each image: 512 filters, where each filter is an array of 14x14.
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
train_features = conv_model.predict_generator(train_batches, train_batches.nb_sample)
# 5) Get "real" classes for the data using train_batches.classes. e.g 1 0 0 0 1 0 1 0 0 (each number is the class of an image)
val_classes = val_batches.classes
train_classes = train_batches.classes
# 6) Transform those classes in OneHot format. e.g [0,0,1,0...] per each image
val_labels = onehot(val_classes)
train_labels = onehot(train_classes)
# Optional: Save features
save_array(models_path + 'debugging.bc'.format(timestamp), train_features)
save_array(models_path + 'debugging.bc'.format(timestamp), val_features)
# Optional: Load features
train_features = load_array(models_path+'train_convlayer_features_144813012017_2.bc'.format(timestamp))
val_features = load_array(models_path+'valid_convlayer_features_144813012017_2.bc'.format(timestamp))
train_features.shape
# Optional. Look at the shape of the input of the model that we are about to create:
conv_layers[-1].output_shape[1:]
# It should have the same shape as the last convolutional layer
# 7) Create a new linear model that has this features as an input.
fc_model = get_fc_model(conv_layers, fc_layers)
# Optional. Look at the model we've just created:
fc_model.summary()
# 8) We train the model
fc_model.fit(train_features, train_labels, nb_epoch=1, verbose=2,
batch_size=batch_size, validation_data=(val_features, val_labels))
# Optional: We save the weights
save_weights(fc_model, files_path, name + '_9813', timestamp)
# Optional: Load weights
load_weights(fc_model, 'models/no_dropout.h5')
y = fc_model
Explanation: Remove Dropout (VGG16)
"Dropout" is a regularization method that randomly removes a certain percentage of activations from the previous layer. Removing it is useful for networks that are "under-fitting", meaning that the dropout is still throwing away useful information.
Why do we calculate the "features" beforehand? Because we don't want to train the convolutional layers (it takes too long). By using the output of the convolutional layers we are using a simple linear model, which is extremely fast.
What do we want to do?
Create model, finetune it and load good weights that we calculated before.
And load batches (train, valid and test)
Split the layers into two groups: Convolutional layers and Dense layers.
Create a model with only the Convolutional layers.
Calculate the predictions of our train and valid data using this new model.
We'll have something like: [0, 0, 0.12, 0.45, 0,...]
The shape of the resulting array will be: (nb_samples, 512, 14, 14). This is a list of filters, so if we have 2000 images, we'll have for each image: 512 filters, where each filter is an array of 14x14.
This will be the input of the next linear model.
Get "real" classes for the data using train_batches.classes. e.g 1 0 0 0 1 0 1 0 0 (each number is the class of an image)
Transform those classes in OneHot format. e.g [0,0,1,0...] per each image
Create a new linear model that has this array as an input.
Because we removed the Dropout, those layers now receive roughly twice as many active (non-zeroed) inputs as before.
To compensate, we halve the weights on those layers, so we replicate the behaviour of Dropout, e.g. for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2)). (A toy numerical illustration of this halving is added after this explanation.)
We train the model
End of explanation
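# --- Toy illustration (added; not part of the original notebook) ---
# A rough numerical check of the halving performed by proc_wgts(). Under the notebook's
# assumption that dropping the p=0.5 Dropout layers roughly doubles the number of active
# inputs feeding each Dense layer, halving the copied weights keeps the pre-activations
# on a comparable scale. All numbers here are random and purely illustrative.
import numpy as np
np.random.seed(0)
x = np.random.rand(1000)              # activations coming out of the previous layer
w = np.random.rand(1000)              # weights learned with 50% dropout in place
keep = np.random.rand(1000) > 0.5     # a random 50% dropout mask
print (np.dot(x * keep, w))           # roughly what the layer saw during training
print (np.dot(x, w * 0.5))            # dropout removed, weights halved: similar scale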
name = 'data_augmentation_plus_dropout0_vgg16'
# 1b) And load batches (train, valid and test)
train_batches, val_batches, test_batches = load_batches(path, augmentation=True)
# 1. Get previous Fully connected model (linear model)
# CAREFUL! This will replace the existing weights! Leave it commented out if you want to re-use the weights
conv_model = Sequential(conv_layers)
fc_model = get_fc_model(conv_layers, fc_layers)
# 2. Add this fully connected model to the convolutional model we created before
# We need to do this because we don't want to train the convolutional layers.
#conv_model = Sequential(conv_layers)
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
# 2. Check that the new model is correct
conv_model.summary()
# 4. Train the model
train_model(conv_model,
train_batches,
val_batches,
((0.000001, 2),),
name + '_data_augentation_to_zero_dropout',
timestamp)
# Optional: We save the weights
save_weights(conv_model, files_path, name, timestamp)
Explanation: Add Data Augmentation to Dropout 0.
Now that we are over-fitting, let's add Data Augmentation to the previous method.
What do we want to do?
Load batches (train, valid and test)
Get previous Fully connected model (linear model)
Add this fully connected model to the convolutional model we created before
Check that the new model is correct
Train the model
End of explanation
name = 'batch_normalization_vgg16'
# 1. Check the current shape of the convolutional layers
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(4096, activation='relu'),
Dropout(p),
BatchNormalization(),
Dense(1000, activation='softmax')
]
# 2. Create model with Batch Normalization
p = 0.6
bn_model = Sequential(get_bn_layers(p))
def proc_wgts_bn(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
# 3. Finetune it and adjust weights based on the new Dropout number
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts_bn(l, 0.3, 0.6))
finetune(bn_model)
# 4. Train the Batch Normalization model
bn_model.fit(train_features, train_labels, nb_epoch=1, verbose=2,
batch_size=batch_size, validation_data=(val_features, val_labels))
# Optional: We save the weights
save_weights(bn_model, files_path, name, timestamp)
# 5. Create a final model based on the "convolutional layers"
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
# 6. Set as "non-trainable" to all the layers of this last model
for layer in final_model.layers: layer.trainable = False
# 7. Add all the new layers from the Batch Normalization model to this last model.
for layer in bn_layers: final_model.add(layer)
# 8. Set weights of the new added layers. Apparently, when you add a layer to a model, the weights are not copied through, so this step is required!
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
# 9. Train the last model
train_model(final_model,
train_batches,
val_batches,
((0.000001, 1),),
name + '_batch_normalization',
timestamp)
# Optional: We save the weights
save_weights(bn_model, files_path, name, timestamp)
Explanation: Batch Normalization.
Batch normalization (batchnorm) is a way to ensure that activations don't become too high or too low at any point in the model. Adjusting activations so they are of similar scales is called normalization. It's a must-have, as it can improve training speed by up to 10x. (A minimal numerical sketch of the batchnorm transform is added after this explanation.)
What do we want to do?
Check the current shape of the convolutional layers
Create model with Batch Normalization
Finetune it and adjust weights based on the new Dropout number
Train the Batch Normalization model
Create a final model based on the "convolutional layers"
Set as "non-trainable" to all the layers of this last model
Add all the new layers from the Batch Normalization model to this last model.
Set weights of the new added layers. Apparently, when you add a layer to a model, the weights are not copied through, so this step is required!
Train the last model
End of explanation
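# --- Minimal numerical sketch (added; not from the original notebook) ---
# What a BatchNormalization layer computes for a single unit: normalise the batch of
# activations by their mean and variance, then apply a learned scale (gamma) and shift
# (beta). The gamma/beta values below are arbitrary placeholders.
import numpy as np
acts = np.array([0.5, 3.0, -2.0, 10.0])   # activations of one unit over a mini-batch
eps, gamma, beta = 1e-5, 1.0, 0.0
normed = (acts - acts.mean()) / np.sqrt(acts.var() + eps)
print (gamma * normed + beta)             # zero-mean, unit-variance, then rescaled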
val_batches, probs = vgg.test(path + 'valid', batch_size = batch_size)
filenames = val_batches.filenames
expected_labels = val_batches.classes #0 or 1
#Round our predictions to 0/1 to generate labels
our_predictions = probs[:,0]
our_labels = np.round(1-our_predictions)
from keras.preprocessing import image
#Helper function to plot images by index in the validation set
#Plots is a helper function in utils.py
def plots_idx(idx, titles=None):
plots([image.load_img(path + 'valid/' + filenames[i]) for i in idx], titles=titles)
#Number of images to view for each visualization task
n_view = 4
#1. A few correct labels at random
correct = np.where(our_labels==expected_labels)[0]
print "Found %d correct labels" % len(correct)
idx = permutation(correct)[:n_view]
plots_idx(idx, our_predictions[idx])
#2. A few incorrect labels at random
incorrect = np.where(our_labels!=expected_labels)[0]
print "Found %d incorrect labels" % len(incorrect)
idx = permutation(incorrect)[:n_view]
plots_idx(idx, our_predictions[idx])
#3a. The images we most confident were cats, and are actually cats
correct_cats = np.where((our_labels==0) & (our_labels==expected_labels))[0]
print "Found %d confident correct cats labels" % len(correct_cats)
most_correct_cats = np.argsort(our_predictions[correct_cats])[::-1][:n_view]
plots_idx(correct_cats[most_correct_cats], our_predictions[correct_cats][most_correct_cats])
#3b. The images we most confident were dogs, and are actually dogs
correct_dogs = np.where((our_labels==1) & (our_labels==expected_labels))[0]
print "Found %d confident correct dogs labels" % len(correct_dogs)
most_correct_dogs = np.argsort(our_predictions[correct_dogs])[:n_view]
plots_idx(correct_dogs[most_correct_dogs], our_predictions[correct_dogs][most_correct_dogs])
#4a. The images we were most confident were cats, but are actually dogs
incorrect_cats = np.where((our_labels==0) & (our_labels!=expected_labels))[0]
print "Found %d incorrect cats" % len(incorrect_cats)
if len(incorrect_cats):
most_incorrect_cats = np.argsort(our_predictions[incorrect_cats])[::-1][:n_view]
plots_idx(incorrect_cats[most_incorrect_cats], our_predictions[incorrect_cats][most_incorrect_cats])
#4b. The images we were most confident were dogs, but are actually cats
incorrect_dogs = np.where((our_labels==1) & (our_labels!=expected_labels))[0]
print "Found %d incorrect dogs" % len(incorrect_dogs)
if len(incorrect_dogs):
most_incorrect_dogs = np.argsort(our_predictions[incorrect_dogs])[:n_view]
plots_idx(incorrect_dogs[most_incorrect_dogs], our_predictions[incorrect_dogs][most_incorrect_dogs])
#5. The most uncertain labels (ie those with probability closest to 0.5).
most_uncertain = np.argsort(np.abs(our_predictions-0.5))
plots_idx(most_uncertain[:n_view], our_predictions[most_uncertain])
Explanation: Viewing model prediction examples
A few correct labels at random
A few incorrect labels at random
The most correct labels of each class (ie those with highest probability that are correct)
The most incorrect labels of each class (ie those with highest probability that are incorrect)
The most uncertain labels (ie those with probability closest to 0.5).
End of explanation
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=20, width_shift_range=0.1, shear_range=0.05,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True,dim_ordering='tf')
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread(path+'test/unknown/87.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
Explanation: Viewing Data Augmentation
End of explanation
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(expected_labels, our_labels)
plot_confusion_matrix(cm, val_batches.class_indices)
Explanation: Confusion Matrix
End of explanation
predictions = fc_model.predict_generator(test_batches, test_batches.nb_sample)
isdog = predictions[:,1]
print "Raw Predictions: " + str(isdog[:5])
print "Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)])
print "Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)])
isdog = isdog.clip(min=0.05, max=0.95)
#Extract imageIds from the filenames in our test/unknown directory
filenames = test_batches.filenames
ids = np.array([int(f[8:f.find('.')]) for f in filenames])
subm = np.stack([ids,isdog], axis=1)
subm[:5]
submission_file_name = 'submission_{}_5.csv'.format(timestamp)
np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')
from IPython.display import FileLink
FileLink(submission_file_name)
Explanation: Predict Test set + create Kaggle submission file (taught in the Course)
End of explanation
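# --- Small illustration (added; not part of the original notebook) ---
# Why the predictions above are clipped to [0.05, 0.95]: Kaggle's log loss explodes for
# a confident but wrong answer, while a clipped prediction is penalised far less.
import numpy as np
true_label = 0.0                      # suppose the image is actually a cat
for p_dog in (0.999999, 0.95):
    loss = -(true_label * np.log(p_dog) + (1 - true_label) * np.log(1 - p_dog))
    print ('p(dog)=%s -> log loss %.2f' % (p_dog, loss))   # ~13.82 vs ~3.00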
load_weights(conv_model, 'files/data_augmentation_plus_dropout0_vgg16_data_augentation_to_zero_dropout_lr1e-05_epoch1_102714012017.h5')
def write_submission_csv(submission_file_name, data, columns):
    """Write data according to the Kaggle submission format."""
with open(submission_file_name, 'wb') as f:
w = csv.writer(f)
w.writerow(columns)
for key in data.keys():
w.writerow([key, data[key]])
gen = image.ImageDataGenerator()
test_batches = gen.flow_from_directory(path + 'test', target_size=(224,224),
class_mode=None, shuffle=False, batch_size=batch_size)
predictions = conv_model.predict_generator(test_batches, test_batches.nb_sample)
predictions[0]
#conv_model.summary()
import csv
d = {}
submission_file_name = 'submission_{}_5_new.csv'.format(timestamp)
for idx, filename in enumerate(test_batches.filenames):
# We only want the ID, so remove the folder name and file extension.
result = int(filename[8:-4])
# We use a trick to never show 0 or 1, but 0.05 and 0.95.
    # This is required because log loss penalizes predictions that are confident and wrong.
d[result] = predictions[idx][1].clip(min=0.05, max=0.95)
write_submission_csv(submission_file_name, d, ['id', 'label'])
from IPython.display import FileLink
FileLink(submission_file_name)
Explanation: Alternative way to generate Submission file (it has a better score!)
End of explanation |
2,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: Note
Step2: Include the input file that contains all input parameters needed for all components. This file can either be a python dictionary or a text file that can be converted into a python dictionary. If a text file is provided, it will be converted to a Python dictionary. Here we use an existing text file prepared for this exercise.
Step3: Instantiate landlab components to simulate corresponding attributes. In this example, we shall demonstrate the use of seasonal rainfall and PFT-specific potential evapotranspiration. The instantiated objects are
Step4: Let's look at the initial organization of PFTs
Step5: Specify an approximate number of years for the model to run. For this example, we will run the simulation for 600 years. It might take less than 2+ minutes to run.
Step6: Create empty arrays to store spatio-temporal data over multiple iterations. The captured data can be used for plotting model outputs.
Step7: To reduce computational overhead, we shall create a lookup array for plant-specific PET values for each day of the year.
Step8: Specify current_time (in years). current_time is the current time in the simulation.
Step9: The loop below couples the components introduced above in a for loop until all "n" number of storms are generated. Time is advanced by the soil moisture object based on storm and interstorm durations that are estimated by the storm generator object. The ecohydrologic model is run each storm whereas the cellular automaton vegetation component is run once every year.
Note
Step10: Time_Consumed is an optional variable that gives information about computer running time
Step11: Save the outputs using numpy.save(). These files have '.nc' extension, which can be loaded using numpy.load().
Step12: Let's look at outputs.
Plots of the cellular field of PFT at specified year step can be found below where | Python Code:
from __future__ import print_function
%matplotlib inline
import time
import numpy as np
from landlab import RasterModelGrid as rmg
from landlab import load_params
from Ecohyd_functions_flat import (
Initialize_,
Empty_arrays,
Create_PET_lookup,
Save_,
Plot_,
)
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
WARNING: This tutorial has not been updated to work with Landlab 2.0 and is thus not tested to verify that it will run.
Tutorial For Cellular Automaton Vegetation Model Coupled With Ecohydrologic Model
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/v2_dev/user_guide/tutorials.html">https://landlab.readthedocs.io/en/v2_dev/user_guide/tutorials.html</a></small>
<hr>
This tutorial demonstrates implementation of the Cellular Automaton Tree-GRass-Shrub Simulator (CATGRaSS) [Zhou et al., 2013] on a flat domain. This model is built using components from the Landlab component library. CATGRaSS is spatially explicit model of plant coexistence. It simulates local ecohydrologic dynamics (soil moisture, transpiration, biomass) and spatial evolution of tree, grass, and shrub Plant Functional Types (PFT) driven by rainfall and solar radiation.
Each cell in the model grid can hold a single PFT or remain empty. Tree and shrub plants disperse seeds to their neighbors. Grass seeds are assumed to be available at each cell. Establishment of plants in empty cells is determined probabilistically based on water stress of each PFT. Plants with lower water stress have higher probability of establishment. Plant mortality is simulated probabilistically as a result of aging and drought stress. Fires and grazing will be added to this model soon.
This model (driver) contains:
- A local vegetation dynamics model that simulates storm and inter-storm water balance and ecohydrologic fluxes (ET, runoff), and plant biomass dynamics by coupling the following components:
- PrecipitationDistribution
- Radiation
- PotentialEvapotranspiration
- SoilMoisture
- Vegetation
A spatially explicit probabilistic cellular automaton component that simulates plant competition by tracking establishment and mortality of plants based on soil moisture stress:
- VegCA
To run this Jupyter notebook, please make sure that the following files are in the same folder:
- cellular_automaton_vegetation_flat_domain.ipynb (this notebook)
- Inputs_Vegetation_CA.txt (Input parameters for the model)
- Ecohyd_functions_flat.py (Utility functions)
[Ref: Zhou, X, E. Istanbulluoglu, and E.R. Vivoni. "Modeling the ecohydrological role of aspect-controlled radiation on tree-grass-shrub coexistence in a semiarid climate." Water Resources Research 49.5 (2013): 2872-2895]
In this tutorial, we are going to work with a landscape in central New Mexico, USA, where aspect controls the organization of PFTs. The climate in this area is semi-arid with Mean Annual Precipitation (MAP) of 254 mm [Zhou et. al 2013].
We will do the following:
- Import a landscape
- Initialize the landscape with random distribution of PFTs
- Run the coupled Ecohydrology and cellular automata plant competition model for 50 years
- Visualize and examine outputs
Let us walk through the code:
Import the required libraries
End of explanation
grid1 = rmg((100, 100), spacing=(5.0, 5.0))
grid = rmg((5, 4), spacing=(5.0, 5.0))
Explanation: Note: 'Ecohyd_functions_flat.py' is a utility script containing functions that instantiate components and manage inputs and outputs, which helps keep this driver concise. The contents of 'Ecohyd_functions_flat.py' could be part of this driver (the current file), but they are left out to keep the driver concise.
To minimize computation time, we will use two grids in this driver. One grid will represent a flat landscape or domain (i.e., landscape with same elevation), on which the cellular automata plant competition will be simulated at an yearly time step. Another grid, with enough cells to house one cell for each of the plant functional types (PFTs), will be used to simulate soil moisture decay and local vegetation dynamics, in between successive storms (i.e. time step = one storm). Cumulative water stress (stress experienced by plants due to lack of enough soil moisture) will be calculated over an year and mapped to the other grid.
grid: This grid represents the actual landscape. Each cell can be occupied by a single PFT such as tree, shrub, grass, or can be empty (bare). Initial PFT distribution is randomnly generated from inputs of percentage of cells occupied by each PFT.
grid1: This grid allows us to calculate PFT specific cumulative water stress (cumulated over each storm in the year) and mapped with 'grid'.
Note: In this tutorial, the physical ecohydrological components and cellular automata plant competition will be run on grids with different resolution. To use grids with same resolution, see the tutorial 'cellular_automaton_vegetation_DEM.ipynb'.
End of explanation
InputFile = "Inputs_Vegetation_CA_flat.txt"
data = load_params(InputFile) # Create dictionary that holds the inputs
Explanation: Include the input file that contains all input parameters needed for all components. This file can either be a python dictionary or a text file that can be converted into a python dictionary. If a text file is provided, it will be converted to a Python dictionary. Here we use an existing text file prepared for this exercise.
End of explanation
PD_D, PD_W, Rad, PET_Tree, PET_Shrub, PET_Grass, SM, VEG, vegca = Initialize_(
data, grid, grid1
)
Explanation: Instantiate landlab components to simulate corresponding attributes. In this example, we shall demonstrate the use of seasonal rainfall and PFT-specific potential evapotranspiration. The instantiated objects are:
- PD_D: object for dry season rainfall,
- PD_W: object for wet season rainfall,
- Rad: Radiation object computes radiation factor defined as the ratio of total shortwave radiation incident on a sloped surface to total shortwave radiation incident on a flat surface. Note: in this example a flat domain is considered. Radiation factor returned will be a cellular field of ones. This component is included because potential evaporanspiration (PET) component receives an input of radiation factor as a field.
- PET_PFT: Plant specific PET objects. PET is upper boundary to ET. For long-term simulations PET is represented using a cosine function as a function of day of year. Parameters of this function were obtained from P-M model application at a weather station. PET is spatially distributed by using the radiation factor.
- SM: Soil Moisture object simulates depth-averaged soil moisture at each cell using inputs of potential evapotranspiration, live leaf area index and vegetation cover.
- VEG: Vegetation dynamics object simulates net primary productivity, biomass and leaf area index (LAI) at each cell based on inputs of root-zone average soil moisture.
- vegca: Cellular Automaton plant competition object is run once every year. This object is initialized with a random cellular field of PFT. Every year, this object updates the cellular field of PFT based on probabilistic establishment and mortality of PFT at each cell.
Note: Almost every component in landlab is coded as a 'class' (to harness the advantages of object-oriented programming). An 'object' is an instantiation of a 'class' (for more information, please refer to any object-oriented programming book). A 'field' refers to a Landlab field (please refer to the Landlab documentation to learn more about Landlab fields).
Now let's instantiate all Landlab components that we are going to use for this tutorial:
End of explanation
import matplotlib.pyplot as plt
import matplotlib as mpl
cmap = mpl.colors.ListedColormap(["green", "red", "black", "white", "red", "black"])
bounds = [-0.5, 0.5, 1.5, 2.5, 3.5, 4.5, 5.5]
norm = mpl.colors.BoundaryNorm(bounds, cmap.N)
description = "green: grass; red: shrub; black: tree; white: bare"
plt.figure(101)
grid1.imshow(
"vegetation__plant_functional_type",
at="cell",
cmap=cmap,
grid_units=("m", "m"),
norm=norm,
limits=[0, 5],
allow_colorbar=False,
)
plt.figtext(0.2, 0.0, description, weight="bold", fontsize=10)
Explanation: Let's look at the initial organization of PFTs
End of explanation
n_years = 600 # Approx number of years for model to run
# Calculate approximate number of storms per year
fraction_wet = (data["doy__end_of_monsoon"] - data["doy__start_of_monsoon"]) / 365.0
fraction_dry = 1 - fraction_wet
no_of_storms_wet = (
8760 * (fraction_wet) / (data["mean_interstorm_wet"] + data["mean_storm_wet"])
)
no_of_storms_dry = (
8760 * (fraction_dry) / (data["mean_interstorm_dry"] + data["mean_storm_dry"])
)
n = int(n_years * (no_of_storms_wet + no_of_storms_dry))
Explanation: Specify an approximate number of years for the model to run. For this example, we will run the simulation for 600 years. It might take less than 2+ minutes to run.
End of explanation
P, Tb, Tr, Time, VegType, PET_, Rad_Factor, EP30, PET_threshold = Empty_arrays(
n, grid, grid1
)
Explanation: Create empty arrays to store spatio-temporal data over multiple iterations. The captured data can be used for plotting model outputs.
End of explanation
Create_PET_lookup(Rad, PET_Tree, PET_Shrub, PET_Grass, PET_, Rad_Factor, EP30, grid)
Explanation: To reduce computational overhead, we shall create a lookup array for plant-specific PET values for each day of the year.
End of explanation
# # Represent current time in years
current_time = 0 # Start from first day of Jan
# Keep track of run time for simulation - optional
Start_time = time.clock() # Recording time taken for simulation
# declaring few variables that will be used in the storm loop
time_check = 0.0 # Buffer to store current_time at previous storm
yrs = 0 # Keep track of number of years passed
WS = 0.0 # Buffer for Water Stress
Tg = 270 # Growing season in days
Explanation: Specify current_time (in years). current_time is the current time in the simulation.
End of explanation
# # Run storm Loop
for i in range(0, n):
# Update objects
# Calculate Day of Year (DOY)
Julian = int(np.floor((current_time - np.floor(current_time)) * 365.0))
# Generate seasonal storms
# for Dry season
if Julian < data["doy__start_of_monsoon"] or Julian > data["doy__end_of_monsoon"]:
PD_D.update()
P[i] = PD_D.storm_depth
Tr[i] = PD_D.storm_duration
Tb[i] = PD_D.interstorm_duration
# Wet Season - Jul to Sep - NA Monsoon
else:
PD_W.update()
P[i] = PD_W.storm_depth
Tr[i] = PD_W.storm_duration
Tb[i] = PD_W.interstorm_duration
# Spatially distribute PET and its 30-day-mean (analogous to degree day)
grid["cell"]["surface__potential_evapotranspiration_rate"] = PET_[Julian]
grid["cell"]["surface__potential_evapotranspiration_30day_mean"] = EP30[Julian]
# Assign spatial rainfall data
grid["cell"]["rainfall__daily_depth"] = P[i] * np.ones(grid.number_of_cells)
# Update soil moisture component
current_time = SM.update(current_time, Tr=Tr[i], Tb=Tb[i])
# Decide whether its growing season or not
if Julian != 364:
if EP30[Julian + 1, 0] > EP30[Julian, 0]:
PET_threshold = 1
# 1 corresponds to ETThresholdup (begin growing season)
else:
PET_threshold = 0
# 0 corresponds to ETThresholddown (end growing season)
# Update vegetation component
VEG.update(PETThreshold_switch=PET_threshold, Tb=Tb[i], Tr=Tr[i])
# Update yearly cumulative water stress data
WS += (grid["cell"]["vegetation__water_stress"]) * Tb[i] / 24.0
# Record time (optional)
Time[i] = current_time
# Update spatial PFTs with Cellular Automata rules
if (current_time - time_check) >= 1.0:
if yrs % 100 == 0:
print("Elapsed time = {time} years".format(time=yrs))
VegType[yrs] = grid1["cell"]["vegetation__plant_functional_type"]
WS_ = np.choose(VegType[yrs], WS)
grid1["cell"]["vegetation__cumulative_water_stress"] = WS_ / Tg
vegca.update()
time_check = current_time
WS = 0
yrs += 1
VegType[yrs] = grid1["cell"]["vegetation__plant_functional_type"]
Explanation: The loop below couples the components introduced above in a for loop until all "n" number of storms are generated. Time is advanced by the soil moisture object based on storm and interstorm durations that are estimated by the storm generator object. The ecohydrologic model is run each storm whereas the cellular automaton vegetation component is run once every year.
Note: This loop might take less than 2 minutes (depending on your computer) to run for a 600 year simulation. Ignore any warnings you might see.
End of explanation
Final_time = time.clock()
Time_Consumed = (Final_time - Start_time) / 60.0 # in minutes
print("Time_consumed = {time} minutes".format(time=Time_Consumed))
Explanation: Time_Consumed is an optional variable that gives information about computer running time
End of explanation
# # Saving
sim = "Sim_26Jul16_"
# Save_(sim, Tb, Tr, P, VegType, yrs, Time_Consumed, Time)
Explanation: Save the outputs using numpy.save(). These files have '.nc' extension, which can be loaded using numpy.load().
End of explanation
Plot_(grid1, VegType, yrs, yr_step=100)
Explanation: Let's look at outputs.
Plots of the cellular field of PFT at specified year step can be found below where:
GRASS = green; SHRUB = red; TREE = black; BARE = white;
At the end, percentage cover of each PFT is plotted with respect to time.
End of explanation |
2,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
Step1: First reload the data we generated in 1_notmnist.ipynb.
Step2: Reformat into a shape that's more adapted to the models we're going to train
Step3: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this
Step4: Let's run this computation and iterate
Step5: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
Step6: Let's run it
Step7: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
from six.moves import range
Explanation: Deep Learning
Assignment 2
Previously in 1_notmnist.ipynb, we created a pickle with formatted datasets for training, development and testing on the notMNIST dataset.
The goal of this assignment is to progressively train deeper and more accurate models using TensorFlow.
End of explanation
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
Explanation: First reload the data we generated in 1_notmnist.ipynb.
End of explanation
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 0 to [1.0, 0.0, 0.0 ...], 1 to [0.0, 1.0, 0.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
print(len(train_dataset[0]))
Explanation: Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
End of explanation
# With gradient descent training, even this much data is prohibitive.
# Subset the training data for faster turnaround.
train_subset = 10000
graph = tf.Graph()
with graph.as_default():
# Input data.
# Load the training, validation and test data into constants that are
# attached to the graph.
tf_train_dataset = tf.constant(train_dataset[:train_subset, :])
tf_train_labels = tf.constant(train_labels[:train_subset])
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
# These are the parameters that we are going to be training. The weight
# matrix will be initialized using random valued following a (truncated)
# normal distribution. The biases get initialized to zero.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
# We multiply the inputs with the weight matrix, and add biases. We compute
# the softmax and cross-entropy (it's one operation in TensorFlow, because
# it's very common, and it can be optimized). We take the average of this
# cross-entropy across all training examples: that's our loss.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
# We are going to find the minimum of this loss using gradient descent.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
# These are not part of training, but merely here so that we can report
# accuracy figures as we train.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: We're first going to train a multinomial logistic regression using simple gradient descent.
TensorFlow works like this:
* First you describe the computation that you want to see performed: what the inputs, the variables, and the operations look like. These get created as nodes over a computation graph. This description is all contained within the block below:
with graph.as_default():
...
Then you can run the operations on this graph as many times as you want by calling session.run(), providing it outputs to fetch from the graph that get returned. This runtime operation is all contained in the block below:
with tf.Session(graph=graph) as session:
...
Let's load all the data into TensorFlow and build the computation graph corresponding to our training:
End of explanation
num_steps = 801
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
with tf.Session(graph=graph) as session:
# This is a one-time operation which ensures the parameters get initialized as
# we described in the graph: random weights for the matrix, zeros for the
# biases.
tf.initialize_all_variables().run()
print('Initialized')
for step in range(num_steps):
# Run the computations. We tell .run() that we want to run the optimizer,
# and get the loss value and the training predictions returned as numpy
# arrays.
_, l, predictions = session.run([optimizer, loss, train_prediction])
if (step % 100 == 0):
print('Loss at step %d: %f' % (step, l))
print('Training accuracy: %.1f%%' % accuracy(
predictions, train_labels[:train_subset, :]))
# Calling .eval() on valid_prediction is basically like calling run(), but
# just to get that one numpy array. Note that it recomputes all its graph
# dependencies.
print('Validation accuracy: %.1f%%\n' % accuracy(
valid_prediction.eval(), valid_labels))
print('Test accuracy: %.1f%%' % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run this computation and iterate:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Variables.
weights = tf.Variable(
tf.truncated_normal([image_size * image_size, num_labels]))
biases = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(tf_train_dataset, weights) + biases
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_prediction = tf.nn.softmax(
tf.matmul(tf_valid_dataset, weights) + biases)
test_prediction = tf.nn.softmax(tf.matmul(tf_test_dataset, weights) + biases)
Explanation: Let's now switch to stochastic gradient descent training instead, which is much faster.
The graph will be similar, except that instead of holding all the training data into a constant node, we create a Placeholder node which will be fed actual data at every call of session.run().
End of explanation
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%\n" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Let's run it:
End of explanation
batch_size = 128
graph = tf.Graph()
with graph.as_default():
# Input data. For the training data, we use a placeholder that will be fed
# at run time with a training minibatch.
tf_train_dataset = tf.placeholder(tf.float32,
shape=(batch_size, image_size * image_size))
tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
tf_valid_dataset = tf.constant(valid_dataset)
tf_test_dataset = tf.constant(test_dataset)
# Hidden layer
hidden_layer_size = 1024
weights_h = tf.Variable(
tf.truncated_normal([image_size * image_size, hidden_layer_size]))
biases_h = tf.Variable(tf.zeros([hidden_layer_size]))
hidden = tf.nn.relu(tf.matmul(tf_train_dataset, weights_h) + biases_h)
# Output layer
weights_o = tf.Variable(
tf.truncated_normal([hidden_layer_size, num_labels]))
biases_o = tf.Variable(tf.zeros([num_labels]))
# Training computation.
logits = tf.matmul(hidden, weights_o) + biases_o
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits, tf_train_labels))
# Optimizer.
optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
# Predictions for the training, validation, and test data.
train_prediction = tf.nn.softmax(logits)
valid_hidden = tf.nn.relu(tf.matmul(tf_valid_dataset, weights_h) + biases_h)
valid_logits = tf.matmul(valid_hidden, weights_o) + biases_o
valid_prediction = tf.nn.softmax(valid_logits)
test_hidden = tf.nn.relu(tf.matmul(tf_test_dataset, weights_h) + biases_h)
test_logits = tf.matmul(test_hidden, weights_o) + biases_o
test_prediction = tf.nn.softmax(test_logits)
num_steps = 3001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print("Initialized")
for step in range(num_steps):
# Pick an offset within the training data, which has been randomized.
# Note: we could use better randomization across epochs.
offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
# Generate a minibatch.
batch_data = train_dataset[offset:(offset + batch_size), :]
batch_labels = train_labels[offset:(offset + batch_size), :]
# Prepare a dictionary telling the session where to feed the minibatch.
# The key of the dictionary is the placeholder node of the graph to be fed,
# and the value is the numpy array to feed to it.
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
_, l, predictions = session.run(
[optimizer, loss, train_prediction], feed_dict=feed_dict)
if (step % 500 == 0):
print("Minibatch loss at step %d: %f" % (step, l))
print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
print("Validation accuracy: %.1f%%\n" % accuracy(
valid_prediction.eval(), valid_labels))
print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
Explanation: Problem
Turn the logistic regression example with SGD into a 1-hidden layer neural network with rectified linear units (nn.relu()) and 1024 hidden nodes. This model should improve your validation / test accuracy.
End of explanation |
2,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute source power spectral density (PSD) in a label
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
Step1: Set parameters
Step2: View PSD of sources in label | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, compute_source_psd
print(__doc__)
Explanation: Compute source power spectral density (PSD) in a label
Returns an STC file containing the PSD (in dB) of each of the sources
within a label.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_label = data_path + '/MEG/sample/labels/Aud-lh.label'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = 0, 120 # use the first 120s of data
fmin, fmax = 4, 100 # look at frequencies between 4 and 100Hz
n_fft = 2048 # the FFT size (n_fft). Ideally a power of 2
label = mne.read_label(fname_label)
stc = compute_source_psd(raw, inverse_operator, lambda2=1. / 9., method="dSPM",
tmin=tmin, tmax=tmax, fmin=fmin, fmax=fmax,
pick_ori="normal", n_fft=n_fft, label=label,
dB=True)
stc.save('psd_dSPM')
Explanation: Set parameters
End of explanation
plt.plot(stc.times, stc.data.T)
plt.xlabel('Frequency (Hz)')
plt.ylabel('PSD (dB)')
plt.title('Source Power Spectrum (PSD)')
plt.show()
Explanation: View PSD of sources in label
End of explanation |
2,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sale price distribution
First step is to look at the target sale price for the training data set, i.e. the column we're trying to predict.
Step1: The sale price is in the hundreds of thousands, so let's divide the price by 1000 to get more manageable numbers.
Step2: The distribution is skewed (as demonstrated by the large z-score (and small p-value) of the skewtest). It is right skewed (the skew is positive). Skewed distributions are not ideal for linear models, which often assume a normal distribution. One way to correct for right-skewness is to take the log [1,2,3]
[1] http
Step3: Merge the training and test datasets for data preparation
We're going to explore the training dataset and apply some transformations to it (fixing missing values, transforming columns etc). We'll need to apply the same transformations to the test dataset. To make that easy, let's put the training and test datasets into one dataframe.
Step4: Features
The dataset is wide with 78 features.
Step5: We've got 3 data types
Step6: Split the data between categorical and numerical features
Step7: Numerical features
Create a numerical dataset to keep track of the features
Step8: We've got 36 numerical features. We can use the describe method to get some statistics
Step9: But that's a lot of numbers to digest. Better get started plotting! To help with plotting, but also to improve linear regression models, we're going to standardize our data. But before that we must deal with the NaN values.
http
Step10: Based on the description, the null values for the MasVnrArea should be 0 (no masonry veneer type)
Step11: For the GarageYrBlt, replace by the year the house was built.
Step12: Standardize the data
Step14: Plot violinplots for each feature
The violin plots give us some idea of the distribution of data for each feature. We can look for things like skewness, non-normality, and the presence of outliers.
Step15: Many of the features are highly skewed with very long tails.
Step16: Most of these are right skewed as well. BsmtFullBath has some discrete values (number of bathrooms).
Step17: Some features, such as BsmtFinSF2, are almost constant (blobs with long tail) as can be seen below
Step18: Drop nearly constant features
Step19: Log transform the other features if they have a high skewness
Using a log transformation for some of the skewed features should help, as illustrated below. We use the raw data (not the standardized one) because we need positive values for the log function (we'll standardize the transformed variables later).
Step20: Check the sign of the skewness for all these
Step22: Let's apply a log1p transform to all these and plot the distributions again
Step23: Now our originally skewed features look more symmetric.
Step24: Save transformed numerical data
Use the storage magic to communicate between notebooks.
Step25: Feature selection
We're now in a good position to identify the key numerical features. Those should be highly correlated with the sale price.
Step26: Let's keep only the features that have a high enough correlation with the price (dropping those whose absolute correlation is less than 0.2) | Python Code:
target = pd.read_csv('../data/train_target.csv')
target.describe()
Explanation: Sale price distribution
First step is to look at the target sale price for the training data set, i.e. the column we're trying to predict.
End of explanation
target = target / 1000
sns.distplot(target);
plt.title('SalePrice')
import scipy as sp
sp.stats.skew(target)
sp.stats.skewtest(target)
Explanation: The sale price is in the hundreds of thousands, so let's divide the price by 1000 to get more manageable numbers.
End of explanation
logtarget = np.log1p(target)
print('skewness of logtarget = ', sp.stats.skew(logtarget)[0])
print('skewness test of logtarget = ', sp.stats.skewtest(logtarget))
sns.distplot(logtarget)
plt.title(r'log(1 + SalePrice)')
Explanation: The distribution is skewed (as demonstrated by the large z-score (and small p-value) of the skewtest). It is right skewed (the skew is positive). Skewed distributions are not ideal for linear models, which often assume a normal distribution. One way to correct for right-skewness is to take the log [1,2,3]
[1] http://fmwww.bc.edu/repec/bocode/t/transint.html
[2] https://www.r-statistics.com/2013/05/log-transformations-for-skewed-and-wide-distributions-from-practical-data-science-with-r/
[3] Alexandru Papiu's notebook https://www.kaggle.com/apapiu/house-prices-advanced-regression-techniques/regularized-linear-models/commentsnotebook
We apply the function $x \rightarrow \log(1 + x)$ because it is always positive for $x \geq 0$
End of explanation
raw_train = pd.read_csv('../data/train_prepared_light.csv')
raw_test = pd.read_csv('../data/test_prepared_light.csv')
df = pd.concat([raw_train, raw_test], keys=['train', 'test'])
df.shape
ncategories = sum(df.dtypes == object)
ncategories
df.head()
df.tail()
Explanation: Merge the training and test datasets for data preparation
We're going to explore the training dataset and apply some transformations to it (fixing missing values, transforming columns etc). We'll need to apply the same transformations to the test dataset. To make that easy, let's put the training and test datasets into one dataframe.
End of explanation
df.columns, len(df.columns)
Explanation: Features
The dataset is wide with 78 features.
End of explanation
df.dtypes.unique()
Explanation: We've got 3 data types: int, float and object
End of explanation
is_categorical = (df.dtypes == object)
is_numerical = ~is_categorical
Explanation: Split the data between categorical and numerical features
End of explanation
dfnum = df.loc[:, is_numerical].copy()
dfnum.columns, len(dfnum.columns)
Explanation: Numerical features
Create a numerical dataset to keep track of the features
End of explanation
dfnum.describe()
Explanation: We've got 36 numerical features. We can use the describe method to get some statistics:
End of explanation
cols_with_nulls = dfnum.columns[dfnum.isnull().sum() > 0]
cols_with_nulls
dfnum.shape
dfnum[cols_with_nulls].isnull().sum().sort_values(ascending=False)
#.plot(kind='bar')
Explanation: But that's a lot of numbers to digest. Better get started plotting! To help with plotting, but also to improve linear regression models, we're going to standardize our data. But before that we must deal with the NaN values.
http://sebastianraschka.com/Articles/2014_about_feature_scaling.html
Deal with NaN values
End of explanation
# We may want to refine this in the future. Perhaps build a model to predict the missing GarageCars from the other features?
median_list = 'LotFrontage', 'BsmtFullBath','BsmtHalfBath', 'GarageCars', 'GarageArea'
zero_list = 'MasVnrArea', 'BsmtFinSF1', 'BsmtFinSF2', 'TotalBsmtSF', 'BsmtUnfSF'
for feature in median_list:
dfnum[feature].fillna(dfnum[feature].median(), inplace=True)
for feature in zero_list:
dfnum[feature].fillna(0, inplace=True)
Explanation: Based on the description, the null values for the MasVnrArea should be 0 (no masonry veneer type)
End of explanation
dfnum.GarageYrBlt.fillna(dfnum.YearBuilt[dfnum.GarageYrBlt.isnull()], inplace=True)
# Check that we got rid of the nulls
dfnum.isnull().sum().any()
# Assign to the slice (see the copy / write problem in Pandas)
df.loc[:, is_numerical] = dfnum
Explanation: For the GarageYrBlt, replace by the year the house was built.
End of explanation
def standardize(df):
return sk.preprocessing.StandardScaler().fit_transform(df)
dfnum_t = dfnum.apply(standardize)
dfnum_t.head()
Explanation: Standardize the data
End of explanation
def violinplot(df, ax=None):
if ax is None:
ax = plt.gca()
sns.violinplot(df, ax=ax)
for xlab in ax.get_xticklabels():
xlab.set_rotation(30)
def featureplot(df, nrows=1, figsize=(12,8), plotfunc=violinplot):
    """Plot the dataframe features."""
width, height = figsize
fig, axes = plt.subplots(nrows, 1, figsize=(width, height * nrows));
i = 0
plots_per_figure = df.shape[1] // nrows
if nrows == 1:
axes = [axes]
for j, ax in zip(range(plots_per_figure, df.shape[1] + 1, plots_per_figure), axes):
plotfunc(df.iloc[:, i:j], ax=ax)
i = j
dfnum_t.head()
train = dfnum.loc['train',:]
train_t = dfnum_t.loc['train',:]
Explanation: Plot violinplots for each feature
The violin plots give us some idea of the distribution of data for each feature. We can look for things like skewness, non-normality, and the presence of outliers.
End of explanation
featureplot(train_t.iloc[:, 0:9])
Explanation: Many of the features are highly skewed with very long tails.
End of explanation
featureplot(train_t.iloc[:, 9:18])
Explanation: Most of these are right skewed as well. BsmtFullBath has some discrete values (number of bathrooms).
End of explanation
fig, ax = plt.subplots(1,1, figsize=(4, 4))
sns.distplot(train_t['BsmtFinSF2'], ax=ax)
ax.set_title('Distribution of BsmtFinSF2')
Explanation: Some features, such as BsmtFinSF2, are almost constant (blobs with long tail) as can be seen below
End of explanation
def test_nearly_constant(series):
counts = series.value_counts()
max_val_count = max(counts)
other_val_count = counts.drop(counts.argmax()).sum()
return other_val_count / max_val_count < 0.25
is_nearly_constant = train_t.apply(test_nearly_constant)
is_nearly_constant.value_counts()
dropme = train_t.columns[is_nearly_constant]
dropme
df = df.drop(dropme, axis=1)
train = train.drop(dropme, axis=1)
train_t = train_t.drop(dropme, axis=1)
Explanation: Drop nearly constant features
End of explanation
fig, axes = plt.subplots(1,2, figsize=(8, 4))
sns.distplot(train['LotArea'], ax=axes[0])
sns.distplot(np.log1p(train['LotArea']), ax=axes[1])
zfactors = sp.stats.skewtest(train)[0]
sns.distplot(zfactors)
is_skewed = np.abs(zfactors) > 10
pd.Series(data=zfactors, index=train.columns)[is_skewed].sort_values().plot(kind='barh')
plt.title('Z-factor for skewtest')
Explanation: Log transform the other features if they have a high skewness
Using a log transformation for some of the skewed features should help, as illustrated below. We use the raw data (not the standardized one) because we need positive values for the log function (we'll standardize the transformed variables later).
End of explanation
assert all(np.sign(sp.stats.skew(train)[is_skewed]) > 0)
Explanation: Check the sign of the skewness for all these
End of explanation
def transform_skewed_colums(dfnum, is_skewed=is_skewed):
    """
    dfnum: dataframe to transform
    is_skewed: iterable of length dfnum.columns indicating if a column is skewed
    """
dfnum2 = dfnum.copy()
for feature, skewed_feature in zip(dfnum.columns, is_skewed):
if skewed_feature:
dfnum2[feature] = np.log1p(dfnum[feature])
dfnum_t2 = dfnum2.apply(standardize)
return dfnum_t2
# the transformed dataset has fewer columns and we only want those
dfnum_t2 = transform_skewed_colums(df.loc[:, is_numerical])
dfnum_t2.iloc[:, is_skewed].columns
zfactors2 = sp.stats.skewtest(dfnum_t2)[0]
pd.Series(data=zfactors2, index=dfnum_t2.columns)[is_skewed].sort_values().plot(kind='barh')
Explanation: Let's apply a log1p transform to all these and plot the distributions again
End of explanation
featureplot(dfnum_t2.iloc[:, is_skewed], nrows=2, figsize=(10,5))
featureplot(dfnum_t2.iloc[:, ~is_skewed], nrows=2, figsize=(10, 5))
Explanation: Now our originally skewed features look more symmetric.
End of explanation
dfnum_t2.index.names = ['Dataset', 'Id']
dfnum_t2.head()
dfnum_t2.to_csv('transformed_dataset_dfnum_t2.csv', index=True)
nfeatures = dfnum_t2.columns
target_t = logtarget.apply(standardize)
target_t.head()
Explanation: Save transformed numerical data
Use the storage magic to communicate between notebooks.
End of explanation
dfnum_t2.head()
corr = pd.DataFrame(data=dfnum_t2.loc['train',:].apply(lambda feature: sp.stats.pearsonr(feature, target_t['SalePrice'])),
columns=['pearsonr'])
corr = corr.assign(correlation=corr.applymap(lambda x: x[0]),
pvalue=corr.applymap(lambda x: x[1]))
corr = corr.drop('pearsonr', axis=1)
corr.head()
corr.sort_values('pvalue', ascending=False)['correlation'].plot(kind='barh')
corr.sort_values('pvalue').head()
corr.sort_values('pvalue').tail()
Explanation: Feature selection
We're now in a good position to identify the key numerical features. Those should be highly correlated with the sale price.
End of explanation
min_correlation = 0.2
key_features = corr[np.abs(corr['correlation']) > min_correlation].sort_values(by='correlation', ascending=False).index.values
key_features, key_features.size
%store key_features
Explanation: Let's keep only the features that have a high enough correlation with the price (dropping those whose absolute correlation is less than 0.2)
End of explanation |
2,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Optimization
Bayesian optimization is a powerful strategy for minimizing (or maximizing) objective functions that are costly to evaluate. It is an important component of automated machine learning toolboxes such as auto-sklearn, auto-weka, and scikit-optimize, where Bayesian optimization is used to select model hyperparameters. Bayesian optimization is used for a wide range of other applications as well; as cataloged in the review [2], these include interactive user-interfaces, robotics, environmental monitoring, information extraction, combinatorial optimization, sensor networks, adaptive Monte Carlo, experimental design, and reinforcement learning.
Problem Setup
We are given a minimization problem
$$ x^* = \text{arg}\min \ f(x), $$
where $f$ is a fixed objective function that we can evaluate pointwise.
Here we assume that we do not have access to the gradient of $f$. We also
allow for the possibility that evaluations of $f$ are noisy.
To solve the minimization problem, we will construct a sequence of points $\{x_n\}$ that converge to $x^*$. Since we implicitly assume that we have a fixed budget (say 100 evaluations), we do not expect to find the exact minimum $x^*$
Step1: Define an objective function
For the purposes of demonstration, the objective function we are going to consider is the Forrester et al. (2008) function
Step2: Let's begin by plotting $f$.
Step3: Setting a Gaussian Process prior
Gaussian processes are a popular choice for a function priors due to their power and flexibility. The core of a Gaussian Process is its covariance function $k$, which governs the similarity of $f(x)$ for pairs of input points. Here we will use a Gaussian Process as our prior for the objective function $f$. Given inputs $X$ and the corresponding noisy observations $y$, the model takes the form
$$f\sim\mathrm{MultivariateNormal}(0,k(X,X)),$$
$$y\sim f+\epsilon,$$
where $\epsilon$ is i.i.d. Gaussian noise and $k(X,X)$ is a covariance matrix whose entries are given by $k(x,x^\prime)$ for each pair of inputs $(x,x^\prime)$.
We choose the Matern kernel with $\nu = \frac{5}{2}$ (as suggested in reference [1]). Note that the popular RBF kernel, which is used in many regression tasks, results in a function prior whose samples are infinitely differentiable; this is probably an unrealistic assumption for most 'black-box' objective functions.
Step4: The following helper function update_posterior will take care of updating our gpmodel each time we evaluate $f$ at a new value $x$.
Step5: Define an acquisition function
There are many reasonable options for the acquisition function (see references [1] and [2] for a list of popular choices and a discussion of their properties). Here we will use one that is 'simple to implement and interpret,' namely the 'Lower Confidence Bound' acquisition function.
It is given by
$$
\alpha(x) = \mu(x) - \kappa \sigma(x)
$$
where $\mu(x)$ and $\sigma(x)$ are the mean and square root variance of the posterior at the point $x$, and the arbitrary constant $\kappa>0$ controls the trade-off between exploitation and exploration. This acquisition function will be minimized for choices of $x$ where either
Step6: The final component we need is a way to find (approximate) minimizing points $x_{\rm min}$ of the acquisition function. There are several ways to proceed, including gradient-based and non-gradient-based techniques. Here we will follow the gradient-based approach. One of the possible drawbacks of gradient descent methods is that the minimization algorithm can get stuck at a local minimum. In this tutorial, we adopt a (very) simple approach to address this issue
Step7: The inner loop of Bayesian Optimization
With the various helper functions defined above, we can now encapsulate the main logic of a single step of Bayesian Optimization in the function next_x
Step8: Running the algorithm
To illustrate how Bayesian Optimization works, we make a convenient plotting function that will help us visualize our algorithm's progress.
Step9: Our surrogate model gpmodel already has 4 function evaluations at its disposal; however, we have yet to optimize the GP hyperparameters. So we do that first. Then in a loop we call the next_x and update_posterior functions repeatedly. The following plot illustrates how Gaussian Process posteriors and the corresponding acquisition functions change at each step in the algorithm. Note how query points are chosen both for exploration and exploitation. | Python Code:
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import torch
import torch.autograd as autograd
import torch.optim as optim
from torch.distributions import constraints, transform_to
import pyro
import pyro.contrib.gp as gp
assert pyro.__version__.startswith('1.7.0')
pyro.set_rng_seed(1)
Explanation: Bayesian Optimization
Bayesian optimization is a powerful strategy for minimizing (or maximizing) objective functions that are costly to evaluate. It is an important component of automated machine learning toolboxes such as auto-sklearn, auto-weka, and scikit-optimize, where Bayesian optimization is used to select model hyperparameters. Bayesian optimization is used for a wide range of other applications as well; as cataloged in the review [2], these include interactive user-interfaces, robotics, environmental monitoring, information extraction, combinatorial optimization, sensor networks, adaptive Monte Carlo, experimental design, and reinforcement learning.
Problem Setup
We are given a minimization problem
$$ x^* = \text{arg}\min \ f(x), $$
where $f$ is a fixed objective function that we can evaluate pointwise.
Here we assume that we do not have access to the gradient of $f$. We also
allow for the possibility that evaluations of $f$ are noisy.
To solve the minimization problem, we will construct a sequence of points $\{x_n\}$ that converge to $x^*$. Since we implicitly assume that we have a fixed budget (say 100 evaluations), we do not expect to find the exact minimum $x^*$: the goal is to get the best approximate solution we can given the allocated budget.
The Bayesian optimization strategy works as follows:
Place a prior on the objective function $f$. Each time we evaluate $f$ at a new point $x_n$, we update our model for $f(x)$. This model serves as a surrogate objective function and reflects our beliefs about $f$ (in particular it reflects our beliefs about where we expect $f(x)$ to be close to $f(x^*)$). Since we are being Bayesian, our beliefs are encoded in a posterior that allows us to systematically reason about the uncertainty of our model predictions.
Use the posterior to derive an "acquisition" function $\alpha(x)$ that is easy to evaluate and differentiate (so that optimizing $\alpha(x)$ is easy). In contrast to $f(x)$, we will generally evaluate $\alpha(x)$ at many points $x$, since doing so will be cheap.
Repeat until convergence:
Use the acquisition function to derive the next query point according to
$$ x_{n+1} = \text{arg}\min \ \alpha(x). $$
Evaluate $f(x_{n+1})$ and update the posterior.
A good acquisition function should make use of the uncertainty encoded in the posterior to encourage a balance between exploration—querying points where we know little about $f$—and exploitation—querying points in regions we have good reason to think $x^*$ may lie. As the iterative procedure progresses our model for $f$ evolves and so does the acquisition function. If our model is good and we've chosen a reasonable acquisition function, we expect that the acquisition function will guide the query points $x_n$ towards $x^*$.
In this tutorial, our model for $f$ will be a Gaussian process. In particular we will see how to use the Gaussian Process module in Pyro to implement a simple Bayesian optimization procedure.
End of explanation
def f(x):
return (6 * x - 2)**2 * torch.sin(12 * x - 4)
Explanation: Define an objective function
For the purposes of demonstration, the objective function we are going to consider is the Forrester et al. (2008) function:
$$f(x) = (6x-2)^2 \sin(12x-4), \quad x\in [0, 1].$$
This function has both a local minimum and a global minimum. The global minimum is at $x^* = 0.75725$.
End of explanation
x = torch.linspace(0, 1)
plt.figure(figsize=(8, 4))
plt.plot(x.numpy(), f(x).numpy())
plt.show()
Explanation: Let's begin by plotting $f$.
End of explanation
# initialize the model with four input points: 0.0, 0.33, 0.66, 1.0
X = torch.tensor([0.0, 0.33, 0.66, 1.0])
y = f(X)
gpmodel = gp.models.GPRegression(X, y, gp.kernels.Matern52(input_dim=1),
noise=torch.tensor(0.1), jitter=1.0e-4)
Explanation: Setting a Gaussian Process prior
Gaussian processes are a popular choice for a function priors due to their power and flexibility. The core of a Gaussian Process is its covariance function $k$, which governs the similarity of $f(x)$ for pairs of input points. Here we will use a Gaussian Process as our prior for the objective function $f$. Given inputs $X$ and the corresponding noisy observations $y$, the model takes the form
$$f\sim\mathrm{MultivariateNormal}(0,k(X,X)),$$
$$y\sim f+\epsilon,$$
where $\epsilon$ is i.i.d. Gaussian noise and $k(X,X)$ is a covariance matrix whose entries are given by $k(x,x^\prime)$ for each pair of inputs $(x,x^\prime)$.
We choose the Matern kernel with $\nu = \frac{5}{2}$ (as suggested in reference [1]). Note that the popular RBF kernel, which is used in many regression tasks, results in a function prior whose samples are infinitely differentiable; this is probably an unrealistic assumption for most 'black-box' objective functions.
End of explanation
def update_posterior(x_new):
y = f(x_new) # evaluate f at new point.
X = torch.cat([gpmodel.X, x_new]) # incorporate new evaluation
y = torch.cat([gpmodel.y, y])
gpmodel.set_data(X, y)
# optimize the GP hyperparameters using Adam with lr=0.001
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
Explanation: The following helper function update_posterior will take care of updating our gpmodel each time we evaluate $f$ at a new value $x$.
End of explanation
def lower_confidence_bound(x, kappa=2):
mu, variance = gpmodel(x, full_cov=False, noiseless=False)
sigma = variance.sqrt()
return mu - kappa * sigma
Explanation: Define an acquisition function
There are many reasonable options for the acquisition function (see references [1] and [2] for a list of popular choices and a discussion of their properties). Here we will use one that is 'simple to implement and interpret,' namely the 'Lower Confidence Bound' acquisition function.
It is given by
$$
\alpha(x) = \mu(x) - \kappa \sigma(x)
$$
where $\mu(x)$ and $\sigma(x)$ are the mean and square root variance of the posterior at the point $x$, and the arbitrary constant $\kappa>0$ controls the trade-off between exploitation and exploration. This acquisition function will be minimized for choices of $x$ where either: i) $\mu(x)$ is small (exploitation); or ii) where $\sigma(x)$ is large (exploration). A large value of $\kappa$ means that we place more weight on exploration because we prefer candidates $x$ in areas of high uncertainty. A small value of $\kappa$ encourages exploitation because we prefer candidates $x$ that minimize $\mu(x)$, which is the mean of our surrogate objective function. We will use $\kappa=2$.
End of explanation
def find_a_candidate(x_init, lower_bound=0, upper_bound=1):
# transform x to an unconstrained domain
constraint = constraints.interval(lower_bound, upper_bound)
unconstrained_x_init = transform_to(constraint).inv(x_init)
unconstrained_x = unconstrained_x_init.clone().detach().requires_grad_(True)
minimizer = optim.LBFGS([unconstrained_x], line_search_fn='strong_wolfe')
def closure():
minimizer.zero_grad()
x = transform_to(constraint)(unconstrained_x)
y = lower_confidence_bound(x)
autograd.backward(unconstrained_x, autograd.grad(y, unconstrained_x))
return y
minimizer.step(closure)
# after finding a candidate in the unconstrained domain,
# convert it back to original domain.
x = transform_to(constraint)(unconstrained_x)
return x.detach()
Explanation: The final component we need is a way to find (approximate) minimizing points $x_{\rm min}$ of the acquisition function. There are several ways to proceed, including gradient-based and non-gradient-based techniques. Here we will follow the gradient-based approach. One of the possible drawbacks of gradient descent methods is that the minimization algorithm can get stuck at a local minimum. In this tutorial, we adopt a (very) simple approach to address this issue:
First, we seed our minimization algorithm with 5 different values: i) one is chosen to be $x_{n-1}$, i.e. the candidate $x$ used in the previous step; and ii) four are chosen uniformly at random from the domain of the objective function.
We then run the minimization algorithm to approximate convergence for each seed value.
Finally, from the five candidate $x$s identified by the minimization algorithm, we select the one that minimizes the acquisition function.
Please refer to reference [2] for a more detailed discussion of this problem in Bayesian Optimization.
End of explanation
def next_x(lower_bound=0, upper_bound=1, num_candidates=5):
candidates = []
values = []
x_init = gpmodel.X[-1:]
for i in range(num_candidates):
x = find_a_candidate(x_init, lower_bound, upper_bound)
y = lower_confidence_bound(x)
candidates.append(x)
values.append(y)
x_init = x.new_empty(1).uniform_(lower_bound, upper_bound)
argmin = torch.min(torch.cat(values), dim=0)[1].item()
return candidates[argmin]
Explanation: The inner loop of Bayesian Optimization
With the various helper functions defined above, we can now encapsulate the main logic of a single step of Bayesian Optimization in the function next_x:
End of explanation
def plot(gs, xmin, xlabel=None, with_title=True):
xlabel = "xmin" if xlabel is None else "x{}".format(xlabel)
Xnew = torch.linspace(-0.1, 1.1)
ax1 = plt.subplot(gs[0])
ax1.plot(gpmodel.X.numpy(), gpmodel.y.numpy(), "kx") # plot all observed data
with torch.no_grad():
loc, var = gpmodel(Xnew, full_cov=False, noiseless=False)
sd = var.sqrt()
ax1.plot(Xnew.numpy(), loc.numpy(), "r", lw=2) # plot predictive mean
ax1.fill_between(Xnew.numpy(), loc.numpy() - 2*sd.numpy(), loc.numpy() + 2*sd.numpy(),
color="C0", alpha=0.3) # plot uncertainty intervals
ax1.set_xlim(-0.1, 1.1)
ax1.set_title("Find {}".format(xlabel))
if with_title:
ax1.set_ylabel("Gaussian Process Regression")
ax2 = plt.subplot(gs[1])
with torch.no_grad():
# plot the acquisition function
ax2.plot(Xnew.numpy(), lower_confidence_bound(Xnew).numpy())
# plot the new candidate point
ax2.plot(xmin.numpy(), lower_confidence_bound(xmin).numpy(), "^", markersize=10,
label="{} = {:.5f}".format(xlabel, xmin.item()))
ax2.set_xlim(-0.1, 1.1)
if with_title:
ax2.set_ylabel("Acquisition Function")
ax2.legend(loc=1)
Explanation: Running the algorithm
To illustrate how Bayesian Optimization works, we make a convenient plotting function that will help us visualize our algorithm's progress.
End of explanation
plt.figure(figsize=(12, 30))
outer_gs = gridspec.GridSpec(5, 2)
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
for i in range(8):
xmin = next_x()
gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer_gs[i])
plot(gs, xmin, xlabel=i+1, with_title=(i % 2 == 0))
update_posterior(xmin)
plt.show()
Explanation: Our surrogate model gpmodel already has 4 function evaluations at its disposal; however, we have yet to optimize the GP hyperparameters. So we do that first. Then in a loop we call the next_x and update_posterior functions repeatedly. The following plot illustrates how the Gaussian Process posteriors and the corresponding acquisition functions change at each step of the algorithm. Note how query points are chosen both for exploration and for exploitation.
End of explanation |
2,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to pybids
pybids is a tool to query, summarize and manipulate data using the BIDS standard.
In this tutorial we will use a pybids test dataset to illustrate some of the functionality of pybids.layout
Step1: The BIDSLayout
At the core of pybids is the BIDSLayout object. A BIDSLayout is a lightweight Python class that represents a BIDS project file tree and provides a variety of helpful methods for querying and manipulating BIDS files. While the BIDSLayout initializer has a large number of arguments you can use to control the way files are indexed and accessed, you will most commonly initialize a BIDSLayout by passing in the BIDS dataset root location as a single argument
Step2: Querying the BIDSLayout
When we initialize a BIDSLayout, all of the files and metadata found under the specified root folder are indexed. This can take a few seconds (or, for very large datasets, a minute or two). Once initialization is complete, we can start querying the BIDSLayout in various ways. The workhorse method is .get(). If we call .get() with no additional arguments, we get back a list of all the BIDS files in our dataset
Step3: The returned object is a Python list. By default, each element in the list is a BIDSFile object. We discuss the BIDSFile object in much more detail below. For now, let's simplify things and work with just filenames
Step4: This time, we get back only the names of the files.
Filtering files by entities
The utility of the BIDSLayout would be pretty limited if all we could do was retrieve a list of all files in the dataset. Fortunately, the .get() method accepts all kinds of arguments that allow us to filter the result set based on specified criteria. In fact, we can pass any BIDS-defined keywords (or, as they're called in PyBIDS, entities) as constraints. For example, here's how we would retrieve all BOLD runs with .nii.gz extensions for subject '01'
Step5: If you're wondering what entities you can pass in as filtering arguments, the answer is contained in the .json configuration files housed here. To save you the trouble, here are a few of the most common entities
Step6: Notice that we passed a list in for subject rather than just a string. This principle applies to all filters
Step7: If our target is a BIDS entity that corresponds to a particular directory in the BIDS spec (e.g., subject or session) we can also use return_type='dir' to get all matching subdirectories
Step8: Other get() options
The .get() method has a number of other useful arguments that control its behavior. We won't discuss these in detail here, but briefly, here are a couple worth knowing about
Step9: Here are some of the attributes and methods available to us in a BIDSFile (note that some of these are only available for certain subclasses of BIDSFile; e.g., you can't call get_image() on a BIDSFile that doesn't correspond to an image file!)
Step10: Here are all the files associated with our target file in some way. Notice how we get back both the JSON sidecar for our target file, and the BOLD run that our target file contains physiological recordings for.
Step11: In cases where a file has a .tsv.gz or .tsv extension, it will automatically be created as a BIDSDataFile, and we can easily grab the contents as a pandas DataFrame
Step12: While it would have been easy enough to read the contents of the file ourselves with pandas' read_csv() method, notice that in the above example, get_df() saved us the trouble of having to read the physiological recording file's metadata, pull out the column names and sampling rate, and add timing information.
Mind you, if we don't want the timing information, we can ignore it
Step13: Other utilities
Filename parsing
Say you have a filename, and you want to manually extract BIDS entities from it. The parse_file_entities method provides the facility
Step14: A version of this utility independent of a specific layout is available at bids.layout (doc) -
Step15: Path construction
You may want to create valid BIDS filenames for files that are new or hypothetical that would sit within your BIDS project. This is useful when you know what entity values you need to write out to, but don't want to deal with looking up the precise BIDS file-naming syntax. In the example below, imagine we've created a new file containing stimulus presentation information, and we want to save it to a .tsv.gz file, per the BIDS naming conventions. All we need to do is define a dictionary with the name components, and build_path takes care of the rest (including injecting sub-directories!)
Step16: You can also use build_path in more sophisticated ways—for example, by defining your own set of matching templates that cover cases not supported by BIDS out of the box. For example, suppose you want to create a template for naming a new z-stat file. You could do something like
Step17: Note that in the above example, we set validate=False to ensure that the standard BIDS file validator doesn't run (because the pattern we defined isn't actually compliant with the BIDS specification).
Loading derivatives
By default, BIDSLayout objects are initialized without scanning contained derivatives/ directories. But you can easily ensure that all derivatives files are loaded and endowed with the extra structure specified in the derivatives config file
Step18: The scope argument to get() specifies which part of the project to look in. By default, valid values are 'bids' (for the "raw" BIDS project that excludes derivatives) and 'derivatives' (for all BIDS-derivatives files). You can also pass the names of individual derivatives pipelines (e.g., passing 'fmriprep' would search only in a /derivatives/fmriprep folder). Either a string or a list of strings can be passed.
The following call returns the filenames of all derivatives files.
Step19: Exporting a BIDSLayout to a pandas Dataframe
If you want a summary of all the files in your BIDSLayout, but don't want to have to iterate BIDSFile objects and extract their entities, you can get a nice bird's-eye view of your dataset using the to_df() method.
Step20: We can also include metadata in the result if we like (which may blow up our DataFrame if we have a large dataset). Note that in this case, most of our cells will have missing values.
Step21: Retrieving BIDS variables
BIDS variables are stored in .tsv files at the run, session, subject, or dataset level. You can retrieve these variables with layout.get_collections(). The resulting objects can be converted to dataframes and merged with the layout to associate the variables with corresponding scans.
In the following example, we request all subject-level variable data available anywhere in the BIDS project, and merge the results into a single DataFrame (by default, we'll get back a single BIDSVariableCollection object for each subject).
Step22: BIDSValidator
pybids implicitly imports a BIDSValidator class from the separate bids-validator package. You can use the BIDSValidator to determine whether a filepath is a valid BIDS filepath, as well as answering questions about what kind of data it represents. Note, however, that this implementation of the BIDS validator is not necessarily up-to-date with the JavaScript version available online. Moreover, the Python validator only tests individual files, and is currently unable to validate entire BIDS datasets. For that, you should use the online BIDS validator. | Python Code:
from bids import BIDSLayout
from bids.tests import get_test_data_path
import os
Explanation: Introduction to pybids
pybids is a tool to query, summarize and manipulate data using the BIDS standard.
In this tutorial we will use a pybids test dataset to illustrate some of the functionality of pybids.layout
End of explanation
# Here we're using an example BIDS dataset that's bundled with the pybids tests
data_path = os.path.join(get_test_data_path(), '7t_trt')
# Initialize the layout
layout = BIDSLayout(data_path)
# Print some basic information about the layout
layout
Explanation: The BIDSLayout
At the core of pybids is the BIDSLayout object. A BIDSLayout is a lightweight Python class that represents a BIDS project file tree and provides a variety of helpful methods for querying and manipulating BIDS files. While the BIDSLayout initializer has a large number of arguments you can use to control the way files are indexed and accessed, you will most commonly initialize a BIDSLayout by passing in the BIDS dataset root location as a single argument:
End of explanation
all_files = layout.get()
print("There are {} files in the layout.".format(len(all_files)))
print("\nThe first 10 files are:")
all_files[:10]
Explanation: Querying the BIDSLayout
When we initialize a BIDSLayout, all of the files and metadata found under the specified root folder are indexed. This can take a few seconds (or, for very large datasets, a minute or two). Once initialization is complete, we can start querying the BIDSLayout in various ways. The workhorse method is .get(). If we call .get() with no additional arguments, we get back a list of all the BIDS files in our dataset:
End of explanation
layout.get(return_type='filename')[:10]
Explanation: The returned object is a Python list. By default, each element in the list is a BIDSFile object. We discuss the BIDSFile object in much more detail below. For now, let's simplify things and work with just filenames:
End of explanation
# Retrieve filenames of all BOLD runs for subject 01
layout.get(subject='01', extension='nii.gz', suffix='bold', return_type='filename')
Explanation: This time, we get back only the names of the files.
Filtering files by entities
The utility of the BIDSLayout would be pretty limited if all we could do was retrieve a list of all files in the dataset. Fortunately, the .get() method accepts all kinds of arguments that allow us to filter the result set based on specified criteria. In fact, we can pass any BIDS-defined keywords (or, as they're called in PyBIDS, entities) as constraints. For example, here's how we would retrieve all BOLD runs with .nii.gz extensions for subject '01':
End of explanation
# Retrieve all files where SamplingFrequency (a metadata key) = 100
# and acquisition = prefrontal, for the first two subjects
layout.get(subject=['01', '02'], SamplingFrequency=100, acquisition="prefrontal")
Explanation: If you're wondering what entities you can pass in as filtering arguments, the answer is contained in the .json configuration files housed here. To save you the trouble, here are a few of the most common entities:
suffix: The part of a BIDS filename just before the extension (e.g., 'bold', 'events', 'physio', etc.).
subject: The subject label
session: The session label
run: The run index
task: The task name
New entities are continually being defined as the spec grows, and in principle (though not always in practice), PyBIDS should be aware of all entities that are defined in the BIDS specification.
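If you'd rather ask the layout itself which entities it knows about, a quick sketch along these lines should work (get_entities is the method name in recent pybids releases; treat this as illustrative rather than canonical):
# List the entity names pybids has indexed for this dataset
print(sorted(layout.get_entities().keys()))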
Filtering by metadata
All of the entities listed above are found in the names of BIDS files. But sometimes we want to search for files based not just on their names, but also based on metadata defined (per the BIDS spec) in JSON files. Fortunately for us, when we initialize a BIDSLayout, all metadata files associated with BIDS files are automatically indexed. This means we can pass any key that occurs in any JSON file in our project as an argument to .get(). We can combine these with any number of core BIDS entities (like subject, run, etc.).
For example, say we want to retrieve all files where (a) the value of SamplingFrequency (a metadata key) is 100, (b) the acquisition type is 'prefrontal', and (c) the subject is '01' or '02'. Here's how we can do that:
End of explanation
# Ask get() to return the ids of subjects that have T1w files
layout.get(return_type='id', target='subject', suffix='T1w')
Explanation: Notice that we passed a list in for subject rather than just a string. This principle applies to all filters: you can always pass in a list instead of a single value, and this will be interpreted as a logical disjunction (i.e., a file must match any one of the provided values).
Other return_type values
While we'll typically want to work with either BIDSFile objects or filenames, we can also ask get() to return unique values (or ids) of particular entities. For example, say we want to know which subjects have at least one T1w file. We can request that information by setting return_type='id'. When using this option, we also need to specify a target entity (or metadata keyword) called target. This combination tells the BIDSLayout to return the unique values for the specified target entity. For example, in the next example, we ask for all of the unique subject IDs that have at least one file with a T1w suffix:
End of explanation
layout.get(return_type='dir', target='subject')
Explanation: If our target is a BIDS entity that corresponds to a particular directory in the BIDS spec (e.g., subject or session) we can also use return_type='dir' to get all matching subdirectories:
End of explanation
# Pick the 15th file in the dataset
bf = layout.get()[15]
# Print it
bf
Explanation: Other get() options
The .get() method has a number of other useful arguments that control its behavior. We won't discuss these in detail here, but briefly, here are a couple worth knowing about:
* regex_search: If you set this to True, string filter argument values will be interpreted as regular expressions.
* scope: If your BIDS dataset contains BIDS-derivatives sub-datasets, you can specify the scope (e.g., derivatives, or a BIDS-Derivatives pipeline name) of the search space.
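For instance, a small sketch combining the two (the regular expression here is illustrative; it simply happens to match the 'bold' suffix in this dataset):
# Treat the suffix filter as a regular expression and search only the raw BIDS files
layout.get(suffix='bo.*', regex_search=True, scope='bids', return_type='filename')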
The BIDSFile
When you call .get() on a BIDSLayout, the default returned values are objects of class BIDSFile. A BIDSFile is a lightweight container for individual files in a BIDS dataset. It provides easy access to a variety of useful attributes and methods. Let's take a closer look. First, let's pick a random file from our existing layout.
End of explanation
# Print all the entities associated with this file, and their values
bf.get_entities()
# Print all the metadata associated with this file
bf.get_metadata()
# We can the union of both of the above in one shot like this
bf.get_entities(metadata='all')
Explanation: Here are some of the attributes and methods available to us in a BIDSFile (note that some of these are only available for certain subclasses of BIDSFile; e.g., you can't call get_image() on a BIDSFile that doesn't correspond to an image file!):
* .path: The full path of the associated file
* .filename: The associated file's filename (without directory)
* .dirname: The directory containing the file
* .get_entities(): Returns information about entities associated with this BIDSFile (optionally including metadata)
* .get_image(): Returns the file contents as a nibabel image (only works for image files)
* .get_df(): Get file contents as a pandas DataFrame (only works for TSV files)
* .get_metadata(): Returns a dictionary of all metadata found in associated JSON files
* .get_associations(): Returns a list of all files associated with this one in some way
Let's see some of these in action.
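The plain path attributes can also be inspected directly (a tiny sketch reusing the same bf object from above):
print(bf.path)      # full path of the file
print(bf.filename)  # file name only
print(bf.dirname)   # containing directory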
End of explanation
bf.get_associations()
Explanation: Here are all the files associated with our target file in some way. Notice how we get back both the JSON sidecar for our target file, and the BOLD run that our target file contains physiological recordings for.
End of explanation
# Use a different test dataset--one that contains physio recording files
data_path = os.path.join(get_test_data_path(), 'synthetic')
layout2 = BIDSLayout(data_path)
# Get the first physiological recording file
recfile = layout2.get(suffix='physio')[0]
# Get contents as a DataFrame and show the first few rows
df = recfile.get_df()
df.head()
Explanation: In cases where a file has a .tsv.gz or .tsv extension, it will automatically be created as a BIDSDataFile, and we can easily grab the contents as a pandas DataFrame:
End of explanation
recfile.get_df(include_timing=False).head()
Explanation: While it would have been easy enough to read the contents of the file ourselves with pandas' read_csv() method, notice that in the above example, get_df() saved us the trouble of having to read the physiological recording file's metadata, pull out the column names and sampling rate, and add timing information.
Mind you, if we don't want the timing information, we can ignore it:
End of explanation
path = "/a/fake/path/to/a/BIDS/file/sub-01_run-1_T2w.nii.gz"
layout.parse_file_entities(path)
Explanation: Other utilities
Filename parsing
Say you have a filename, and you want to manually extract BIDS entities from it. The parse_file_entities method provides the facility:
End of explanation
from bids.layout import parse_file_entities
path = "/a/fake/path/to/a/BIDS/file/sub-01_run-1_T2w.nii.gz"
parse_file_entities(path)
Explanation: A version of this utility independent of a specific layout is available at bids.layout (doc) -
End of explanation
entities = {
'subject': '01',
'run': 2,
'task': 'nback',
'suffix': 'bold'
}
layout.build_path(entities)
Explanation: Path construction
You may want to create valid BIDS filenames for files that are new or hypothetical that would sit within your BIDS project. This is useful when you know what entity values you need to write out to, but don't want to deal with looking up the precise BIDS file-naming syntax. In the example below, imagine we've created a new file containing stimulus presentation information, and we want to save it to a .tsv.gz file, per the BIDS naming conventions. All we need to do is define a dictionary with the name components, and build_path takes care of the rest (including injecting sub-directories!):
End of explanation
# Define the pattern to build out of the components passed in the dictionary
pattern = "sub-{subject}[_ses-{session}]_task-{task}[_acq-{acquisition}][_rec-{reconstruction}][_run-{run}][_echo-{echo}]_{suffix<z>}.nii.gz",
entities = {
'subject': '01',
'run': 2,
'task': 'nback',
'suffix': 'z'
}
# Notice we pass the new pattern as the second argument
layout.build_path(entities, pattern, validate=False)
Explanation: You can also use build_path in more sophisticated ways—for example, by defining your own set of matching templates that cover cases not supported by BIDS out of the box. For example, suppose you want to create a template for naming a new z-stat file. You could do something like:
End of explanation
# Define paths to root and derivatives folders
root = os.path.join(get_test_data_path(), 'synthetic')
layout2 = BIDSLayout(root, derivatives=True)
layout2
Explanation: Note that in the above example, we set validate=False to ensure that the standard BIDS file validator doesn't run (because the pattern we defined isn't actually compliant with the BIDS specification).
Loading derivatives
By default, BIDSLayout objects are initialized without scanning contained derivatives/ directories. But you can easily ensure that all derivatives files are loaded and endowed with the extra structure specified in the derivatives config file:
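If a layout was already built without derivatives, they can also be attached afterwards; a sketch under the assumption that the derivatives live in the usual derivatives/ subfolder (add_derivatives is the method name in current pybids):
# Equivalent alternative to passing derivatives=True at construction time
layout_raw = BIDSLayout(root)
layout_raw.add_derivatives(os.path.join(root, 'derivatives'))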
End of explanation
# Get all files in derivatives
layout2.get(scope='derivatives', return_type='file')
Explanation: The scope argument to get() specifies which part of the project to look in. By default, valid values are 'bids' (for the "raw" BIDS project that excludes derivatives) and 'derivatives' (for all BIDS-derivatives files). You can also pass the names of individual derivatives pipelines (e.g., passing 'fmriprep' would search only in a /derivatives/fmriprep folder). Either a string or a list of strings can be passed.
The following call returns the filenames of all derivatives files.
End of explanation
# Convert the layout to a pandas dataframe
df = layout.to_df()
df.head()
Explanation: Exporting a BIDSLayout to a pandas Dataframe
If you want a summary of all the files in your BIDSLayout, but don't want to have to iterate BIDSFile objects and extract their entities, you can get a nice bird's-eye view of your dataset using the to_df() method.
End of explanation
layout.to_df(metadata=True).head()
Explanation: We can also include metadata in the result if we like (which may blow up our DataFrame if we have a large dataset). Note that in this case, most of our cells will have missing values.
End of explanation
# Get subject variables as a dataframe and merge them back in with the layout
subj_df = layout.get_collections(level='subject', merge=True).to_df()
subj_df.head()
Explanation: Retrieving BIDS variables
BIDS variables are stored in .tsv files at the run, session, subject, or dataset level. You can retrieve these variables with layout.get_collections(). The resulting objects can be converted to dataframes and merged with the layout to associate the variables with corresponding scans.
In the following example, we request all subject-level variable data available anywhere in the BIDS project, and merge the results into a single DataFrame (by default, we'll get back a single BIDSVariableCollection object for each subject).
End of explanation
from bids import BIDSValidator
# Note that when using the bids validator, the filepath MUST be relative to the top level bids directory
validator = BIDSValidator()
validator.is_bids('/sub-02/ses-01/anat/sub-02_ses-01_T2w.nii.gz')
# Can decide if a filepath represents a file part of the specification
validator.is_file('/sub-02/ses-01/anat/sub-02_ses-01_T2w.json')
# Can check if a file is at the top level of the dataset
validator.is_top_level('/dataset_description.json')
# or subject (or session) level
validator.is_subject_level('/dataset_description.json')
validator.is_session_level('/sub-02/ses-01/sub-02_ses-01_scans.json')
# Can decide if a filepath represents phenotypic data
validator.is_phenotypic('/sub-02/ses-01/anat/sub-02_ses-01_T2w.nii.gz')
Explanation: BIDSValidator
pybids implicitly imports a BIDSValidator class from the separate bids-validator package. You can use the BIDSValidator to determine whether a filepath is a valid BIDS filepath, as well as answering questions about what kind of data it represents. Note, however, that this implementation of the BIDS validator is not necessarily up-to-date with the JavaScript version available online. Moreover, the Python validator only tests individual files, and is currently unable to validate entire BIDS datasets. For that, you should use the online BIDS validator.
End of explanation |
2,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Colorization AutoEncoder PyTorch Demo using CIFAR10
In this demo, we build a simple colorization autoencoder using PyTorch.
Step1: CNN Encoder using PyTorch
We use 3 CNN layers to encode the grayscale input image. We use stride of 2 to reduce the feature map size. The last MLP layer resizes the flattened feature map to the target latent vector size. We use more filters and a much bigger latent vector size of 256 to encode more information.
Step2: CNN Decoder using PyTorch
A decoder is used to reconstruct the input image from the latent space. The architecture is similar to the encoder but inverted: a latent vector is first resized with an MLP layer so that it is suitable for a convolutional layer, then strided transposed convolutional layers upsample the feature map until the desired image size is reached. The target image is the colorized version of the input image.
Step3: PyTorch Lightning Colorization AutoEncoder
In the colorization autoencoder, the encoder extracts features from the input image and the decoder reconstructs the input image from the latent space. The decoder adds color. The decoder's last layer has 3 output channels corresponding to RGB.
We use gray_collate_fn to generate gray images from RGB images.
Step4: Arguments
Similar to the MNIST AE but we use a bigger latent vector size of 256 given that the colorization task needs more feature information from the input image.
Step5: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
Step6: Training an AE
We train the autoencoder on the CIFAR10 dataset.
The results can be viewed on wandb. | Python Code:
import torch
import torchvision
import wandb
import time
from torch import nn
from einops import rearrange, reduce
from argparse import ArgumentParser
from pytorch_lightning import LightningModule, Trainer, Callback
from pytorch_lightning.loggers import WandbLogger
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR
Explanation: Colorization AutoEncoder PyTorch Demo using CIFAR10
In this demo, we build a simple colorization autoencoder using PyTorch.
End of explanation
class Encoder(nn.Module):
def __init__(self, n_features=1, kernel_size=3, n_filters=64, feature_dim=256):
super().__init__()
self.conv1 = nn.Conv2d(n_features, n_filters, kernel_size=kernel_size, stride=2)
self.conv2 = nn.Conv2d(n_filters, n_filters*2, kernel_size=kernel_size, stride=2)
self.conv3 = nn.Conv2d(n_filters*2, n_filters*4, kernel_size=kernel_size, stride=2)
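# Note: 2304 = (n_filters*4) * 3 * 3; with 32x32 inputs the three stride-2, padding-0
# convolutions above shrink the spatial size 32 -> 15 -> 7 -> 3.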
self.fc1 = nn.Linear(2304, feature_dim)
def forward(self, x):
y = nn.ReLU()(self.conv1(x))
y = nn.ReLU()(self.conv2(y))
y = nn.ReLU()(self.conv3(y))
y = rearrange(y, 'b c h w -> b (c h w)')
y = self.fc1(y)
return y
# use this to get the correct input shape for fc1.
encoder = Encoder(n_features=1)
x = torch.Tensor(1, 1, 32, 32)
h = encoder(x)
print("h.shape:", h.shape)
Explanation: CNN Encoder using PyTorch
We use 3 CNN layers to encode the grayscale input image. We use stride of 2 to reduce the feature map size. The last MLP layer resizes the flattened feature map to the target latent vector size. We use more filters and a much bigger latent vector size of 256 to encode more information.
End of explanation
class Decoder(nn.Module):
def __init__(self, kernel_size=3, n_filters=256, feature_dim=256, output_size=32, output_channels=3):
super().__init__()
self.init_size = output_size // 2**2
self.fc1 = nn.Linear(feature_dim, self.init_size**2 * n_filters)
# output size of ConvTranspose2d is (h-1)*stride - 2*padding + kernel_size (for output_padding=0, dilation=1)
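# with the defaults used here (padding=1 on the first two layers) the spatial size
# grows 8 -> 15 -> 29 -> 29 -> 32, giving a 32x32 RGB output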
self.conv1 = nn.ConvTranspose2d(n_filters, n_filters//2, kernel_size=kernel_size, stride=2, padding=1)
self.conv2 = nn.ConvTranspose2d(n_filters//2, n_filters//4, kernel_size=kernel_size, stride=2, padding=1)
self.conv3 = nn.ConvTranspose2d(n_filters//4, n_filters//4, kernel_size=kernel_size, padding=1)
self.conv4 = nn.ConvTranspose2d(n_filters//4, output_channels, kernel_size=kernel_size+1)
def forward(self, x):
B, _ = x.shape
y = self.fc1(x)
y = rearrange(y, 'b (c h w) -> b c h w', b=B, h=self.init_size, w=self.init_size)
y = nn.ReLU()(self.conv1(y))
y = nn.ReLU()(self.conv2(y))
y = nn.ReLU()(self.conv3(y))
y = nn.Sigmoid()(self.conv4(y))
return y
decoder = Decoder()
x_tilde = decoder(h)
print("x_tilde.shape:", x_tilde.shape)
Explanation: CNN Decoder using PyTorch
A decoder is used to reconstruct the input image from the latent space. The architecture is similar to the encoder but inverted: a latent vector is first resized with an MLP layer so that it is suitable for a convolutional layer, then strided transposed convolutional layers upsample the feature map until the desired image size is reached. The target image is the colorized version of the input image.
End of explanation
def gray_collate_fn(batch):
x, _ = zip(*batch)
x = torch.stack(x, dim=0)
xn = reduce(x,"b c h w -> b 1 h w", 'mean')
return xn, x
class LitColorizeCIFAR10Model(LightningModule):
def __init__(self, feature_dim=256, lr=0.001, batch_size=64,
num_workers=4, max_epochs=30, **kwargs):
super().__init__()
self.save_hyperparameters()
self.encoder = Encoder(feature_dim=feature_dim)
self.decoder = Decoder(feature_dim=feature_dim)
self.loss = nn.MSELoss()
def forward(self, x):
h = self.encoder(x)
x_tilde = self.decoder(h)
return x_tilde
# this is called during fit()
def training_step(self, batch, batch_idx):
x_in, x = batch
x_tilde = self.forward(x_in)
loss = self.loss(x_tilde, x)
return {"loss": loss}
# calls to self.log() are recorded in wandb
def training_epoch_end(self, outputs):
avg_loss = torch.stack([x["loss"] for x in outputs]).mean()
self.log("train_loss", avg_loss, on_epoch=True)
# this is called for every batch during testing
def test_step(self, batch, batch_idx):
x_in, x = batch
x_tilde = self.forward(x_in)
loss = self.loss(x_tilde, x)
return {"x_in" : x_in, "x": x, "x_tilde" : x_tilde, "test_loss" : loss,}
# this is called once at the end of the test (or validation) epoch
def test_epoch_end(self, outputs):
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
self.log("test_loss", avg_loss, on_epoch=True, prog_bar=True)
# validation is the same as test
def validation_step(self, batch, batch_idx):
return self.test_step(batch, batch_idx)
def validation_epoch_end(self, outputs):
return self.test_epoch_end(outputs)
# we use Adam optimizer
def configure_optimizers(self):
optimizer = Adam(self.parameters(), lr=self.hparams.lr)
# this decays the learning rate to 0 after max_epochs using cosine annealing
scheduler = CosineAnnealingLR(optimizer, T_max=self.hparams.max_epochs)
return [optimizer], [scheduler],
# this is called after model instantiation to initialize the datasets and dataloaders
def setup(self, stage=None):
self.train_dataloader()
self.test_dataloader()
# build train and test dataloaders using the CIFAR10 dataset
# we use simple ToTensor transform
def train_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=True, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=True,
num_workers=self.hparams.num_workers,
pin_memory=True,
collate_fn=gray_collate_fn
)
def test_dataloader(self):
return torch.utils.data.DataLoader(
torchvision.datasets.CIFAR10(
"./data", train=False, download=True,
transform=torchvision.transforms.ToTensor()
),
batch_size=self.hparams.batch_size,
shuffle=False,
num_workers=self.hparams.num_workers,
pin_memory=True,
collate_fn=gray_collate_fn
)
def val_dataloader(self):
return self.test_dataloader()
Explanation: PyTorch Lightning Colorization AutoEncoder
In the colorization autoencoder, the encoder extracts features from the input image and the decoder reconstructs the input image from the latent space. The decoder adds color. The decoder's last layer has 3 output channels corresponding to RGB.
We use gray_collate_fn to generate gray images from RGB images.
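As a quick end-to-end shape check, a hypothetical snippet (not part of the original training flow) that feeds a random grayscale batch through the model:
model = LitColorizeCIFAR10Model()
gray_batch = torch.rand(4, 1, 32, 32)  # fake grayscale CIFAR-sized batch
print(model(gray_batch).shape)         # expected: torch.Size([4, 3, 32, 32])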
End of explanation
def get_args():
parser = ArgumentParser(description="PyTorch Lightning Colorization AE CIFAR10 Example")
parser.add_argument("--max-epochs", type=int, default=30, help="num epochs")
parser.add_argument("--batch-size", type=int, default=64, help="batch size")
parser.add_argument("--lr", type=float, default=0.001, help="learning rate")
parser.add_argument("--feature-dim", type=int, default=256, help="ae feature dimension")
parser.add_argument("--devices", default=1)
parser.add_argument("--accelerator", default='gpu')
parser.add_argument("--num-workers", type=int, default=4, help="num workers")
args = parser.parse_args("")
return args
Explanation: Arguments
Similar to the MNIST AE but we use a bigger latent vector size of 256 given that the colorization task needs more feature information from the input image.
End of explanation
class WandbCallback(Callback):
def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx):
# log the first batch_size // 4 images of the first validation batch
if batch_idx == 0:
x, c = batch
n = pl_module.hparams.batch_size // 4
outputs = outputs["x_tilde"]
columns = ['gt color', 'gray', 'colorized']
data = [[wandb.Image(c_i), wandb.Image(x_i), wandb.Image(x_tilde_i)] for c_i, x_i, x_tilde_i in list(zip(c[:n], x[:n], outputs[:n]))]
wandb_logger.log_table(key="cifar10-colorize-ae", columns=columns, data=data)
Explanation: Weights and Biases Callback
The callback logs train and validation metrics to wandb. It also logs sample predictions. This is similar to our WandbCallback example for MNIST.
End of explanation
if __name__ == "__main__":
args = get_args()
ae = LitColorizeCIFAR10Model(feature_dim=args.feature_dim, lr=args.lr,
batch_size=args.batch_size, num_workers=args.num_workers,
max_epochs=args.max_epochs)
#ae.setup()
wandb_logger = WandbLogger(project="colorize-cifar10")
start_time = time.time()
trainer = Trainer(accelerator=args.accelerator,
devices=args.devices,
max_epochs=args.max_epochs,
logger=wandb_logger,
callbacks=[WandbCallback()])
trainer.fit(ae)
elapsed_time = time.time() - start_time
print("Elapsed time: {}".format(elapsed_time))
wandb.finish()
Explanation: Training an AE
We train the autoencoder on the CIFAR10 dataset.
The results can be viewed on wandb.
End of explanation |
2,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.
In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use SFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
Step1: Data preparation
We will use a dataset consisting of baby product reviews on Amazon.com.
Step2: Now, let us see a preview of what the dataset looks like.
Step3: Build the word count vector for each review
Let us explore a specific example of a baby product.
Step4: Now, we will perform 2 simple data transformations
Step5: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
Step6: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
Step7: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Step8: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
Step9: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.
Note
Step10: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows
Step11: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Fill in the following block of code to calculate how many weights are positive ( >= 0). (Hint
Step12: Quiz question
Step13: Let's dig deeper into the first row of the sample_test_data. Here's the full review
Step14: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
Step15: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as
Step16: Predicting sentiment
These scores can be used to make class predictions as follows
Step17: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
Step18: Checkpoint
Step19: Checkpoint
Step20: Quiz Question
Step21: Find the most positive (and negative) review
We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.
Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-20 reviews, use the following steps
Step22: Quiz Question
Step23: Quiz Question
Step24: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
Step25: Quiz Question
Step26: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
Step27: Let's see what the first example of the dataset looks like
Step28: The word_count column had been working with before looks like the following
Step29: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
Step30: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
Step31: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
Step32: Now, we will inspect the weights (coefficients) of the simple_model
Step33: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
Step34: Quiz Question
Step35: Quiz Question
Step36: Comparing models
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data
Step37: Now, compute the classification accuracy of the simple_model on the train_data
Step38: Quiz Question
Step39: Next, we will compute the classification accuracy of the simple_model on the test_data
Step40: Quiz Question
Step41: Now compute the accuracy of the majority class classifier on test_data.
Quiz Question
Step42: Quiz Question | Python Code:
from __future__ import division
import graphlab
import math
import string
import numpy
Explanation: Predicting sentiment from product reviews
The goal of this first notebook is to explore logistic regression and feature engineering with existing GraphLab functions.
In this notebook you will use product review data from Amazon.com to predict whether the sentiments about a product (from its reviews) are positive or negative.
Use SFrames to do some feature engineering
Train a logistic regression model to predict the sentiment of product reviews.
Inspect the weights (coefficients) of a trained logistic regression model.
Make a prediction (both class and probability) of sentiment for a new product review.
Given the logistic regression weights, predictors and ground truth labels, write a function to compute the accuracy of the model.
Inspect the coefficients of the logistic regression model and interpret their meanings.
Compare multiple logistic regression models.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create.
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Data preparation
We will use a dataset consisting of baby product reviews on Amazon.com.
End of explanation
products
Explanation: Now, let us see a preview of what the dataset looks like.
End of explanation
products[269]
Explanation: Build the word count vector for each review
Let us explore a specific example of a baby product.
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
review_without_punctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_punctuation)
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Transform the reviews into word-counts.
Aside. In this notebook, we remove all punctuation for the sake of simplicity. A smarter approach to punctuation would preserve phrases such as "I'd", "would've", "hadn't" and so forth. See this page for an example of smart handling of punctuation.
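For a concrete sense of what this does, here is a small illustrative check (the input string is made up):
print remove_punctuation("I'd buy this again, wouldn't you?")
# -> Id buy this again wouldnt you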
End of explanation
products[269]['word_count']
Explanation: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
End of explanation
products = products[products['rating'] != 3]
len(products)
Explanation: Extract sentiments
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment.
End of explanation
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
Explanation: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
End of explanation
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
Explanation: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Split data into training and test sets
Let's perform a train/test split with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
sentiment_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count'],
validation_set=None)
sentiment_model
Explanation: Train a sentiment classifier with logistic regression
We will now use logistic regression to create a sentiment classifier on the training data. This model will use the column word_count as a feature and the column sentiment as the target. We will use validation_set=None to obtain same results as everyone else.
Note: This line may take 1-2 minutes.
End of explanation
weights = sentiment_model.coefficients
weights.column_names()
weights[weights['value'] > 0]['value']
Explanation: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Now that we have fitted the model, we can extract the weights (coefficients) as an SFrame as follows:
End of explanation
num_positive_weights = weights[weights['value'] >= 0]['value'].size()
num_negative_weights = weights[weights['value'] < 0]['value'].size()
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
Explanation: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Fill in the following block of code to calculate how many weights are positive ( >= 0). (Hint: The 'value' column in SFrame weights must be positive ( >= 0)).
End of explanation
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
Explanation: Quiz question: How many weights are >= 0?
Making predictions with logistic regression
Now that a model is trained, we can make predictions on the test data. In this section, we will explore this in the context of 3 examples in the test dataset. We refer to this set of 3 examples as the sample_test_data.
End of explanation
sample_test_data[0]['review']
Explanation: Let's dig deeper into the first row of the sample_test_data. Here's the full review:
End of explanation
sample_test_data[1]['review']
Explanation: That review seems pretty positive.
Now, let's see what the next row of the sample_test_data looks like. As we could guess from the sentiment (-1), the review is quite negative.
End of explanation
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
Explanation: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as:
$$
\mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i)
$$
where $h(\mathbf{x}_i)$ represents the features for example $i$. We will write some code to obtain the scores using GraphLab Create. For each row, the score (or margin) is a number in the range [-inf, inf].
End of explanation
def margin_based_classifier(score):
return 1 if score > 0 else -1
sample_test_data['predictions'] = scores.apply(margin_based_classifier)
sample_test_data['predictions']
Explanation: Predicting sentiment
These scores can be used to make class predictions as follows:
$$
\hat{y} =
\left\{
\begin{array}{ll}
+1 & \mathbf{w}^T h(\mathbf{x}_i) > 0 \\
-1 & \mathbf{w}^T h(\mathbf{x}_i) \leq 0 \\
\end{array}
\right.
$$
Using scores, write code to calculate $\hat{y}$, the class predictions:
End of explanation
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
Explanation: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
End of explanation
def logistic_classifier_prob(weight):
return 1.0 / (1.0 + math.exp(-1 * weight))
probabilities = scores.apply(logistic_classifier_prob)
probabilities
Explanation: Checkpoint: Make sure your class predictions match with the one obtained from GraphLab Create.
Probability predictions
Recall from the lectures that we can also calculate the probability predictions from the scores using:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))}.
$$
Using the variable scores calculated previously, write code to calculate the probability that a sentiment is positive using the above formula. For each row, the probabilities should be a number in the range [0, 1].
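As a quick numerical anchor for the formula above: a score of 0 maps to a probability of 0.5, a score of +2 to roughly 0.88, and a score of -2 to roughly 0.12, so large positive margins correspond to near-certain positive predictions.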
End of explanation
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
Explanation: Checkpoint: Make sure your probability predictions match the ones obtained from GraphLab Create.
End of explanation
print "Third"
Explanation: Quiz Question: Of the three data points in sample_test_data, which one (first, second, or third) has the lowest probability of being classified as a positive review?
End of explanation
a = graphlab.SArray([1,2,3])
b = graphlab.SArray([1,2,1])
print a == b
print (a == b).sum()
test_data['predicted_prob'] = sentiment_model.predict(test_data, output_type='probability')
test_data
test_data.topk('predicted_prob', 20).print_rows(20)
Explanation: Find the most positive (and negative) review
We now turn to examining the full test dataset, test_data, and use GraphLab Create to form predictions on all of the test data points for faster performance.
Using the sentiment_model, find the 20 reviews in the entire test_data with the highest probability of being classified as a positive review. We refer to these as the "most positive reviews."
To calculate these top-20 reviews, use the following steps:
1. Make probability predictions on test_data using the sentiment_model. (Hint: When you call .predict to make predictions on the test data, use option output_type='probability' to output the probability rather than just the most likely class.)
2. Sort the data according to those predictions and pick the top 20. (Hint: You can use the .topk method on an SFrame to find the top k rows sorted according to the value of a specified column.)
End of explanation
test_data.topk('predicted_prob', 20, reverse=True).print_rows(20)
Explanation: Quiz Question: Which of the following products are represented in the 20 most positive reviews? [multiple choice]
Now, let us repeat this exercise to find the "most negative reviews." Use the prediction probabilities to find the 20 reviews in the test_data with the lowest probability of being classified as a positive review. Repeat the same steps above but make sure you sort in the opposite order.
End of explanation
def get_classification_accuracy(model, data, true_labels):
# First get the predictions
prediction = model.predict(data)
# Compute the number of correctly classified examples
correctly_classified = prediction == true_labels
# Then compute accuracy by dividing num_correct by total number of examples
accuracy = float(correctly_classified.sum()) / true_labels.size()
return accuracy
Explanation: Quiz Question: Which of the following products are represented in the 20 most negative reviews? [multiple choice]
Compute accuracy of the classifier
We will now evaluate the accuracy of the trained classifier. Recall that the accuracy is given by
$$
\mbox{accuracy} = \frac{\mbox{\# correctly classified examples}}{\mbox{\# total examples}}
$$
This can be computed as follows:
Step 1: Use the trained model to compute class predictions (Hint: Use the predict method)
Step 2: Count the number of data points when the predicted class labels match the ground truth labels (called true_labels below).
Step 3: Divide the total number of correct predictions by the total number of data points in the dataset.
Complete the function below to compute the classification accuracy:
End of explanation
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
Explanation: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
End of explanation
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
Explanation: Quiz Question: What is the accuracy of the sentiment_model on the test_data? Round your answer to 2 decimal places (e.g. 0.76).
Quiz Question: Does a higher accuracy value on the training_data always imply that the classifier is better?
Learn another classifier with fewer words
There were a lot of words in the model we trained above. We will now train a simpler logistic regression model using only a subset of words that occur in the reviews. For this assignment, we selected 20 words to work with. These are:
End of explanation
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
Explanation: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
End of explanation
train_data[0]['review']
Explanation: Let's see what the first example of the dataset looks like:
End of explanation
print train_data[0]['word_count']
Explanation: The word_count column had been working with before looks like the following:
End of explanation
print train_data[0]['word_count_subset']
Explanation: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
End of explanation
simple_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count_subset'],
validation_set=None)
simple_model
Explanation: Train a logistic regression model on a subset of data
We will now build a classifier with word_count_subset as the feature and sentiment as the target.
End of explanation
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
Explanation: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
End of explanation
simple_model.coefficients
Explanation: Now, we will inspect the weights (coefficients) of the simple_model:
End of explanation
simple_model.coefficients.sort('value', ascending=False).print_rows(num_rows=21)
Explanation: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
End of explanation
simple_model.coefficients[simple_model.coefficients['value'] > 0]['value'].size() - 1
Explanation: Quiz Question: Consider the coefficients of simple_model. There should be 21 of them, an intercept term + one for each word in significant_words. How many of the 20 coefficients (corresponding to the 20 significant_words and excluding the intercept term) are positive for the simple_model?
End of explanation
positive_significant_words = simple_model.coefficients[simple_model.coefficients['value'] > 0]
positive_significant_words
for w in positive_significant_words['index']:
print sentiment_model.coefficients[sentiment_model.coefficients['index'] == w]
Explanation: Quiz Question: Are the positive words in the simple_model (let us call them positive_significant_words) also positive words in the sentiment_model?
End of explanation
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
Explanation: Comparing models
We will now compare the accuracy of the sentiment_model and the simple_model using the get_classification_accuracy method you implemented above.
First, compute the classification accuracy of the sentiment_model on the train_data:
End of explanation
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
Explanation: Now, compute the classification accuracy of the simple_model on the train_data:
End of explanation
round(get_classification_accuracy(sentiment_model, test_data, test_data['sentiment']), 2)
Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TRAINING set?
Now, we will repeat this exercise on the test_data. Start by computing the classification accuracy of the sentiment_model on the test_data:
End of explanation
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
Explanation: Next, we will compute the classification accuracy of the simple_model on the test_data:
End of explanation
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
Explanation: Quiz Question: Which model (sentiment_model or simple_model) has higher accuracy on the TEST set?
Baseline: Majority class prediction
It is quite common to use the majority class classifier as a baseline (or reference) model for comparison with your classifier model. The majority class classifier simply predicts the majority class for all data points. At the very least, you should comfortably beat the majority class classifier; otherwise, the model is (usually) pointless.
What is the majority class in the train_data?
End of explanation
num_positive_test = (test_data['sentiment'] == +1).sum()
num_negative_test = (test_data['sentiment'] == -1).sum()
print num_positive_test
print num_negative_test
majority_accuracy = float(num_positive_test) / test_data['sentiment'].size()
print round(majority_accuracy, 2)
Explanation: Now compute the accuracy of the majority class classifier on test_data.
Quiz Question: Enter the accuracy of the majority class classifier model on the test_data. Round your answer to two decimal places (e.g. 0.76).
End of explanation
print "Yes"
graphlab.version
Explanation: Quiz Question: Is the sentiment_model definitely better than the majority class classifier (the baseline)?
End of explanation |
2,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step3: Action Recognition with an Inflated 3D CNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step4: Using the UCF101 dataset
Step5: Run the i3d model and print the top-5 action predictions.
Step6: Now try a new video, from | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install -q imageio
!pip install -q opencv-python
!pip install -q git+https://github.com/tensorflow/docs
#@title Import the necessary modules
# TensorFlow and TF-Hub modules.
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow_docs.vis import embed
logging.set_verbosity(logging.ERROR)
# Some modules to help with reading the UCF101 dataset.
import random
import re
import os
import tempfile
import ssl
import cv2
import numpy as np
# Some modules to display an animation using imageio.
import imageio
from IPython import display
from urllib import request # requires python3
#@title Helper functions for the UCF101 dataset
# Utilities to fetch videos from UCF101 dataset
UCF_ROOT = "https://www.crcv.ucf.edu/THUMOS14/UCF101/UCF101/"
_VIDEO_LIST = None
_CACHE_DIR = tempfile.mkdtemp()
# As of July 2020, crcv.ucf.edu doesn't use a certificate accepted by the
# default Colab environment anymore.
unverified_context = ssl._create_unverified_context()
def list_ucf_videos():
  """Lists videos available in UCF101 dataset."""
global _VIDEO_LIST
if not _VIDEO_LIST:
index = request.urlopen(UCF_ROOT, context=unverified_context).read().decode("utf-8")
videos = re.findall("(v_[\w_]+\.avi)", index)
_VIDEO_LIST = sorted(set(videos))
return list(_VIDEO_LIST)
def fetch_ucf_video(video):
  """Fetches a video and caches it into the local filesystem."""
cache_path = os.path.join(_CACHE_DIR, video)
if not os.path.exists(cache_path):
urlpath = request.urljoin(UCF_ROOT, video)
print("Fetching %s => %s" % (urlpath, cache_path))
data = request.urlopen(urlpath, context=unverified_context).read()
open(cache_path, "wb").write(data)
return cache_path
# Utilities to open video files using CV2
def crop_center_square(frame):
y, x = frame.shape[0:2]
min_dim = min(y, x)
start_x = (x // 2) - (min_dim // 2)
start_y = (y // 2) - (min_dim // 2)
return frame[start_y:start_y+min_dim,start_x:start_x+min_dim]
def load_video(path, max_frames=0, resize=(224, 224)):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = crop_center_square(frame)
frame = cv2.resize(frame, resize)
frame = frame[:, :, [2, 1, 0]]
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
return np.array(frames) / 255.0
def to_gif(images):
converted_images = np.clip(images * 255, 0, 255).astype(np.uint8)
imageio.mimsave('./animation.gif', converted_images, fps=25)
return embed.embed_file('./animation.gif')
#@title Get the kinetics-400 labels
# Get the kinetics-400 action labels from the GitHub repository.
KINETICS_URL = "https://raw.githubusercontent.com/deepmind/kinetics-i3d/master/data/label_map.txt"
with request.urlopen(KINETICS_URL) as obj:
labels = [line.decode("utf-8").strip() for line in obj.readlines()]
print("Found %d labels." % len(labels))
Explanation: Action Recognition with an Inflated 3D CNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/action_recognition_with_tf_hub.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/action_recognition_with_tf_hub.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/action_recognition_with_tf_hub.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/deepmind/i3d-kinetics-400/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
This Colab demonstrates recognizing actions in video data using the
tfhub.dev/deepmind/i3d-kinetics-400/1 module. More models to detect actions in videos can be found here.
The underlying model is described in the paper "Quo Vadis, Action Recognition? A New
Model and the Kinetics Dataset" by Joao
Carreira and Andrew Zisserman. The paper was posted on arXiv in May 2017, and
was published as a CVPR 2017 conference paper.
The source code is publicly available on
github.
"Quo Vadis" introduced a new architecture for video classification, the Inflated
3D Convnet or I3D. This architecture achieved state-of-the-art results on the UCF101
and HMDB51 datasets from fine-tuning these models. I3D models pre-trained on Kinetics
also placed first in the CVPR 2017 Charades challenge.
The original module was trained on the kinetics-400 dataset
and knows about 400 different actions.
Labels for these actions can be found in the
label map file.
In this Colab we will use it to recognize activities in videos from the UCF101 dataset.
Setup
End of explanation
# Get the list of videos in the dataset.
ucf_videos = list_ucf_videos()
categories = {}
for video in ucf_videos:
category = video[2:-12]
if category not in categories:
categories[category] = []
categories[category].append(video)
print("Found %d videos in %d categories." % (len(ucf_videos), len(categories)))
for category, sequences in categories.items():
summary = ", ".join(sequences[:2])
print("%-20s %4d videos (%s, ...)" % (category, len(sequences), summary))
# Get a sample cricket video.
video_path = fetch_ucf_video("v_CricketShot_g04_c02.avi")
sample_video = load_video(video_path)
sample_video.shape
i3d = hub.load("https://tfhub.dev/deepmind/i3d-kinetics-400/1").signatures['default']
Explanation: Using the UCF101 dataset
End of explanation
def predict(sample_video):
# Add a batch axis to the sample video.
model_input = tf.constant(sample_video, dtype=tf.float32)[tf.newaxis, ...]
logits = i3d(model_input)['default'][0]
probabilities = tf.nn.softmax(logits)
print("Top 5 actions:")
for i in np.argsort(probabilities)[::-1][:5]:
print(f" {labels[i]:22}: {probabilities[i] * 100:5.2f}%")
predict(sample_video)
Explanation: Run the i3d model and print the top-5 action predictions.
End of explanation
!curl -O https://upload.wikimedia.org/wikipedia/commons/8/86/End_of_a_jam.ogv
video_path = "End_of_a_jam.ogv"
sample_video = load_video(video_path)[:100]
sample_video.shape
to_gif(sample_video)
predict(sample_video)
Explanation: Now try a new video, from: https://commons.wikimedia.org/wiki/Category:Videos_of_sports
How about this video by Patrick Gillett:
End of explanation |
2,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The following script extracts the (more) helpful reviews from the swiss reviews and saves them locally.
From the extracted reviews it also saves a list with their asin identifiers.
The list of asin identifiers will later be used to find the average review rating for the respective products.
Step1: Load the swiss reviews
Step2: The filter_helpful function keeps only the reviews which had at least 5 flags/votes in the helpfulness field.
This amounts to a subset of around 23000 reviews. A smaller subset of around 10000 reviews was obtained as well by only keeping reviews with 10 flags/votes. The main advantage of the smaller subset is that it contains better quality reviews while its drawback is, of course, the reduced size.
1) Extract the helpful reviews
Step3: Apply the filter_helpful to each swiss product review
Step4: Save the subset with helpful swiss product reviews
Step5: 2) Extract the asins of the products which the helpful reviews correspond to
Step6: The following function simply extracts the 'asin' from the helpful reviews.
Repetitions of the asins are of no consequence, as the list is just meant to be a check up.
Step7: Save the list of asins. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import yaml
Explanation: The following script extracts the (more) helpful reviews from the swiss reviews and saves them locally.
From the extracted reviews it also saves a list with their asin identifiers.
The list of asin identifiers will later be used to find the average review rating for the respective products.
End of explanation
with open("data/swiss-reviews.txt", 'r') as fp:
swiss_rev = fp.readlines()
len(swiss_rev)
swiss_rev[2]
Explanation: Load the swiss reviews
End of explanation
def filter_helpful(line):
l = line.rstrip('\n')
l = yaml.load(l)
if('helpful' in l.keys()):
if(l['helpful'][1] >= 5):
return True
else:
return False
else:
print("Review does not have helpful score key: "+line)
return False
Explanation: The filter_helpful function keeps only the reviews which had at least 5 flags/votes in the helpfulness field.
This amounts to a subset of around 23000 reviews. A smaller subset of around 10000 reviews was obtained as well by only keeping reviews with 10 flags/votes. The main advantage of the smaller subset is that it contains better quality reviews while its drawback is, of course, the reduced size.
1) Extract the helpful reviews
End of explanation
def get_helpful(data):
res = []
counter = 1
i = 0
for line in data:
i += 1
if(filter_helpful(line)):
if(counter % 1000 == 0):
print("Count "+str(counter)+" / "+str(i))
counter += 1
res.append(line)
return res
swiss_reviews_helpful = get_helpful(swiss_rev)
len(swiss_reviews_helpful)
Explanation: Apply the filter_helpful to each swiss product review
End of explanation
write_file = open('data/swiss-reviews-helpful-correct-bigger.txt', 'w')
for item in swiss_reviews_helpful:
write_file.write(item)
write_file.close()
Explanation: Save the subset with helpful swiss product reviews
End of explanation
with open('data/swiss-reviews-helpful-correct-bigger.txt', 'r') as fp:
swiss_reviews_helpful = fp.readlines()
Explanation: 2) Extract the asins of the products which the helpful reviews correspond to
End of explanation
def filter_asin(line):
l = line.rstrip('\n')
l = yaml.load(l)
if('asin' in l.keys()):
return l['asin']
else:
return ''
helpful_asins = []
counter = 1
for item in swiss_reviews_helpful:
if(counter%500 == 0):
print(counter)
counter += 1
x = filter_asin(item)
if(len(x) > 0):
helpful_asins.append(x)
Explanation: The following function simply extracts the 'asin' from the helpful reviews.
Repetitions of the asins are of no consequence, as the list is just meant to be a check up.
End of explanation
import pickle
with open('data/helpful_asins_bigger.pickle', 'wb') as fp:
pickle.dump(helpful_asins, fp)
Explanation: Save the list of asins.
End of explanation |
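For context, the averaging step mentioned at the top could later look roughly like the sketch below; the 'overall' field as the star rating is an assumption about the review schema and is not defined anywhere in this script:
from collections import defaultdict
totals = defaultdict(float)
counts = defaultdict(int)
for item in swiss_reviews_helpful:
    review = yaml.load(item.rstrip('\n'))
    totals[review['asin']] += review.get('overall', 0.0)  # 'overall' assumed to hold the star rating
    counts[review['asin']] += 1
avg_rating = {asin: totals[asin] / counts[asin] for asin in totals}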
2,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#First-Foray-Into-Discrete/Fast-Fourier-Transformation" data-toc-modified-id="First-Foray-Into-Discrete/Fast-Fourier-Transformation-1"><span class="toc-item-num">1 </span>First Foray Into Discrete/Fast Fourier Transformation</a></span><ul class="toc-item"><li><span><a href="#Correlation" data-toc-modified-id="Correlation-1.1"><span class="toc-item-num">1.1 </span>Correlation</a></span></li><li><span><a href="#Fourier-Transformation" data-toc-modified-id="Fourier-Transformation-1.2"><span class="toc-item-num">1.2 </span>Fourier Transformation</a></span></li><li><span><a href="#DFT-In-Action" data-toc-modified-id="DFT-In-Action-1.3"><span class="toc-item-num">1.3 </span>DFT In Action</a></span></li><li><span><a href="#Fast-Fourier-Transformation-(FFT)" data-toc-modified-id="Fast-Fourier-Transformation-(FFT)-1.4"><span class="toc-item-num">1.4 </span>Fast Fourier Transformation (FFT)</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step2: First Foray Into Discrete/Fast Fourier Transformation
In many real-world applications, signals are typically represented as a sequence of numbers that are time dependent. A digital audio signal is one common example; the hourly temperature in California is another. In order to extract meaningful characteristics from these kinds of data, many different transformation techniques have been developed to decompose them into simpler individual pieces that are much easier and more compact to reason with.
Discrete Fourier Transformation (DFT) is one of these algorithms that takes a signal as an input and breaks it down into many individual frequency components. Giving us, the end-user, easier pieces to work with. For the digital audio signal, applying DFT gives us what tones are represented in the sound and at what energies.
Some basics of digital signal processing are assumed. The following link contains an excellent primer to get people up to speed. I strongly recommend going through all of it if the reader is not pressed for time. Blog
Step3: Correlation is one of the key concepts behind DFT because, as we'll soon see, in DFT our goal is to find frequencies that give a high correlation with the signal at hand, and a high amplitude of this correlation indicates the presence of this frequency in our signal.
Fourier Transformation
Fourier Transformation takes a time-based signal as an input, measures every possible cycle and returns the overall cycle components (by cycle, we're essentially referring to circles). Each cycle component stores information such as, for each cycle
Step4: Then we will combine these individual signals together with some weights assigned to each signal.
Step5: By looking at the dummy signal we've created visually, we might be able to notice the presence of a signal which shows 5 periods in the sampling duration of 10 seconds. In other words, after applying DFT to our signal, we should expect the presence of a signal with a frequency of 0.5 Hz.
Here, we will leverage numpy's implementation to check whether the result makes intuitive sense or not. The implementation is called fft, but let's not worry about that for the moment.
Step6: The fft routine returns an array of length 1000, which is equivalent to the number of samples. If we look at each individual element in the array, we'll notice that these are the DFT coefficients. Each has two components: the real part corresponds to the cosine waves and the imaginary part comes from the sine waves. In general though, we don't really care whether a cosine or sine wave is present, as we are only concerned with which frequency pattern has a higher correlation with our original signal. This can be done by considering the absolute value of these coefficients.
Step7: If we plot the absolute values of the fft result, we can clearly see a spike at K=0, 5, 20, 100 in the graph above. However, we are oftentimes more interested in the energy of each frequency. Frequency Resolution is the distance in Hz between two adjacent data points in DFT, which is defined as
Step9: Fast Fourier Transformation (FFT)
Recall that the formula for Discrete Fourier Transformation was
Step10: However, if we compare the timing between our simplistic implementation versus the one from numpy, we can see a dramatic time difference.
Step11: If we leave aside the fact that one is implemented using Python's numpy and one is most likely implemented in optimized C++, the time difference actually comes from the fact that, in practice, people use a more optimized version of Fourier Transformation called Fast Fourier Transformation (how unexpected ...) to perform the calculation. The algorithm accomplishes a significant speedup by exploiting a symmetry property, i.e. if we devise a hypothetical algorithm which can decompose a 1024-point DFT into two 512-point DFTs, then we are essentially halving our computational cost. Let's take a look at how we can achieve this by looking at an example with 8 data points.
\begin{align}
X_k = x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } + \dots + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 7 }
\end{align}
Our goal is to examine the possibility of rewriting this eight-point DFT in terms of two DFTs of smaller length. Let's first examine choosing all the terms with an even sample index, i.e. $x_0$, $x_2$, $x_4$, and $x_6$. Giving us | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(css_style='custom2.css', plot_style=False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn,matplotlib
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#First-Foray-Into-Discrete/Fast-Fourier-Transformation" data-toc-modified-id="First-Foray-Into-Discrete/Fast-Fourier-Transformation-1"><span class="toc-item-num">1 </span>First Foray Into Discrete/Fast Fourier Transformation</a></span><ul class="toc-item"><li><span><a href="#Correlation" data-toc-modified-id="Correlation-1.1"><span class="toc-item-num">1.1 </span>Correlation</a></span></li><li><span><a href="#Fourier-Transformation" data-toc-modified-id="Fourier-Transformation-1.2"><span class="toc-item-num">1.2 </span>Fourier Transformation</a></span></li><li><span><a href="#DFT-In-Action" data-toc-modified-id="DFT-In-Action-1.3"><span class="toc-item-num">1.3 </span>DFT In Action</a></span></li><li><span><a href="#Fast-Fourier-Transformation-(FFT)" data-toc-modified-id="Fast-Fourier-Transformation-(FFT)-1.4"><span class="toc-item-num">1.4 </span>Fast Fourier Transformation (FFT)</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
# create examples of two signals that are dissimilar
# and two that are similar to illustrate the concept
def create_signal(sample_duration, sample_freq, signal_type, signal_freq):
    """
    Create some signals to work with, e.g. if we were to sample at 100 Hz
    (100 times per second) and collect the data for 10 seconds, resulting
    in 1000 samples in total. Then we would specify sample_duration = 10,
    sample_freq = 100.
    Apart from that, we will also give the option of generating sine or cosine
    wave and the frequencies of these signals
    """
raw_value = 2 * np.pi * signal_freq * np.arange(0, sample_duration, 1. / sample_freq)
if signal_type == 'cos':
return np.cos(raw_value)
elif signal_type == 'sin':
return np.sin(raw_value)
# change default style figure and font size
plt.rcParams['figure.figsize'] = 8, 6
plt.rcParams['font.size'] = 12
plt.style.use('fivethirtyeight')
# dissimilar signals have low correlation
signal1 = create_signal(10, 100, 'sin', 0.1)
signal2 = create_signal(10, 100, 'cos', 0.1)
plt.plot(signal1, label='Sine')
plt.plot(signal2, label='Cosine')
plt.title('Correlation={:.1f}'.format(np.dot(signal1, signal2)))
plt.legend()
plt.show()
# similar signals have high correlation
signal1 = create_signal(10, 100, 'sin', 0.1)
signal2 = create_signal(10, 100, 'sin', 0.1)
plt.plot(signal1, label='Sine 1')
plt.plot(signal2, label='Sine 2', linestyle='--')
plt.title('Correlation={}'.format(np.dot(signal1, signal2)))
plt.legend()
plt.show()
Explanation: First Foray Into Discrete/Fast Fourier Transformation
In many real-world applications, signals are typically represented as a sequence of numbers that are time dependent. A digital audio signal is one common example; the hourly temperature in California is another. In order to extract meaningful characteristics from these kinds of data, many different transformation techniques have been developed to decompose them into simpler individual pieces that are much easier and more compact to reason with.
Discrete Fourier Transformation (DFT) is one of these algorithms that takes a signal as an input and breaks it down into many individual frequency components. Giving us, the end-user, easier pieces to work with. For the digital audio signal, applying DFT gives us what tones are represented in the sound and at what energies.
Some basics of digital signal processing are assumed. The following link contains an excellent primer to get people up to speed. I strongly recommend going through all of it if the reader is not pressed for time. Blog: Seeing Circles, Sines, And Signals a Compact Primer On Digital Signal Processing
Correlation
Correlation is a widely used concept in signal processing. It must be noted that the definition of correlation here is slightly different from the definition we encounter in statistics. In the context of signal processing, correlation measures how similar two signals are by computing the dot product between the two, i.e. given two signals $x$ and $y$, the correlation of the two signals can be computed using:
\begin{align}
\sum_{n=0}^N x_n \cdot y_n
\end{align}
The intuition behind this is that if the two signals are indeed similar, then whenever $x_n$ is positive/negative then $y_n$ should also be positive/negative. Hence, when the two signals' signs often match, the resulting correlation number will also be large, indicating that the two signals are similar to one another. It is worth noting that correlation can also take on negative values; a large negative correlation means that the signals are still similar to each other, but one is inverted with respect to the other.
End of explanation
# reminder:
# sample_duration means we're collecting the data for x seconds
# sample_freq means we're sampling x times per second
sample_duration = 10
sample_freq = 100
signal_type = 'sin'
num_samples = sample_freq * sample_duration
num_components = 4
components = np.zeros((num_components, num_samples))
components[0] = np.ones(num_samples)
components[1] = create_signal(sample_duration, sample_freq, signal_type, 10)
components[2] = create_signal(sample_duration, sample_freq, signal_type, 2)
components[3] = create_signal(sample_duration, sample_freq, signal_type, 0.5)
fig, ax = plt.subplots(nrows=num_components, sharex=True, figsize=(12,8))
for i in range(num_components):
ax[i].plot(components[i])
ax[i].set_ylim((-1.1, 1.1))
ax[i].set_title('Component {}'.format(i))
ax[i].set_ylabel('Amplitude')
ax[num_components - 1].set_xlabel('Samples')
plt.tight_layout()
Explanation: Correlation is one of the key concepts behind DFT because, as we'll soon see, in DFT our goal is to find frequencies that give a high correlation with the signal at hand, and a high amplitude of this correlation indicates the presence of this frequency in our signal.
Fourier Transformation
Fourier Transformation takes a time-based signal as an input, measures every possible cycle and returns the overall cycle components (by cycle, we're essentially referring to circles). Each cycle component stores information such as, for each cycle:
Amplitude: how big is the circle?
Frequency: How fast is it moving? The faster the cycle component is moving, the higher the frequency of the wave.
Phase: Where does it start, or what angle does it start?
This cycle component is also referred to as phasor. The following gif aims to make this seemingly abstract description into a concrete process that we can visualize.
<img src="img/fft_decompose.gif">
After applying DFT to our signal shown on the right, we realized that it can be decomposed into five different phasors. Here, the center of the first phasor/cycle component is placed at the origin, and the center of each subsequent phasor is "attached" to the tip of the previous phasor. Once the chain of phasors is built, we begin rotating the phasor. We can then reconstruct the time domain signal by tracing the vertical distance from the origin to the tip of the last phasor.
Let's now take a look at DFT's formula:
\begin{align}
X_k = \sum_{n=0}^{N-1} x_n \cdot e^{ -\varphi \mathrm{i} }
\end{align}
$x_n$: The signal's value at time $n$.
$e^{-\varphi\mathrm{i}}$: Is a compact way of describing a pair of sine and cosine waves.
$\varphi = \frac{n}{N} 2\pi k$: Records the phase and frequency of our cycle components. Here $N$ is the number of samples we have, $n$ is the current sample we're considering, and $k$ is the current frequency we're considering. The $2\pi k$ part represents the cycle component's speed measured in radians and $n / N$ measures the percentage of time that our cycle component has traveled.
$X_k$: Amount of the cycle component with frequency $k$.
Side Note: If the reader is a bit rusty with trigonometry (sine and cosine) or complex numbers, there are already many excellent materials out there that cover these concepts. Blog: Trigonometry Review and Blog: Complex Numbers
From the formula, we notice that it's taking the dot product between the original signal $x_n$ and $e^{ -\varphi \mathrm{i} }$. If we expand $e^{ -\varphi \mathrm{i} }$ using Euler's formula, $e^{ -\varphi \mathrm{i} } = cos(\varphi) - sin(\varphi)i$, we end up with the formula:
\begin{align}
X_k &= \sum_{n=0}^{N-1} x_n \cdot \big( cos(\varphi) - sin(\varphi)i \big) \
&= \sum_{n=0}^{N-1} x_n \cdot cos(\varphi) - i \sum_{n=0}^{N-1} x_n \cdot sin(\varphi)
\end{align}
By breaking down the formula a little bit, we can see that, underneath the hood, what Fourier transformation is doing is taking the input signal and performing two correlation calculations: one with the sine wave (which gives us the y coordinates of the circle) and one with the cosine wave (which gives us the x coordinates of the circle). The following succinct, colour-coded one-sentence explanation is also a great reference to come back to.
<img src="img/fft_one_sentence.png" width="50%" height="50%">
DFT In Action
To see DFT in action, we will create a dummy signal that will be composed of four sinusoidal waves of different frequencies. 0, 10, 2 and 0.5 Hz respectively.
End of explanation
signal = -0.5 * components[0] + 0.1 * components[1] + 0.2 * components[2] - 0.6 * components[3]
plt.plot(signal)
plt.xlabel('Samples')
plt.ylabel('Amplitude')
plt.show()
Explanation: Then we will combine these individual signals together with some weights assigned to each signal.
End of explanation
fft_result = np.fft.fft(signal)
print('length of fft result: ', len(fft_result))
fft_result[:5]
Explanation: By looking at the dummy signal we've created visually, we might be able to notice the presence of a signal which shows 5 periods in the sampling duration of 10 seconds. In other words, after applying DFT to our signal, we should expect the presence of a signal with a frequency of 0.5 Hz.
Here, we will leverage numpy's implementation to check whether the result makes intuitive sense or not. The implementation is called fft, but let's not worry about that for the moment.
End of explanation
plt.plot(np.abs(fft_result))
plt.xlim((-5, 120)) # notice that we limited the x-axis to 120 to focus on the interesting part
plt.ylim((-5, 520))
plt.xlabel('K')
plt.ylabel('|DFT(K)|')
plt.show()
Explanation: The fft routine returns an array of length 1000, which is equivalent to the number of samples. If we look at each individual element in the array, we'll notice that these are the DFT coefficients. Each has two components: the real part corresponds to the cosine waves and the imaginary part comes from the sine waves. In general though, we don't really care whether a cosine or sine wave is present, as we are only concerned with which frequency pattern has a higher correlation with our original signal. This can be done by considering the absolute value of these coefficients.
End of explanation
t = np.linspace(0, sample_freq, len(fft_result))
plt.plot(t, np.abs(fft_result))
plt.xlim((-1, 15))
plt.ylim((-5, 520))
plt.xlabel('K')
plt.ylabel('|DFT(K)|')
plt.show()
Explanation: If we plot the absolute values of the fft result, we can clearly see a spike at K=0, 5, 20, 100 in the graph above. However, we are oftentimes more interested in the energy of each frequency. Frequency Resolution is the distance in Hz between two adjacent data points in DFT, which is defined as:
\begin{align}
\Delta f = \frac{f_s}{N}
\end{align}
Where $f_s$ is the sampling rate and $N$ is the number of data points. The denominator can be expressed in terms of sampling rate and time, $N = f_s \cdot t$. Looking closely at the formula, it is telling us the only thing that increases frequency resolution is time.
In our case, the sample_duration we've specified above was 10, thus the frequencies corresponding to these K are: 0 Hz, 0.5 Hz, 2 Hz and 10 Hz respectively (remember that these frequencies were the components that were used in the dummy signal that we've created). And based on the graph depicted below, we can see that by passing our signal to a DFT, we were able to retrieve its underlying frequency information.
End of explanation
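As a quick sanity check of that mapping, the frequency resolution can be computed directly from the values already defined in this notebook (nothing new is assumed here beyond sample_freq and num_samples from above):
delta_f = float(sample_freq) / num_samples       # 100 / 1000 = 0.1 Hz per DFT bin
print([k * delta_f for k in (0, 5, 20, 100)])    # [0.0, 0.5, 2.0, 10.0] Hz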
def dft(x):
    """Compute the Discrete Fourier Transform of the 1d ndarray x."""
N = x.size
n = np.arange(N)
k = n.reshape((N, 1))
# complex number in python are denoted by the j symbol,
# instead of i that we're showing in the formula
e = np.exp(-2j * np.pi * k * n / N)
return np.dot(e, x)
# apply dft to our original signal and confirm
# the results looks the same
dft_result = dft(signal)
print('result matches:', np.allclose(dft_result, fft_result))
plt.plot(np.abs(dft_result))
plt.xlim((-5, 120))
plt.ylim((-5, 520))
plt.xlabel('K')
plt.ylabel('|DFT(K)|')
plt.show()
Explanation: Fast Fourier Transformation (FFT)
Recall that the formula for Discrete Fourier Transformation was:
\begin{align}
X_k = \sum_{n=0}^{N-1} x_n \cdot e^{ -\frac{n}{N} 2\pi k \mathrm{i} }
\end{align}
Since we now know that it's computing the dot product between the original signal and a cycle component at every frequency, we can implement this ourselves.
End of explanation
%timeit dft(signal)
%timeit np.fft.fft(signal)
Explanation: However, if we compare the timing between our simplistic implementation versus the one from numpy, we can see a dramatic time difference.
End of explanation
def fft(x):
N = x.shape[0]
if N % 2 > 0:
raise ValueError('size of x must be a power of 2')
elif N <= 32: # this cutoff should be enough to start using the non-recursive version
return dft(x)
else:
fft_even = fft(x[0::2])
fft_odd = fft(x[1::2])
factor = np.exp(-2j * np.pi * np.arange(N) / N)
return np.concatenate([fft_even + factor[:N // 2] * fft_odd,
fft_even + factor[N // 2:] * fft_odd])
# here, we assume the input data length is a power of two
# if it doesn't, we can choose to zero-pad the input signal
x = np.random.random(1024)
np.allclose(fft(x), np.fft.fft(x))
%timeit dft(x)
%timeit fft(x)
%timeit np.fft.fft(x)
Explanation: If we leave aside the fact that one is implemented using Python's numpy and one is most likely implemented in optimized C++, the time difference actually comes from the fact that, in practice, people use a more optimized version of Fourier Transformation called Fast Fourier Transformation (how unexpected ...) to perform the calculation. The algorithm accomplishes a significant speedup by exploiting a symmetry property, i.e. if we devise a hypothetical algorithm which can decompose a 1024-point DFT into two 512-point DFTs, then we are essentially halving our computational cost. Let's take a look at how we can achieve this by looking at an example with 8 data points.
\begin{align}
X_k = x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } + \dots + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 7 }
\end{align}
Our goal is to examine the possibility of rewriting this eight-point DFT in terms of two DFTs of smaller length. Let's first examine choosing all the terms with an even sample index, i.e. $x_0$, $x_2$, $x_4$, and $x_6$. Giving us:
\begin{align}
G_k &= x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_2 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 2 } + x_4 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 4 } + x_6 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 6 } \
&= x_0 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 0 } + x_2 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 1 } + x_4 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 2 } + x_6 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 3 }
\end{align}
After plugging the values for the even sample index and simplifying the fractions in the complex exponentials, we can observe that our $G_k$ is a 4 samples DFT with $x_0$, $x_2$, $x_4$, $x_6$ as our input signal. Now that we've shown that we can decompose the even index samples, let's see if we can simplify the remaining terms, the odd-index samples, are given by:
\begin{align}
Q_k &= x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } + x_3 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 3 } + x_5 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 5 } + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 7 } \
&= e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot \big( x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 0 } + x_3 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 2 } + x_5 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 4 } + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 6 } \big) \
&= e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot \big( x_1 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 0 } + x_3 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 1 } + x_5 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 2 } + x_7 \cdot e^{ -\mathrm{i} \frac{2\pi}{4} k ~\times~ 3 } \big) \
&= e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot H_k
\end{align}
After the derivation, we can see our $Q_k$ is obtained by multiplying $e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 }$ by the four point DFT with the odd index samples of $x_1$, $x_3$, $x_5$, $x_7$, which we'll denote as $H_k$. Hence, we have achieved the goal of decomposing an eight-point DFT into two four-point ones:
\begin{align}
X_k &= G_k + e^{ -\mathrm{i} \frac{2\pi}{8} k ~\times~ 1 } \cdot H_k
\end{align}
We have only worked through rearranging the terms a bit, next we'll introduce a symmetric trick that allows us to compute the sub-result only once and save computational cost.
The question that we'll be asking ourselves is what is the value of $X_{N+k}$ is. From our above expression:
\begin{align}
X_{N + k} &= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~(N + k)~n~/~N}\
&= \sum_{n=0}^{N-1} x_n \cdot e^{- i~2\pi~n} \cdot e^{-i~2\pi~k~n~/~N}\
&= \sum_{n=0}^{N-1} x_n \cdot e^{-i~2\pi~k~n~/~N}
\end{align}
Here we've used the property that $exp[2\pi~i~n] = 1$ for any integer $n$, since $exp[2\pi~i]$ means that we're going 1 full circle, and multiplying that number by any integer $n$ means we're spinning for $n$ circles. The last line shows a nice symmetry property of the DFT: $X_{N+k}=X_k$. This means when we break our eight-point DFT into two four-point DFTs, it allows us to re-use a lot of the results for both $X_k$ and $X_{k + 4}$ and significantly reduce the number of calculations through the symmetric property:
\begin{align}
X_{k + 4} &= G_{k + 4} + e^{ -\mathrm{i} \frac{2\pi}{8} (k + 4) ~\times~ 1 } \cdot H_{k + 4} \
&= G_k + e^{ -\mathrm{i} \frac{2\pi}{8} (k + 4) ~\times~ 1 } \cdot H_k
\end{align}
We saw that the starting point of the algorithm was that the DFT length $N$ was even and we were able to decrease the computation by splitting it into two DFTS of length $N/2$, following this procedure we can again decompose each of the $N/2$ DFTs into two $N/4$ DFTs. This property turns the original $\mathcal{O}[N^2]$ DFT computation into a $\mathcal{O}[N\log N]$ algorithm to compute DFT.
End of explanation |
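The recursive fft above requires the input length to be a power of two; for inputs that are not, the comments mention zero-padding as one option. A minimal sketch of such a helper is shown below -- it is an illustration only, and note that padding changes the frequency grid, since the padded length enters the definition of the DFT bins:
def pad_to_pow2(x):
    # next power of two that is >= len(x)
    n = 1 << (len(x) - 1).bit_length()
    return np.concatenate([x, np.zeros(n - len(x))])
padded = pad_to_pow2(np.random.random(1000))
print(len(padded))  # 1024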
2,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: SLD gradients
For the moment, BornAgain does not support input of SLD profiles. However, one can approximate the smooth SLD profile by a large number of layers. See the example script below.
Step2: Exercise
Play with the script above. Change the function for SLD profile, add/remove/vary the beam divergence. How does it influence the simulation result?
Step7: Diffuse scattering
Step8: Exercise
Add the rotational distribution. If you get stuck, see the solution or just run the line below.
Step13: Lattice rotation
Imaging techniques deliver information from a few $\mu m^2$, while GISAS typically averages the information over the whole sample surface. This explains the effect of diffuse scattering for samples that look well ordered on SEM/AFM images.
Let's take a BornAgain example and modify it to account for variety of lattice rotations.
First, we increase the lattice length a bit to see more peaks, and the DecayLengths of the decay function to make the peaks narrower
Step14: Exercise
Modify the script above to account for the lattice rotational distribution. Let's distribute lattice rotation angles using Gaussian PDF.
Hint
Step15: Solution
See the solution or just run the line below.
Step20: Remember, peak on the detector is observed when the reciprocal lattice is aligned so that the Ewald sphere intersects the peak. Thus, lattice rotations cause additional peaks at the GISAS pattern.
Image from | Python Code:
# %load density_grad.py
import numpy as np
import bornagain as ba
from bornagain import deg, angstrom, nm
# define used SLDs
sld_D2O = 6.34e-06
sld_polymer = 4.0e-06
sld_Si = 2.07e-06
h = 100.0*nm # thickness of the non-uniform polymer layer
nslices = 100 # number of slices to slice the polymer layer
def get_sld(z):
    """function to calculate SLD(z) for the polymer layer"""
return sld_polymer*np.exp(-z/h)
def add_slices(multilayer):
dz = h/nslices
zvals = np.linspace(0, h, nslices, endpoint=False) + 0.5*dz
for z in zvals:
sld = get_sld(z)
material = ba.MaterialBySLD("Polymer_{:.1f}".format(z), sld, 0.0)
layer = ba.Layer(material, dz)
multilayer.addLayer(layer)
def get_sample():
# Defining Materials
m_Si = ba.MaterialBySLD("Si", sld_Si, 0.0)
m_Polymer = ba.MaterialBySLD("Polymer-0", sld_polymer, 0.0)
m_D2O = ba.MaterialBySLD("D2O", sld_D2O, 0.0)
# Defining Layers
layer_si = ba.Layer(m_Si)
layer_polymer = ba.Layer(m_Polymer, 2.0*nm)
layer_d2o = ba.Layer(m_D2O)
# Defining Multilayers
multiLayer = ba.MultiLayer()
multiLayer.addLayer(layer_si)
multiLayer.addLayer(layer_polymer)
add_slices(multiLayer)
multiLayer.addLayer(layer_d2o)
return multiLayer
def get_simulation():
simulation = ba.SpecularSimulation()
alpha_i_axis = ba.FixedBinAxis("alpha_i", 500, 0.0*deg, 6.5*deg)
simulation.setBeamParameters(8.0*angstrom, alpha_i_axis)
simulation.setBeamIntensity(1.0)
# add wavelength distribution
distr_1 = ba.DistributionCosine(8.0*angstrom, 0.8*angstrom/2.355)
simulation.addParameterDistribution("*/Beam/Wavelength", distr_1, 50, 2.0, ba.RealLimits.positive())
return simulation
def run_simulation():
sample = get_sample()
simulation = get_simulation()
simulation.setSample(sample)
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
results = run_simulation()
ba.plot_simulation_result(results, units=ba.AxesUnits.QSPACE)
Explanation: SLD gradients
For the moment, BornAgain does not support input of SLD profiles. However, one can approximate the smooth SLD profile by a large number of layers. See the example script below.
End of explanation
# plot an SLD profile
import matplotlib.pyplot as plt
x = np.linspace(0, h, nslices)
y = get_sld(x)
plt.plot(x, y*1e+6, color='k')
plt.xlabel(r'$z$ (nm)')
plt.ylabel(r'SLD$\cdot 10^6$')
plt.title("SLD profile");
Explanation: Exercise
Play with the script above. Change the function for SLD profile, add/remove/vary the beam divergence. How does it influence the simulation result?
End of explanation
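One possible variation for this exercise is to swap in a different SLD profile, for instance a smooth tanh-like transition; the profile below is only an illustrative assumption, not part of the original script:
def get_sld(z):
    # smooth step from sld_polymer near the substrate down to ~0 near the solvent
    return 0.5 * sld_polymer * (1.0 - np.tanh((z - 0.5 * h) / (0.1 * h)))
After redefining get_sld, re-running run_simulation() and the profile plot above shows how the reflectivity curve responds to the changed gradient.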
# %load https://www.bornagainproject.org/files/python/simulation/ex01_BasicParticles/RotatedPyramids.py
"""Rotated pyramids on top of substrate"""
import bornagain as ba
from bornagain import deg, angstrom, nm
def get_sample():
    """Returns a sample with rotated pyramids on top of a substrate."""
# defining materials
m_ambience = ba.HomogeneousMaterial("Air", 0.0, 0.0)
m_substrate = ba.HomogeneousMaterial("Substrate", 6e-6, 2e-8)
m_particle = ba.HomogeneousMaterial("Particle", 6e-4, 2e-8)
# collection of particles
pyramid_ff = ba.FormFactorPyramid(40*nm, 20*nm, 54.73*deg)
pyramid = ba.Particle(m_particle, pyramid_ff)
transform = ba.RotationZ(45.*deg)
particle_layout = ba.ParticleLayout()
particle_layout.addParticle(
pyramid, 1.0, ba.kvector_t(0.0, 0.0, 0.0), transform)
air_layer = ba.Layer(m_ambience)
air_layer.addLayout(particle_layout)
substrate_layer = ba.Layer(m_substrate)
multi_layer = ba.MultiLayer()
multi_layer.addLayer(air_layer)
multi_layer.addLayer(substrate_layer)
return multi_layer
def get_simulation():
    """Returns a GISAXS simulation with beam and detector defined."""
simulation = ba.GISASSimulation()
simulation.setDetectorParameters(200, -2.0*deg, 2.0*deg,
200, 0.0*deg, 2.0*deg)
simulation.setBeamParameters(1.0*angstrom, 0.2*deg, 0.0*deg)
return simulation
def run_simulation():
    """Runs simulation and returns intensity map."""
simulation = get_simulation()
simulation.setSample(get_sample())
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
result = run_simulation()
ba.plot_simulation_result(result)
Explanation: Diffuse scattering: disordered samples
Understanding the diffuse scattering in GISAS is a challenging task. The reason for the diffuse scattering is any kind of disorder in the sample.
Possible reasons of diffuse scattering
- Particle size distribution
- Different kinds of particles
- Disordered particle layout
- Variety of particle rotations
- Variety of lattice rotation
- Polymer density fluctuations
Particle rotation
Let's take the Rotated Pyramids example and modify it to account for a rotational distribution of particles. First, we increase the size of the pyramids a bit to get nicer images. Set the pyramid BaseEdge to be 40 nm and the pyramid Height to 20 nm:
python
pyramid_ff = ba.FormFactorPyramid(40*nm, 20*nm, 54.73*deg)
and run the script below.
End of explanation
%load RotatedPyramids.py
Explanation: Exercise
Add the rotational distribution. If you get stuck, see the solution or just run the line below.
End of explanation
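If the linked solution is not at hand, the idea can be sketched with only the calls already used above: replace the single addParticle call in get_sample() with several rotated copies of the pyramid carrying Gaussian-like weights. This is a rough stand-in for a true rotational distribution (BornAgain also provides dedicated distribution classes, which the official solution may use instead):
import numpy as np
angles = np.linspace(0.0, 90.0, 10)                    # candidate Z-rotation angles in degrees
weights = np.exp(-0.5 * ((angles - 45.0) / 15.0)**2)   # Gaussian weights centred at 45 degrees
weights /= weights.sum()
for angle, weight in zip(angles, weights):
    particle_layout.addParticle(
        pyramid, weight, ba.kvector_t(0.0, 0.0, 0.0), ba.RotationZ(angle*deg))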
# %load https://www.bornagainproject.org/files/python/simulation/ex03_InterferenceFunctions/SpheresAtHexLattice.py
"""Spheres on a hexagonal lattice"""
import bornagain as ba
from bornagain import deg, angstrom, nm
def get_sample():
    """Returns a sample with spherical particles on a substrate,
    forming a hexagonal 2D lattice.
    """
m_air = ba.HomogeneousMaterial("Air", 0.0, 0.0)
m_substrate = ba.HomogeneousMaterial("Substrate", 6e-6, 2e-8)
m_particle = ba.HomogeneousMaterial("Particle", 6e-4, 2e-8)
sphere_ff = ba.FormFactorFullSphere(10.0*nm)
sphere = ba.Particle(m_particle, sphere_ff)
particle_layout = ba.ParticleLayout()
particle_layout.addParticle(sphere)
interference = ba.InterferenceFunction2DLattice.createHexagonal(35.0*nm)
pdf = ba.FTDecayFunction2DCauchy(100*nm, 100*nm)
interference.setDecayFunction(pdf)
particle_layout.setInterferenceFunction(interference)
air_layer = ba.Layer(m_air)
air_layer.addLayout(particle_layout)
substrate_layer = ba.Layer(m_substrate, 0)
multi_layer = ba.MultiLayer()
multi_layer.addLayer(air_layer)
multi_layer.addLayer(substrate_layer)
return multi_layer
def get_simulation():
    """Create and return GISAXS simulation with beam and detector defined"""
simulation = ba.GISASSimulation()
simulation.setDetectorParameters(200, -1.0*deg, 1.0*deg,
200, 0.0*deg, 1.0*deg)
simulation.setBeamParameters(1.0*angstrom, 0.2*deg, 0.0*deg)
return simulation
def run_simulation():
    """Runs simulation and returns intensity map."""
simulation = get_simulation()
simulation.setSample(get_sample())
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
result = run_simulation()
ba.plot_simulation_result(result)
Explanation: Lattice rotation
Imaging techniques deliver information from a few $\mu m^2$, while GISAS typically averages the information over the whole sample surface. This explains the effect of diffuse scattering for samples that look well ordered on SEM/AFM images.
Let's take a BornAgain example and modify it to account for variety of lattice rotations.
First, we increase the lattice length a bit to see more peaks, and the DecayLengths of the decay function to make the peaks narrower:
python
interference = ba.InterferenceFunction2DLattice.createHexagonal(35.0*nm)
pdf = ba.FTDecayFunction2DCauchy(100*nm, 100*nm)
End of explanation
sample = get_sample()
print(sample.parametersToString())
Explanation: Exercise
Modify the script above to account for the lattice rotational distribution. Let's distribute lattice rotation angles using Gaussian PDF.
Hint: the code below helps to get the list of sample parameters
End of explanation
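A hedged sketch of one possible approach, reusing the addParameterDistribution mechanism already shown for the beam wavelength in the specular example above: inside run_simulation(), right after the simulation object is created, attach a Gaussian distribution to the lattice rotation angle. The parameter path "*/Xi" is an assumption here -- take the exact registered name from the parametersToString() output above or from the provided rotated_lattice.py:
# inside run_simulation(), after simulation = get_simulation():
distr_xi = ba.DistributionGaussian(0.0*deg, 5.0*deg)   # mean 0 deg, sigma 5 deg (illustrative values)
simulation.addParameterDistribution("*/Xi", distr_xi, 30, 2.0)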
%load rotated_lattice.py
Explanation: Solution
See the solution or just run the line below.
End of explanation
# %load polymer.py
import numpy as np
import bornagain as ba
from bornagain import deg, angstrom, nm
# KWS-1 detector parameters
npx, npy = 128, 128 # number of detector pixels
psize = 5.3 # pixel size, mm
det_width, det_height = npx*psize, npy*psize # mm, detector size
sdd = 20000.0 # mm, sample-detector distance
# direct beam position
beam_xpos, beam_ypos = 64.5, 64.5 # pixel
# incident angle
ai = 0.2 # degree
wavelength = 5.0 # angstrom
# beam
beam_intensity = 1.0
# SLDs
sld_Si = 2.074e-6
sld_Si_im = -2.3819e-11
sld_D2O = 6.356e-6
sld_D2O_im = -1.1295e-13
sld_polymer = 4.0e-6
sld_polymer_im = 0.0
def get_sample():
    """Returns a sample"""
# defining materials
m_si = ba.MaterialBySLD("Si", sld_Si, sld_Si_im)
m_d2o = ba.MaterialBySLD("D2O", sld_D2O, sld_D2O_im)
m_polymer = ba.MaterialBySLD("Polymer", sld_polymer, sld_polymer_im)
# particle layout
microgel_layout = ba.ParticleLayout()
# weights for components
w_particles = 0.005
w_oz =0.5
w_db = 1.0 - w_oz - w_particles
# fluctuation component
ff_oz = ba.FormFactorOrnsteinZernike(1000, 10.0*nm, 5.0*nm)
particle_oz = ba.Particle(m_polymer, ff_oz)
microgel_layout.addParticle(particle_oz, w_oz)
# Debye-Buche component
ff_db = ba.FormFactorDebyeBueche(1000, 20.0*nm)
particle_db = ba.Particle(m_polymer, ff_db)
microgel_layout.addParticle(particle_db, w_db)
# collection of particles
radius = 100.0*nm
ff = ba.FormFactorTruncatedSphere(radius=radius, height=radius)
particle = ba.Particle(m_polymer, ff)
particle.setPosition(ba.kvector_t(0.0, 0.0, -1.0*radius))
microgel_layout.addParticle(particle, w_particles)
# no interference function
interference = ba.InterferenceFunctionNone()
microgel_layout.setInterferenceFunction(interference)
microgel_layout.setTotalParticleSurfaceDensity(1e-6)
d2o_layer = ba.Layer(m_d2o)
d2o_layer.addLayout(microgel_layout)
si_layer = ba.Layer(m_si)
multi_layer = ba.MultiLayer()
multi_layer.addLayer(si_layer)
multi_layer.addLayer(d2o_layer)
return multi_layer
def create_detector():
    """Creates and returns KWS-1 detector"""
u0 = beam_xpos*psize # in mm
v0 = beam_ypos*psize # in mm
detector = ba.RectangularDetector(npx, det_width, npy, det_height)
detector.setPerpendicularToDirectBeam(sdd, u0, v0)
return detector
def get_simulation(wl=5.0, alpha_i=ai):
    """Returns a GISAS simulation with beam and detector defined"""
simulation = ba.GISASSimulation()
simulation.setBeamParameters(wl*ba.angstrom, alpha_i*ba.deg, 0.0*ba.deg)
simulation.setDetector(create_detector())
simulation.setBeamIntensity(beam_intensity)
return simulation
def run_simulation():
    """Runs simulation and returns resulting intensity map."""
sample = get_sample()
simulation = get_simulation(wavelength)
simulation.setDetectorResolutionFunction(ba.ResolutionFunction2DGaussian(2.0*psize, 1.0*psize))
simulation.setSample(sample)
simulation.setRegionOfInterest(20, 400, 650, 650)
# options
simulation.getOptions().setUseAvgMaterials(True)
#simulation.getOptions().setIncludeSpecular(True)
simulation.setTerminalProgressMonitor()
simulation.runSimulation()
return simulation.result()
if __name__ == '__main__':
result = run_simulation()
ba.plot_simulation_result(result, units=ba.AxesUnits.QSPACE)
Explanation: Remember, peak on the detector is observed when the reciprocal lattice is aligned so that the Ewald sphere intersects the peak. Thus, lattice rotations cause additional peaks at the GISAS pattern.
Image from: K. Yager, GISAXS/GIWAX Data Analysis: Thinking in Reciprocal Space
Feel free to play with the example: change the kind of the distribution, add the beam divergence. How does it influence the simulation result?
Polymer density fluctuations
Polymer density fluctuations smear out the peaks and cause a lot of diffuse scattering. An example GISANS pattern is shown in the figure below [1].
The figure below illustrates the kinds of inhomogeneities in polymer solutions. Blobs of polymer chains are represented as black lines and blobs of crosslinks with red dots.
Schematic representations of (a) a two-dimensional reaction bath well above the chain gelation threshold, (b) an overswollen gel by the addition of solvent and (c) dynamic, static and total concentration fluctuations with space coordinate r. For the sake of simplicity, the chains, which are random walks on this lattice, are not shown in the figure. Black dots represent the interchain crosslinks placed at random [2].
These inhomogeneities account for the diffuse scattering. To take them into account, two form factors are available in BornAgain:
Form factor Ornstein-Zernike
Born form factor is implemented in BornAgain as
$$F_{OZ}(\mathbf{q}) = \sqrt{\frac{I_0}{1 + \xi_{xy}^2\cdot(q_x^2 + q_y^2) + \xi_z^2\cdot q_z^2}}$$
where $\xi_{xy}$ and $\xi_z$ represent the inhomogeneity blob size (in nm) in the azimuthal and vertical directions, respectively.
To create the Ornstein-Zernike form factor, use statement
python
import bornagain as ba
myff = ba.FormFactorOrnsteinZernike(I0, xi_xy, xi_z)
Form factor Debye-Buche
Born form factor is implemented in BornAgain as
$$F_{DB}(\mathbf{q}) = \frac{\sqrt{I_0}}{1 + \xi^2\cdot|\mathbf{q}|^2}$$
To create it, use statement
python
import bornagain as ba
myff = ba.FormFactorDebyeBueche(I0, xi)
Example script
End of explanation |
2,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to Load CSV and Numpy File Types in TensorFlow 2.0
Learning Objectives
Load a CSV file into a tf.data.Dataset.
Load Numpy data
Introduction
In this lab, you load CSV data from a file into a tf.data.Dataset. This tutorial also provides an example of loading data from NumPy arrays into a tf.data.Dataset; you also load text data.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Load necessary libraries
We will start by importing the necessary libraries for this lab.
Step1: Load data
This section provides an example of how to load CSV data from a file into a tf.data.Dataset. The data used in this tutorial are taken from the Titanic passenger list. The model will predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone.
To start, let's look at the top of the CSV file to see how it is formatted.
Step2: You can load this using pandas, and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with TensorFlow and tf.data then use the tf.data.experimental.make_csv_dataset function
Step3: Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset)
Step4: Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).
It might help to see this yourself.
Step5: As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them in a list of strings to the column_names argument in the make_csv_dataset function.
Step6: This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) select_columns argument of the constructor.
Step7: Data preprocessing
A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions
Step8: Here's a simple function that will pack together all the columns
Step9: Apply this to each element of the dataset
Step10: If you have mixed datatypes you may want to separate out these simple-numeric fields. The tf.feature_column api can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset
Step11: So define a more general preprocessor that selects a list of numeric features and packs them into a single column
Step12: Data Normalization
Continuous data should always be normalized.
Step13: Now create a numeric column. The tf.feature_columns.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer fn using functools.partial.
Step14: When you train the model, include this feature column to select and center this block of numeric data
Step15: The mean based normalization used here requires knowing the means of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.
Step16: This will become part of a data processing input later when you build the model.
Combined preprocessing layer
Add the two feature column collections and pass them to a tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types
Step17: Next Step
A next step would be to build a tf.keras.Sequential, starting with the preprocessing_layer, which is beyond the scope of this lab. We will cover the Keras Sequential API in the next Lesson.
Load NumPy data
Load necessary libraries
First, restart the Kernel. Then, we will start by importing the necessary libraries for this lab.
Step18: Load data from .npz file
We use the MNIST dataset in Keras.
Step19: Load NumPy arrays with tf.data.Dataset
Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset. | Python Code:
import functools
import numpy as np
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)
TRAIN_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/train.csv"
TEST_DATA_URL = "https://storage.googleapis.com/tf-datasets/titanic/eval.csv"
train_file_path = tf.keras.utils.get_file("train.csv", TRAIN_DATA_URL)
test_file_path = tf.keras.utils.get_file("eval.csv", TEST_DATA_URL)
# Make numpy values easier to read.
np.set_printoptions(precision=3, suppress=True)
Explanation: How to Load CSV and Numpy File Types in TensorFlow 2.0
Learning Objectives
Load a CSV file into a tf.data.Dataset.
Load Numpy data
Introduction
In this lab, you load CSV data from a file into a tf.data.Dataset. This tutorial also provides an example of loading data from NumPy arrays into a tf.data.Dataset.
Each learning objective will correspond to a #TODO in the student lab notebook -- try to complete that notebook first before reviewing this solution notebook.
Load necessary libraries
We will start by importing the necessary libraries for this lab.
End of explanation
!head {train_file_path}
Explanation: Load data
This section provides an example of how to load CSV data from a file into a tf.data.Dataset. The data used in this tutorial are taken from the Titanic passenger list. The model will predict the likelihood a passenger survived based on characteristics like age, gender, ticket class, and whether the person was traveling alone.
To start, let's look at the top of the CSV file to see how it is formatted.
End of explanation
# TODO 1: Add string name for label column
LABEL_COLUMN = ""
LABELS = []
Explanation: You can load this using pandas, and pass the NumPy arrays to TensorFlow. If you need to scale up to a large set of files, or need a loader that integrates with TensorFlow and tf.data then use the tf.data.experimental.make_csv_dataset function:
The only column you need to identify explicitly is the one with the value that the model is intended to predict.
End of explanation
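A hedged completion of TODO 1 above (a sketch, not necessarily the lab's graded answer): in the Titanic CSV used here, the target column is named survived and takes the values 0 and 1.
# Possible completion of TODO 1 (assumes the 'survived' column of the Titanic CSV).
LABEL_COLUMN = 'survived'
LABELS = [0, 1]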
def get_dataset(file_path, **kwargs):
# TODO 2
# TODO: Read the CSV data from the file and create a dataset
dataset = tf.data.experimental.make_csv_dataset(
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
# TODO: Your code goes here.
)
return dataset
raw_train_data = # TODO: Your code goes here.
raw_test_data = # TODO: Your code goes here.
def show_batch(dataset):
for batch, label in dataset.take(1):
for key, value in batch.items():
print(f"{key:20s}: {value.numpy()}")
Explanation: Now read the CSV data from the file and create a dataset.
(For the full documentation, see tf.data.experimental.make_csv_dataset)
End of explanation
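A hedged sketch of how the TODOs in get_dataset could be filled in. Only documented make_csv_dataset arguments are used; the batch size of 5 matches the batches described below, while na_value and ignore_errors are illustrative choices rather than requirements of the lab.
# Sketch completion of TODO 2 (illustrative, not necessarily the lab's exact solution).
def get_dataset(file_path, **kwargs):
    dataset = tf.data.experimental.make_csv_dataset(
        file_path,
        batch_size=5,             # small batches so the examples are easy to show
        label_name=LABEL_COLUMN,  # the label column defined above
        na_value='?',
        num_epochs=1,
        ignore_errors=True,
        **kwargs)
    return dataset

raw_train_data = get_dataset(train_file_path)
raw_test_data = get_dataset(test_file_path)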
show_batch(raw_train_data)
Explanation: Each item in the dataset is a batch, represented as a tuple of (many examples, many labels). The data from the examples is organized in column-based tensors (rather than row-based tensors), each with as many elements as the batch size (5 in this case).
It might help to see this yourself.
End of explanation
CSV_COLUMNS = [
"survived",
"sex",
"age",
"n_siblings_spouses",
"parch",
"fare",
"class",
"deck",
"embark_town",
"alone",
]
temp_dataset = get_dataset(train_file_path, column_names=CSV_COLUMNS)
show_batch(temp_dataset)
Explanation: As you can see, the columns in the CSV are named. The dataset constructor will pick these names up automatically. If the file you are working with does not contain the column names in the first line, pass them in a list of strings to the column_names argument in the make_csv_dataset function.
End of explanation
SELECT_COLUMNS = [
"survived",
"age",
"n_siblings_spouses",
"class",
"deck",
"alone",
]
temp_dataset = get_dataset(train_file_path, select_columns=SELECT_COLUMNS)
show_batch(temp_dataset)
Explanation: This example is going to use all the available columns. If you need to omit some columns from the dataset, create a list of just the columns you plan to use, and pass it into the (optional) select_columns argument of the constructor.
End of explanation
SELECT_COLUMNS = ["survived", "age", "n_siblings_spouses", "parch", "fare"]
DEFAULTS = [0, 0.0, 0.0, 0.0, 0.0]
temp_dataset = get_dataset(
train_file_path, select_columns=SELECT_COLUMNS, column_defaults=DEFAULTS
)
show_batch(temp_dataset)
example_batch, labels_batch = next(iter(temp_dataset))
Explanation: Data preprocessing
A CSV file can contain a variety of data types. Typically you want to convert from those mixed types to a fixed length vector before feeding the data into your model.
TensorFlow has a built-in system for describing common input conversions: tf.feature_column, see this tutorial for details.
You can preprocess your data using any tool you like (like nltk or sklearn), and just pass the processed output to TensorFlow.
The primary advantage of doing the preprocessing inside your model is that when you export the model it includes the preprocessing. This way you can pass the raw data directly to your model.
Continuous data
If your data is already in an appropriate numeric format, you can pack the data into a vector before passing it off to the model:
End of explanation
def pack(features, label):
return tf.stack(list(features.values()), axis=-1), label
Explanation: Here's a simple function that will pack together all the columns:
End of explanation
packed_dataset = temp_dataset.map(pack)
for features, labels in packed_dataset.take(1):
print(features.numpy())
print()
print(labels.numpy())
Explanation: Apply this to each element of the dataset:
End of explanation
show_batch(raw_train_data)
example_batch, labels_batch = next(iter(temp_dataset))
Explanation: If you have mixed datatypes you may want to separate out these simple-numeric fields. The tf.feature_column api can handle them, but this incurs some overhead and should be avoided unless really necessary. Switch back to the mixed dataset:
End of explanation
class PackNumericFeatures:
def __init__(self, names):
self.names = names
def __call__(self, features, labels):
numeric_features = [features.pop(name) for name in self.names]
numeric_features = [
tf.cast(feat, tf.float32) for feat in numeric_features
]
numeric_features = tf.stack(numeric_features, axis=-1)
features["numeric"] = numeric_features
return features, labels
NUMERIC_FEATURES = ["age", "n_siblings_spouses", "parch", "fare"]
packed_train_data = raw_train_data.map(PackNumericFeatures(NUMERIC_FEATURES))
packed_test_data = raw_test_data.map(PackNumericFeatures(NUMERIC_FEATURES))
show_batch(packed_train_data)
example_batch, labels_batch = next(iter(packed_train_data))
Explanation: So define a more general preprocessor that selects a list of numeric features and packs them into a single column:
End of explanation
import pandas as pd
desc = pd.read_csv(train_file_path)[NUMERIC_FEATURES].describe()
desc
# TODO 1
MEAN = # TODO: Your code goes here.
STD = # TODO: Your code goes here.
def normalize_numeric_data(data, mean, std):
# Center the data
# TODO 2
print(MEAN, STD)
Explanation: Data Normalization
Continuous data should always be normalized.
End of explanation
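One hedged way to complete the TODOs above: take the per-column mean and standard deviation from the describe() summary and use them to center and scale each batch.
# Sketch completion (illustrative): column statistics from the pandas summary above.
MEAN = np.array(desc.T['mean'])
STD = np.array(desc.T['std'])

def normalize_numeric_data(data, mean, std):
    # Center and scale the data
    return (data - mean) / std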
# See what you just created.
normalizer = functools.partial(normalize_numeric_data, mean=MEAN, std=STD)
numeric_column = tf.feature_column.numeric_column(
"numeric", normalizer_fn=normalizer, shape=[len(NUMERIC_FEATURES)]
)
numeric_columns = [numeric_column]
numeric_column
Explanation: Now create a numeric column. The tf.feature_columns.numeric_column API accepts a normalizer_fn argument, which will be run on each batch.
Bind the MEAN and STD to the normalizer fn using functools.partial.
End of explanation
example_batch["numeric"]
numeric_layer = tf.keras.layers.DenseFeatures(numeric_columns)
numeric_layer(example_batch).numpy()
Explanation: When you train the model, include this feature column to select and center this block of numeric data:
End of explanation
CATEGORIES = {
"sex": ["male", "female"],
"class": ["First", "Second", "Third"],
"deck": ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J"],
"embark_town": ["Cherbourg", "Southhampton", "Queenstown"],
"alone": ["y", "n"],
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = tf.feature_column.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab
)
categorical_columns.append(tf.feature_column.indicator_column(cat_col))
# See what you just created.
categorical_columns
categorical_layer = tf.keras.layers.DenseFeatures(categorical_columns)
print(categorical_layer(example_batch).numpy()[0])
Explanation: The mean based normalization used here requires knowing the means of each column ahead of time.
Categorical data
Some of the columns in the CSV data are categorical columns. That is, the content should be one of a limited set of options.
Use the tf.feature_column API to create a collection with a tf.feature_column.indicator_column for each categorical column.
End of explanation
# TODO 1
preprocessing_layer = # TODO: Your code goes here.
print(preprocessing_layer(example_batch).numpy()[0])
Explanation: This will become part of a data processing input later when you build the model.
Combined preprocessing layer
Add the two feature column collections and pass them to a tf.keras.layers.DenseFeatures to create an input layer that will extract and preprocess both input types:
End of explanation
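A hedged completion of the TODO above: concatenate the two feature-column lists and wrap them in a single DenseFeatures layer.
# Sketch completion (illustrative).
preprocessing_layer = tf.keras.layers.DenseFeatures(
    categorical_columns + numeric_columns)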
import numpy as np
import tensorflow as tf
print("TensorFlow version: ", tf.version.VERSION)
Explanation: Next Step
A next step would be to build a tf.keras.Sequential, starting with the preprocessing_layer, which is beyond the scope of this lab. We will cover the Keras Sequential API in the next Lesson.
Load NumPy data
Load necessary libraries
First, restart the Kernel. Then, we will start by importing the necessary libraries for this lab.
End of explanation
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
# TODO 1
train_examples = # TODO: Your code goes here.
train_labels = # TODO: Your code goes here.
test_examples = # TODO: Your code goes here.
test_labels = # TODO: Your code goes here.
Explanation: Load data from .npz file
We use the MNIST dataset in Keras.
End of explanation
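A hedged completion of the TODOs above, assuming the conventional key names stored inside mnist.npz ('x_train', 'y_train', 'x_test', 'y_test').
# Sketch completion (the key names are an assumption based on the standard file layout).
with np.load(path) as data:
    train_examples = data['x_train']
    train_labels = data['y_train']
    test_examples = data['x_test']
    test_labels = data['y_test']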
# TODO 2
train_dataset = # TODO: Your code goes here.
test_dataset = # TODO: Your code goes here.
Explanation: Load NumPy arrays with tf.data.Dataset
Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into tf.data.Dataset.from_tensor_slices to create a tf.data.Dataset.
End of explanation |
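A hedged completion of the TODOs above: pass each (examples, labels) pair to from_tensor_slices.
# Sketch completion (illustrative).
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))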
2,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
Step1: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
Step2: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note
Step6: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this
Step7: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
Step10: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords
Step11: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note
Step12: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
Step13: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
Step14: Try out your own text! | Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
Explanation: Sentiment analysis with TFLearn
In this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.
We'll start off by importing all the modules we'll need, then load and prepare the data.
End of explanation
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
Explanation: Preparing the data
Following along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.
Read the data
Use the pandas library to read the reviews and positive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.
End of explanation
from collections import Counter
total_counts = Counter()
for _, row in reviews.iterrows():
total_counts.update(row[0].split(' '))
print("Total words in data set: ", len(total_counts))
total_counts.most_common(10)
Explanation: Counting word frequency
To start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.
Exercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stored in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.
End of explanation
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
Explanation: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
End of explanation
print(vocab[-1], ': ', total_counts[vocab[-1]])
Explanation: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
End of explanation
word2idx = {word: i for i, word in enumerate(vocab)}
Explanation: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Note: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.
Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.
Exercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.
End of explanation
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
Explanation: Text to vector function
Now we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:
Initialize the word vector with np.zeros, it should be the length of the vocabulary.
Split the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.
For each word in that list, increment the element in the index associated with that word, which you get from word2idx.
Note: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.
End of explanation
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
Explanation: If you do this right, the following code should return
```
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,
0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])
```
End of explanation
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Explanation: Now, run through our entire review data set and convert each review to a word vector.
End of explanation
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
Explanation: Train, Validation, Test sets
Now that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
End of explanation
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, 10000])
# Hidden layer(s)
net = tflearn.fully_connected(net, 200, activation='ReLU')
net = tflearn.fully_connected(net, 25, activation='ReLU')
# Output layer
net = tflearn.fully_connected(net, 2, activation='softmax')
net = tflearn.regression(net, optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
Explanation: Building the network
TFLearn lets you build the network by defining the layers.
Input layer
For the input layer, you just need to tell it how many units you have. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.
The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling net = tflearn.fully_connected(net, n_units).
Output layer
The last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.
net = tflearn.fully_connected(net, 2, activation='softmax')
Training
To set how you train the network, use
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with the categorical cross-entropy.
Finally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like
net = tflearn.input_data([None, 10]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
End of explanation
model = build_model()
Explanation: Initializing the model
Next we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.
Note: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
End of explanation
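To illustrate the suggestion above about adding arguments, here is a hedged, hypothetical variant; the parameter names and defaults are assumptions that simply mirror the hard-coded values used earlier in this notebook.
# Hypothetical parameterized variant of build_model() (not part of the original notebook).
def build_model_with_args(hidden_units=(200, 25), learning_rate=0.1):
    tf.reset_default_graph()
    net = tflearn.input_data([None, 10000])
    for n_units in hidden_units:
        net = tflearn.fully_connected(net, n_units, activation='ReLU')
    net = tflearn.fully_connected(net, 2, activation='softmax')
    net = tflearn.regression(net, optimizer='sgd',
                             learning_rate=learning_rate,
                             loss='categorical_crossentropy')
    return tflearn.DNN(net)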
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=100)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit the network to our word vectors.
You can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.
End of explanation
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.
End of explanation
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
Explanation: Try out your own text!
End of explanation |
2,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint | Python Code:
import sys
print(sys.version)
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: 61.62%
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
int(data[:1]['Sex'] == "female")
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
# Simple way of returning 1 if female, 0 if male
if(passenger['Sex']=="female"):
z = 1
else:
z = 0
predictions.append(z)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Accuracy of 78.68%
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10.
predictions = []
for _, passenger in data.iterrows():
if(passenger['Sex'] == "female"):
z = 1
elif(passenger['Sex'] == "male" and passenger["Age"] < 10):
z = 1
else:
z = 0
predictions.append(z)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'", "Age < 18"])
# Note: exploration was done in R, as it lends itself better to
# data analysis. After building a decision tree, we will implement the logic
# from it in Python
Explanation: Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
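As a hedged illustration of the hint above, two further exploration calls; these particular feature/filter combinations are assumptions, chosen because they line up with the splits used in predictions_3 below.
# Illustrative exploration (not from the original notebook).
survival_stats(data, outcomes, 'Pclass', ["Sex == 'female'"])
survival_stats(data, outcomes, 'SibSp', ["Sex == 'male'", "Age < 10"])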
def predictions_3(data):
Model with multiple features. Makes a prediction with an accuracy of at least 80%.
predictions = []
for _, passenger in data.iterrows():
if passenger['Sex'] == "male":
if passenger['Age'] >= 6.5:
z = 0
if passenger['Age'] < 6.5:
if passenger['SibSp'] >= 2.5:
z = 0
else: z = 1
else:
z = 0
if passenger['Sex'] == 'female':
if passenger['Pclass'] >= 2.5:
if passenger['Fare'] >= 23.35:
z = 0
if passenger['Fare'] < 23.35:
if passenger['Embarked'] == "S":
z = 0
if passenger['Embarked'] in ["C", "Q"]:
z = 1
if passenger['Pclass'] < 2.5:
z = 1
predictions.append(z)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation |
2,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Style Transfer
Our Changes
Step1: Imports
Step2: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step3: The VGG-16 model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.
Step4: Helper-functions for image manipulation
This function loads an image and returns it as a numpy array of floating-points. The image can be automatically resized so the largest of the height or width equals max_size.
Step5: Save an image as a jpeg-file. The image is given as a numpy array with pixel-values between 0 and 255.
Step6: This function plots a large image. The image is given as a numpy array with pixel-values between 0 and 255.
This function plots the content-, mixed- and style-images.
Step10: Loss Functions
These helper-functions create the loss-functions that are used in optimization with TensorFlow.
This function creates a TensorFlow operation for calculating the Mean Squared Error between the two input tensors.
Step11: Example
This example shows how to transfer the style of various images onto a portrait.
First we load the content-image which has the overall contours that we want in the mixed-image.
Step12: Then we load the style-image which has the colours and textures we want in the mixed-image.
Step13: Then we define a list of integers which identify the layers in the neural network that we want to use for matching the content-image. These are indices into the layers in the neural network. For the VGG16 model, the 5th layer (index 4) seems to work well as the sole content-layer.
Step14: Then we define another list of integers for the style-layers. | Python Code:
from IPython.display import Image, display
Image('images/15_style_transfer_flowchart.png')
Explanation: Style Transfer
Our Changes:
We are just saving the mixed image every 10 iterations.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import PIL.Image
Explanation: Imports
End of explanation
tf.__version__
import vgg16
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
# vgg16.data_dir = 'vgg16/'
vgg16.maybe_download()
Explanation: The VGG-16 model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.
End of explanation
def load_image(filename, max_size=None):
image = PIL.Image.open(filename)
if max_size is not None:
# Calculate the appropriate rescale-factor for
# ensuring a max height and width, while keeping
# the proportion between them.
factor = max_size / np.max(image.size)
# Scale the image's height and width.
size = np.array(image.size) * factor
# The size is now floating-point because it was scaled.
# But PIL requires the size to be integers.
size = size.astype(int)
# Resize the image.
image = image.resize(size, PIL.Image.LANCZOS)
print(image)
# Convert to numpy floating-point array.
return np.float32(image)
Explanation: Helper-functions for image manipulation
This function loads an image and returns it as a numpy array of floating-points. The image can be automatically resized so the largest of the height or width equals max_size.
End of explanation
def save_image(image, filename):
# Ensure the pixel-values are between 0 and 255.
image = np.clip(image, 0.0, 255.0)
# Convert to bytes.
image = image.astype(np.uint8)
# Write the image-file in jpeg-format.
with open(filename, 'wb') as file:
PIL.Image.fromarray(image).save(file, 'jpeg')
Explanation: Save an image as a jpeg-file. The image is given as a numpy array with pixel-values between 0 and 255.
End of explanation
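Related to the 'Our Changes' note at the top of this notebook (saving the mixed image every 10 iterations), here is a hedged sketch of a helper that could be called from inside the optimization loop; the filename pattern, and the assumption that the loop exposes an iteration counter and the current mixed image, are illustrative since that loop is not shown in this excerpt.
def maybe_save_mixed_image(mixed_image, i, every=10):
    # Write the current mixed image to disk every 'every' iterations.
    if i % every == 0:
        save_image(mixed_image, 'mixed_{0:04d}.jpg'.format(i))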
def plot_image_big(image):
# Ensure the pixel-values are between 0 and 255.
image = np.clip(image, 0.0, 255.0)
# Convert pixels to bytes.
image = image.astype(np.uint8)
# Convert to a PIL-image and display it.
display(PIL.Image.fromarray(image))
def plot_images(content_image, style_image, mixed_image):
# Create figure with sub-plots.
fig, axes = plt.subplots(1, 3, figsize=(10, 10))
# Adjust vertical spacing.
fig.subplots_adjust(hspace=0.1, wspace=0.1)
# Use interpolation to smooth pixels?
smooth = True
# Interpolation type.
if smooth:
interpolation = 'sinc'
else:
interpolation = 'nearest'
# Plot the content-image.
# Note that the pixel-values are normalized to
# the [0.0, 1.0] range by dividing with 255.
ax = axes.flat[0]
ax.imshow(content_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Content")
# Plot the mixed-image.
ax = axes.flat[1]
ax.imshow(mixed_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Mixed")
# Plot the style-image
ax = axes.flat[2]
ax.imshow(style_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Style")
# Remove ticks from all the plots.
for ax in axes.flat:
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: This function plots a large image. The image is given as a numpy array with pixel-values between 0 and 255.
This function plots the content-, mixed- and style-images.
End of explanation
def mean_squared_error(a, b):
return tf.reduce_mean(tf.square(a - b))
def create_content_loss(session, model, content_image, layer_ids):
Create the loss-function for the content-image.
Parameters:
session: An open TensorFlow session for running the model's graph.
model: The model, e.g. an instance of the VGG16-class.
content_image: Numpy float array with the content-image.
layer_ids: List of integer id's for the layers to use in the model.
# Create a feed-dict with the content-image.
feed_dict = model.create_feed_dict(image=content_image)
# Get references to the tensors for the given layers.
layers = model.get_layer_tensors(layer_ids)
# Calculate the output values of those layers when
# feeding the content-image to the model.
values = session.run(layers, feed_dict=feed_dict)
# Set the model's graph as the default so we can add
# computational nodes to it. It is not always clear
# when this is necessary in TensorFlow, but if you
# want to re-use this code then it may be necessary.
with model.graph.as_default():
# Initialize an empty list of loss-functions.
layer_losses = []
# For each layer and its corresponding values
# for the content-image.
for value, layer in zip(values, layers):
# These are the values that are calculated
# for this layer in the model when inputting
# the content-image. Wrap it to ensure it
# is a const - although this may be done
# automatically by TensorFlow.
value_const = tf.constant(value)
# The loss-function for this layer is the
# Mean Squared Error between the layer-values
# when inputting the content- and mixed-images.
# Note that the mixed-image is not calculated
# yet, we are merely creating the operations
# for calculating the MSE between those two.
loss = mean_squared_error(layer, value_const)
# Add the loss-function for this layer to the
# list of loss-functions.
layer_losses.append(loss)
# The combined loss for all layers is just the average.
# The loss-functions could be weighted differently for
# each layer. You can try it and see what happens.
total_loss = tf.reduce_mean(layer_losses)
return total_loss
def gram_matrix(tensor):
shape = tensor.get_shape()
# Get the number of feature channels for the input tensor,
# which is assumed to be from a convolutional layer with 4-dim.
num_channels = int(shape[3])
# Reshape the tensor so it is a 2-dim matrix. This essentially
# flattens the contents of each feature-channel.
matrix = tf.reshape(tensor, shape=[-1, num_channels])
# Calculate the Gram-matrix as the matrix-product of
# the 2-dim matrix with itself. This calculates the
# dot-products of all combinations of the feature-channels.
gram = tf.matmul(tf.transpose(matrix), matrix)
return gram
def create_style_loss(session, model, style_image, layer_ids):
Create the loss-function for the style-image.
Parameters:
session: An open TensorFlow session for running the model's graph.
model: The model, e.g. an instance of the VGG16-class.
style_image: Numpy float array with the style-image.
layer_ids: List of integer id's for the layers to use in the model.
# Create a feed-dict with the style-image.
feed_dict = model.create_feed_dict(image=style_image)
# Get references to the tensors for the given layers.
layers = model.get_layer_tensors(layer_ids)
# Set the model's graph as the default so we can add
# computational nodes to it. It is not always clear
# when this is necessary in TensorFlow, but if you
# want to re-use this code then it may be necessary.
with model.graph.as_default():
# Construct the TensorFlow-operations for calculating
# the Gram-matrices for each of the layers.
gram_layers = [gram_matrix(layer) for layer in layers]
# Calculate the values of those Gram-matrices when
# feeding the style-image to the model.
values = session.run(gram_layers, feed_dict=feed_dict)
# Initialize an empty list of loss-functions.
layer_losses = []
# For each Gram-matrix layer and its corresponding values.
for value, gram_layer in zip(values, gram_layers):
# These are the Gram-matrix values that are calculated
# for this layer in the model when inputting the
# style-image. Wrap it to ensure it is a const,
# although this may be done automatically by TensorFlow.
value_const = tf.constant(value)
# The loss-function for this layer is the
# Mean Squared Error between the Gram-matrix values
# for the content- and mixed-images.
# Note that the mixed-image is not calculated
# yet, we are merely creating the operations
# for calculating the MSE between those two.
loss = mean_squared_error(gram_layer, value_const)
# Add the loss-function for this layer to the
# list of loss-functions.
layer_losses.append(loss)
# The combined loss for all layers is just the average.
# The loss-functions could be weighted differently for
# each layer. You can try it and see what happens.
total_loss = tf.reduce_mean(layer_losses)
return total_loss
def create_denoise_loss(model):
loss = tf.reduce_sum(tf.abs(model.input[:,1:,:,:] - model.input[:,:-1,:,:])) + \
tf.reduce_sum(tf.abs(model.input[:,:,1:,:] - model.input[:,:,:-1,:]))
return loss
def style_transfer(content_image, style_image,
content_layer_ids, style_layer_ids,
weight_content=1.5, weight_style=10.0,
weight_denoise=0.3,
num_iterations=120, step_size=10.0):
    """
    Use gradient descent to find an image that minimizes the
    loss-functions of the content-layers and style-layers. This
    should result in a mixed-image that resembles the contours
    of the content-image, and resembles the colours and textures
    of the style-image.

    Parameters:
    content_image: Numpy 3-dim float-array with the content-image.
    style_image: Numpy 3-dim float-array with the style-image.
    content_layer_ids: List of integers identifying the content-layers.
    style_layer_ids: List of integers identifying the style-layers.
    weight_content: Weight for the content-loss-function.
    weight_style: Weight for the style-loss-function.
    weight_denoise: Weight for the denoising-loss-function.
    num_iterations: Number of optimization iterations to perform.
    step_size: Step-size for the gradient in each iteration.
    """
# Create an instance of the VGG16-model. This is done
# in each call of this function, because we will add
# operations to the graph so it can grow very large
# and run out of RAM if we keep using the same instance.
model = vgg16.VGG16()
# Create a TensorFlow-session.
session = tf.InteractiveSession(graph=model.graph)
# Print the names of the content-layers.
print("Content layers:")
print(model.get_layer_names(content_layer_ids))
print('Content Layers:',content_layer_ids)
print()
# Print the names of the style-layers.
print("Style layers:")
print(model.get_layer_names(style_layer_ids))
print('Style Layers:',style_layer_ids)
print()
# Printing the input parameters to the function
print('Weight Content:',weight_content)
print('Weight Style:',weight_style)
print('Weight Denoise:',weight_denoise)
print('Number of Iterations:',num_iterations)
print('Step Size:',step_size)
print()
# Create the loss-function for the content-layers and -image.
loss_content = create_content_loss(session=session,
model=model,
content_image=content_image,
layer_ids=content_layer_ids)
# Create the loss-function for the style-layers and -image.
loss_style = create_style_loss(session=session,
model=model,
style_image=style_image,
layer_ids=style_layer_ids)
# Create the loss-function for the denoising of the mixed-image.
loss_denoise = create_denoise_loss(model)
# Create TensorFlow variables for adjusting the values of
# the loss-functions. This is explained below.
adj_content = tf.Variable(1e-10, name='adj_content')
adj_style = tf.Variable(1e-10, name='adj_style')
adj_denoise = tf.Variable(1e-10, name='adj_denoise')
# Initialize the adjustment values for the loss-functions.
session.run([adj_content.initializer,
adj_style.initializer,
adj_denoise.initializer])
# Create TensorFlow operations for updating the adjustment values.
# These are basically just the reciprocal values of the
# loss-functions, with a small value 1e-10 added to avoid the
# possibility of division by zero.
update_adj_content = adj_content.assign(1.0 / (loss_content + 1e-10))
update_adj_style = adj_style.assign(1.0 / (loss_style + 1e-10))
update_adj_denoise = adj_denoise.assign(1.0 / (loss_denoise + 1e-10))
# This is the weighted loss-function that we will minimize
# below in order to generate the mixed-image.
# Because we multiply the loss-values with their reciprocal
# adjustment values, we can use relative weights for the
# loss-functions that are easier to select, as they are
# independent of the exact choice of style- and content-layers.
loss_combined = weight_content * adj_content * loss_content + \
weight_style * adj_style * loss_style + \
weight_denoise * adj_denoise * loss_denoise
# Use TensorFlow to get the mathematical function for the
# gradient of the combined loss-function with regard to
# the input image.
gradient = tf.gradients(loss_combined, model.input)
# List of tensors that we will run in each optimization iteration.
run_list = [gradient, update_adj_content, update_adj_style, \
update_adj_denoise]
# The mixed-image is initialized with random noise.
# It is the same size as the content-image.
mixed_image = np.random.rand(*content_image.shape) + 128
for i in range(num_iterations):
# Create a feed-dict with the mixed-image.
feed_dict = model.create_feed_dict(image=mixed_image)
# Use TensorFlow to calculate the value of the
# gradient, as well as updating the adjustment values.
grad, adj_content_val, adj_style_val, adj_denoise_val \
= session.run(run_list, feed_dict=feed_dict)
# Reduce the dimensionality of the gradient.
grad = np.squeeze(grad)
# Scale the step-size according to the gradient-values.
step_size_scaled = step_size / (np.std(grad) + 1e-8)
# Update the image by following the gradient.
mixed_image -= grad * step_size_scaled
# Ensure the image has valid pixel-values between 0 and 255.
mixed_image = np.clip(mixed_image, 0.0, 255.0)
# Print a little progress-indicator.
print(". ", end="")
# Display status once every 10 iterations, and the last.
if (i % 10 == 0) or (i == num_iterations - 1):
print()
print("Iteration:", i)
# Print adjustment weights for loss-functions.
msg = "Weight Adj. for Content: {0:.2e}, Style: {1:.2e}, Denoise: {2:.2e}"
print(msg.format(adj_content_val, adj_style_val, adj_denoise_val))
# Plot the content-, style- and mixed-images.
plot_images(content_image=content_image,
style_image=style_image,
mixed_image=mixed_image)
#Saving the mixed image after every 10 iterations
filename='images/outputs_StyleTransfer/Mixed_Iteration' + str(i) +'.jpg'
print(filename)
save_image(mixed_image, filename)
print()
print()
print("Final image:")
plot_image_big(mixed_image)
# Close the TensorFlow session to release its resources.
session.close()
# Return the mixed-image.
return mixed_image
Explanation: Loss Functions
These helper-functions create the loss-functions that are used in optimization with TensorFlow.
This function creates a TensorFlow operation for calculating the Mean Squared Error between the two input tensors.
End of explanation
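The mean_squared_error helper referenced here is defined earlier in the notebook; a minimal sketch consistent with how it is called by the loss functions above would be:
def mean_squared_error(a, b):
    # Average of the element-wise squared differences between the two tensors.
    return tf.reduce_mean(tf.square(a - b))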
content_filename = 'images/willy_wonka_new.jpg'
content_image = load_image(content_filename, max_size=None)
filenamecontent='images/outputs_StyleTransfer/Content.jpg'
print(filenamecontent)
save_image(content_image, filenamecontent)
Explanation: Example
This example shows how to transfer the style of various images onto a portrait.
First we load the content-image which has the overall contours that we want in the mixed-image.
End of explanation
style_filename = 'images/style7.jpg'
style_image = load_image(style_filename, max_size=300)
filenamestyle='images/outputs_StyleTransfer/Style.jpg'
print(filenamestyle)
save_image(style_image, filenamestyle)
Explanation: Then we load the style-image which has the colours and textures we want in the mixed-image.
End of explanation
content_layer_ids = [6]
Explanation: Then we define a list of integers which identify the layers in the neural network that we want to use for matching the content-image. These are indices into the layers in the neural network. For the VGG16 model, the 5th layer (index 4) seems to work well as the sole content-layer; this example uses layer index 6 instead.
End of explanation
# The VGG16-model has 13 convolutional layers.
# This selects all those layers as the style-layers.
# This is somewhat slow to optimize.
style_layer_ids = list(range(13))
# You can also select a sub-set of the layers, e.g. like this:
# style_layer_ids = [1, 2, 3, 4]
%%time
img = style_transfer(content_image=content_image,
style_image=style_image,
content_layer_ids=content_layer_ids,
style_layer_ids=style_layer_ids,
weight_content=1.5,
weight_style=10.0,
weight_denoise=0.3,
num_iterations=20,
step_size=10.0)
# Printing the output mixed image
filename='images/outputs_StyleTransfer/Mixed.jpg'
save_image(img, filename)
Explanation: Then we define another list of integers for the style-layers.
End of explanation |
2,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Network queries
veneer-py supports a number of topological queries on the Source node-link network, including identifying outlets, upstream and downstream nodes, links and catchments.
These queries operate on the network object returned by v.network(). The topological queries are not available on the dataframe version (created with .as_dataframe()), although in some cases the results of the previous queries can be carried over to the dataframe.
Step1: Different forms of the network.
The node-link network that we get from Source includes topological information, in addition to the geometries of the various nodes, links and catchments, and their attributes, such as node names.
When we initially retrieve the network with v.network(), we get an object that includes a number of queries based on this topology.
Note
Step2: eg, find all outlet nodes
Step3: Feature id
Other topological queries are based on the id attribute of features in the network. For example /network/nodes/187
Step4: Partitioning the network
The network.partition method can be very useful for a range of parameterisation and reporting needs.
partition groups all features (nodes, links and catchments) in the network based on which of a series of key nodes those features drain through.
partition adds a new property to each feature, naming the relevant key node (or the outlet node if none of the key nodes are downstream of a particular feature).
Note | Python Code:
import veneer
%matplotlib inline
v = veneer.Veneer()
Explanation: Network queries
veneer-py supports a number of topological queries on the Source node-link network, including identifying outlets, upstream and downstream nodes, links and catchments.
These queries operate on the network object returned by v.network(). The topological queries are not available on the dataframe version (created with .as_dataframe()), although in some cases the results of the previous queries can be carried over to the dataframe.
End of explanation
network = v.network()
Explanation: Different forms of the network.
The node-link network that we get from Source includes topological information, in addition to the geometries of the various nodes, links and catchments, and their attributes, such as node names.
When we initially retrieve the network with v.network(), we get an object that includes a number of queries based on this topology.
Note: These queries are not implemented on the dataframe of the network, created with v.network().as_dataframe(). However you can call as_dataframe() on the result of some of the topological queries.
End of explanation
outlets = network.outlet_nodes().as_dataframe()
outlets[:10]
Explanation: eg, find all outlet nodes
End of explanation
upstream_features = network.upstream_features('/network/nodes/214').as_dataframe()
upstream_features
upstream_features.plot()
Explanation: Feature id
Other topological queries are based on the id attribute of features in the network. For example /network/nodes/187
End of explanation
network.partition?
gauge_names = network['features'].find_by_icon('/resources/GaugeNodeModel')._select(['name'])
gauge_names
network.partition(gauge_names,'downstream_gauge')
dataframe = network.as_dataframe()
dataframe[:10]
## Path between two features
network.path_between?
network.path_between('/network/catchments/20797','/network/nodes/56').as_dataframe()
Explanation: Partitioning the network
The network.partition method can be very useful for a range of parameterisation and reporting needs.
partition groups all features (nodes, links and catchments) in the network based on which of a series of key nodes those features drain through.
partition adds a new property to each feature, naming the relevant key node (or the outlet node if none of the key nodes are downstream of a particular feature).
Note: You can name the property used to identify the key nodes, which means you can run partition multiple times to identify different groupings within the network
End of explanation |
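As a further, purely illustrative example (the icon path below is an assumption and may not exist in your model), the network can be partitioned a second time against a different set of key nodes, storing the result under a separate property name:
# Hypothetical second grouping, here by nodes matching a storage icon.
storage_names = network['features'].find_by_icon('/resources/StorageNodeModel')._select(['name'])
network.partition(storage_names, 'downstream_storage')
network.as_dataframe()[:5]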
2,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kaggle
Step1: Data exploration
First we load and explore the dataset a little.
Step2: There are strong differences in the frequencies in which the different categories of crime occur. Larceny/Theft makes up most of the committed crimes, whereas other, possibly minor offenses form the second most common category
Step3: Next we take a look at the concentration of criminal activity for 2010-2015, which gives us a hint that crimes are most concentrated around Union Square
Step4: Next we look up the locations for different crime categories to examine if there are category patterns available in the underlying data
Step7: Most crimes are distributed around Union Square and, with lesser density, over the rest of San Francisco, whereas PROSTITUTION is well contained in two areas
Feature extraction
In the following we present our different features which we combine for a data preparation method.
Time based features
First we have some time based features because we assume a correlation between the category of crime and the time when it is committed.
Step8: Assign crimes to cells in a grid
Like in the presentation we create a grid which covers San Francisco and calculate the distribution of the different crime categories for each cell. You can see in the heatmap above, that there are certain hot spots for different categories of crimes.
Step9: We noticed that there are 67 crimes in the training set which were committed outside San Francisco. We notice that these outliers were all committed at the same position (X=-120.5, Y=90), which is right at the geographic North Pole. To deal with them we map all of them to an empty subregion in the sea in the north western corner of San Francisco.
Step10: Next we define functions to assign each crime according to its position to the right cell of the grid.
Step11: Avoiding floating point errors
Because many probabilities in the following statistics are close to zero, we use logit to avoid floating point inaccuracies.
Step12: Non-prosecution rate per grid cell
Like in the presentation we calculate the non-prosecution rate for each cell in the grid. Areas with high non-prosecution rate may have lower police presence.
Step13: Crime category distribution per grid cell
Our next feature is the distribution of crime categories in a grid cell. Certain areas might be hot spots for different types of crime and we want to define what is normal for a region.
Step14: Streets crime category distribution
Our next feature is the distribution of crime categories per street. Certain streets might be hot spots for different types of crime.
Step15: One-hot encoding of police districts
We assume dependencies between the police district in which a crime takes place and its category.
Step18: Distance to the next police station
Our next feature is the distance between the place where the crime is committed and the nearest police station, because police presence may influence the category of crime which is committed.
Step19: Additional Spatial and other Features
After the commonly known grid features, additional metrics are calculated for the grid cells, describing the area topology and architectural grid features
Street Types
If a mapping can be calculated, the corresponding street type is used; the first occurring street type is always taken. If the crime address includes a "/" symbol, this means a crossing (a corner with another street) is meant and the type CRO (Crossing) is inferred; otherwise the most common type RD (Road) is used. This represents basically a simple street taxonomy. A one-hot encoded dataset is returned.
Step27: Subregion spatial and architectural features
In this part the following features will be calculated. It has to be mentioned that the calculation of the betweenness centrality takes some time, so the train and test datasets should be stored to allow more fluid work from there on
Step28: Generation of training and test dataset
In the following the training and test datasets will be built: standardized, normalized and cleaned for training the models
Step30: Build final dataset for training and testing from partial datasets
Step31: Defining Multilayer perceptron
Step32: Training MLP for local test
Always execute all the cells in this section
Step34: Training MLP for submission
Step35: Training MLP with sknn and grid search | Python Code:
# imports
import math
import datetime
import matplotlib
import matplotlib.pyplot as plt
import osmnx as ox
import pandas as pd
import numpy as np
import pprint
import requests
import gmaps
import seaborn as sns
import os
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score
from sklearn.neural_network import MLPClassifier
Explanation: Kaggle: San Francisco Crime Classification
In this exercise we will create an improved version of last year's model.
Please be aware that the following packages and additional assets have to be installed in the conda environment to guarantee proper execution of this notebook
- gmaps
- osmnx
End of explanation
train_data = pd.read_csv("../../data/raw/train.csv")
train_data['Dates'] = pd.to_datetime(train_data['Dates'])
test_data = pd.read_csv("../../data/raw/test.csv")
test_data['Dates'] = pd.to_datetime(test_data['Dates'])
print("Size of train_data: ", len(train_data))
print("Size of test_data: ", len(test_data))
train_data.head()
Explanation: Data exploration
First we load and explore the dataset a little.
End of explanation
# visualizing category distribution of the crimes
#crimes = train_data['Category'].unique()
#print("Categories:", crimes)
#train_data['Category'].value_counts(sort=False).plot.bar()
#plt.show()
Explanation: There are strong differences in the frequencies in which the different categories of crime occur. Larceny/Theft makes up most of the committed crimes, whereas other, possibly minor offenses form the second most common category
End of explanation
# load private google api key
file = open("./assets/gapi.key", 'r')
key = file.read()
file.close()
gmaps.configure(api_key=key)
# Creating a location subset from the most current crimes (2010-2015) for heatmap visualization
import datetime
start_date = datetime.date(2010,1,1)
end_date = datetime.date(2016,1,1)
date_mask = (train_data['Dates'] > start_date) & (train_data['Dates'] <= end_date)
location_subset = train_data.loc[date_mask][['Y', 'X']]
locations = [tuple(x) for x in location_subset.values]
fig = gmaps.figure()
fig.add_layer(gmaps.heatmap_layer(locations))
fig
Explanation: Next we take a look at the concentration of criminal activity for 2010-2015, which gives us a hint that crimes are most concentrated around Union Square
End of explanation
# Manual Inspection of specific incidents to check if different crime types are locally bounded
def draw_category_gmap(category, start_date, end_date):
specific_incidents = train_data.loc[train_data['Category']==category]
date_mask = (specific_incidents['Dates'] > start_date) & (specific_incidents['Dates'] <= end_date)
location_subset = specific_incidents.loc[date_mask][['Y', 'X']]
locations = [tuple(x) for x in location_subset.values]
spec_fig = gmaps.figure()
gmaps.heatmap_layer.max_intensity = 1000
gmaps.heatmap_layer.point_radius = 2
spec_fig.add_layer(gmaps.heatmap_layer(locations))
return spec_fig
draw_category_gmap('KIDNAPPING', datetime.date(2010,1,1), datetime.date(2016,1,1))
draw_category_gmap('WARRANTS', datetime.date(2010,1,1), datetime.date(2016,1,1))
draw_category_gmap('DRUNKENNESS', datetime.date(2010,1,1), datetime.date(2016,1,1))
Explanation: Next we look up the locations for different crime categories to examine if there are category patterns available in the underlying data
End of explanation
from pandas.tseries.holiday import USFederalHolidayCalendar as calendar
# some functions for time based features
first_sixth = datetime.time(23,59,59)
second_sixth = datetime.time(3,59,59)
third_sixth = datetime.time(7,59,59)
fourth_sixth = datetime.time(11,59,59)
fifth_sixth = datetime.time(15,59,59)
sixth_sixth = datetime.time(19,59,59)
cal = calendar()
holiday_timetamps = cal.holidays(start='2003-01-01', end='2015-05-13')
holidays = []
for t in holiday_timetamps:
holidays.append(t.date())
holidays = set(holidays)
def get_halfhour(minute):
if minute < 30:
return 0
else:
return 1
def get_daynight(hour):
if 5 < hour and hour < 23:
return 0
else:
return 1
def generate_day_sixths(times):
    """This function has to be executed on the original datetime features"""
def day_sixths(dt):
        """Mapping time to sixths of a day"""
if dt.time() > datetime.time(0,0,0) and dt.time() <= second_sixth:
return 0/6
if dt.time() > second_sixth and dt.time() <= third_sixth:
return 1/6
if dt.time() > third_sixth and dt.time() <= fourth_sixth:
return 2/6
if dt.time() > fourth_sixth and dt.time() <= fifth_sixth:
return 3/6
if dt.time() > fifth_sixth and dt.time() <= sixth_sixth:
return 4/6
if dt.time() > sixth_sixth and dt.time() <= first_sixth:
return 5/6
return times.map(day_sixths)
def get_holiday(day, date):
if day == "Sunday" or date in holidays:
return 1
else:
return 0
def generate_time_features(data):
times = data["Dates"]
days = data["DayOfWeek"]
#perhaps try one hot encoding for some of the series.
minute_series = pd.Series([x.minute for x in times], name='minute')
# halfhour_series = pd.Series([get_halfhour(x.minute) for x in times], name='halfhour')
# hour_series = pd.Series([x.hour for x in times], name='hour')
daynight_series = pd.Series([get_daynight(x.hour) for x in times], name='day_night')
day_series = pd.Series([x.day for x in times], name='day')
month_series = pd.Series([x.month for x in times], name='month')
year_series = pd.Series([x.year for x in times], name='year')
# sixths = pd.Series(generate_day_sixths(times), name='day_sixths')
day_of_week = pd.get_dummies(days)
is_holiday = pd.Series([get_holiday(days[i], times[i].date()) for i in range(len(times))], name='is_holiday')
minute_one_hot = pd.get_dummies(minute_series)
# better than phase and hour if no one hot encoding is used
rel_time = pd.Series([(x.hour + x.minute / 60) for x in times], name='rel_time')
time_features = pd.concat([minute_one_hot, rel_time, day_of_week, is_holiday, daynight_series,
day_series, month_series, year_series], axis=1)
return time_features
# show the structure of our time based features]
#time_features = generate_time_features(data)
#time_features['sixths'] = generate_day_sixths(times)
#time_features.head()
Explanation: Most crimes are distributed around Union Square and, with lesser density, over the rest of San Francisco, whereas PROSTITUTION is well contained in two areas
Feature extraction
In the following we present our different features which we combine for a data preparation method.
Time based features
First we have some time based features because we assume a correlation between the category of crime and the time when it is committed.
End of explanation
# We define a bounding box arround San Francisco
min_x = -122.53
max_x = -122.35
min_y = 37.65
max_y = 37.84
dif_x = max_x - min_x
dif_y = max_y - min_y
Explanation: Assign crimes to cells in a grid
Like in the presentation we create a grid which covers San Francisco and calculate the distribution of the different crime categories for each cell. You can see in the heatmap above, that there are certain hot spots for different categories of crimes.
End of explanation
# Please zoom out a little bit to identify the location of this point ,
marker_loc = [(37.783939,-122.412614)]
spec_fig = gmaps.figure()
spec_fig.add_layer(gmaps.marker_layer(marker_loc))
spec_fig
# Functions to reposition outliers into a separate valid region.
def reposition_x(x):
if x < min_x or max_x <= x:
return -122.412614
else:
return x
def reposition_y(y):
if y < min_y or max_y <= y:
return 37.783939
else:
return y
def reposition_outliers(data):
repositioning = data.copy()
new_X = pd.Series([reposition_x(x) for x in data["X"]], name='X')
repositioning = repositioning.drop('X', axis=1)
new_Y = pd.Series([reposition_y(y) for y in data["Y"]], name="Y")
repositioning = repositioning.drop('Y', axis=1)
repositioning = pd.concat([repositioning, new_X, new_Y], axis=1)
return repositioning
train_data = reposition_outliers(train_data)
Explanation: We noticed that there are 67 crimes in the training set which were committed outside San Francisco. We notice that these outliers were all committed at the same position (X=-120.5, Y=90), which is right at the geographic North Pole. To deal with them we map all of them to an empty subregion in the sea in the north western corner of San Francisco.
End of explanation
# grid functions
def assign_subregion(pos_x, pos_y, min_x, min_y, dif_x, dif_y, x_sections, y_sections):
x = pos_x - min_x
x_sec = int(x_sections * x / dif_x)
y = pos_y - min_y
y_sec = int(y_sections * y / dif_y)
return x_sec + x_sections * y_sec
def create_subregion_series(data, min_x, min_y, dif_x, dif_y, x_sections, y_sections):
subregion_list = []
for i in range(len(data)):
pos_x = data["X"][i]
pos_y = data["Y"][i]
subregion = assign_subregion(pos_x, pos_y, min_x, min_y, dif_x, dif_y, x_sections, y_sections)
subregion_list.append(subregion)
return pd.Series(subregion_list, name='subregion')
def get_subregion_pos(subregion_id, min_x, min_y, dif_x, dif_y, x_sections, y_sections):
x = subregion_id % x_sections
x_pos = ((x + 1/2) / x_sections) * dif_x + min_x
y = subregion_id // x_sections
y_pos = ((y + 1/2) / y_sections) * dif_y + min_y
return (x_pos, y_pos)
def get_subregion_rectangle(subregion_id, min_x, min_y, dif_x, dif_y, x_sections, y_sections):
x = subregion_id % x_sections
y = subregion_id // x_sections
x_pos_ll = (x / x_sections) * dif_x + min_x
y_pos_ll = (y / y_sections) * dif_y + min_y
lower_left = (x_pos_ll, y_pos_ll)
x_pos_ur = ((x + 1) / x_sections) * dif_x + min_x
y_pos_ur = ((y + 1) / y_sections) * dif_y + min_y
upper_right= (x_pos_ur, y_pos_ur)
return lower_left, upper_right
# show the structure of subregion feature
#subregions = create_subregion_series(train_data, min_x, min_y, dif_x, dif_y, 20, 20)
#subregions_df = pd.concat([subregions], axis=1)
#subregions_df.head()
Explanation: Next we define functions to assign each crime according to its position to the right cell of the grid.
End of explanation
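As a quick sanity check of the cell assignment (the coordinates below are illustrative, near downtown San Francisco):
# Returns the flat index of the 20x20 grid cell containing the point.
assign_subregion(-122.41, 37.78, min_x, min_y, dif_x, dif_y, 20, 20)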
from math import log
def logit(p):
return log(p / (1 - p))
logit_eps = 0.0001
upper_bound = 1 - logit_eps
logit_one = logit(1 - logit_eps)
logit_zero = logit(logit_eps)
def calc_logit(p):
if p < logit_eps:
return logit_zero
elif p > upper_bound:
return logit_one
else:
return logit(p)
Explanation: Avoiding floating point errors
Because many probabilities in the following statistics are close to zero, we use logit to avoid floating point inaccuracies.
End of explanation
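A small check of the clamping behaviour of calc_logit on a regular, a zero, and a one probability:
# 0.5 maps to logit(0.5) = 0; probabilities of 0 and 1 are clamped to logit_eps and 1 - logit_eps.
print(calc_logit(0.5), calc_logit(0.0), calc_logit(1.0))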
# functions to calculate the non-prosecution rate for each cell in the grid.
def count_non_prosecuted_crimes(data, subregions, num_regions):
none_prosecutions_local = {}
none_prosecutions_overall = data["Resolution"].value_counts()["NONE"]
resolution = data["Resolution"]
for r in range(num_regions):
none_prosecutions_local[r] = 0
for i, r in enumerate(subregions):
if resolution[i] == "NONE":
none_prosecutions_local[r] += 1
return none_prosecutions_local, none_prosecutions_overall
def calculate_prosection_rates(counts_local, counts_overall, subregions, num_regions, sufficient_n):
none_prosecutions_rate_overall = calc_logit(counts_overall / len(subregions))
none_prosecution_rate_local = {}
occupied_regions = subregions.unique()
counts = subregions.value_counts()
for r in range(num_regions):
if r in occupied_regions and counts[r] >= sufficient_n:
none_prosecution_rate_local[r] = calc_logit(counts_local[r] / counts[r])
else:
none_prosecution_rate_local[r] = none_prosecutions_rate_overall
return none_prosecution_rate_local
def get_non_prosecution_rate_frame(data, subregions, num_regions, sufficient_n):
counts_local, counts_overall = count_non_prosecuted_crimes(data, subregions, num_regions)
rates_local = calculate_prosection_rates(counts_local, counts_overall, subregions, num_regions, sufficient_n)
non_prosecution_series = pd.Series([rates_local[x] for x in range(num_regions)], name='non_prosecution_rate')
return non_prosecution_series
# show the structure of non-prosecution rate feature
#non_prosecution_rate = get_non_prosecution_rate_frame(train_data, subregions, 400, 50)
#non_prosecution_rate.head()
Explanation: Non-prosecution rate per grid cell
Like in the presentation we calculate the non-prosecution rate for each cell in the grid. Areas with high non-prosecution rate may have lower police presence.
End of explanation
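A usage sketch, assuming the 20x20 grid from the commented example above (400 cells, trusting the local rate only where a cell holds at least 50 crimes):
subregions = create_subregion_series(train_data, min_x, min_y, dif_x, dif_y, 20, 20)
non_prosecution_rate = get_non_prosecution_rate_frame(train_data, subregions, 400, 50)
non_prosecution_rate.describe()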
# functions to calculate the category distribution rate for each cell in the grid.
def count_crimes_per_category(data, subregions, crimes, num_regions):
# count crimes per region and category
criminal_activity_local = {}
criminal_activity_overall = data["Category"].value_counts()
category = data["Category"]
for r in range(num_regions):
criminal_activity_local[r] = {}
for c in crimes:
criminal_activity_local[r][c] = 0
for i, r in enumerate(subregions):
criminal_activity_local[r][category[i]] += 1
return criminal_activity_local, criminal_activity_overall
def determine_distribution_categories(data, activity_lokal, activity_overall, subregions, crimes, num_regions, sufficient_n):
distribution_global = {}
for c in crimes:
distribution_global[c] = calc_logit(activity_overall[c] / len(data))
occupied_regions = subregions.unique()
counts = subregions.value_counts()
distribution_local = {}
for r in range(num_regions):
distribution_local[r] = {}
for c in crimes:
if r in occupied_regions and counts[r] >= sufficient_n:
distribution_local[r][c] = calc_logit(activity_lokal[r][c] / counts[r])
else:
distribution_local[r][c] = distribution_global[c]
return distribution_local
def get_crime_distribution_frame(data, subregions, crimes, num_regions, sufficient_n):
activity_lokal, activity_overall = count_crimes_per_category(data, subregions, crimes, num_regions)
distribution_local = determine_distribution_categories(data, activity_lokal, activity_overall, subregions, crimes, num_regions, sufficient_n)
# convert to dataframe
distribution_frame = pd.DataFrame()
for c in crimes:
category_series = pd.Series([distribution_local[r][c] for r in range(num_regions)], name=c)
distribution_frame = pd.concat([distribution_frame, category_series], axis=1)
return distribution_frame
# show the structure of category distribution feature
#distribution_frame = get_crime_distribution_frame(train_data, subregions, crimes, 400, 50)
#distribution_frame.head()
Explanation: Crime category distribution per grid cell
Our next feature is the distribution of crime categories in a grid cell. Certain areas might be hot spots for different types of crime and we want to define what is normal for a region.
End of explanation
# functions for street statistics
def get_relevant_streets(data, sufficient_n):
streets = data["Address"]
street_counts = streets.value_counts()
relevant_streets = []
for k in street_counts.keys():
if street_counts[k] >= sufficient_n:
relevant_streets.append(k)
else:
break
return relevant_streets
def count_street_crime_per_category(data, relevant_streets, crimes):
# count crimes per region and category
street_activity = {}
streets = data["Address"]
category = data["Category"]
for s in relevant_streets:
street_activity[s] = {}
street_activity[s]["crime_count"] = 0
for c in crimes:
street_activity[s][c] = 0
for i, s in enumerate(streets):
if s in street_activity:
street_activity[s][category[i]] += 1
street_activity[s]["crime_count"] += 1
return street_activity
def determine_street_crime_distribution_categories(data, relevant_streets, street_activity, crimes):
# default distribution
street_distribution = {}
street_distribution["DEFAULT_DISTRIBUTION"] = {}
overall_counts = data["Category"].value_counts()
for c in crimes:
street_distribution["DEFAULT_DISTRIBUTION"][c] = calc_logit(overall_counts[c] / len(data))
# street distribution
for s in relevant_streets:
street_distribution[s] = {}
for c in crimes:
street_distribution[s][c] = calc_logit(street_activity[s][c] / street_activity[s]["crime_count"])
return street_distribution
def get_street_crime_distribution_dict(data, crimes, sufficient_n=48):
rel_streets = get_relevant_streets(data, sufficient_n)
street_activity = count_street_crime_per_category(data, rel_streets, crimes)
street_distribution = determine_street_crime_distribution_categories(data, rel_streets, street_activity, crimes)
# convert to dataframe
'''
street_distribution_frame = pd.DataFrame()
for c in crimes:
category_series = pd.Series([street_distribution[s][c] for s in rel_streets] + [street_distribution["DEFAULT_DISTRIBUTION"][c]],
name=("street_" + c))
street_distribution_frame = pd.concat([street_distribution_frame, category_series], axis=1)
'''
return street_distribution
def get_street_crime_rate(street, crime, street_crime_distribution):
if street in street_crime_distribution.keys():
return street_crime_distribution[street][crime]
else:
return street_crime_distribution['DEFAULT_DISTRIBUTION'][crime]
Explanation: Streets crime category distribution
Our next feature is the distribution of crime categories per street. Certain streets might be hot spots for different types of crime.
End of explanation
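A usage sketch; the street name below is only illustrative, and unknown addresses fall back to the default distribution:
crimes = train_data['Category'].unique()
street_dist = get_street_crime_distribution_dict(train_data, crimes, sufficient_n=48)
get_street_crime_rate('800 Block of BRYANT ST', crimes[0], street_dist)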
def create_police_destrict_frame(data):
one_hot_police_destricts = pd.get_dummies(data["PdDistrict"])
return one_hot_police_destricts
# show the structure of the police destrict feature
#police_destricts = create_police_destrict_frame(train_data)
#police_destricts.head()
Explanation: One-hot encoding of police districts
We assume dependencies between the police district in which a crime takes place and its category.
End of explanation
# function to measure distance to next police station
from geopy.distance import vincenty
police_stations = pd.read_json("https://data.sfgov.org/resource/me2e-mc38.json")
police_station_coordinates = []
for elem in police_stations.iterrows():
    # Create police station coordinates as tuple
police_station_coordinates.append(tuple(elem[1]['location']['coordinates']))
def get_crime_coordinate_series(data):
# prepare crime X,Y coordinates as tuple of coordinates
return list(zip(data['X'], data['Y']))
def caculate_min_distance_police(crime_coordinate):
    """calculate distance from crime to nearest police station"""
current_min = 10000000
for police_station_coordinate in police_station_coordinates:
current_min = min(current_min, vincenty(police_station_coordinate, crime_coordinate).meters)
return current_min
def get_police_distance_series(data):
get_crime_coordinates = get_crime_coordinate_series(data)
police_distance_series = pd.Series([caculate_min_distance_police(c) for c in get_crime_coordinates], name='police_distance')
return police_distance_series
# show the structure of the police distance feature
#police_dist_series = get_police_distance_series(train_data)
#police_dist_pd = pd.concat([police_dist_series], axis=1)
#police_dist_pd.head()
Explanation: Distance to the next police station
Our next feature is the distance between the place where the crime is committed and the nearest police station, because police presence may influence the category of crime which is committed.
End of explanation
types = ['ST','AV','BL', 'BL NORTH', 'BL SOUTH', 'AVE','AL', 'ALY', 'CT', 'WY' ,'WAY', 'TER', 'BLVD', 'RP','RAMP', 'PL', 'LN',
'LOOP', 'DR', 'RD','CR', 'CIR','WL', 'WK', 'WALK','PK', 'PARK','RW', 'ROW', 'PATH','HY', 'HW',
'HWY', 'EXPY', 'HL', 'PZ','PLZ', 'STPS','I-80', 'MAR','BLVD NORTH', 'BLVD SOUTH',
'STWY','PALMS','WK','EX' , 'TR','TUNL','FERLINGHETTI', 'BUFANO']
def generate_street_types(addresses):
def map_street_to_street_type(street):
addrl = street.split(' ')
if '/' in addrl:
return 'CRO'
        elif '/' not in addrl:
            # Return the first token that matches a known street type,
            # otherwise fall back to the most common type 'RD'.
            for elem in addrl:
                if elem in types:
                    return elem
            return 'RD'
return pd.get_dummies(pd.Series(addresses.map(map_street_to_street_type), name='StreetType'))
# Show the structure of the street type feature
#street_type_frame = generate_street_types(train_data['Address'])
#street_type_frame.head()
Explanation: Additional Spatial and other Features
After the commonly known grid features, additional metrics are calculated for the grid cells, describing the area topology and architectural grid features
Street Types
If a mapping can be calculated, the corresponding street type is used; the first occurring street type is always taken. If the crime address includes a "/" symbol, this means a crossing (a corner with another street) is meant and the type CRO (Crossing) is inferred; otherwise the most common type RD (Road) is used. This represents basically a simple street taxonomy. A one-hot encoded dataset is returned.
End of explanation
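A minimal illustration of the street taxonomy on two made-up addresses (a crossing and a plain street):
sample_addresses = pd.Series(['OAK ST / LAGUNA ST', '100 Block of MARKET ST'])
generate_street_types(sample_addresses).head()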
def create_subregions_graphs(subregions, x_sections, y_sections, from_point=False):
    """Creates a subregions graph dictionary for each unique
    subregion"""
global subregions_graphs
subregions_graphs = {}
for subregion in subregions.unique():
if from_point:
subregion_graph = create_subregion_graph_from_coordinate(subregion, x_sections, y_sections)# create_subregion_graph_from_bb(bb_coord)
else:
bb_coord = get_subregion_rectangle(subregion, min_x, min_y, dif_x, dif_y, x_sections, y_sections)
subregion_graph = create_subregion_graph_from_bb(bb_coord)
if subregion_graph:
subregions_graphs[subregion] = subregion_graph
print(subregions_graphs.keys())
return subregions_graphs
def create_subregion_graph_from_coordinate(subregion, x_sections, y_sections, radius=100):
    """Creates a subregion graph by a subregion coordinate and a radius"""
G = None
try:
point = get_subregion_pos(subregion, min_x, min_y, dif_x, dif_y, x_sections, y_sections)
G = ox.graph_from_point((point[1],point[0]), distance=radius, network_type='all')
except (Exception, RuntimeError, ValueError):
print("A RuntimeError, ValueError, or NetworkXPointlessConcept Error occured probably due to invalid coordinates")
return G
def create_subregion_graph_from_bb(subregion_rectangle):
    """Creates a subregion graph by a subregion bounding box (rectangle)"""
G = None
try:
G = ox.graph_from_bbox(subregion_rectangle[1][1],
subregion_rectangle[0][1],
subregion_rectangle[1][0],
subregion_rectangle[0][0],
network_type='all')
except (Exception, RuntimeError, ValueError):
print("A RuntimeError, ValueError, or NetworkXPointlessConcept Error occured probably due to invalid coordinates")
return G
def calculate_subregion_deden(subregion_id):
    """The cul-de-sac density is calculated as the ratio of
    dead end roads to all roads in the subregion"""
if subregion_id in subregions_graphs.keys():
subregion = subregions_graphs[subregion_id]
culdesacs = [key for key, value in subregion.graph['streets_per_node'].items() if value==1]
return len(culdesacs) / len(subregion.edges())
else:
return 0
def calculate_subregion_reg(subregion_id):
    """The regularity of the street network is calculated as the standard
    deviation of node degrees for the subregion normalized by the average
    node degree"""
if subregion_id in subregions_graphs.keys():
subregion = subregions_graphs[subregion_id]
subregion_nodes = subregion.nodes()
degree_values = [value for key, value in subregion.graph['streets_per_node'].items()]
degree_sum = sum(degree_values)
node_degree_mean = degree_sum/len(degree_values)
var = 1/(len(degree_values)-1) * sum([(value - node_degree_mean)**2 for value in degree_values])
return math.sqrt(var) / node_degree_mean
else:
return 0
def calculate_subregion_cnr(subregion_id):
if subregion_id in subregions_graphs.keys():
subregion = subregions_graphs[subregion_id]
realintersect = [value for key, value in subregion.graph['streets_per_node'].items() if value>1]
return len(realintersect) / len(subregion.nodes())
else:
return 0
def calculate_subregion_iden(subregion_id):
pass
def calculate_subregion_osperc(subregion_id):
pass
global betweenes_dict
betweenes_dict = {}
def calculate_subregion_bet(subregion_id):
    """Calculates the betweenness centrality for the region"""
if subregion_id in betweenes_dict.keys():
return betweenes_dict[subregion_id]
else:
if subregion_id in subregions_graphs.keys():
subregion = subregions_graphs[subregion_id]
extended_stats = ox.extended_stats(subregion, bc=True)
betweenes_dict[subregion_id] = extended_stats['betweenness_centrality_avg']
return betweenes_dict[subregion_id]
else:
return 0
def calculate_one_way_density(subregion_id):
pass
def generate_subregion_architectural_features(subregions, x_sections, y_sections, from_point=False):
    """Generates a dataframe with the subregions architectural features"""
subregions_graphs = create_subregions_graphs(subregions, x_sections, y_sections, from_point=from_point)
deden = pd.Series(subregions.map(calculate_subregion_deden), name = 'DED')
reg = pd.Series(subregions.map(calculate_subregion_reg), name = 'REG')
cnr = pd.Series(subregions.map(calculate_subregion_cnr), name = 'CNR')
# iden = pd.Series(subregions.map(calculate_subregion_iden), name = "IDEN")
# osperc = pd.Series(subregions.map(calculate_subregion_osperc), name = 'OSPERC')
bet = pd.Series(subregions.map(calculate_subregion_bet), name = 'BET')
#owd =
return pd.concat([deden, reg, cnr, bet], axis=1)
# Generating and viewing subregion architectural features, which is a very expensive procedure! Please assure to save these features accordingly
# To ommit recalculation
#architectural_features = generate_subregion_architectural_features(subregions)
#architectural_features.head()
# Visualizing Street Network and cul-de-sacs in union square grid
union_square = ox.graph_from_point((37.787994,-122.407437), distance=300, network_type='all')
ox.plot_graph(union_square)
# Red Nodes are endpoints of cul-de-sacs
culdesacs = [key for key, value in union_square.graph['streets_per_node'].items() if value==1]
nc = ['r' if node in culdesacs else 'b' for node in union_square.nodes()]
ox.plot_graph(union_square, node_color=nc)
# Visualizing one way streets (network_type has to be drive)
# Red streets are one way streets
union_square_d = ox.graph_from_point((37.787994,-122.407437), distance=300, network_type='drive')
ec = ['r' if data['oneway'] else 'b' for u, v, key, data in union_square_d.edges(keys=True, data=True)]
ox.plot_graph(union_square_d, node_size=0, edge_color=ec)
Explanation: Subregion spatial and architectural features
In this part the following features will be calculated per grid cell: the cul-de-sac (dead-end) density DED, the node-degree regularity REG, the ratio of real intersections to all nodes CNR, and the average betweenness centrality BET. It has to be mentioned that the calculation of the betweenness centrality takes some time, so the train and test datasets should be stored to avoid recalculation and allow more fluid work from there on
End of explanation
# first initialise important variables
if not os.path.exists("../../data/interim/train_cleaned.csv"):
train_data = pd.read_csv("../../data/raw/train.csv")
cleaned_data = reposition_outliers(train_data)
cleaned_data.to_csv("../../data/interim/train_cleaned.csv", index=False)
if not os.path.exists("../../data/interim/test_cleaned.csv"):
test_data = pd.read_csv("../../data/raw/test.csv")
cleaned_data = reposition_outliers(test_data)
cleaned_data.to_csv("../../data/interim/test_cleaned.csv", index=False)
train_data = pd.read_csv("../../data/interim/train_cleaned.csv")
train_data['Dates'] = pd.to_datetime(train_data['Dates'])
test_data = pd.read_csv("../../data/interim/test_cleaned.csv")
test_data['Dates'] = pd.to_datetime(test_data['Dates'])
# bounding box
min_x = -122.53
max_x = -122.35
min_y = 37.65
max_y = 37.84
dif_x = max_x - min_x
dif_y = max_y - min_y
# grid resolution
x_sections = 30
y_sections = 30
num_subregions = x_sections * y_sections
sufficient_n_non_prosecution = 100
sufficient_n_distribution = 200
sufficient_n_streets = 48
crimes = train_data['Category'].unique()
street_crime_distribution_dict = get_street_crime_distribution_dict(train_data, crimes, sufficient_n_streets)
# a function which builds the dataset
# always calculate regional features first
def build_and_store_regional_features(train_data):
print("Processing regional features")
print("-----------------------------")
if not os.path.exists("../../data/processed/train_subregions.csv"):
subregions = create_subregion_series(train_data, min_x, min_y, dif_x, dif_y, x_sections, y_sections)
subregions = pd.concat([subregions], axis=1)
subregions.to_csv("../../data/processed/train_subregions.csv", index=False)
print("Finished: subregions")
if not os.path.exists("../../data/processed/regional_non_prosecution_rates.csv"):
subregions = pd.read_csv("../../data/processed/train_subregions.csv")
subregions = subregions["subregion"]
non_prosecution_rate = get_non_prosecution_rate_frame(train_data, subregions, num_subregions,
sufficient_n_non_prosecution)
non_prosecution_rate = pd.concat([non_prosecution_rate], axis=1)
non_prosecution_rate.to_csv("../../data/processed/regional_non_prosecution_rates.csv", index=False)
print("Finished: regional non prosecution rates")
if not os.path.exists("../../data/processed/regional_crime_distribution.csv"):
subregions = pd.read_csv("../../data/processed/train_subregions.csv")
subregions = subregions["subregion"]
crimes = train_data['Category'].unique()
crime_distribution = get_crime_distribution_frame(train_data, subregions, crimes, num_subregions,
sufficient_n_distribution)
crime_distribution.to_csv("../../data/processed/regional_crime_distribution.csv", index=False)
print("Finished: regional crime distributions")
print("Finished: build_and_store_regional_features")
print()
def build_and_store_crime_features(data, name):
print("Processing crime features")
print("-----------------------------")
if not os.path.exists("../../data/processed/" + name + "_time_features.csv"):
time_features = generate_time_features(data)
time_features.to_csv("../../data/processed/" + name + "_time_features.csv", index=False)
print("Finished: time features")
if not os.path.exists("../../data/processed/" + name + "_subregions.csv"):
subregions = create_subregion_series(data, min_x, min_y, dif_x, dif_y, x_sections, y_sections)
subregions = pd.concat([subregions], axis=1)
subregions.to_csv("../../data/processed/" + name + "_subregions.csv", index=False)
print("Finished: subregions")
if not os.path.exists("../../data/processed/" + name + "_police_destrict.csv"):
police_destricts = create_police_destrict_frame(data)
police_destricts.to_csv("../../data/processed/" + name + "_police_destrict.csv", index=False)
print("Finished: police destricts")
if not os.path.exists("../../data/processed/" + name + "_police_distance.csv"):
police_distance = get_police_distance_series(data)
police_distance = pd.concat([police_distance], axis=1)
police_distance.to_csv("../../data/processed/" + name + "_police_distance.csv", index=False)
print("Finished: police distances")
if not os.path.exists("../../data/processed/" + name + "_street_types.csv"):
street_type = generate_street_types(data['Address'])
street_type.to_csv("../../data/processed/" + name + "_street_types.csv", index=False)
print("Finished: street types")
if not os.path.exists("../../data/processed/" + name + "_non_prosecution_rates.csv"):
subregions = pd.read_csv("../../data/processed/" + name + "_subregions.csv")
subregions = subregions["subregion"]
regional_non_prosecution_rate = pd.read_csv("../../data/processed/regional_non_prosecution_rates.csv")
regional_non_prosecution_rate = regional_non_prosecution_rate["non_prosecution_rate"]
non_prosecution_rates = pd.Series([regional_non_prosecution_rate[r] for r in subregions], name='non_prosecution_rate')
non_prosecution_rates = pd.concat([non_prosecution_rates], axis=1)
non_prosecution_rates.to_csv("../../data/processed/" + name + "_non_prosecution_rates.csv", index=False)
print("Finished: non prosecution rates")
if not os.path.exists("../../data/processed/" + name + "_crime_distribution.csv"):
subregions = pd.read_csv("../../data/processed/" + name + "_subregions.csv")
subregions = subregions["subregion"]
distribution_local = pd.read_csv("../../data/processed/regional_crime_distribution.csv")
crime_distribution = pd.DataFrame()
for c in crimes:
category_series = pd.Series([distribution_local[c][r] for r in subregions], name=c)
crime_distribution = pd.concat([crime_distribution, category_series], axis=1)
crime_distribution.to_csv("../../data/processed/" + name + "_crime_distribution.csv", index=False)
print("Finished: crime distributions")
if not os.path.exists("../../data/processed/" + name + "_street_crime_distribution.csv"):
streets = data["Address"]
street_crime_distribution = pd.DataFrame()
for c in crimes:
category_series = pd.Series([get_street_crime_rate(s, c, street_crime_distribution_dict) for s in streets], name=("street_" + c))
street_crime_distribution = pd.concat([street_crime_distribution, category_series], axis=1)
street_crime_distribution.to_csv("../../data/processed/" + name + "_street_crime_distribution.csv", index=False)
print("Finished: finished street crime distributions")
# here only subregions with criminal activity is regarded
if not os.path.exists("../../data/processed/" + name + "_architectural_features.csv"):
subregions = pd.read_csv("../../data/processed/" + name + "_subregions.csv")
subregions = subregions["subregion"]
architectural_features = generate_subregion_architectural_features(subregions, x_sections, y_sections, from_point=False)
architectural_features.to_csv("../../data/processed/" + name + "_architectural_features.csv", index=False)
print("Finished: architectural features")
print("Finished: build_and_store_crime_features")
print()
build_and_store_regional_features(train_data)
build_and_store_crime_features(train_data, "train")
build_and_store_crime_features(test_data, "test")
t = pd.read_csv("../../data/processed/train_street_crime_distribution.csv")
t.head()
Explanation: Generation of training and test dataset
In the following the training and test datasets will be built: standardized, normalized and cleaned for training the models
End of explanation
from sklearn import preprocessing
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
#global available label encoder to retrieve category description later on
le = preprocessing.LabelEncoder()
def concat_generated_subdataset(data, name):
# loading distinct datasets
base_path = "../../data/processed/"
subregions = pd.read_csv(base_path + name + "_subregions.csv")
non_presecution_rate = pd.read_csv(base_path + name + "_non_prosecution_rates.csv")
crime_distribution = pd.read_csv(base_path + name + "_crime_distribution.csv")
time_features = pd.read_csv(base_path + name + "_time_features.csv")
police_districts = pd.read_csv(base_path + name + "_police_destrict.csv")
police_distance = pd.read_csv(base_path + name + "_police_distance.csv")
street_types = pd.read_csv(base_path + name + "_street_types.csv")
street_crime_distribution = pd.read_csv(base_path + name + "_street_crime_distribution.csv")
architectural_feat = pd.read_csv(base_path + name + "_architectural_features.csv" )
#print("subregions: ", len(subregions))
#print("non_pres: ", len(non_presecution_rate))
#print("crim_dis: ", len(crime_distribution))
#print("time feat: ", len(time_features))
#print("police districts: ", len(police_districts))
#print("police dist: ", len(police_distance))
#print("street types: ", len(street_types))
#print("architect: ", len(architectural_feat))
series = [
data['X'],
data['Y'],
subregions,
time_features,
police_districts,
crime_distribution,
street_crime_distribution,
non_presecution_rate,
police_distance, street_types,
architectural_feat
]
if name == 'train':
# label encoding category
categories = pd.Series(le.fit_transform(data['Category']), name='Category')
series = [categories] + series
return pd.concat(series, axis=1)
def build_final_datasets(pca_components=10):
    """Builds the final datasets for processing with the neural network.
    Performs standard scaling (the PCA step is currently commented out) on these datasets.
    This is done this way instead of a pipeline due to a separately
    provided test set by kaggle"""
pca = PCA(n_components=pca_components)
ss = StandardScaler()
train_data = pd.read_csv("../../data/interim/train_cleaned.csv")
train_data['Dates'] = pd.to_datetime(train_data['Dates'])
train = concat_generated_subdataset(train_data, 'train')
test_data = pd.read_csv("../../data/interim/test_cleaned.csv")
test_data['Dates'] = pd.to_datetime(test_data['Dates'])
test = concat_generated_subdataset(test_data, 'test')
missing_columns = set(train.columns) - set(test.columns)
missing_columns.remove('Category')
print("Missing columns in test set: ", set(train.columns) - set(test.columns))
print("Imputing empty {} (0) columns into test set".format(missing_columns))
test['BUFANO'] = 0
test['FERLINGHETTI'] = 0
print("Extracting values and ravel categories")
X = train.iloc[:,1:].values
y = train.iloc[:,:1].values.ravel()
test = test.iloc[:].values
print("Standard Scaling train and test set")
X = ss.fit_transform(X)
test = ss.transform(test)
print("Applying PCA on training and test set")
# X = pca.fit_transform(X)
# test = pca.transform(test)
print("\n----Done----")
return X, y, test
X,y,test = build_final_datasets()
Explanation: Build final dataset for training and testing from partial datasets
End of explanation
mlp = MLPClassifier(hidden_layer_sizes=(200, 180, 200),
activation='tanh',
learning_rate_init=0.005,
max_iter=400)
Explanation: Defining Multilayer perceptron
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
mlp.fit(X_train, y_train)
mlp_proba = mlp.predict_proba(X_test)
mlp_pred = mlp.predict(X_test)
log_score = log_loss(y_test, mlp_proba)
acc_score = accuracy_score(y_test, mlp_pred)
print("log_loss: ", log_score)
print("Accuracy: ", acc_score)
Explanation: Training MLP for local test
Always execute all the cells in this section
End of explanation
mlp.fit(X, y)
def create_submission(probabilities):
    """Creates a kaggle csv submission file within the notebook folder"""
submission = pd.DataFrame(probabilities, columns=list(le.classes_))
submission.insert(0, 'Id', range(0, len(submission)))
submission.to_csv("submission.csv", index=False)
Explanation: Training MLP for submission
End of explanation
def create_submission(probabilities):
submission = pd.DataFrame(probabilities, columns=list(le.classes_))
submission.insert(0, 'Id', range(0, len(submission)))
submission.to_csv("submission.csv", index=False)
create_submission(mlp.predict_proba(test))
Explanation: Training MLP with sknn and grid search
End of explanation |
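The grid search itself is not shown here; a comparable sketch using scikit-learn's GridSearchCV instead of sknn (parameter values are illustrative only) could look like this:
from sklearn.model_selection import GridSearchCV
param_grid = {
    'hidden_layer_sizes': [(200, 180, 200), (256, 128)],
    'learning_rate_init': [0.001, 0.005],
}
search = GridSearchCV(MLPClassifier(activation='tanh', max_iter=200),
                      param_grid, scoring='neg_log_loss', cv=3, n_jobs=-1)
# search.fit(X_train, y_train)    # expensive; uncomment to run the search
# print(search.best_params_, search.best_score_)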
2,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Wave Packets
Step2: A particle with total energy $E$ in a region of constant potential $V_0$ has a wave number
$$
k = \pm \sqrt{\frac{2m}{\hbar^2}(E - V_0)}
$$
and dispersion relation
$$
\omega(k) = \frac{E(k)}{\hbar} = \frac{\hbar k^2}{2m} + \frac{V_0}{\hbar}
$$
leading to phase and group velocities
$$
v_p(k) = \frac{\omega(k)}{k} = \frac{\hbar k}{2 m} + \frac{V_0}{\hbar k} \quad, \quad
v_g(k) = \frac{d\omega}{dk}(k) = \frac{\hbar k}{m} \; .
$$
Consider an initial state at $t=0$ that is a normalized Gaussian wavepacket
$$
\Psi(x,0) = \pi^{-1/4} \sigma_x^{-1/2} e^{i k_0 x}\,
\exp\left(-\frac{1}{2} \frac{x^2}{\sigma_x^2}\right)
$$
with a central group velocity $v_g = \hbar k_0 / m$.
We can expand this state's time evolution in plane waves as
$$
\Psi(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} c(k)
\exp\left[ i (k x - \omega(k) t)\right]\, dk
$$
with coefficients
$$
c(k) = \frac{1}{\sqrt{2\pi}}\,\int_{-\infty}^{+\infty} \Psi(x,0)\, e^{-i k x} dx
= \pi^{-1/4} \sigma_x^{1/2}\, \exp\left( -\frac{1}{2} (k - k_0)^2 \sigma_x^2\right) \; .
$$
Approximate the integral over $k$ with a discrete sum of values $k_i$ centered on $k_0$ then the (un-normalized) real part of the wave function is
$$
\text{Re}\Psi(x,t) \simeq \sum_{i=-N}^{+N} c(k_i) \cos\left[
k_i x - \left(\frac{\hbar k_i^2}{2m} + \frac{V_0}{\hbar}\right) t \right] \; .
$$
Step3: Build an animation using the solver above. Each frame shows
Step4: Simulate with different values of the constant potential
Step5: Uncomment and run the line below to display an animation inline
Step6: Convert video to the open-source Theora format using, e.g.
ffmpeg -i wavepacket0.mp4 -codec | Python Code:
%pylab inline
import matplotlib.animation
from IPython.display import HTML
Explanation: Wave Packets
End of explanation
def solve(k0=10., sigmax=0.25, V0=0., mass=1., tmax=0.25, nwave=15, nx=500, nt=10):
    """
    Solve for the evolution of a 1D Gaussian wave packet.

    Parameters
    ----------
    k0 : float
        Central wavenumber, which determines the group velocity of the wave
        packet and can be negative, zero, or positive.
    sigmax : float
        Initial wave packet sigma. Smaller values lead to faster spreading.
    V0 : float
        Constant potential in units of hbar. Changes the (unphysical)
        phase velocities but not the group velocity.
    mass : float
        Particle mass in units of hbar.
    tmax : float
        Amount of time to simulate on a uniform grid in arbitrary units.
    nwave : int
        Wave packet is approximated by 2*nwave+1 plane waves centered on k0.
    nx : int
        Number of grid points to use in x.
    nt : int
        Number of grid points to use in t.
    """
t = np.linspace(0, tmax, nt).reshape(-1, 1)
# Calculate the group velocity at k0.
vgroup0 = k0 / mass
# Calculate the distance traveled by the wave packet.
dist = np.abs(vgroup0) * tmax
# Calculate the spreading of the wave packet.
spread = np.sqrt((sigmax ** 4 + (t / mass) ** 2) / sigmax ** 2)
# Calculate an x-range that keeps the packet visible during tmax.
nsigmas = 1.5
tails = nsigmas * (spread[0] + spread[-1])
xrange = max(tails + dist, 2 * tails)
x0 = nsigmas * spread[0] + 0.5 * (xrange - tails) - 0.5 * vgroup0 * tmax - 0.5 * xrange
x = np.linspace(-0.5 * xrange, 0.5 * xrange, nx) - x0
# Build grid of k values to use, centered on k0.
nsigmas = 2.0
sigmak = 1. / sigmax
k = k0 + sigmak * np.linspace(-nsigmas, +nsigmas, 2 * nwave + 1).reshape(-1, 1, 1)
# Calculate coefficients c(k).
ck = np.exp(-0.5 * (k - k0) ** 2 * sigmax ** 2)
# Calculate omega(k)
omega = k ** 2 / (2 * mass) + V0
# Calculate the (un-normalized) evolution of each wavenumber.
psi = ck * np.cos(k * x - omega * t)
# Calculate the (x,y) coordinates of a tracer for each wavenumber.
xtrace = np.zeros((nt, 2 * nwave + 1))
nz = k != 0
xtrace[:, nz.ravel()] = ((k[nz] / (2 * mass) + V0 / k[nz]) * t)
ytrace = ck.reshape(-1)
# Calculate the motion of the center of the wave packet.
xcenter = vgroup0 * t
return x, psi, xtrace, ytrace, xcenter
Explanation: A particle with total energy $E$ in a region of constant potential $V_0$ has a wave number
$$
k = \pm \sqrt{\frac{2m}{\hbar^2}(E - V_0)}
$$
and dispersion relation
$$
\omega(k) = \frac{E(k)}{\hbar} = \frac{\hbar k^2}{2m} + \frac{V_0}{\hbar}
$$
leading to phase and group velocities
$$
v_p(k) = \frac{\omega(k)}{k} = \frac{\hbar k}{2 m} + \frac{V_0}{\hbar k} \quad, \quad
v_g(k) = \frac{d\omega}{dk}(k) = \frac{\hbar k}{m} \; .
$$
Consider an initial state at $t=0$ that is a normalized Gaussian wavepacket
$$
\Psi(x,0) = \pi^{-1/4} \sigma_x^{-1/2} e^{i k_0 x}\,
\exp\left(-\frac{1}{2} \frac{x^2}{\sigma_x^2}\right)
$$
with a central group velocity $v_g = \hbar k_0 / m$.
We can expand this state's time evolution in plane waves as
$$
\Psi(x,t) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{+\infty} c(k)
\exp\left[ i (k x - \omega(k) t)\right]\, dk
$$
with coefficients
$$
c(k) = \frac{1}{\sqrt{2\pi}}\,\int_{-\infty}^{+\infty} \Psi(x,0)\, e^{-i k x} dx
= \pi^{-1/4} \sigma_x^{1/2}\, \exp\left( -\frac{1}{2} (k - k_0)^2 \sigma_x^2\right) \; .
$$
Approximate the integral over $k$ with a discrete sum of values $k_i$ centered on $k_0$ then the (un-normalized) real part of the wave function is
$$
\text{Re}\Psi(x,t) \simeq \sum_{i=-N}^{+N} c(k_i) \cos\left[
k_i x - \left(\frac{\hbar k_i^2}{2m} + \frac{V_0}{\hbar}\right) t \right] \; .
$$
End of explanation
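# Quick numerical cross-check (illustrative, not part of the original notebook): with hbar = 1,
# the group velocity d(omega)/dk equals k/m and is independent of the constant potential V0,
# while the phase velocity omega(k)/k does depend on V0. The sample values below are arbitrary.
def omega_check(k, mass=1.0, V0=50.0):
    return k ** 2 / (2 * mass) + V0

k0_check, dk = 10.0, 1e-5
v_group_numeric = (omega_check(k0_check + dk) - omega_check(k0_check - dk)) / (2 * dk)
print(v_group_numeric, k0_check / 1.0)                                   # both ~10.0
print(omega_check(k0_check) / k0_check, k0_check / 2 + 50.0 / k0_check)  # both 10.0 here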
def animate(k0=10., sigmax=0.25, V0=0., mass=1., nt=30, save=None, height=480, width=720):
x, psi, xt, yt, xc = solve(k0, sigmax, V0, mass, nt=nt)
nw, nt, nx = psi.shape
nwave = (nw - 1) // 2
psi_sum = np.sum(psi, axis=0)
ymax = 1.02 * np.max(np.abs(psi_sum))
dy = 0.95 * ymax * np.arange(-nwave, nwave + 1) / nwave
assert len(dy) == nw
artists = []
dpi = 100.
dot = 0.006 * height
figure = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi, frameon=False)
ax = figure.add_axes([0, 0, 1, 1])
plt.axis('off')
for i in range(2 * nwave + 1):
artists += ax.plot(x, psi[i, 0] + dy[i], lw=2 * dot, c='b', alpha=0.2)
artists.append(ax.axvline(xc[0], c='r', ls='-', lw=dot, alpha=0.4))
artists += ax.plot(xt[0], yt + dy, 'b.', ms=2.5 * dot, alpha=0.4, lw=0)
artists += ax.plot(x, psi_sum[0], 'r-', lw=2.5 * dot, alpha=0.5)
ax.set_xlim(x[0], x[-1])
ax.set_ylim(-ymax, +ymax)
def init():
return artists
def update(j):
for i in range(2 * nwave + 1):
artists[i].set_ydata(psi[i, j] + dy[i])
artists[-3].set_xdata([xc[j], xc[j]])
artists[-2].set_xdata(xt[j])
artists[-1].set_ydata(psi_sum[j])
return artists
animation = matplotlib.animation.FuncAnimation(
figure, update, init_func=init, frames=nt, repeat=True, blit=True)
if save:
meta = dict(
title='Gaussian quantum wavepacket superposition in 1D',
artist='David Kirkby <[email protected]>',
comment='https://dkirkby.github.io/quantum-demo/',
copyright='Copyright (c) 2018 David Kirkby')
engine = 'imagemagick' if save.endswith('.gif') else 'ffmpeg'
writer = matplotlib.animation.writers[engine](fps=30, metadata=meta)
animation.save(save, writer=writer)
return animation
Explanation: Build an animation using the solver above. Each frame shows:
- The real part of the component plane waves in blue, with increasing $k$ (decreasing $\lambda$) moving up the plot. Each component is vertically offset and has an amplitude of $c(k)$.
- The sum of each plane is shown in red and represents the real part of the wave packet wave function.
- Blue dots trace the motion of each plane from the initial wave packet center, each traveling at their phase velocity $v_p(k)$.
- A red vertical line moves with the central group velocity $v_g(k_0)$.
The main points to note are:
- In the intial frame, the blue plane waves all interfere constructively at the center of the wave packet, but become progressively more incoherent moving away from the peak.
- Each blue plane wave propagates at a different phase velocity $v_p(k)$, causing the blue tracer dots to separate horizontally over time.
- Changes in the constant potential $V_0$ lead to different phase velocities but an identical combined red wave function and group velocity.
- The center of the wave packet travels at the central group velocity $v_g(k_0)$ indicated by the vertical red line.
- The red wave packet spreads as it propagates.
End of explanation
animate(V0=-50, nt=300, save='wavepacket0.mp4');
animate(V0=0, nt=300, height=480, width=720, save='wavepacket1.mp4');
animate(V0=50, nt=300, height=480, width=720, save='wavepacket2.mp4');
Explanation: Simulate with different values of the constant potential:
$$
\begin{aligned}
V_0 &= -50 \quad &\Rightarrow \quad v_p(k_0) &= 0 \\
V_0 &= 0 \quad &\Rightarrow \quad v_p(k_0) &= v_g(k_0) / 2 \\
V_0 &= +50 \quad &\Rightarrow \quad v_p(k_0) &= v_g(k_0) \\
\end{aligned}
$$
End of explanation
#HTML(animate().to_html5_video())
Explanation: Uncomment and run the line below to display an animation inline:
End of explanation
animate(V0=0, nt=100, height=150, width=200, save='wavepacket1.gif');
Explanation: Convert video to the open-source Theora format using, e.g.
ffmpeg -i wavepacket0.mp4 -codec:v libtheora -qscale:v 7 wavepacket0.ogv
Produce a smaller GIF animation for wikimedia. Note that this file is slightly larger (~1Mb) than the MP4 files, despite the lower quality.
End of explanation |
2,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plot sensor denoising using oversampled temporal projection
This demonstrates denoising using the OTP algorithm
Step1: Plot the phantom data, lowpassed to get rid of high-frequency artifacts.
We also crop to a single 10-second segment for speed.
Notice that there are two large flux jumps on channel 1522 that could
spread to other channels when performing subsequent spatial operations
(e.g., Maxwell filtering, SSP, or ICA).
Step2: Now we can clean the data with OTP, lowpass, and plot. The flux jumps have
been suppressed alongside the random sensor noise.
Step3: We can also look at the effect on single-trial phantom localization.
See the tut-brainstorm-elekta-phantom
for more information. Here we use a version that does single-trial
localization across the 17 trials that are in our 10-second window | Python Code:
# Author: Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import mne
import numpy as np
from mne import find_events, fit_dipole
from mne.datasets.brainstorm import bst_phantom_elekta
from mne.io import read_raw_fif
print(__doc__)
Explanation: Plot sensor denoising using oversampled temporal projection
This demonstrates denoising using the OTP algorithm :footcite:LarsonTaulu2018
on data with sensor artifacts (flux jumps) and random noise.
End of explanation
dipole_number = 1
data_path = bst_phantom_elekta.data_path()
raw = read_raw_fif(
op.join(data_path, 'kojak_all_200nAm_pp_no_chpi_no_ms_raw.fif'))
raw.crop(40., 50.).load_data()
order = list(range(160, 170))
raw.copy().filter(0., 40.).plot(order=order, n_channels=10)
Explanation: Plot the phantom data, lowpassed to get rid of high-frequency artifacts.
We also crop to a single 10-second segment for speed.
Notice that there are two large flux jumps on channel 1522 that could
spread to other channels when performing subsequent spatial operations
(e.g., Maxwell filtering, SSP, or ICA).
End of explanation
raw_clean = mne.preprocessing.oversampled_temporal_projection(raw)
raw_clean.filter(0., 40.)
raw_clean.plot(order=order, n_channels=10)
Explanation: Now we can clean the data with OTP, lowpass, and plot. The flux jumps have
been suppressed alongside the random sensor noise.
End of explanation
def compute_bias(raw):
events = find_events(raw, 'STI201', verbose=False)
events = events[1:] # first one has an artifact
tmin, tmax = -0.2, 0.1
epochs = mne.Epochs(raw, events, dipole_number, tmin, tmax,
baseline=(None, -0.01), preload=True, verbose=False)
sphere = mne.make_sphere_model(r0=(0., 0., 0.), head_radius=None,
verbose=False)
cov = mne.compute_covariance(epochs, tmax=0, method='oas',
rank=None, verbose=False)
idx = epochs.time_as_index(0.036)[0]
data = epochs.get_data()[:, :, idx].T
evoked = mne.EvokedArray(data, epochs.info, tmin=0.)
dip = fit_dipole(evoked, cov, sphere, n_jobs=None, verbose=False)[0]
actual_pos = mne.dipole.get_phantom_dipoles()[0][dipole_number - 1]
misses = 1000 * np.linalg.norm(dip.pos - actual_pos, axis=-1)
return misses
bias = compute_bias(raw)
print('Raw bias: %0.1fmm (worst: %0.1fmm)'
% (np.mean(bias), np.max(bias)))
bias_clean = compute_bias(raw_clean)
print('OTP bias: %0.1fmm (worst: %0.1fmm)'
% (np.mean(bias_clean), np.max(bias_clean),))
Explanation: We can also look at the effect on single-trial phantom localization.
See the tut-brainstorm-elekta-phantom
for more information. Here we use a version that does single-trial
localization across the 17 trials that are in our 10-second window:
End of explanation |
2,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convert training sessions to labeled examples. Each example will have a sequence length of seq_size, and we will include three features per data point: the timestamp, the x position, and the y position. We will one-hot encode the labels and shuffle all the data.
Step1: TSNE | Python Code:
df_train = sessions_to_dataframe(training_sessions)
df_val = sessions_to_dataframe(validation_sessions)
df_train.head()
df_train = preprocess_data(df_train)
df_val = preprocess_data(df_val)
#### SPECIAL CASE #####
# There isnt any XButton data in the validation set so we better drop this column for the training set
# if we want to have the same number of features in both sets
df_train = df_train.drop(['XButton'], axis = 1)
#### SPECIAL CASE #####
df_train.head()
seq_size = 300
train_x, train_y = data_to_machine_learning_examples(df_train, seq_size)
print('[*] Generated traning examples {} and labels {}'.format(train_x.shape, train_y.shape))
val_x, val_y = data_to_machine_learning_examples(df_val, seq_size)
print('[*] Generated validation examples {} and labels {}'.format(val_x.shape, val_y.shape))
def print_model(model):
print("[*] Sequential model created with the following layers:")
for layer in model.layers:
print("{:30}{} -> {}".format(layer.name, layer.input_shape, layer.output_shape))
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
from keras.callbacks import ReduceLROnPlateau
from keras.callbacks import TensorBoard
epochs = 200
batch_size = 30
learning_rate = 0.0001
batch_norm_momentum = 0.2
n_classes = 10
data_point_dimensionality = 13
# model = load_model('model/model_18.h5')
model = create_model_paper(input_shape = (seq_size, data_point_dimensionality),
classes = n_classes,
batch_norm_momentum = batch_norm_momentum,
l2_regularization = 0.01)
optimizer = Adam(lr=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
cb_check = ModelCheckpoint('model/checkpoint', monitor='val_loss', verbose=1, period=30)
cb_reducelr = ReduceLROnPlateau(verbose=1)
cb_tensorboard = TensorBoard(log_dir='./logs', histogram_freq=30, write_graph=True)
hist = model.fit(train_x, train_y,
batch_size, epochs, 2,
validation_data=(val_x, val_y),
callbacks = [cb_reducelr])
# callbacks =[cb_check, cb_reducelr, cb_tensorboard])
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
Explanation: Convert training sessions to labeled examples. Each example will have a sequence length of seq_size, and we will include three features per data point: the timestamp, the x position, and the y position. We will one-hot encode the labels and shuffle all the data.
End of explanation
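# The helpers sessions_to_dataframe, preprocess_data and data_to_machine_learning_examples are
# defined elsewhere in this project and are not shown in this excerpt. The sketch below is only a
# rough, hypothetical illustration of the windowing / one-hot / shuffle step; the "label" column
# name and the class count are assumptions, not the project's actual API.
import numpy as np

def sliding_window_examples(df, seq_size, label_col="label", n_classes=10):
    features = df.drop(columns=[label_col]).values.astype(np.float32)
    labels = df[label_col].values.astype(int)
    xs, ys = [], []
    for start in range(0, len(df) - seq_size + 1, seq_size):
        xs.append(features[start:start + seq_size])   # one fixed-length window per example
        ys.append(labels[start])                      # assume a single label per window
    x = np.asarray(xs)
    y = np.eye(n_classes)[np.asarray(ys)]             # one-hot encode the labels
    idx = np.random.permutation(len(x))               # shuffle examples and labels together
    return x[idx], y[idx]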
from keras.models import load_model
model = load_model('model/model_18.h5')
def print_model(model):
print("[*] Sequential model created with the following layers:")
for layer in model.layers:
print("{:30}{} -> {}".format(layer.name, layer.input_shape, layer.output_shape))
print_model(model)
from keras.models import Model
layer_name = 'global_average_pooling1d_1'
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(train_x)
y_data = model.predict(train_x)
intermediate_output.shape
y_data_nums = [np.argmax(row) for row in y_data]
from sklearn.manifold import TSNE
tsne_model = TSNE(n_components=2, random_state=0)
np.set_printoptions(suppress=True)
result = tsne_model.fit_transform(intermediate_output)
print(result)
import seaborn as sns
sns.set(style="white", color_codes=True)
g = sns.jointplot(x=result[:,0], y=result[:,1])
plt.figure(1, figsize=(12, 10))
plt.scatter(result[:,0], result[:,1], c=y_data_nums, cmap=plt.cm.get_cmap("jet"))
# plt.scatter(result[:,0], result[:,1])
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()
Explanation: TSNE
End of explanation |
2,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Saving and Loading Models
In this bite-sized notebook, we'll go over how to save and load models. In general, the process is the same as for any PyTorch module.
Step1: Saving a Simple Model
First, we define a GP Model that we'd like to save. The model used below is the same as the model from our
<a href="../01_Exact_GPs/Simple_GP_Regression.ipynb">Simple GP Regression</a> tutorial.
Step2: Change Model State
To demonstrate model saving, we change the hyperparameters from the default values below. For more information on what is happening here, see our tutorial notebook on <a href="Hyperparameters.ipynb">Initializing Hyperparameters</a>.
Step3: Getting Model State
To get the full state of a GPyTorch model, simply call state_dict as you would on any PyTorch model. Note that the state dict contains raw parameter values. This is because these are the actual torch.nn.Parameters that are learned in GPyTorch. Again, see our notebook on hyperparameters for more information on this.
Step4: Saving Model State
The state dictionary above represents all trainable parameters for the model. Therefore, we can save this to a file as follows
Step5: Loading Model State
Next, we load this state into a new model and demonstrate that the parameters were updated correctly.
Step6: A More Complex Example
Next we demonstrate this same principle on a more complex exact GP where we have a simple feed forward neural network feature extractor as part of the model.
Step7: Getting Model State
In the next cell, we once again print the model state via model.state_dict(). As you can see, the state is substantially more complex, as the model now includes our neural network parameters. Nevertheless, saving and loading is straightforward.
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
Explanation: Saving and Loading Models
In this bite-sized notebook, we'll go over how to save and load models. In general, the process is the same as for any PyTorch module.
End of explanation
train_x = torch.linspace(0, 1, 100)
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)
Explanation: Saving a Simple Model
First, we define a GP Model that we'd like to save. The model used below is the same as the model from our
<a href="../01_Exact_GPs/Simple_GP_Regression.ipynb">Simple GP Regression</a> tutorial.
End of explanation
model.covar_module.outputscale = 1.2
model.covar_module.base_kernel.lengthscale = 2.2
Explanation: Change Model State
To demonstrate model saving, we change the hyperparameters from the default values below. For more information on what is happening here, see our tutorial notebook on <a href="Hyperparameters.ipynb">Initializing Hyperparameters</a>.
End of explanation
model.state_dict()
Explanation: Getting Model State
To get the full state of a GPyTorch model, simply call state_dict as you would on any PyTorch model. Note that the state dict contains raw parameter values. This is because these are the actual torch.nn.Parameters that are learned in GPyTorch. Again, see our notebook on hyperparameters for more information on this.
End of explanation
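# Illustrative only: each "raw_*" entry in the state dict is an unconstrained parameter that
# GPyTorch maps to the user-facing value through a registered constraint. The attribute names
# below assume the RBF kernel wrapped in a ScaleKernel used in this tutorial.
raw_ls = model.covar_module.base_kernel.raw_lengthscale
constraint = model.covar_module.base_kernel.raw_lengthscale_constraint
print(constraint.transform(raw_ls))                # constrained value (2.2, set above)
print(model.covar_module.base_kernel.lengthscale)  # same value via the convenience property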
torch.save(model.state_dict(), 'model_state.pth')
Explanation: Saving Model State
The state dictionary above represents all trainable parameters for the model. Therefore, we can save this to a file as follows:
End of explanation
state_dict = torch.load('model_state.pth')
model = ExactGPModel(train_x, train_y, likelihood) # Create a new GP model
model.load_state_dict(state_dict)
model.state_dict()
Explanation: Loading Model State
Next, we load this state into a new model and demonstrate that the parameters were updated correctly.
End of explanation
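# Optional sanity check (not in the original notebook): confirm that the reloaded
# hyperparameters match the values that were set before saving.
assert torch.isclose(model.covar_module.outputscale, torch.tensor(1.2))
assert torch.isclose(model.covar_module.base_kernel.lengthscale.squeeze(), torch.tensor(2.2))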
class GPWithNNFeatureExtractor(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(GPWithNNFeatureExtractor, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
self.feature_extractor = torch.nn.Sequential(
torch.nn.Linear(1, 2),
torch.nn.BatchNorm1d(2),
torch.nn.ReLU(),
torch.nn.Linear(2, 2),
torch.nn.BatchNorm1d(2),
torch.nn.ReLU(),
)
def forward(self, x):
x = self.feature_extractor(x)
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPWithNNFeatureExtractor(train_x, train_y, likelihood)
Explanation: A More Complex Example
Next we demonstrate this same principle on a more complex exact GP where we have a simple feed forward neural network feature extractor as part of the model.
End of explanation
model.state_dict()
torch.save(model.state_dict(), 'my_gp_with_nn_model.pth')
state_dict = torch.load('my_gp_with_nn_model.pth')
model = GPWithNNFeatureExtractor(train_x, train_y, likelihood)
model.load_state_dict(state_dict)
model.state_dict()
Explanation: Getting Model State
In the next cell, we once again print the model state via model.state_dict(). As you can see, the state is substantially more complex, as the model now includes our neural network parameters. Nevertheless, saving and loading is straightforward.
End of explanation |
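# Optional (not shown in the original notebook): torch.load accepts a map_location argument,
# which is useful when a checkpoint trained on GPU has to be restored on a CPU-only machine.
state_dict = torch.load('my_gp_with_nn_model.pth', map_location=torch.device('cpu'))
model.load_state_dict(state_dict)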
2,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 2 assignment
This assignment will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.
We will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.
Step1: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
Step2: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
Step3: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.
Step4: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
Step5: Now we are ready to start the game. First we create an empy list to store the collection of players in the game.
Step6: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
Step7: Once the players are created, we will create a loop to run the game a certain amount of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
Step8: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players. | Python Code:
import random
Explanation: Lab 2 assignment
This assignment will get you familiar with the basic elements of Python by programming a simple card game. We will create a custom class to represent each player in the game, which will store information about their current pot, as well as a series of methods defining how they play the game. We will also build several functions to control the flow of the game and get data back at the end.
We will start by importing the 'random' library, which will allow us to use its functions for picking a random entry from a list.
End of explanation
gameStake = 50
cards = range(10)
Explanation: First we will establish some general variables for our game, including the 'stake' of the game (how much money each play is worth), as well as a list representing the cards used in the game. To make things easier, we will just use a list of numbers 0-9 for the cards.
End of explanation
class Player:
# create here two local variables to store a unique ID for each player and the player's current 'pot' of money
# [FILL IN YOUR VARIABLES HERE]
# in the __init__() function, use the two input variables to initialize the ID and starting pot of each player
def __init__(self, inputID, startingPot):
# [CREATE YOUR INITIALIZATIONS HERE]
# create a function for playing the game. This function starts by taking an input for the dealer's card
# and picking a random number from the 'cards' list for the player's card
def play(self, dealerCard):
# we use the random.choice() function to select a random item from a list
playerCard = random.choice(cards)
# here we should have a conditional that tests the player's card value against the dealer card
# and returns a statement saying whether the player won or lost the hand
# before returning the statement, make sure to either add or subtract the stake from the player's pot so that
# the 'pot' variable tracks the player's money
if playerCard < dealerCard:
# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
else:
# [INCREMENT THE PLAYER'S POT, AND RETURN A MESSAGE]
# create an accessor function to return the current value of the player's pot
def returnPot(self):
# [FILL IN THE RETURN STATEMENT]
# create an accessor function to return the player's ID
def returnID(self):
# [FILL IN THE RETURN STATEMENT]
Explanation: Next, let's define a new class to represent each player in the game. I have provided a rough framework of the class definition along with comments along the way to help you complete it. Places where you should write code are denoted by comments inside [] brackets and CAPITAL TEXT.
End of explanation
def playHand(players):
for player in players:
dealerCard = random.choice(cards)
#[EXECUTE THE PLAY() FUNCTION FOR EACH PLAYER USING THE DEALER CARD, AND PRINT OUT THE RESULTS]
Explanation: Next we will create some functions outside the class definition which will control the flow of the game. The first function will play one round. It will take as an input the collection of players, and iterate through each one, calling each player's '.play() function.
End of explanation
def checkBalances(players):
for player in players:
#[PRINT OUT EACH PLAYER'S BALANCE BY USING EACH PLAYER'S ACCESSOR FUNCTIONS]
Explanation: Next we will define a function that will check the balances of each player, and print out a message with the player's ID and their balance.
End of explanation
players = []
Explanation: Now we are ready to start the game. First we create an empty list to store the collection of players in the game.
End of explanation
for i in range(5):
players.append(Player(i, 500))
Explanation: Then we create a loop that will run a certain number of times, each time creating a player with a unique ID and a starting balance. Each player should be appended to the empty list, which will store all the players. In this case we pass the 'i' iterator of the loop as the player ID, and set a constant value of 500 for the starting balance.
End of explanation
for i in range(10):
    print('')
    print('start game ' + str(i))
playHand(players)
Explanation: Once the players are created, we will create a loop to run the game a certain number of times. Each step of the loop should start with a print statement announcing the start of the game, and then call the playHand() function, passing as an input the list of players.
End of explanation
print('')
print('game results:')
checkBalances(players)
Explanation: Finally, we will analyze the results of the game by running the 'checkBalances()' function and passing it our list of players.
End of explanation |
2,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Correção de exercício Cap 03 - Exercício 14
Step1: Método dos Mínimos Quadrados
Para achar a função de calibração
\( D = N\sum{Y^{2}} - (\sum{Y})^2 \)
\( c_{0} = (\sum{X}\sum{Y^2}\,-\,\sum{Y}\sum{XY}) / D \)
\( c_{1} = (N\sum{XY}\,-\,\sum{X}\sum{Y}) / D \)
Step2: Resumo
Equação de Transferência | Python Code:
%matplotlib notebook
import numpy as np
import matplotlib
from matplotlib import pyplot as plt
import pandas as pd
df=pd.read_table('./data/temperatura.txt',sep='\s',header=0, engine='python')
df.head()
fig, ax1 = plt.subplots()
ax1.plot(df.Xi, 'b')
ax1.plot(df.Y1, 'y')
#ax1.set_xlabel('time (s)')
# Make the y-axis label, ticks and tick labels match the line color.
ax2 = ax1.twinx()
ax2.plot(df.Xi, 'r')
ax2.plot(df.Y1, 'r')
ax2.set_ylabel('ax2', color='r')
ax2.tick_params('y', colors='r')
fig.tight_layout()
fig, ax1 = plt.subplots()
ax1.set_title('Y1(xi)')
ax1.set_ylabel('Raw output (V)', color='b')
ax1.set_xlabel('Measurand (K)', color='g')
plt.plot(df.Xi,df.Y1,'o')
plt.show()
Explanation: Correction of exercise, Chapter 03 - Exercise 14
End of explanation
# N: number of points/data values
N = df.Y1.size
D = N * sum(df.Y1**2) - sum(df.Y1)**2
#592.08848258 print(D)
c0 = ( (sum(df.Xi) * sum(df.Y1**2)) - (sum(df.Y1) * sum(df.Xi * df.Y1)) ) / D
c1 = ( (N * sum(df.Xi * df.Y1) - (sum(df.Xi) * sum(df.Y1))) ) / D
print(' N = %s \n D = %s \n c0 = %s K \n c1 = %s K/V' % (N,D,c0,c1))
df['X1'] = c0+c1*df.Y1 # Calibrated output
df['erro'] = df.X1 - df.Xi # Error estimate
print(df)
# Bias
vies = df.erro.mean()
erro2 = (df.erro**2).mean()
desvio_padrao = np.sqrt( sum((df.erro - vies)**2)/(N-1) )
print(" Bias/viés = %s K \n Imprecisão = %s K \n Inacurácia = %s ± %s K" % (vies,desvio_padrao,vies,desvio_padrao))
print(80*"=")
print(df.cov())
print(df.corr())
Explanation: Least Squares Method
To find the calibration function
\( D = N\sum{Y^{2}} - (\sum{Y})^2 \)
\( c_{0} = (\sum{X}\sum{Y^2}\,-\,\sum{Y}\sum{XY}) / D \)
\( c_{1} = (N\sum{XY}\,-\,\sum{X}\sum{Y}) / D \)
End of explanation
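# Cross-check (illustrative): numpy.polyfit fits the same least-squares line X1 = c0 + c1*Y1,
# so its coefficients should match the c0 and c1 computed by hand above (df and np come from
# the cells above).
c1_np, c0_np = np.polyfit(df.Y1, df.Xi, 1)
print('c0 = %s K, c1 = %s K/V (numpy.polyfit)' % (c0_np, c1_np))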
fig, ax1 = plt.subplots()
ax1.set_title('Y1(X1)')
ax1.set_ylabel('Raw output (V)', color='b')
ax1.set_xlabel('Measurand (K)', color='g')
#plt.plot(df.Xi,df.Y1,'+',color='r')
# times 10 to make it more visible.
ax1.errorbar(df.X1,df.Y1, xerr=df.erro, fmt="-o", ecolor='grey', capthick=2)
plt.show()
Explanation: Summary
Transfer equation (used to find the fitted line):
\( \hat{Y}_1 = a_0 + a_1 X_i \)
Calibration equation:
\( X_1 = c_0 + c_1 Y_1 \)
End of explanation |
2,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementation of the Thornthwaite-Mather procedure to map groundwater recharge
Author
Step1: Other libraries
Import other libraries/modules used in this notebook.
pandas
Step2: Some input parameters
We additionally define some parameters used to evaluate the results of our implementation. Particularly
Step3: From soil texture to hydraulic properties
Definitions
Two hydraulic properties of soil are commonly used in the TM procedure
Step5: We now define a function to get the ee.Image associated to the parameter we are interested in (e.g. sand, clay, organic carbon content, etc.).
Step6: We apply this function to import soil properties
Step8: To illustrate the result, we define a new method for handing Earth Engine tiles and using it to display the clay content of the soil at a given reference depth, to a Leaflet map.
Step9: Now, a function is defined to get soil properties at a given location. The following function returns a dictionary indicating the value of the parameter of interest for each standard depth (in centimeter). This function uses the ee.Image.sample method to evaluate the ee.Image properties on the region of interest. The result is then transferred client-side using the ee.Image.getInfo method.
In the example below, we are asking for the sand content.
Step11: We now apply the function to plot the profile of the soil regarding sand and clay and organic carbon content at the location of interest
Step12: Expression to calculate hydraulic properties
Now that soil properties are described, the water content at the field capacity and at the wilting point can be calculated according to the equation defined at the beginning of this section. Please note that in the equation of Saxton & Rawls (2006), the wilting point and field capacity are calculated using the Organic Matter content ($OM$) and not the Organic Carbon content ($OC$). In the following, we convert $OC$ into $OM$ using the corrective factor known as the Van Bemmelen factor
Step13: When the mathematical operation to apply to the ee.Image becomes too complex, the ee.Image.expression is a good alternative. We use it in the following code block since the calculation of wilting point and field capacity relies on multiple parameters and images. This method takes two arguments
Step14: Let's see the result around our location of interest
Step15: The result is displayed using barplots as follows
Step16: Getting meteorological datasets
Datasets exploration
The meteorological data used in our implementation of the TM procedure relies on the following datasets
Step17: Now we can have a closer look around our location of interest. To evaluate the properties of an ee.ImageCollection, the ee.ImageCollection.getRegion method is used and combined with ee.ImageCollection.getInfo method for a client-side visualization.
Step19: We now establish a procedure to get meteorological data around a given location in form of a pandas.DataFrame
Step20: We apply the function and see the head of the resulting pandas.DataFrame
Step21: We do the same for potential evaporation
Step23: Looking at both pandas.DataFrame shows the following points
Step24: The precipitation dataset is now resampled by month as follows
Step25: For evapotranspiration, we have to be careful with the unit. The dataset gives us an 8-day sum and a scale factor of 10 is applied. Then, to get a homogeneous unit, we need to rescale by dividing by 8 and 10
Step26: We now combine both ee.ImageCollection objects (pet_m and pr_m) using the ee.ImageCollection.combine method. Note that corresponding images in both ee.ImageCollection objects need to have the same time index before combining.
Step27: We evaluate the result on our location of interest
Step28: Implementation of the TM procedure
Description
Some additional definitions are needed to formalize the Thornthwaite-Mather procedure. The following definitions are given in accordance with Allen et al. (1998) (the document can be downloaded here)
Step30: In the following, we also consider an averaged value between reference depths of the water content at wilting point and field capacity
Step31: The Thornthwaite-Mather procedure used to estimate groundwater recharge is explicitly described by Steenhuis and Van der Molen (1985). This procedure uses monthly sums of potential evaporation, cumulative precipitation, and the moisture status of the soil which is calculated iteratively. The moisture status of the soils depends on the accumulated potential water loss ($APWL$). This parameter is calculated depending on whether the potential evaporation is greater than or less than the cumulative precipitation. The procedure reads as follow
Step32: Then, we initialize the calculation with an ee.Image where all bands associated to the hydric state of the soil are set equal to ee.Image(0), except for the initial storage which is considered to be equal to the water content at field capacity, meaning that $ST_{0} = ST_{FC}$.
Step33: We combine all these bands into one ee.Image adding new bands to the first using the ee.Image.addBands method
Step34: We also initialize a list in which new images will be added after each iteration. We create this server-side list using the ee.List method.
Step36: Iteration over an ee.ImageCollection
The procedure is implemented by means of the ee.ImageCollection.iterate method, which applies a user-supplied function to each element of a collection. For each time step, groundwater recharge is calculated using the recharge_calculator considering the previous hydric state of the soil and current meteorological conditions.
Of course, considering the TM description, several cases must be distinguished to calculate groundwater recharge. The distinction is made by the definition of binary layers with different logical operations. It allows specific calculations to be applied in areas where a given condition is true using the ee.Image.where method.
The function we apply to each element of the meteorological dataset to calculate groundwater recharge is defined as follows.
Step37: The TM procedure can now be applied to the meteorological ee.ImageCollection
Step38: Let's have a look at the result around the location of interest
Step39: The result can be displayed in the form of a barplot as follows
Step40: The result shows the distribution of precipitation, potential evapotranspiration, and groundwater recharge along the year. It shows that in our area of interest, groundwater recharge generally occurs from October to March. Even though a significant amount of precipitation occurs from April to September, evapotranspiration largely dominates because of high temperatures and sun exposure during these months. The result is that no percolation into aquifers occurs during this period.
Now the annual average recharge over the period of interest can be calculated. To do that, we resample the DataFrame we've just created
Step42: Groundwater recharge comparison between multiple places
We now may want to get local information about groundwater recharge and/or map this variable on an area of interest.
Let's define a function to get the local information based on the ee.ImageCollection we've just built
Step44: We now use this function on a second point of interest located near the city of Montpellier (France). This city is located in the south of France, and precipitation and groundwater recharge are expected to be much lower than in the previous case.
Step45: The result shows that the annual recharge in Lyon is almost twice as high as in the area of Montpellier. The result also shows a great variability of the annual recharge ranging from 98 mm/y to 258 mm/y in Lyon and from 16 mm/y to 147 mm/y in Montpellier.
Groundwater recharge map of France
To get a map of groundwater recharge around our region of interest, let's create a mean composite ee.Image based on our resulting ee.ImageCollection.
Step46: And finally the map can be drawn. | Python Code:
import ee
# Trigger the authentication flow.
ee.Authenticate()
# Initialize the library.
ee.Initialize()
Explanation: Implementation of the Thornthwaite-Mather procedure to map groundwater recharge
Author: guiattard
Groundwater recharge represents the amount of water coming from precipitation reaching the groundwater table. Its determination helps to better understand the available/renewable groundwater in watersheds and the shape of groundwater flow systems.
One of the simplest methods to estimate groundwater recharge is the Thornthwaite-Mather procedure (Steenhuis and Van Der Molen, 1986). This procedure was published by Thornthwaite and Mather (1955, 1957). The idea of this procedure is to calculate the water balance in the root zone of the soil where water can be (1) evaporated into the atmosphere under the effect of heat, (2) transpired by vegetation, (3) stored by the soil, and eventually (4) infiltrated when stored water exceeds the field capacity.
This procedures relies on several parameters and variables described as follows:
- information about soil texture (e.g. sand and clay content) to describe the hydraulic properties of the soil and its capacity to store/infiltrate,
- meteorological records: precipitation and potential evapotranspiration.
Of course groundwater recharge can be influenced by many other factors such as the slope of the terrain, the snow cover, the variability of the crop/land cover and the irrigation. In the following these aspects are not taken into account.
In the first part of the tutorial, the Earth Engine python API will be initialized, some useful libraries will be imported, and the location/period of interest will be defined.
In the second part, OpenLandMap datasets related to soil properties will be explored. The wilting point and field capacity of the soil will be calculated by applying some mathematical expressions to multiple images.
In the third part, evapotranspiration and precipitation datasets will be imported. A function will be defined to resample the time resolution of an ee.ImageCollection and to homogenize time index of both datasets. Both datasets will then be combined into one.
In the fourth and final part, the Thornthwaite-Mather(TM) procedure will be implemented by iterating over the meteorological ee.ImageCollection. Finally, a comparison between groundwater recharge in two places will be described and the resulting mean annual groundwater recharge will be displayed over France.
Run me first
Earth Engine API
First of all, run the following cell to initialize the API. The output will contain instructions on how to grant this notebook access to Earth Engine using your account.
End of explanation
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import folium
import pprint
import branca.colormap as cm
Explanation: Other libraries
Import other libraries/modules used in this notebook.
pandas: data analysis (including the DataFrame data structure)
matplotlib: data visualization library
numpy: array-processing package
folium: interactive web map
pprint: a pretty printer
branca.colormap: utility module for dealing with colormaps.
End of explanation
# Initial date of interest (inclusive).
i_date = "2015-01-01"
# Final date of interest (exclusive).
f_date = "2020-01-01"
# Define the location of interest with a point.
lon = 5.145041
lat = 45.772439
poi = ee.Geometry.Point(lon, lat)
# A nominal scale in meters of the projection to work in [in meters].
scale = 1000
Explanation: Some input parameters
We additionally define some parameters used to evaluate the results of our implementation. Particularly:
- the period of interest to get meteorological records,
- a location of interest based on longitude and latitude coordinates. In the following, the point of interest is located in a productive agricultural region which is about 30 kilometers outside of the city of Lyon (France). This point is used to evaluate and illustrate the progress of the described procedure.
End of explanation
# Soil depths [in cm] where we have data.
olm_depths = [0, 10, 30, 60, 100, 200]
# Names of bands associated with reference depths.
olm_bands = ["b" + str(sd) for sd in olm_depths]
Explanation: From soil texture to hydraulic properties
Definitions
Two hydraulic properties of soil are commonly used in the TM procedure:
- the wilting point represents the point below which water cannot be extracted by plant roots,
- the field capacity represents the point after which water cannot be stored by soil any more. After that point, gravitational forces become too high and water starts to infiltrate the lower levels.
Some equations given by Saxton & Rawls (2006) are used to link both parameters to the texture of the soil. The calculation of water content at wilting point $θ_{WP}$ can be done as follows:
$$\theta_{WP}= \theta_{1500t} + (0.14 \theta_{1500t} - 0.002)$$ with:
$$\theta_{1500t} = -0.024 S + 0.487 C + 0.006 OM + 0.005(S \times OM) - 0.013 (C \times OM) + 0.068 (S \times C) + 0.031$$
where:
- $S$: represents the sand content of the soil (mass percentage),
- $C$: represents the clay content of the soil (mass percentage),
- $OM$: represents the organic matter content of the soil (mass percentage).
Similarly, the calculation of the water content at field capacity $θ_{FC}$ can be done as follows:
$$\theta_{FC} = \theta_{33t} + (1.283 \theta_{33t}^{2} - 0.374 \theta_{33t} - 0.015)$$ with:
$$\theta_{33t} = -0.251 S + 0.195 C + 0.011 OM + 0.006 (S \times OM) - 0.027 (C \times OM) + 0.452 (S \times C) + 0.299$$
Determination of soil texture and properties
In the following, OpenLandMap datasets are used to describe clay, sand and organic carbon content of soil.
A global dataset of soil water content at the field capacity with a resolution of 250 m has been made available by Hengl & Gupta (2019). However, up to now, there is no dataset dedicated to the water content of soil at the wilting point. Consequently, in the following, both parameters will be determined considering the previous equations and using the global datasets giving the sand, clay and organic matter contents of the soil. According to the description, these datasets are based on machine learning predictions from global compilation of soil profiles and samples. Processing steps are described in detail here. The information (clay, sand content, etc.) is given at 6 standard depths (0, 10, 30, 60, 100 and 200 cm) at 250 m resolution.
These standard depths and associated bands are defined into a list as follows:
End of explanation
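# Plain-Python sanity check of the Saxton & Rawls relations before they are applied to
# ee.Image bands further below. The sample S, C, OM values are arbitrary, for illustration only.
def theta_wilting_point(S, C, OM):
    t1500 = (-0.024 * S + 0.487 * C + 0.006 * OM + 0.005 * S * OM
             - 0.013 * C * OM + 0.068 * S * C + 0.031)
    return t1500 + (0.14 * t1500 - 0.002)

def theta_field_capacity(S, C, OM):
    t33 = (-0.251 * S + 0.195 * C + 0.011 * OM + 0.006 * S * OM
           - 0.027 * C * OM + 0.452 * S * C + 0.299)
    return t33 + (1.283 * t33 ** 2 - 0.374 * t33 - 0.015)

print(theta_wilting_point(0.4, 0.2, 0.03), theta_field_capacity(0.4, 0.2, 0.03))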
def get_soil_prop(param):
This function returns soil properties image
param (str): must be one of:
"sand" - Sand fraction
"clay" - Clay fraction
"orgc" - Organic Carbon fraction
if param == "sand": # Sand fraction [%w]
snippet = "OpenLandMap/SOL/SOL_SAND-WFRACTION_USDA-3A1A1A_M/v02"
# Define the scale factor in accordance with the dataset description.
scale_factor = 1 * 0.01
elif param == "clay": # Clay fraction [%w]
snippet = "OpenLandMap/SOL/SOL_CLAY-WFRACTION_USDA-3A1A1A_M/v02"
# Define the scale factor in accordance with the dataset description.
scale_factor = 1 * 0.01
elif param == "orgc": # Organic Carbon fraction [g/kg]
snippet = "OpenLandMap/SOL/SOL_ORGANIC-CARBON_USDA-6A1C_M/v02"
# Define the scale factor in accordance with the dataset description.
scale_factor = 5 * 0.001 # to get kg/kg
else:
return print("error")
# Apply the scale factor to the ee.Image.
dataset = ee.Image(snippet).multiply(scale_factor)
return dataset
Explanation: We now define a function to get the ee.Image associated to the parameter we are interested in (e.g. sand, clay, organic carbon content, etc.).
End of explanation
# Image associated with the sand content.
sand = get_soil_prop("sand")
# Image associated with the clay content.
clay = get_soil_prop("clay")
# Image associated with the organic carbon content.
orgc = get_soil_prop("orgc")
Explanation: We apply this function to import soil properties:
End of explanation
def add_ee_layer(self, ee_image_object, vis_params, name):
Adds a method for displaying Earth Engine image tiles to folium map.
map_id_dict = ee.Image(ee_image_object).getMapId(vis_params)
folium.raster_layers.TileLayer(
tiles=map_id_dict["tile_fetcher"].url_format,
attr="Map Data © <a href='https://earthengine.google.com/'>Google Earth Engine</a>",
name=name,
overlay=True,
control=True,
).add_to(self)
# Add Earth Engine drawing method to folium.
folium.Map.add_ee_layer = add_ee_layer
my_map = folium.Map(location=[lat, lon], zoom_start=3)
# Set visualization parameters.
vis_params = {
"bands": ["b0"],
"min": 0.01,
"max": 1,
"opacity": 1,
"palette": ["white", "#464646"],
}
# Add the sand content data to the map object.
my_map.add_ee_layer(sand, vis_params, "Sand Content")
# Add a marker at the location of interest.
folium.Marker([lat, lon], popup="point of interest").add_to(my_map)
# Add a layer control panel to the map.
my_map.add_child(folium.LayerControl())
# Display the map.
display(my_map)
Explanation: To illustrate the result, we define a new method for handling Earth Engine tiles and use it to display the sand content of the soil at a given reference depth on a Leaflet map.
End of explanation
def local_profile(dataset, poi, buffer):
# Get properties at the location of interest and transfer to client-side.
prop = dataset.sample(poi, buffer).select(olm_bands).getInfo()
# Selection of the features/properties of interest.
profile = prop["features"][0]["properties"]
# Re-shaping of the dict.
profile = {key: round(val, 3) for key, val in profile.items()}
return profile
# Apply the function to get the sand profile.
profile_sand = local_profile(sand, poi, scale)
# Print the result.
print("Sand content profile at the location of interest:\n", profile_sand)
Explanation: Now, a function is defined to get soil properties at a given location. The following function returns a dictionary indicating the value of the parameter of interest for each standard depth (in centimeter). This function uses the ee.Image.sample method to evaluate the ee.Image properties on the region of interest. The result is then transferred client-side using the ee.Image.getInfo method.
In the example below, we are asking for the sand content.
End of explanation
# Clay and organic content profiles.
profile_clay = local_profile(clay, poi, scale)
profile_orgc = local_profile(orgc, poi, scale)
# Data visualization in the form of a bar plot.
fig, ax = plt.subplots(figsize=(15, 6))
ax.axes.get_yaxis().set_visible(False)
# Definition of label locations.
x = np.arange(len(olm_bands))
# Definition of the bar width.
width = 0.25
# Bar plot representing the sand content profile.
rect1 = ax.bar(
x - width,
[round(100 * profile_sand[b], 2) for b in olm_bands],
width,
label="Sand",
color="#ecebbd",
)
# Bar plot representing the clay content profile.
rect2 = ax.bar(
x,
[round(100 * profile_clay[b], 2) for b in olm_bands],
width,
label="Clay",
color="#6f6c5d",
)
# Bar plot representing the organic carbon content profile.
rect3 = ax.bar(
x + width,
[round(100 * profile_orgc[b], 2) for b in olm_bands],
width,
label="Organic Carbon",
color="black",
alpha=0.75,
)
# Definition of a function to attach a label to each bar.
def autolabel_soil_prop(rects):
Attach a text label above each bar in *rects*, displaying its height.
for rect in rects:
height = rect.get_height()
ax.annotate(
"{}".format(height) + "%",
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset.
textcoords="offset points",
ha="center",
va="bottom",
fontsize=10,
)
# Application of the function to each barplot.
autolabel_soil_prop(rect1)
autolabel_soil_prop(rect2)
autolabel_soil_prop(rect3)
# Title of the plot.
ax.set_title("Properties of the soil at different depths (mass content)", fontsize=14)
# Properties of x/y labels and ticks.
ax.set_xticks(x)
x_labels = [str(d) + " cm" for d in olm_depths]
ax.set_xticklabels(x_labels, rotation=45, fontsize=10)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
# Shrink current axis's height by 10% on the bottom.
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])
# Add a legend below current axis.
ax.legend(
loc="upper center", bbox_to_anchor=(0.5, -0.15), fancybox=True, shadow=True, ncol=3
)
plt.show()
Explanation: We now apply the function to plot the profile of the soil regarding sand and clay and organic carbon content at the location of interest:
End of explanation
# Conversion of organic carbon content into organic matter content.
orgm = orgc.multiply(1.724)
# Organic matter content profile.
profile_orgm = local_profile(orgm, poi, scale)
print("Organic Matter content profile at the location of interest:\n", profile_orgm)
Explanation: Expression to calculate hydraulic properties
Now that soil properties are described, the water content at the field capacity and at the wilting point can be calculated according to the equation defined at the beginning of this section. Please note that in the equation of Saxton & Rawls (2006), the wilting point and field capacity are calculated using the Organic Matter content ($OM$) and not the Organic Carbon content ($OC$). In the following, we convert $OC$ into $OM$ using the corrective factor known as the Van Bemmelen factor:
$$OM = 1.724 \times OC$$
Several operators are available to perform basic mathematical operations on image bands: add(), subtract(), multiply() and divide(). Here, we multiply the organic content by the Van Bemmelen factor. It is done using the ee.Image.multiply method on the organic carbon content ee.Image.
End of explanation
# Initialization of two constant images for wilting point and field capacity.
wilting_point = ee.Image(0)
field_capacity = ee.Image(0)
# Calculation for each standard depth using a loop.
for key in olm_bands:
# Getting sand, clay and organic matter at the appropriate depth.
si = sand.select(key)
ci = clay.select(key)
oi = orgm.select(key)
# Calculation of the wilting point.
# The theta_1500t parameter is needed for the given depth.
theta_1500ti = (
ee.Image(0)
.expression(
"-0.024 * S + 0.487 * C + 0.006 * OM + 0.005 * (S * OM)\
- 0.013 * (C * OM) + 0.068 * (S * C) + 0.031",
{
"S": si,
"C": ci,
"OM": oi,
},
)
.rename("T1500ti")
)
# Final expression for the wilting point.
wpi = theta_1500ti.expression(
"T1500ti + ( 0.14 * T1500ti - 0.002)", {"T1500ti": theta_1500ti}
).rename("wpi")
# Add as a new band of the global wilting point ee.Image.
# Do not forget to cast the type with float().
wilting_point = wilting_point.addBands(wpi.rename(key).float())
# Same process for the calculation of the field capacity.
# The parameter theta_33t is needed for the given depth.
theta_33ti = (
ee.Image(0)
.expression(
"-0.251 * S + 0.195 * C + 0.011 * OM +\
0.006 * (S * OM) - 0.027 * (C * OM)+\
0.452 * (S * C) + 0.299",
{
"S": si,
"C": ci,
"OM": oi,
},
)
.rename("T33ti")
)
# Final expression for the field capacity of the soil.
fci = theta_33ti.expression(
"T33ti + (1.283 * T33ti * T33ti - 0.374 * T33ti - 0.015)",
{"T33ti": theta_33ti.select("T33ti")},
)
# Add a new band of the global field capacity ee.Image.
field_capacity = field_capacity.addBands(fci.rename(key).float())
Explanation: When the mathematical operation to apply to the ee.Image becomes too complex, the ee.Image.expression is a good alternative. We use it in the following code block since the calculation of wilting point and field capacity relies on multiple parameters and images. This method takes two arguments:
- a string formalizing the arithmetic expression we want to evaluate,
- a dict associating images to each parameter of the arithmetic expression.
The mathematical expression is applied as follows to determine wilting point and field capacity:
End of explanation
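# Minimal illustration of the two-argument ee.Image.expression call (not part of the original
# tutorial): the string names the variables, and the dict maps each name to an ee.Image band
# (sand and clay are the images imported above).
toy_expression = ee.Image(0).expression(
    "2 * S + C",
    {"S": sand.select("b0"), "C": clay.select("b0")},
)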
profile_wp = local_profile(wilting_point, poi, scale)
profile_fc = local_profile(field_capacity, poi, scale)
print("Wilting point profile:\n", profile_wp)
print("Field capacity profile:\n", profile_fc)
Explanation: Let's see the result around our location of interest:
End of explanation
fig, ax = plt.subplots(figsize=(15, 6))
ax.axes.get_yaxis().set_visible(False)
# Definition of the label locations.
x = np.arange(len(olm_bands))
# Width of the bar of the barplot.
width = 0.25
# Barplot associated with the water content at the wilting point.
rect1 = ax.bar(
x - width / 2,
[round(profile_wp[b] * 100, 2) for b in olm_bands],
width,
label="Water content at wilting point",
color="red",
alpha=0.5,
)
# Barplot associated with the water content at the field capacity.
rect2 = ax.bar(
x + width / 2,
[round(profile_fc[b] * 100, 2) for b in olm_bands],
width,
label="Water content at field capacity",
color="blue",
alpha=0.5,
)
# Add Labels on top of bars.
autolabel_soil_prop(rect1)
autolabel_soil_prop(rect2)
# Title of the plot.
ax.set_title("Hydraulic properties of the soil at different depths", fontsize=14)
# Properties of x/y labels and ticks.
ax.set_xticks(x)
x_labels = [str(d) + " cm" for d in olm_depths]
ax.set_xticklabels(x_labels, rotation=45, fontsize=10)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
# Shrink current axis's height by 10% on the bottom.
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])
# Put a legend below current axis.
ax.legend(
loc="upper center", bbox_to_anchor=(0.5, -0.15), fancybox=True, shadow=True, ncol=2
)
plt.show()
Explanation: The result is displayed using barplots as follows:
End of explanation
# Import precipitation.
pr = (
ee.ImageCollection("UCSB-CHG/CHIRPS/DAILY")
.select("precipitation")
.filterDate(i_date, f_date)
)
# Import potential evaporation PET and its quality indicator ET_QC.
pet = (
ee.ImageCollection("MODIS/006/MOD16A2")
.select(["PET", "ET_QC"])
.filterDate(i_date, f_date)
)
Explanation: Getting meteorological datasets
Datasets exploration
The meteorological data used in our implementation of the TM procedure relies on the following datasets:
- Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) gives precipitations on a daily basis (resolution of 5 km),
- MODIS Terra Net gives evapotranspiration on a 8-days basis (resolution of 500 m).
Both datasets are imported as follows, specifying the bands of interest using .select() and the period of interest using .filterDate().
End of explanation
# Evaluate local precipitation conditions.
local_pr = pr.getRegion(poi, scale).getInfo()
pprint.pprint(local_pr[:5])
Explanation: Now we can have a closer look around our location of interest. To evaluate the properties of an ee.ImageCollection, the ee.ImageCollection.getRegion method is used and combined with ee.ImageCollection.getInfo method for a client-side visualization.
End of explanation
def ee_array_to_df(arr, list_of_bands):
Transforms client-side ee.Image.getRegion array to pandas.DataFrame.
df = pd.DataFrame(arr)
# Rearrange the header.
headers = df.iloc[0]
df = pd.DataFrame(df.values[1:], columns=headers)
# Convert the data to numeric values.
for band in list_of_bands:
df[band] = pd.to_numeric(df[band], errors="coerce")
# Convert the time field into a datetime.
df["datetime"] = pd.to_datetime(df["time"], unit="ms")
# Keep the columns of interest.
df = df[["time", "datetime", *list_of_bands]]
# The datetime column is defined as index.
df = df.set_index("datetime")
return df
Explanation: We now establish a procedure to get meteorological data around a given location in the form of a pandas.DataFrame:
End of explanation
pr_df = ee_array_to_df(local_pr, ["precipitation"])
pr_df.head(10)
Explanation: We apply the function and see the head of the resulting pandas.DataFrame:
End of explanation
# Evaluate local potential evapotranspiration.
local_pet = pet.getRegion(poi, scale).getInfo()
# Transform the result into a pandas dataframe.
pet_df = ee_array_to_df(local_pet, ["PET", "ET_QC"])
pet_df.head(5)
Explanation: We do the same for potential evapotranspiration:
End of explanation
def sum_resampler(coll, freq, unit, scale_factor, band_name):
This function aims to resample the time scale of an ee.ImageCollection.
The function returns an ee.ImageCollection with the averaged sum of the
band on the selected frequency.
coll: (ee.ImageCollection) only one band can be handled
freq: (int) corresponds to the resampling frequency
unit: (str) corresponds to the resampling time unit.
must be 'day', 'month' or 'year'
scale_factor: (float) scaling factor used to convert the value to the desired unit
band_name (str) name of the output band
# Define initial and final dates of the collection.
firstdate = ee.Date(
coll.sort("system:time_start", True).first().get("system:time_start")
)
lastdate = ee.Date(
coll.sort("system:time_start", False).first().get("system:time_start")
)
# Calculate the time difference between both dates.
# https://developers.google.com/earth-engine/apidocs/ee-date-difference
diff_dates = lastdate.difference(firstdate, unit)
# Define a new time index (for output).
new_index = ee.List.sequence(0, ee.Number(diff_dates), freq)
# Define the function that will be applied to our new time index.
def apply_resampling(date_index):
# Define the starting date to take into account.
startdate = firstdate.advance(ee.Number(date_index), unit)
# Define the ending date to take into account according
# to the desired frequency.
enddate = firstdate.advance(ee.Number(date_index).add(freq), unit)
# Calculate the number of days between starting and ending days.
diff_days = enddate.difference(startdate, "day")
# Calculate the composite image.
image = (
coll.filterDate(startdate, enddate)
.mean()
.multiply(diff_days)
.multiply(scale_factor)
.rename(band_name)
)
# Return the final image with the appropriate time index.
return image.set("system:time_start", startdate.millis())
# Map the function to the new time index.
res = new_index.map(apply_resampling)
# Transform the result into an ee.ImageCollection.
res = ee.ImageCollection(res)
return res
Explanation: Looking at both pandas.DataFrame objects shows the following points:
- the time resolution of the two datasets is not the same,
- at some dates, potential evapotranspiration cannot be calculated; these correspond to the lines where the quality indicator ET_QC is higher than 1.
Both issues must be handled before implementing the iterative process: we want to work on a similar timeline with potential evapotranspiration and precipitation, and we want to avoid missing values.
Resampling the time resolution of an ee.ImageCollection
To address these issues (homogeneous time index and missing values), we make a sum resampling of both datasets by month. When PET cannot be calculated, the monthly averaged value is considered. The key steps and functions used to resample are described below:
- A new date index is defined as a sequence using the ee.List.sequence method.
- A function representing the resampling operation is defined. This function consists of grouping images of the desired time interval and calculating the sum. The sum is calculated by taking the mean between available images and multiplying it by the duration of the interval.
- The user-supplied function is then mapped over the new time index using .map().
Finally, the resampling procedure reads as follows:
End of explanation
# Apply the resampling function to the precipitation dataset.
pr_m = sum_resampler(pr, 1, "month", 1, "pr")
# Evaluate the result at the location of interest.
pprint.pprint(pr_m.getRegion(poi, scale).getInfo()[:5])
Explanation: The precipitation dataset is now resampled by month as follows:
- the collection to resample is defined as pr,
- we want a collection on a monthly basis then freq = 1 and unit = "month",
- there is no correction factor to apply according to the dataset description then scale_factor = 1,
- "pr" is the name of the output band.
End of explanation
# Apply the resampling function to the PET dataset.
pet_m = sum_resampler(pet.select("PET"), 1, "month", 0.0125, "pet")
# Evaluate the result at the location of interest.
pprint.pprint(pet_m.getRegion(poi, scale).getInfo()[:5])
Explanation: For evapotranspiration, we have to be careful with the unit. The dataset gives us an 8-day sum and a scale factor of 10 is applied. Then, to get a homogeneous unit, we need to rescale by dividing by 8 and 10: $\frac{1}{10 \times 8} = 0.0125$.
End of explanation
# Combine precipitation and evapotranspiration.
meteo = pr_m.combine(pet_m)
# Import meteorological data as an array at the location of interest.
meteo_arr = meteo.getRegion(poi, scale).getInfo()
# Print the result.
pprint.pprint(meteo_arr[:5])
Explanation: We now combine both ee.ImageCollection objects (pet_m and pr_m) using the ee.ImageCollection.combine method. Note that corresponding images in both ee.ImageCollection objects need to have the same time index before combining.
End of explanation
# Transform the array into a pandas dataframe and sort the index.
meteo_df = ee_array_to_df(meteo_arr, ["pr", "pet"]).sort_index()
# Data visualization
fig, ax = plt.subplots(figsize=(15, 6))
# Barplot associated with precipitations.
meteo_df["pr"].plot(kind="bar", ax=ax, label="precipitation")
# Barplot associated with potential evapotranspiration.
meteo_df["pet"].plot(
kind="bar", ax=ax, label="potential evapotranspiration", color="orange", alpha=0.5
)
# Add a legend.
ax.legend()
# Add some x/y-labels properties.
ax.set_ylabel("Intensity [mm]")
ax.set_xlabel(None)
# Define the date format and shape of x-labels.
x_labels = meteo_df.index.strftime("%m-%Y")
ax.set_xticklabels(x_labels, rotation=90, fontsize=10)
plt.show()
Explanation: We evaluate the result on our location of interest:
End of explanation
zr = ee.Image(0.5)
p = ee.Image(0.5)
Explanation: Implementation of the TM procedure
Description
Some additional definitions are needed to formalize the Thornthwaite-Mather procedure. The following definitions are given in accordance with Allen et al. (1998) (the document can be downloaded here):
$$TAW = 1000 \times (\theta_{FC} - \theta_{WP}) \times Z_{r}$$ where:
- $TAW$: the total available soil water in the root zone [$mm$],
- $\theta_{FC}$: the water content at the field capacity [$m^{3} m^{-3}$],
- $\theta_{WP}$: the water content at wilting point [$m^{3} m^{-3}$],
- $Z_{r}$: the rooting depth [$m$],
Typical values of $\theta_{FC}$ and $\theta_{WP}$ for different soil types are given in Table 19 of Allen et al. (1998).
The readily available water ($RAW$) is given by $RAW = p \times TAW$, where $p$ is the average fraction of $TAW$ that can be depleted from the root zone before moisture stress (ranging between 0 to 1). This quantity is also noted $ST_{FC}$ which is the available water stored at field capacity in the root zone.
Ranges of maximum effective rooting depth $Z_{r}$, and soil water depletion fraction for no stress $p$, for common crops are given in the Table 22 of Allen et al. (1998). In addition, a global effective plant rooting depth dataset is provided by Yang et al. (2016) with a resolution of 0.5° by 0.5° (see the paper here and the dataset here).
According to this global dataset, the effective rooting depth around our region of interest (France) can reasonably be assumed to be $Z_{r} = 0.5$ m. Additionally, the parameter $p$ is assumed constant and equal to $p = 0.5$, which is in line with common values described in Table 22 of Allen et al. (1998).
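As a quick numerical check (a minimal illustration with assumed loam-like values, not taken from the datasets above): with $\theta_{FC} = 0.30$, $\theta_{WP} = 0.15$ and $Z_{r} = 0.5$ m, $TAW = 1000 \times (0.30 - 0.15) \times 0.5 = 75$ mm, and $RAW = 0.5 \times 75 = 37.5$ mm.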
End of explanation
def olm_prop_mean(olm_image, band_output_name):
This function calculates an averaged value of
soil properties between reference depths.
mean_image = olm_image.expression(
"(b0 + b10 + b30 + b60 + b100 + b200) / 6",
{
"b0": olm_image.select("b0"),
"b10": olm_image.select("b10"),
"b30": olm_image.select("b30"),
"b60": olm_image.select("b60"),
"b100": olm_image.select("b100"),
"b200": olm_image.select("b200"),
},
).rename(band_output_name)
return mean_image
# Apply the function to field capacity and wilting point.
fcm = olm_prop_mean(field_capacity, "fc_mean")
wpm = olm_prop_mean(wilting_point, "wp_mean")
# Calculate the theoretical available water.
taw = (
(fcm.select("fc_mean").subtract(wpm.select("wp_mean"))).multiply(1000).multiply(zr)
)
# Calculate the stored water at the field capacity.
stfc = taw.multiply(p)
Explanation: In the following, we also consider an averaged value between reference depths of the water content at wilting point and field capacity:
End of explanation
# Define the initial time (time0) according to the start of the collection.
time0 = meteo.first().get("system:time_start")
Explanation: The Thornthwaite-Mather procedure used to estimate groundwater recharge is explicitly described by Steenhuis and Van der Molen (1985). This procedure uses monthly sums of potential evapotranspiration, cumulative precipitation, and the moisture status of the soil, which is calculated iteratively. The moisture status of the soil depends on the accumulated potential water loss ($APWL$). This parameter is calculated depending on whether the potential evapotranspiration is greater than or less than the cumulative precipitation. The procedure reads as follows:
Case 1: potential evapotranspiration is higher than precipitation.
In that case, $PET>P$ and $APWL_{m}$ is incremented as follows:
$APWL_{m} = APWL_{m - 1} + (PET_{m} - P_{m})$ where:
- $APWL_{m}$ (respectively $APWL_{m - 1}$) represents the accumulated potential water loss for the month $m$ (respectively at the previous month $m - 1$)
- $PET_{m}$ the cumulative potential evapotranspiration at month $m$,
- $P_{m}$ the cumulative precipitation at month $m$,
and the relationship between $APWL$ and the amount of water stored in the root zone for the month $m$ is expressed as:
$ST_{m} = ST_{FC} \times [\textrm{exp}(-APWL_{m}/ST_{FC})]$ where $ST_{m}$ is the available water stored in the root zone for the month $m$.
Case 2: potential evapotranspiration is lower than precipitation.
In that case, $PET<P$ and $ST_{m}$ is incremented as follows:
$ST_{m} = ST_{m-1} + (P_{m} - PET_{m})$.
Case 2.1: the storage $ST_{m}$ is higher than the water stored at the field capacity.
If $ST_{m} > ST_{FC}$ the recharge is calculated as:
$R_{m} = ST_{m} - ST_{FC} + P_{m} - PET_{m}$
In addition, the water stored at the end of the month $m$ becomes equal to $ST_{FC}$ and $APWL_{m}$ is set equal to zero.
Case 2.2: the storage $ST_{m}$ is less than or equal to the water stored at the field capacity.
If $ST_{m} <= ST_{FC}$, $APWL_{m}$ is updated as follows:
$APWL_{m} = -ST_{FC} \times \textrm{ln}(ST_{m}/ST_{FC})$, and no percolation occurs.
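Before looking at the Earth Engine implementation, here is a plain-Python sketch of one monthly update for a single pixel (illustrative only; it follows the cases above, with the Case 2.1 recharge taken as the storage excess over $ST_{FC}$, as in the implementation below):
import math

def tm_step(prev_st, prev_apwl, p_m, pet_m, st_fc):
    # Case 1: evaporative demand exceeds precipitation.
    if pet_m > p_m:
        apwl = prev_apwl + (pet_m - p_m)
        st = st_fc * math.exp(-apwl / st_fc)
        rech = 0.0
    # Case 2: precipitation exceeds (or equals) evaporative demand.
    else:
        st = prev_st + (p_m - pet_m)
        if st >= st_fc:          # Case 2.1: storage exceeds field capacity.
            rech = st - st_fc
            st, apwl = st_fc, 0.0
        else:                    # Case 2.2: soil refills, no percolation.
            apwl = -st_fc * math.log(st / st_fc)
            rech = 0.0
    return st, apwl, rech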
Initialization
The initial time of the calculation is defined according to the first date of the meteorological collection:
End of explanation
# Initialize all bands describing the hydric state of the soil.
# Do not forget to cast the type of the data with a .float().
# Initial recharge.
initial_rech = ee.Image(0).set("system:time_start", time0).select([0], ["rech"]).float()
# Initialization of APWL.
initial_apwl = ee.Image(0).set("system:time_start", time0).select([0], ["apwl"]).float()
# Initialization of ST.
initial_st = stfc.set("system:time_start", time0).select([0], ["st"]).float()
# Initialization of precipitation.
initial_pr = ee.Image(0).set("system:time_start", time0).select([0], ["pr"]).float()
# Initialization of potential evapotranspiration.
initial_pet = ee.Image(0).set("system:time_start", time0).select([0], ["pet"]).float()
Explanation: Then, we initialize the calculation with an ee.Image where all bands associated to the hydric state of the soil are set equal to ee.Image(0), except for the initial storage which is considered to be equal to the water content at field capacity, meaning that $ST_{0} = ST_{FC}$.
End of explanation
initial_image = initial_rech.addBands(
ee.Image([initial_apwl, initial_st, initial_pr, initial_pet])
)
Explanation: We combine all these bands into one ee.Image adding new bands to the first using the ee.Image.addBands method:
End of explanation
image_list = ee.List([initial_image])
Explanation: We also initialize a list in which new images will be added after each iteration. We create this server-side list using the ee.List method.
End of explanation
def recharge_calculator(image, image_list):
Contains operations made at each iteration.
# Determine the date of the current ee.Image of the collection.
localdate = image.date().millis()
# Import previous image stored in the list.
prev_im = ee.Image(ee.List(image_list).get(-1))
# Import previous APWL and ST.
prev_apwl = prev_im.select("apwl")
prev_st = prev_im.select("st")
# Import current precipitation and evapotranspiration.
pr_im = image.select("pr")
pet_im = image.select("pet")
# Initialize the new bands associated with recharge, apwl and st.
# DO NOT FORGET TO CAST THE TYPE WITH .float().
new_rech = (
ee.Image(0)
.set("system:time_start", localdate)
.select([0], ["rech"])
.float()
)
new_apwl = (
ee.Image(0)
.set("system:time_start", localdate)
.select([0], ["apwl"])
.float()
)
new_st = (
prev_st.set("system:time_start", localdate).select([0], ["st"]).float()
)
# Calculate bands depending on the situation using binary layers with
# logical operations.
# CASE 1.
# Define zone1: the area where PET > P.
zone1 = pet_im.gt(pr_im)
# Calculation of APWL in zone 1.
zone1_apwl = prev_apwl.add(pet_im.subtract(pr_im)).rename("apwl")
# Implementation of zone 1 values for APWL.
new_apwl = new_apwl.where(zone1, zone1_apwl)
# Calculate ST in zone 1.
zone1_st = prev_st.multiply(
ee.Image.exp(zone1_apwl.divide(stfc).multiply(-1))
).rename("st")
# Implement ST in zone 1.
new_st = new_st.where(zone1, zone1_st)
# CASE 2.
# Define zone2: the area where PET <= P.
zone2 = pet_im.lte(pr_im)
# Calculate ST in zone 2.
zone2_st = prev_st.add(pr_im).subtract(pet_im).rename("st")
# Implement ST in zone 2.
new_st = new_st.where(zone2, zone2_st)
# CASE 2.1.
# Define zone21: the area where PET <= P and ST >= STfc.
zone21 = zone2.And(zone2_st.gte(stfc))
# Calculate recharge in zone 21.
zone21_re = zone2_st.subtract(stfc).rename("rech")
# Implement recharge in zone 21.
new_rech = new_rech.where(zone21, zone21_re)
# Implement ST in zone 21.
new_st = new_st.where(zone21, stfc)
# CASE 2.2.
# Define zone 22: the area where PET <= P and ST < STfc.
zone22 = zone2.And(zone2_st.lt(stfc))
# Calculate APWL in zone 22.
zone22_apwl = (
stfc.multiply(-1).multiply(ee.Image.log(zone2_st.divide(stfc))).rename("apwl")
)
# Implement APWL in zone 22.
new_apwl = new_apwl.where(zone22, zone22_apwl)
# Create a mask around area where recharge can effectively be calculated.
# Where we have PET, P, FCm, WPm (except urban areas, etc.).
mask = pet_im.gte(0).And(pr_im.gte(0)).And(fcm.gte(0)).And(wpm.gte(0))
# Apply the mask.
new_rech = new_rech.updateMask(mask)
# Add all Bands to our ee.Image.
new_image = new_rech.addBands(ee.Image([new_apwl, new_st, pr_im, pet_im]))
# Add the new ee.Image to the ee.List.
return ee.List(image_list).add(new_image)
Explanation: Iteration over an ee.ImageCollection
The procedure is implemented by means of the ee.ImageCollection.iterate method, which applies a user-supplied function to each element of a collection. For each time step, groundwater recharge is calculated using the recharge_calculator considering the previous hydric state of the soil and current meteorological conditions.
Of course, considering the TM description, several cases must be distinguished to calculate groundwater recharge. The distinction is made by defining binary layers with logical operations, which allows specific calculations to be applied only in the areas where a given condition is true, using the ee.Image.where method.
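As a generic illustration of that pattern (a sketch with placeholder images, not part of the procedure itself):
zone = img_a.gt(img_b)               # binary ee.Image: 1 where the condition holds, 0 elsewhere
out = base_img.where(zone, alt_img)  # overwrite base_img only inside that zone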
The function we apply to each element of the meteorological dataset to calculate groundwater recharge is defined as follows.
End of explanation
# Iterate the user-supplied function to the meteo collection.
rech_list = meteo.iterate(recharge_calculator, image_list)
# Remove the initial image from our list.
rech_list = ee.List(rech_list).remove(initial_image)
# Transform the list into an ee.ImageCollection.
rech_coll = ee.ImageCollection(rech_list)
Explanation: The TM procedure can now be applied to the meteorological ee.ImageCollection:
End of explanation
arr = rech_coll.getRegion(poi, scale).getInfo()
rdf = ee_array_to_df(arr, ["pr", "pet", "apwl", "st", "rech"]).sort_index()
rdf.head(12)
Explanation: Let's have a look at the result around the location of interest:
End of explanation
# Data visualization in the form of barplots.
fig, ax = plt.subplots(figsize=(15, 6))
# Barplot associated with precipitation.
rdf["pr"].plot(kind="bar", ax=ax, label="precipitation", alpha=0.5)
# Barplot associated with potential evapotranspiration.
rdf["pet"].plot(
kind="bar", ax=ax, label="potential evapotranspiration", color="orange", alpha=0.2
)
# Barplot associated with groundwater recharge
rdf["rech"].plot(kind="bar", ax=ax, label="recharge", color="green", alpha=1)
# Add a legend.
ax.legend()
# Define x/y-labels properties.
ax.set_ylabel("Intensity [mm]")
ax.set_xlabel(None)
# Define the date format and shape of x-labels.
x_labels = rdf.index.strftime("%m-%Y")
ax.set_xticklabels(x_labels, rotation=90, fontsize=10)
plt.show()
Explanation: The result can be displayed in the form of a barplot as follows:
End of explanation
# Resample the pandas dataframe on a yearly basis making the sum by year.
rdfy = rdf.resample("Y").sum()
# Calculate the mean value.
mean_recharge = rdfy["rech"].mean()
# Print the result.
print(
"The mean annual recharge at our point of interest is", int(mean_recharge), "mm/an"
)
Explanation: The result shows the distribution of precipitation, potential evapotranspiration, and groundwater recharge over the year. It shows that in our area of interest, groundwater recharge generally occurs from October to March. Even though a significant amount of precipitation occurs from April to September, evapotranspiration largely dominates because of high temperatures and sun exposure during these months, so no percolation into aquifers occurs during this period.
Now the annual average recharge over the period of interest can be calculated. To do that, we resample the DataFrame we've just created:
End of explanation
def get_local_recharge(i_date, f_date, lon, lat, scale):
Returns a pandas df describing the cumulative groundwater
recharge by month
# Define the location of interest with a point.
poi = ee.Geometry.Point(lon, lat)
# Evaluate the recharge around the location of interest.
rarr = rech_coll.filterDate(i_date, f_date).getRegion(poi, scale).getInfo()
# Transform the result into a pandas dataframe.
rdf = ee_array_to_df(rarr, ["pr", "pet", "apwl", "st", "rech"]).sort_index()
return rdf
Explanation: Groundwater recharge comparison between multiple places
We now may want to get local information about groundwater recharge and/or map this variable on an area of interest.
Let's define a function to get the local information based on the ee.ImageCollection we've just built:
End of explanation
# Define the second location of interest by longitude/latitude.
lon2 = 4.137152
lat2 = 43.626945
# Calculate the local recharge condition at this location.
rdf2 = get_local_recharge(i_date, f_date, lon2, lat2, scale)
# Resample the resulting pandas dataframe on a yearly basis (sum by year).
rdf2y = rdf2.resample("Y").sum()
rdf2y.head()
# Data Visualization
fig, ax = plt.subplots(figsize=(15, 6))
ax.axes.get_yaxis().set_visible(False)
# Define the x-label locations.
x = np.arange(len(rdfy))
# Define the bar width.
width = 0.25
# Bar plot associated to groundwater recharge at the 1st location of interest.
rect1 = ax.bar(
x - width / 2, rdfy.rech, width, label="Lyon (France)", color="blue", alpha=0.5
)
# Bar plot associated to groundwater recharge at the 2nd location of interest.
rect2 = ax.bar(
x + width / 2,
rdf2y.rech,
width,
label="Montpellier (France)",
color="red",
alpha=0.5,
)
# Define a function to attach a label to each bar.
def autolabel_recharge(rects):
Attach a text label above each bar in *rects*, displaying its height.
for rect in rects:
height = rect.get_height()
ax.annotate(
"{}".format(int(height)) + " mm",
xy=(rect.get_x() + rect.get_width() / 2, height),
xytext=(0, 3), # 3 points vertical offset
textcoords="offset points",
ha="center",
va="bottom",
fontsize=8,
)
autolabel_recharge(rect1)
autolabel_recharge(rect2)
# Calculate the averaged annual recharge at both locations of interest.
place1mean = int(rdfy["rech"].mean())
place2mean = int(rdf2y["rech"].mean())
# Add a horizontal line associated with averaged annual values (location 1).
ax.hlines(
place1mean,
xmin=min(x) - width,
xmax=max(x) + width,
color="blue",
lw=0.5,
label="average " + str(place1mean) + " mm/y",
alpha=0.5,
)
# Add a horizontal line associated with averaged annual values (location 2).
ax.hlines(
place2mean,
xmin=min(x) - width,
xmax=max(x) + width,
color="red",
lw=0.5,
label="average " + str(place2mean) + " mm/y",
alpha=0.5,
)
# Add a title.
ax.set_title("Groundwater recharge comparison between two places", fontsize=12)
# Define some x/y-axis properties.
ax.set_xticks(x)
x_labels = rdfy.index.year.tolist()
ax.set_xticklabels(x_labels, rotation=45, fontsize=10)
ax.spines["left"].set_visible(False)
ax.spines["right"].set_visible(False)
ax.spines["top"].set_visible(False)
# Shrink current axis's height by 10% on the bottom.
box = ax.get_position()
ax.set_position([box.x0, box.y0 + box.height * 0.1, box.width, box.height * 0.9])
# Add a legend below current axis.
ax.legend(
loc="upper center", bbox_to_anchor=(0.5, -0.15), fancybox=True, shadow=True, ncol=2
)
plt.show()
Explanation: We now use this function on a second point of interest located near the city of Montpellier (France). This city is located in the south of France, and precipitation and groundwater recharge are expected to be much lower than in the previous case.
End of explanation
# Calculate the averaged annual recharge.
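# (The collection is monthly, so multiplying the mean monthly value by 12 gives an annual estimate.)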
annual_rech = rech_coll.select("rech").mean().multiply(12)
# Calculate the average annual precipitation.
annual_pr = rech_coll.select("pr").mean().multiply(12)
# Get a feature collection of administrative boundaries.
countries = ee.FeatureCollection("FAO/GAUL/2015/level0").select("ADM0_NAME")
# Filter the feature collection to subset France.
france = countries.filter(ee.Filter.eq("ADM0_NAME", "France"))
# Clip the composite ee.Images around the region of interest.
rech_france = annual_rech.clip(france)
pr_france = annual_pr.clip(france)
Explanation: The result shows that the annual recharge in Lyon is almost twice as high as in the area of Montpellier. The result also shows a great variability of the annual recharge ranging from 98 mm/y to 258 mm/y in Lyon and from 16 mm/y to 147 mm/y in Montpellier.
Groundwater recharge map of France
To get a map of groundwater recharge around our region of interest, let's create a mean composite ee.Image based on our resulting ee.ImageCollection.
End of explanation
# Create a folium map.
my_map = folium.Map(location=[lat, lon], zoom_start=6, zoom_control=False)
# Set visualization parameters for recharge.
rech_vis_params = {
"bands": "rech",
"min": 0,
"max": 500,
"opacity": 1,
"palette": ["red", "orange", "yellow", "green", "blue", "purple"],
}
# Set visualization parameters for precipitation.
pr_vis_params = {
"bands": "pr",
"min": 500,
"max": 1500,
"opacity": 1,
"palette": ["white", "blue"],
}
# Define a recharge colormap.
rech_colormap = cm.LinearColormap(
colors=rech_vis_params["palette"],
vmin=rech_vis_params["min"],
vmax=rech_vis_params["max"],
)
# Define a precipitation colormap.
pr_colormap = cm.LinearColormap(
colors=pr_vis_params["palette"],
vmin=pr_vis_params["min"],
vmax=pr_vis_params["max"],
)
# Caption of the recharge colormap.
rech_colormap.caption = "Average annual recharge rate (mm/year)"
# Caption of the precipitation colormap.
pr_colormap.caption = "Average annual precipitation rate (mm/year)"
# Add the precipitation composite to the map object.
my_map.add_ee_layer(pr_france, pr_vis_params, "Precipitation")
# Add the recharge composite to the map object.
my_map.add_ee_layer(rech_france, rech_vis_params, "Recharge")
# Add a marker at both locations of interest.
folium.Marker([lat, lon], popup="Area of Lyon").add_to(my_map)
folium.Marker([lat2, lon2], popup="Area of Montpellier").add_to(my_map)
# Add the colormaps to the map.
my_map.add_child(rech_colormap)
my_map.add_child(pr_colormap)
# Add a layer control panel to the map.
my_map.add_child(folium.LayerControl())
# Display the map.
display(my_map)
Explanation: And finally the map can be drawn.
End of explanation |
2,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Guided Project 1
Learning Objectives
Step1: Step 1. Environment setup
tfx and kfp tools setup
Step2: You may need to restart the kernel at this point.
skaffold tool setup
Step3: Modify the PATH environment variable so that skaffold is available
Step4: Environment variable setup
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
Step5: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard,
or you can get it from the URL of the Getting Started page where you launched this notebook.
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint.
ENDPOINT should contain only the hostname part of the URL.
For example, if the URL of the KFP dashboard is
<a href="https
Step6: Set the image name as tfx-pipeline under the current GCP project
Step7: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
Step8: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
Step9: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the same Chicago Taxi dataset and ML model as
the Airflow Tutorial.
Here is a brief introduction to each of the Python files
Step10: Let's quickly go over the structure of a test file to test Tensorflow code
Step11: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features
Step12: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
Step13: Let's create a TFX pipeline using the tfx pipeline create command.
Note
Step14: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for argo will be generated, too.
The name of this file is ${PIPELINE_NAME}.tar.gz.
For example, it will be guided_project_1.tar.gz here, since the pipeline name set above is guided_project_1.
It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in .gitignore which is generated automatically.
Now start an execution run with the newly created pipeline using the tfx run create command.
Note
Step15: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
under Experiments in the KFP Dashboard.
Clicking into the experiment will allow you to monitor progress and visualize
the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from
the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard,
you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your
execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.
If you are interested in data validation, please see
Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.
(Tip
Step16: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
See link below to access the dashboard
Step17: Step 6. Add components for training
In this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. (Tip
Step18: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse.
BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note
Step19: Step 8. Try Dataflow with KFP
Several TFX components use Apache Beam to implement data-parallel pipelines, which means that you can distribute data-processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use Dataflow as the data-processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
Double-click to open pipeline.py. Change the value of enable_cache to False.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is guided_project_1 if you didn't change it.
Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
Step20: You can find your Dataflow jobs in Dataflow in Cloud Console.
Please reset enable_cache to True to benefit from caching execution results.
Double-click to open pipeline.py. Reset the value of enable_cache to True.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
Change directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6. | Python Code:
import os
Explanation: Guided Project 1
Learning Objectives:
Learn how to generate a standard TFX template pipeline using tfx template
Learn how to modify and run a templated TFX pipeline
Note: This guided project is adapted from Create a TFX pipeline using templates).
End of explanation
%%bash
TFX_PKG="tfx==0.22.0"
KFP_PKG="kfp==0.5.1"
pip freeze | grep $TFX_PKG || pip install -Uq $TFX_PKG
pip freeze | grep $KFP_PKG || pip install -Uq $KFP_PKG
Explanation: Step 1. Environment setup
tfx and kfp tools setup
End of explanation
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
%%bash
LOCAL_BIN="/home/jupyter/.local/bin"
SKAFFOLD_URI="https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64"
test -d $LOCAL_BIN || mkdir -p $LOCAL_BIN
which skaffold || (
curl -Lo skaffold $SKAFFOLD_URI &&
chmod +x skaffold &&
mv skaffold $LOCAL_BIN
)
Explanation: You may need to restart the kernel at this point.
skaffold tool setup
End of explanation
!which skaffold
Explanation: Modify the PATH environment variable so that skaffold is available:
At this point, you shoud see the skaffold tool with the command which:
End of explanation
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
Explanation: Environment variable setup
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using Kubeflow Pipelines.
Let's set some environment variables to use Kubeflow Pipelines.
First, get your GCP project ID.
End of explanation
ENDPOINT = ""  # Enter your ENDPOINT here (hostname only).
Explanation: We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu.
The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard,
or you can get it from the URL of the Getting Started page where you launched this notebook.
Let's create an ENDPOINT environment variable and set it to the KFP cluster endpoint.
ENDPOINT should contain only the hostname part of the URL.
For example, if the URL of the KFP dashboard is
<a href="https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start">https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com/#/start</a>,
ENDPOINT value becomes 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com.
End of explanation
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
CUSTOM_TFX_IMAGE
Explanation: Set the image name as tfx-pipeline under the current GCP project:
End of explanation
PIPELINE_NAME = "guided_project_1"
PROJECT_DIR = os.path.join(os.path.expanduser("."), PIPELINE_NAME)
PROJECT_DIR
Explanation: Step 2. Copy the predefined template to your project directory.
In this step, we will create a working pipeline project directory and
files by copying additional files from a predefined template.
You may give your pipeline a different name by changing the PIPELINE_NAME below.
This will also become the name of the project directory where your files will be put.
End of explanation
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
%cd {PROJECT_DIR}
Explanation: TFX includes the taxi template with the TFX python package.
If you are planning to solve a point-wise prediction problem,
including classification and regression, this template could be used as a starting point.
The tfx template copy CLI command copies predefined template files into your project directory.
End of explanation
!python -m models.features_test
!python -m models.keras.model_test
Explanation: Step 3. Browse your copied source files
The TFX template provides basic scaffold files to build a pipeline, including Python source code,
sample data, and Jupyter Notebooks to analyse the output of the pipeline.
The taxi template uses the same Chicago Taxi dataset and ML model as
the Airflow Tutorial.
Here is a brief introduction to each of the Python files:
pipeline - This directory contains the definition of the pipeline
* configs.py — defines common constants for pipeline runners
* pipeline.py — defines TFX components and a pipeline
models - This directory contains ML model definitions.
* features.py, features_test.py — defines features for the model
* preprocessing.py, preprocessing_test.py — defines preprocessing jobs using tf::Transform
models/estimator - This directory contains an Estimator based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using TF estimator
models/keras - This directory contains a Keras based model.
* constants.py — defines constants of the model
* model.py, model_test.py — defines DNN model using Keras
beam_dag_runner.py, kubeflow_dag_runner.py — define runners for each orchestration engine
Running the tests:
You might notice that there are some files with _test.py in their name.
These are unit tests of the pipeline and it is recommended to add more unit
tests as you implement your own pipelines.
You can run unit tests by supplying the module name of test files with -m flag.
You can usually get a module name by deleting .py extension and replacing / with ..
For example:
End of explanation
!tail -26 models/features_test.py
Explanation: Let's quickly go over the structure of a test file to test Tensorflow code:
End of explanation
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'
GCS_BUCKET_NAME
!gsutil mb gs://{GCS_BUCKET_NAME}
Explanation: First of all, notice that you start by importing the code you want to test by importing the corresponding module. Here we want to test the code in features.py so we import the module features:
python
from models import features
To implement test cases start by defining your own test class inheriting from tf.test.TestCase:
python
class FeaturesTest(tf.test.TestCase):
When you execute the test file with
bash
python -m models.features_test
the main method
python
tf.test.main()
will parse your test class (here: FeaturesTest) and execute every method whose name starts with test. Here we have two such methods, for instance:
python
def testNumberOfBucketFeatureBucketCount(self):
def testTransformedNames(self):
So when you want to add a test case, just add a method to that test class whose name starts with test. The body of these test methods is where the actual testing takes place. In this case, for instance, testTransformedNames tests the function features.transformed_name and makes sure it outputs what is expected.
Since your test class inherits from tf.test.TestCase it has a number of helper methods you can use to help you create tests, as for instance
python
self.assertEqual(expected_outputs, obtained_outputs)
which will fail the test case if obtained_outputs does not match expected_outputs.
Typical examples of test cases you may want to implement for machine learning code include tests ensuring that your model builds correctly, that your preprocessing function preprocesses raw data as expected, or that your model can train successfully on a few mock examples. When writing tests, make sure that their execution is fast (we just want to check that the code works, not actually train a performant model). For that you may have to create synthetic data in your test files. For more information, read the tf.test.TestCase documentation and the Tensorflow testing best practices.
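As an illustration, a minimal additional test case could look like this (a sketch; the class name, test name and feature string are made up for the example):
python
import tensorflow as tf
from models import features

class ExtraFeaturesTest(tf.test.TestCase):

  def testTransformedNameReturnsNewName(self):
    # The transformed name should differ from the raw feature name.
    self.assertNotEqual(features.transformed_name('trip_miles'), 'trip_miles')

if __name__ == '__main__':
  tf.test.main()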
Step 4. Run your first TFX pipeline
Components in the TFX pipeline will generate outputs for each run as
ML Metadata Artifacts, and they need to be stored somewhere.
You can use any storage which the KFP cluster can access, and for this example we
will use Google Cloud Storage (GCS).
Let us create this bucket. Its name will be <YOUR_PROJECT>-kubeflowpipelines-default.
End of explanation
!gsutil cp data/data.csv gs://{GCS_BUCKET_NAME}/tfx-template/data/data.csv
Explanation: Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
End of explanation
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
Explanation: Let's create a TFX pipeline using the tfx pipeline create command.
Note: When creating a pipeline for KFP, we need a container image which will
be used to run our pipeline. And skaffold will build the image for us. Because skaffold
pulls base images from the docker hub, it will take 5~10 minutes when we build
the image for the first time, but it will take much less time from the second build.
End of explanation
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: While creating a pipeline, Dockerfile and build.yaml will be generated to build a Docker image.
Don't forget to add these files to the source control system (for example, git) along with other source files.
A pipeline definition file for argo will be generated, too.
The name of this file is ${PIPELINE_NAME}.tar.gz.
For example, it will be guided_project_1.tar.gz here, since the pipeline name set above is guided_project_1.
It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in .gitignore which is generated automatically.
Now start an execution run with the newly created pipeline using the tfx run create command.
Note: You may see the following error Error importing tfx_bsl_extension.coders. Please ignore it.
Debugging tip: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems.
Please make sure your KFP cluster has permissions to access Google Cloud APIs.
This can be configured when you create a KFP cluster in GCP,
or see Troubleshooting document in GCP.
End of explanation
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed
under Experiments in the KFP Dashboard.
Clicking into the experiment will allow you to monitor progress and visualize
the artifacts created during the execution run.
However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from
the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard,
you will be able to find the pipeline, and access a wealth of information about the pipeline.
For example, you can find your runs under the Experiments menu, and when you open your
execution run under Experiments you can find all your artifacts from the pipeline under Artifacts menu.
Step 5. Add components for data validation.
In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator.
If you are interested in data validation, please see
Get started with Tensorflow Data Validation.
Double-click to change directory to pipeline and double-click again to open pipeline.py.
Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline.
(Tip: search for comments containing TODO(step 5):). Make sure to save pipeline.py after you edit it.
You now need to update the existing pipeline with modified pipeline definition. Use the tfx pipeline update command to update your pipeline, followed by the tfx run create command to create a new execution run of your updated pipeline.
End of explanation
print('https://' + ENDPOINT)
Explanation: Check pipeline outputs
Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the Experiments tab on the left, and All runs in the Experiments page. You should be able to find the latest run under the name of your pipeline.
See link below to access the dashboard:
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
print("https://" + ENDPOINT)
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Step 6. Add components for training
In this step, you will add components for training and model validation including Transform, Trainer, ResolverNode, Evaluator, and Pusher.
Double-click to open pipeline.py. Find and uncomment the 5 lines which add Transform, Trainer, ResolverNode, Evaluator and Pusher to the pipeline. (Tip: search for TODO(step 6):)
As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using tfx pipeline update, and create an execution run using tfx run create.
Verify that the pipeline DAG has changed accordingly in the Kubeflow UI:
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!
Step 7. Try BigQueryExampleGen
BigQuery is a serverless, highly scalable, and cost-effective cloud data warehouse.
BigQuery can be used as a source for training examples in TFX. In this step, we will add BigQueryExampleGen to the pipeline.
Double-click to open pipeline.py. Comment out CsvExampleGen and uncomment the line which creates an instance of BigQueryExampleGen. You also need to uncomment the query argument of the create_pipeline function.
We need to specify which GCP project to use for BigQuery, and this is done by setting --project in beam_pipeline_args when creating a pipeline.
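For orientation, the uncommented Beam arguments end up roughly of this shape (a hedged sketch; the authoritative values are the commented-out lines already present in your copied configs.py):
python
BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--temp_location=' + 'gs://' + GCS_BUCKET_NAME + '/tmp',
]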
Double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS and BIG_QUERY_QUERY. You should replace the region value in this file with the correct values for your GCP project.
Note: You MUST set your GCP region in the configs.py file before proceeding
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is guided_project_1 if you didn't change it.
Double-click to open kubeflow_dag_runner.py. Uncomment two arguments, query and beam_pipeline_args, for the create_pipeline function.
Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: Step 8. Try Dataflow with KFP
Several TFX components use Apache Beam to implement data-parallel pipelines, which means that you can distribute data-processing workloads using Google Cloud Dataflow. In this step, we will set the Kubeflow orchestrator to use Dataflow as the data-processing back-end for Apache Beam.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, and DATAFLOW_BEAM_PIPELINE_ARGS.
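For orientation, Dataflow-style Beam arguments typically look like the following (a hedged sketch using standard Dataflow flags; rely on the commented-out block in configs.py for the exact values):
python
DATAFLOW_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--runner=DataflowRunner',
    '--region=' + GOOGLE_CLOUD_REGION,
    '--temp_location=' + 'gs://' + GCS_BUCKET_NAME + '/tmp',
]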
Double-click to open pipeline.py. Change the value of enable_cache to False.
Change directory one level up. Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is guided_project_1 if you didn't change it.
Double-click to open kubeflow_dag_runner.py. Uncomment beam_pipeline_args. (Also make sure to comment out current beam_pipeline_args that you added in Step 7.)
Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.
Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
End of explanation
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
Explanation: You can find your Dataflow jobs in Dataflow in Cloud Console.
Please reset enable_cache to True to benefit from caching execution results.
Double-click to open pipeline.py. Reset the value of enable_cache to True.
Step 9. Try Cloud AI Platform Training and Prediction with KFP
TFX interoperates with several managed GCP services, such as Cloud AI Platform for Training and Prediction. You can set your Trainer component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can push your model to Cloud AI Platform Prediction for serving. In this step, we will set our Trainer and Pusher component to use Cloud AI Platform services.
Before editing files, you might first have to enable AI Platform Training & Prediction API.
Double-click pipeline to change directory, and double-click to open configs.py. Uncomment the definition of GOOGLE_CLOUD_REGION, GCP_AI_PLATFORM_TRAINING_ARGS and GCP_AI_PLATFORM_SERVING_ARGS. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set masterConfig.imageUri in GCP_AI_PLATFORM_TRAINING_ARGS to the same value as CUSTOM_TFX_IMAGE above.
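For orientation, those two settings have roughly this structure (a hedged sketch; the key names may differ slightly between template versions, so rely on the commented-out block in configs.py):
python
GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GOOGLE_CLOUD_PROJECT,
    'region': GOOGLE_CLOUD_REGION,
    'masterConfig': {'imageUri': CUSTOM_TFX_IMAGE},
}
GCP_AI_PLATFORM_SERVING_ARGS = {
    'model_name': PIPELINE_NAME,
    'project_id': GOOGLE_CLOUD_PROJECT,
    'regions': [GOOGLE_CLOUD_REGION],
}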
Change directory one level up, and double-click to open kubeflow_dag_runner.py. Uncomment ai_platform_training_args and ai_platform_serving_args.
Update the pipeline and create an execution run as we did in step 5 and 6.
End of explanation |
2,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>This notebook divides a single mailing list corpus into threads.</b>
What it does
Step1: First, collect data from a public email archive.
Step2: Let's check the number of threads in this mailing list corpus
Step3: We can plot the number of people participating in each thread.
Step4: The duration of a thread is the amount of elapsed time between its first and last message.
Let's plot the number of threads for each duration (in days)
Step5: Export the content of each thread into a .csv file (named | Python Code:
%matplotlib inline
from bigbang.archive import Archive
from bigbang.archive import load as load_archive
from bigbang.thread import Thread
from bigbang.thread import Node
from bigbang.utils import remove_quoted
import matplotlib.pyplot as plt
import datetime
import csv
from collections import defaultdict
Explanation: <b>This notebook divides a single mailing list corpus into threads.</b>
What it does:
-identifies the threads with the most participants
-identifies the long lasting threads
-exports each thread's emails into separate .csv files, setting thresholds of participation and duration
Parameters to set options:
-set a single URL related to a mailing list, setting the 'url' variable
-it exports files in the file path specified in the variable ‘path’
-you can set a threshold of participation and of duration for the threads to export, by setting 'min_participation' and 'min_duration' variables
End of explanation
#insert one URL related to the mailing list of interest
url = "http://mm.icann.org/pipermail/wp4/"
try:
arch_path = '../archives/'+url[:-1].replace('://','_/')+'.csv'
arx = load_archive(arch_path)
except:
arch_path = '../archives/'+url[:-1].replace('//','/')+'.csv'
print url
arx = load_archive(arch_path)
Explanation: First, collect data from a public email archive.
End of explanation
print len(arx.get_threads())
Explanation: Let's check the number of threads in this mailing list corpus
End of explanation
n = [t.get_num_people() for t in arx.get_threads()]
plt.hist(n, bins = 20)
plt.xlabel('number of email-address in a thread')
plt.show()
Explanation: We can plot the number of people participating in each thread.
End of explanation
y = [t.get_duration().days for t in arx.get_threads()]
plt.hist(y, bins = (10))
plt.xlabel('duration of a thread(days)')
plt.show()
Explanation: The duration of a thread is the amount of elapsed time between its first and last message.
Let's plot the number of threads for each duration (in days)
End of explanation
#Insert the participation threshold (number of people)
#(for no threshold: 'min_participation = 0')
min_participation = 0
#Insert the duration threshold (number of days)
#(for no threshold: 'min_duration = 0')
min_duration = 0
#Insert the directory path where to save the files
path = 'c:/users/davide/bigbang/'
i = 0
for thread in arx.get_threads():
if thread.get_num_people() >= min_participation and thread.get_duration().days >= min_duration:
i += 1
f = open(path+'thread_'+str(i)+'.csv', "wb")
f_w = csv.writer(f)
f_w.writerow(thread.get_content())
f.close()
Explanation: Export the content of each thread into a .csv file (named: thread_1.csv, thread2.csv, ...).
You can set a minimum level of participation and duration, based on the previous analyses
End of explanation |
2,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing a single Spectrum
Specdal provides readers which load [.asd, .sig, .sed] files into a common Spectrum object.
Step1: The print output shows the four components of the Spectrum object. For example, we can access the measurements as follows.
Step2: Spectrum object provides several methods for processing the measurements. Let's start by linearly resampling to the nearest integer (nm) wavelengths.
Step3: We can visualize the spectrum using pyplot. spectrum.plot is just a wrapper around spectrum.measurements.plot, so you can pass any arguments for plotting pandas.Series objects.
Step4: There are folds in the spectrum near 1000 and 1900 wavelengths. This happens because the three bands in the spectrometer has overlapping wavelengths. We can fix this using the stitch method of the Spectrum class. | Python Code:
import specdal
import matplotlib.pyplot as plt

s = specdal.Spectrum(filepath="/home/young/data/specdal/aidan_data/SVC/ACPA_F_B_SU_20160617_003.sig")
print(s)
Explanation: Processing a single Spectrum
Specdal provides readers which load [.asd, .sig, .sed] files into a common Spectrum object.
End of explanation
print(type(s.measurement))
print(s.measurement.head())
Explanation: The print output shows the four components of the Spectrum object. For example, we can access the measurements as follows.
End of explanation
s.interpolate(method='linear')
print(s.measurement.head())
Explanation: Spectrum object provides several methods for processing the measurements. Let's start by linearly resampling to the nearest integer (nm) wavelengths.
End of explanation
s.plot()
plt.show()
Explanation: We can visualize the spectrum using pyplot. spectrum.plot is just a wrapper around spectrum.measurements.plot, so you can pass any arguments for plotting pandas.Series objects.
End of explanation
s.stitch(method='mean')
s.plot()
plt.show()
Explanation: There are folds in the spectrum near the 1000 nm and 1900 nm wavelengths. This happens because the three bands in the spectrometer have overlapping wavelengths. We can fix this using the stitch method of the Spectrum class.
End of explanation |
2,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 5
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Create new features
As in Week 2, we consider features that are some transformations of inputs.
Step3: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
Step4: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
Step5: Find what features had non-zero weight.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION
Step7: Next, we write a loop that does the following
Step8: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
Step9: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
Step10: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in it.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal
Step11: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values
Step12: Now, implement a loop that searches through this space of possible l1_penalty values
Step13: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find
Step14: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found
Step15: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step16: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients? | Python Code:
import sys
sys.path.append('C:\Anaconda2\envs\dato-env\Lib\site-packages')
import graphlab
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, you will use LASSO to select features, building on a pre-implemented solver for LASSO (using GraphLab Create, though you can use other solvers). You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Fire up graphlab create
End of explanation
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
from math import log, sqrt
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
# In the dataset, 'floors' was defined with type string,
# so we'll convert them to float, before creating a new feature.
sales['floors'] = sales['floors'].astype(float)
sales['floors_square'] = sales['floors']*sales['floors']
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
model_all = graphlab.linear_regression.create(sales, target='price', features=all_features,
validation_set=None,
l2_penalty=0., l1_penalty=1e10)
Explanation: Applying L1 penalty requires adding an extra parameter (l1_penalty) to the linear regression call in GraphLab Create. (Other tools may have separate implementations of LASSO.) Note that it's important to set l2_penalty=0 to ensure we don't introduce an additional L2 penalty.
End of explanation
# non_zero_weight = model_all.get("coefficients")["value"]
non_zero_weight = model_all["coefficients"][model_all["coefficients"]["value"] > 0]
non_zero_weight.print_rows(num_rows=20)
Explanation: Find what features had non-zero weight.
End of explanation
(training_and_validation, testing) = sales.random_split(.9,seed=1) # initial train/test split
(training, validation) = training_and_validation.random_split(0.5, seed=1) # split training into train and validate
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
According to this list of weights, which of the features have been chosen?
Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
* Split our sales data into 2 sets: training and test
* Further split our training data into two sets: train, validation
Be very careful that you use seed = 1 to ensure you get the same answer!
End of explanation
import numpy as np
import pprint
validation_rss = {}
for l1_penalty in np.logspace(1, 7, num=13):
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=l1_penalty)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
rss = sum(residuals**2)
validation_rss[l1_penalty] = rss
# pprint.pprint(result_dict)
print min(validation_rss.items(), key=lambda x: x[1])
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
When you call linear_regression.create() make sure you set validation_set = None.
Note: you can turn off the print out of linear_regression.create() with verbose = False
End of explanation
model_test = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=10.0)
predictions_test = model_test.predict(testing)
residuals_test = testing['price'] - predictions_test
rss_test = sum(residuals_test**2)
print rss_test
Explanation: QUIZ QUESTIONS
1. What was the best value for the l1_penalty?
2. What is the RSS on TEST data of the model with the best l1_penalty?
End of explanation
non_zero_weight_test = model_test["coefficients"][model_test["coefficients"]["value"] > 0]
non_zero_weight_test.print_rows(num_rows=20)
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
End of explanation
max_nonzeros = 7
Explanation: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in it.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
l1_penalty_values = np.logspace(8, 10, num=20)
print l1_penalty_values
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
coef_dict = {}
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target ='price', features=all_features,
validation_set=None, verbose=None,
l2_penalty=0., l1_penalty=l1_penalty)
coef_dict[l1_penalty] = model['coefficients']['value'].nnz()
pprint.pprint(coef_dict)
Explanation: Now, implement a loop that searches through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Hint: model['coefficients']['value'] gives you an SArray with the parameters you learned. If you call the method .nnz() on it, you will find the number of non-zero parameters!
End of explanation
l1_penalty_min = 2976351441.6313128
l1_penalty_max = 3792690190.7322536
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzero (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzero (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
Hint: there are many ways to do this, e.g.:
* Programmatically within the loop above
* Creating a list with the number of non-zeros for each value of l1_penalty and inspecting it to find the appropriate boundaries.
End of explanation
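# A possible programmatic cross-check, not part of the original assignment: derive the two
# boundary penalties directly from the coef_dict built in the earlier loop and compare them
# with the hand-picked values above.
print(max(l1 for l1, nnz in coef_dict.items() if nnz > max_nonzeros))
print(min(l1 for l1, nnz in coef_dict.items() if nnz < max_nonzeros))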
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
Explanation: QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
validation_rss = {}
for l1_penalty in l1_penalty_values:
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=l1_penalty)
predictions = model.predict(validation)
residuals = validation['price'] - predictions
rss = sum(residuals**2)
validation_rss[l1_penalty] = rss, model['coefficients']['value'].nnz()
bestRSS = float('inf')
bestl1 = None
for k, v in validation_rss.iteritems():
    if (v[1] == max_nonzeros) and (v[0] < bestRSS):
        bestRSS = v[0]
        bestl1 = k
print bestl1, bestRSS
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzero.
End of explanation
model = graphlab.linear_regression.create(training, target='price', features=all_features,
validation_set=None, verbose = False,
l2_penalty=0., l1_penalty=3448968612.16)
non_zero_weight_test = model["coefficients"][model["coefficients"]["value"] > 0]
non_zero_weight_test.print_rows(num_rows=8)
Explanation: QUIZ QUESTIONS
1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
2. What features in this model have non-zero coefficients?
End of explanation |
2,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inference of the dispersion of a Gaussian with Gaussian data errors
Suppose we have data drawn from a delta function with Gaussian uncertainties (all equal). How well do we limit the dispersion? Sample data
Step1: We assume that the mean is zero and implement the likelihood
Step2: Now sample with slice sampling
Step3: Dependence on $N$
We write a function that returns the 95% upper limit as a function of sample size $N$ | Python Code:
# numpy is needed below; bovy_mcmc and the pylab-style plotting functions (hist, plot, ...)
# used later are assumed to be available in this notebook session (e.g. via %pylab inline)
import numpy
import bovy_mcmc

ndata= 24
data= numpy.random.normal(size=ndata)
Explanation: Inference of the dispersion of a Gaussian with Gaussian data errors
Suppose we have data drawn from a delta function with Gaussian uncertainties (all equal). How well do we limit the dispersion? Sample data:
End of explanation
def loglike(sigma,data):
if sigma <= 0. or sigma > 2.: return -1000000000000000.
return -numpy.sum(0.5*numpy.log(1.+sigma**2.)+0.5*data**2./(1.+sigma**2.))
Explanation: We assume that the mean is zero and implement the likelihood
End of explanation
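# Quick sanity check, not part of the original notebook: for data drawn from a
# unit-variance Gaussian (no intrinsic dispersion), the log-likelihood should
# typically be largest for small sigma.
for trial_sigma in [0.01, 0.5, 1.0, 1.5]:
    print((trial_sigma, loglike(trial_sigma, data)))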
nsamples= 10000
samples= bovy_mcmc.slice(numpy.array([1.]),1.,loglike,(data,),
isDomainFinite=[True,True],domain=[0.,2.],
nsamples=nsamples)
hist(numpy.array(samples),
range=[0.,2.],bins=0.3*numpy.sqrt(nsamples),
histtype='step',color='k',normed=True)
x95= sorted(samples)[int(numpy.floor(0.95*nsamples))]
plot([x95,x95],ylim(),'r-')
text(0.4,0.8,r'$\sigma < %.2f\ (95\%%\ \mathrm{confidence})$' % x95,
transform=gca().transAxes,size=18.,color='r',
backgroundcolor='w')
xlabel(r'$\sigma$')
Explanation: Now sample with slice sampling
End of explanation
def uplimit(N,ntrials=30,nsamples=1000):
out= []
for ii in range(ntrials):
data= numpy.random.normal(size=N)
samples= bovy_mcmc.slice(numpy.array([1.]),1./N**0.25,loglike,(data,),
isDomainFinite=[True,True],domain=[0.,2.],
nsamples=nsamples)
out.append(sorted(samples)[int(numpy.floor(0.95*nsamples))])
return numpy.median(out)
N= 10.**numpy.linspace(1,5,21)
y= [uplimit(n) for n in N]
loglog(N,y,'ko-')
p= numpy.polyfit(numpy.log(N)[5:],numpy.log(y)[5:],deg=1)
loglog(N,numpy.exp(p[0]*numpy.log(N)+p[1]),'b-')
text(0.25,0.8,r'$\log\ \mathrm{error} = %.2f \,\log N + %.2f$' % (p[0],p[1]),
transform=gca().transAxes,size=18.)
xlabel(r'$N$')
ylabel(r'$\mathrm{95\%\ upper\ limit\ on}\ \sigma$')
Explanation: Dependence on $N$
We write a function that returns the 95% upper limit as a function of sample size $N$
End of explanation |
2,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cleaning of the Organizational Structure of the PEN
data-cleaner and pandas are used to code the cleaning of the data in a CSV file. First, the table is explored by applying some cleaning rules and checking the generated result. When the result is satisfactory, the cleaning rule is added to the coded list of rules that will later be used to generate the clean version of the file.
Setup
Step1: Exploration and discovery
Step2: Coded cleaning rules
Step3: Cleaning | Python Code:
from __future__ import unicode_literals
from __future__ import print_function
from data_cleaner import DataCleaner
import pandas as pd
input_path = "estructura-organica-raw.csv"
output_path = "estructura-organica-clean.csv"
dc = DataCleaner(input_path)
Explanation: Cleaning of the Organizational Structure of the PEN
data-cleaner and pandas are used to code the cleaning of the data in a CSV file. First, the table is explored by applying some cleaning rules and checking the generated result. When the result is satisfactory, the cleaning rule is added to the coded list of rules that will later be used to generate the clean version of the file.
Setup
End of explanation
for c in dc.df.columns:
print(c)
Explanation: Exploration and discovery
End of explanation
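# Exploratory sketch, not part of the original notebook: a single candidate rule can be
# previewed on a fresh DataCleaner instance before it is added to the coded rules list below.
preview = DataCleaner(input_path)
preview.clean([{"string": [{"field": "provincia", "keep_original": False}]}])
print(preview.df["provincia"].unique())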
rules = [
{
"renombrar_columnas": [
{"field": "aut_dni", "new_field": "autoridad_dni"},
{"field": "aut_cuit_cuil", "new_field": "autoridad_cuil_cuit"},
{"field": "aut_cargo", "new_field": "autoridad_cargo"},
{"field": "aut_tratamiento", "new_field": "autoridad_tratamiento"},
{"field": "aut_apellido", "new_field": "autoridad_apellido"},
{"field": "aut_nombre", "new_field": "autoridad_nombre"},
{"field": "aut_norma_designacion", "new_field": "autoridad_norma_designacion"},
{"field": "norma_competenciasobjetivos", "new_field": "norma_competencias_objetivos"},
{"field": "cordigo_postal", "new_field": "codigo_postal"}
]
},
{
"string": [
{"field": "jurisdiccion", "keep_original": False},
{"field": "unidad", "keep_original": False},
{"field": "reporta_a", "keep_original": False},
{"field": "unidad_tipo", "keep_original": False},
{"field": "autoridad_cargo", "keep_original": False},
{"field": "autoridad_tratamiento", "keep_original": False},
{"field": "autoridad_apellido", "keep_original": False},
{"field": "autoridad_nombre", "keep_original": False},
{"field": "piso_oficina", "keep_original": False},
{"field": "codigo_postal", "keep_original": False},
{"field": "domicilio", "keep_original": False},
{"field": "localidad", "keep_original": False},
{"field": "provincia", "keep_original": False},
]
},
{
"string_regex_substitute": [
{"field": "norma_competencias_objetivos", "regex_str_match": ";", "regex_str_sub": ",",
"keep_original": False},
{"field": "unidad", "regex_str_match": "\(.*\)", "regex_str_sub": "",
"keep_original": False},
{"field": "provincia", "regex_str_match": "Bs\. As\.", "regex_str_sub": "Buenos Aires",
"keep_original": False},
{"field": "autoridad_tratamiento", "regex_str_match": "\s+$", "regex_str_sub": "",
"keep_original": False},
{"field": "autoridad_tratamiento", "regex_str_match": "(.+{^\.})$", "regex_str_sub": "\g<1>.",
"keep_original": False},
{"field": "autoridad_norma_designacion", "regex_str_match": "Dto\D*", "regex_str_sub": "Decreto ",
"keep_original": False},
{"field": "web", "regex_str_match": "^.+www\.", "regex_str_sub": "http://www.",
"keep_original": False},
]
},
{
"mail_format": [
{"field": "mail"}
]
},
{
"reemplazar_string": [
{"field": "piso_oficina", "replacements": {"Oficina": ["Of.icina"]}},
{"field": "piso_oficina", "replacements": {"Piso": ["Planta"]}}
]
}
]
Explanation: Coded cleaning rules
End of explanation
dc.clean(rules)
map(print, dc.df.piso_oficina.unique())
df_actual = pd.read_csv("estructura-organica-actual.csv")
df_20160926 = pd.read_excel("originales/160926 Set de datos Administración Pública Nacional.xlsx")
df_20160927 = pd.read_csv("originales/estructura_autoridades_apn-Descarga_20160927.csv")
df_20160929 = pd.read_csv("originales/estructura_autoridades_apn-Descargado_29-09-2016.csv")
print(len(df_actual.columns), len(df_20160926.columns), len(df_20160927.columns), len(df_20160929.columns))
# new fields
print(set(dc.df.columns)-set(df_actual.columns))
print(set(dc.df.columns)-set(df_20160926.columns))
print(set(dc.df.columns)-set(df_20160927.columns))
print(set(dc.df.columns)-set(df_20160929.columns))
for escalafon in dc.df.extraescalafonario.unique():
print(escalafon)
dc.df.piso_oficina.unique()
import re
re.sub("(?P<cargo>\(.+\))(?P<nombre>.+)","\g<nombre> \g<cargo>","(presidente) Juan Jose Perez.")
for unidad in dc.df.unidad.unique():
    print(unidad)
dc.save(output_path)
dc.df.to_excel("estructura-organica.xlsx", index=False)
Explanation: Cleaning
End of explanation |
2,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: Before we continue, note that we'll be using your Qwiklabs project id a lot in this notebook. For convenience, set it as an environment variable using the command below
Step2: Download and process data
The models you'll build will predict the income level, whether it's less than or equal to $50,000 per year, of individuals given 14 data points about each individual. You'll train your models on this UCI Census Income Dataset.
We'll read the data into a Pandas DataFrame to see what we'll be working with. It's important to shuffle our data in case the original dataset is ordered in a specific way. We use an sklearn utility called shuffle to do this, which we imported in the first cell
Step3: data.head() lets us preview the first five rows of our dataset in Pandas.
Step4: The income-level column is the thing our model will predict. This is the binary outcome of whether the individual makes more than $50,000 per year. To see the distribution of income levels in the dataset, run the following
Step5: As explained in this paper, each entry in the dataset contains the following information
about an individual
Step6: Since we don't want to train a model on our labels, we're going to separate them from the features in both the training and test datasets. Also, notice that income-level is a string datatype. For machine learning, it's better to convert this to a binary integer datatype. We do this in the next cell.
Step7: Now you're ready to build and train your first model!
Build a First Model
The model we build closely follows a template for the census dataset found on AI Hub. For our model we use an XGBoost classifier. However, before we train our model we have to pre-process the data a little bit. We build a processing pipeline using Scikit-Learn's Pipeline constructor. We apply some custom transformations that are defined in custom_transforms.py. Open the file custom_transforms.py and inspect the code. Our features are either numerical or categorical. The numerical features are age and hours-per-week. These features will be processed by applying Scikit-Learn's StandardScaler function. The categorical features are workclass, education, marital-status, and relationship. These features are one-hot encoded.
Step8: To finalize the pipeline we attach an XGBoost classifier at the end. The complete pipeline object takes the raw data we loaded from csv files, processes the categorical features, processes the numerical features, concatenates the two, and then passes the result through the XGBoost classifier.
Step9: We train our model with one function call using the fit() method. We pass the fit() method our training data.
Step10: Let's go ahead and save our model as a pickle file. Executing the command below will save the trained model in the file model.pkl in the same directory as this notebook.
Step11: Save Trained Model to AI Platform
We've got our model working locally, but it would be nice if we could make predictions on it from anywhere (not just this notebook!). In this step we'll deploy it to the cloud. For detailed instructions on how to do this visit the official documentation. Note that since we have custom components in our data pipeline we need to go through a few extra steps.
Create a Cloud Storage bucket for the model
We first need to create a storage bucket to store our pickled model file. We'll point Cloud AI Platform at this file when we deploy. Run this gsutil command to create a bucket. This will ensure the name of the cloud storage bucket you create will be globally unique.
Step12: Package custom transform code
Since we're using custom transformation code we need to package it up and direct AI Platform to it when we ask it to make predictions. To package our custom code we create a source distribution. The following code creates this distribution and then copies the distribution and the model file to the bucket we created. Ignore the warnings about missing metadata.
Step13: Create and Deploy Model
The following ai-platform gcloud command will create a new model in your project. We'll call this one census_income_classifier.
Step14: Now it's time to deploy the model. We can do that with this gcloud command
Step15: While this is running, check the models section of your AI Platform console. You should see your new version deploying there. When the deploy completes successfully you'll see a green check mark where the loading spinner is. The deploy should take 2-3 minutes. You will need to click on the model name in order to see the spinner/checkmark. In the command above, notice we specify prediction-class. The reason we must specify a prediction class is that by default, AI Platform prediction will call a Scikit-Learn model's predict method, which in this case returns either 0 or 1. However, the What-If Tool requires output from a model in line with a Scikit-Learn model's predict_proba method. This is because WIT wants the probabilities of the negative and positive classes, not just the final determination on which class a person belongs to. Because that allows us to do more fine-grained exploration of the model. Consequently, we must write a custom prediction routine that basically renames predict_proba as predict. The custom prediction method can be found in the file predictor.py. This file was packaged in the section Package custom transform code. By specifying prediction-class we're telling AI Platform to call our custom prediction method--basically, predict_proba-- instead of the default predict method.
Test the deployed model
To make sure your deployed model is working, test it out using gcloud to make a prediction. First, save a JSON file with one test instance for prediction
Step16: Test your model by running this code
Step17: You should see your model's prediction in the output. The first entry in the output is the model's probability that the individual makes under \$50K while the second entry is the model's confidence that the individual makes over \$50k. The two entries sum to 1.
What-If Tool
To connect the What-if Tool to your AI Platform models, you need to pass it a subset of your test examples along with the ground truth values for those examples. Let's create a Numpy array of 2000 of our test examples.
Step18: Instantiating the What-if Tool is as simple as creating a WitConfigBuilder object and passing it the AI Platform model we built. Note that it'll take a minute to load the visualization.
Step19: The default view on the What-if Tool is the Datapoint editor tab. Here, you can click on any individual data point to see its features and even change feature values. Navigate to the Performance & Fairness tab in the What-if Tool. By slicing on a feature you can view the model error for individual feature values. Finally, navigate to the Features tab in the What-if Tool. This shows you the distribution of values for each feature in your dataset. You can use this tab to make sure your dataset is balanced. For example, if we only had Asians in a population, the model's predictions wouldn't necessarily reflect real world data. This tab gives us a good opportunity to see where our dataset might fall short, so that we can go back and collect more data to make it balanced.
In the Features tab, we can look to see the distribution of values for each feature in the dataset. We can see that of the 2000 test datapoints, 1346 are from men and 1702 are from caucasions. Women and minorities seem under-represented in this dataset. That may lead to the model not learning an accurate representation of the world in which it is trying to make predictions (of course, even if it does learn an accurate representation, is that what we want the model to perpetuate? This is a much deeper question still falling under the ML fairness umbrella and worthy of discussion outside of WIT). Predictions on those under-represented groups are more likely to be inaccurate than predictions on the over-represented groups.
The features in this visualization can be sorted by a number of different metrics, including non-uniformity. With this sorting, the features that have the most non-uniform distributions are shown first. For numeric features, capital gain is very non-uniform, with most datapoints having it set to 0, but a small number having non-zero capital gains, all the way up to a maximum of 100k. For categorical features, country is the most non-uniform with most datapoints being from the USA, but there is a long tail of 40 other countries which are not well represented.
Back in the Performance & Fairness tab, we can set an input feature (or set of features) with which to slice the data. For example, setting this to sex allows us to see the breakdown of model performance on male datapoints versus female datapoints. We can see that the model is more accurate (has less false positives and false negatives) on females than males. We can also see that the model predicts high income for females much less than it does for males (8.0% of the time for females vs 27.1% of the time for males). Note, your numbers will be slightly different due to the random elements of model training.
Imagine a scenario where this simple income classifier was used to approve or reject loan applications (not a realistic example but it illustrates the point). In this case, 28% of men from the test dataset have their loans approved but only 10% of women have theirs approved. If we wished to ensure than men and women get their loans approved the same percentage of the time, that is a fairness concept called "demographic parity". One way to achieve demographic parity would be to have different classification thresholds for males and females in our model.
In this case, demographic parity can be found with both groups getting loans 16% of the time by having the male threshold at 0.67 and the female threshold at 0.31. Because of the vast difference in the properties of the male and female training data in this 1994 census dataset, we need quite different thresholds to achieve demographic parity. Notice how with the high male threshold there are many more false negatives than before, and with the low female threshold there are many more false positives than before. This is necessary to get the percentage of positive predictions to be equal between the two groups. WIT has buttons to optimize for other fairness constraints as well, such as "equal opportunity" and "equal accuracy". Note that the demographic parity numbers may be different from the ones in your text as the trained models are always a bit different.
The use of these features can help shed light on subsets of your data on which your classifier is performing very differently. Understanding biases in your datasets and data slices on which your model has disparate performance are very important parts of analyzing a model for fairness. There are many approaches to improving fairness, including augmenting training data, building fairness-related loss functions into your model training procedure, and post-training inference adjustments like those seen in WIT. We think that WIT provides a great interface for furthering ML fairness learning, but of course there is no silver bullet to improving ML fairness.
Training on a more balanced dataset
Using the What-If Tool we saw that the model we trained on the census dataset wouldn't be very considerate in a production environment. What if we retrained the model on a dataset that was more balanced? Fortunately, we have such a dataset. Let's train a new model on this balanced dataset and compare it to our original dataset using the What-If Tool.
First, let's load the balanced dataset into a Pandas dataframe.
Step20: Execute the command below to see the distribution of gender in the data.
Step21: Unlike the original dataset, this dataset has an equal number of rows for both males and females. Execute the command below to see the distribution of rows in the dataset of both sex and income-level.
Step22: We see that not only is the dataset balanced across gender, it's also balanced across income. Let's train a model on this data. We'll use exactly the same model pipeline as in the previous section. Scikit-Learn has a convenient utility function for copying model pipelines, clone. The clone function copies a pipeline architecture without saving learned parameter values.
Step23: As before, we save our trained model to a pickle file. Note, when we version this model in AI Platform the model in this case must be named model.pkl. It's ok to overwrite the existing model.pkl file since we'll be uploading it to Cloud Storage anyway.
Step24: Deploy the model to AI Platform using the following bash script
Step25: Now let's instantiate the What-if Tool by configuring a WitConfigBuilder. Here, we want to compare the original model we built with the one trained on the balanced census dataset. To achieve this we utilize the set_compare_ai_platform_model method. We want to compare the models on a balanced test set. The balanced test is loaded and then input to WitConfigBuilder. | Python Code:
import datetime
import pickle
import os
import pandas as pd
import xgboost as xgb
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import FeatureUnion, make_pipeline
from sklearn.utils import shuffle
from sklearn.base import clone
from sklearn.model_selection import train_test_split
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
import custom_transforms
import warnings
warnings.filterwarnings(action='ignore', category=DeprecationWarning)
Explanation: Note: You may need to restart the kernel to use updated packages.
Import Python packages
Execute the command below (Shift + Enter) to load all the python libraries we'll need for the lab.
End of explanation
os.environ['QWIKLABS_PROJECT_ID'] = ''
Explanation: Before we continue, note that we'll be using your Qwiklabs project id a lot in this notebook. For convenience, set it as an environment variable using the command below:
End of explanation
train_csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data'
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
raw_train_data = pd.read_csv(train_csv_path, names=COLUMNS, skipinitialspace=True)
raw_train_data = shuffle(raw_train_data, random_state=4)
Explanation: Download and process data
The models you'll build will predict the income level, whether it's less than or equal to $50,000 per year, of individuals given 14 data points about each individual. You'll train your models on this UCI Census Income Dataset.
We'll read the data into a Pandas DataFrame to see what we'll be working with. It's important to shuffle our data in case the original dataset is ordered in a specific way. We use an sklearn utility called shuffle to do this, which we imported in the first cell:
End of explanation
raw_train_data.head()
Explanation: data.head() lets us preview the first five rows of our dataset in Pandas.
End of explanation
print(raw_train_data['income-level'].value_counts(normalize=True))
Explanation: The income-level column is the thing our model will predict. This is the binary outcome of whether the individual makes more than $50,000 per year. To see the distribution of income levels in the dataset, run the following:
End of explanation
test_csv_path = 'https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test'
raw_test_data = pd.read_csv(test_csv_path, names=COLUMNS, skipinitialspace=True, skiprows=1)
raw_test_data.head()
Explanation: As explained in this paper, each entry in the dataset contains the following information
about an individual:
age: the age of an individual
workclass: a general term to represent the employment status of an individual
fnlwgt: final weight. In other words, this is the number of people the census believes
the entry represents...
education: the highest level of education achieved by an individual.
education-num: the highest level of education achieved in numerical form.
marital-status: marital status of an individual.
occupation: the general type of occupation of an individual
relationship: represents what this individual is relative to others. For example an
individual could be a Husband. Each entry only has one relationship attribute and is
somewhat redundant with marital status.
race: Descriptions of an individual’s race
sex: the biological sex of the individual
capital-gain: capital gains for an individual
capital-loss: capital loss for an individual
hours-per-week: the hours an individual has reported to work per week
native-country: country of origin for an individual
income-level: whether or not an individual makes more than $50,000 annually
An important concept in machine learning is train / test split. We'll take the majority of our data and use it to train our model, and we'll set aside the rest for testing our model on data it's never seen before. There are many ways to create training and test datasets. Fortunately, for our census data we can simply download a pre-defined test set.
End of explanation
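# Optional illustration, not required for this lab: train_test_split (imported above but
# otherwise unused here) is another way to carve out a held-out set when a pre-defined
# test file like adult.test is not available.
example_train_df, example_holdout_df = train_test_split(
    raw_train_data, test_size=0.2, random_state=4)
print(len(example_train_df), len(example_holdout_df))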
raw_train_features = raw_train_data.drop('income-level', axis=1).values
raw_test_features = raw_test_data.drop('income-level', axis=1).values
# Create training labels list
train_labels = (raw_train_data['income-level'] == '>50K').values.astype(int)
test_labels = (raw_test_data['income-level'] == '>50K.').values.astype(int)
Explanation: Since we don't want to train a model on our labels, we're going to separate them from the features in both the training and test datasets. Also, notice that income-level is a string datatype. For machine learning, it's better to convert this to a binary integer datatype. We do this in the next cell.
End of explanation
numerical_indices = [0, 12]
categorical_indices = [1, 3, 5, 7]
p1 = make_pipeline(
custom_transforms.PositionalSelector(categorical_indices),
custom_transforms.StripString(),
custom_transforms.SimpleOneHotEncoder()
)
p2 = make_pipeline(
custom_transforms.PositionalSelector(numerical_indices),
StandardScaler()
)
p3 = FeatureUnion([
    ('categoricals', p1),
    ('numericals', p2),
])
Explanation: Now you're ready to build and train your first model!
Build a First Model
The model we build closely follows a template for the census dataset found on AI Hub. For our model we use an XGBoost classifier. However, before we train our model we have to pre-process the data a little bit. We build a processing pipeline using Scikit-Learn's Pipeline constructor. We apply some custom transformations that are defined in custom_transforms.py. Open the file custom_transforms.py and inspect the code. Our features are either numerical or categorical. The numerical features are age and hours-per-week. These features will be processed by applying Scikit-Learn's StandardScaler function. The categorical features are workclass, education, marital-status, and relationship. These features are one-hot encoded.
End of explanation
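# Illustrative sketch only -- custom_transforms.py is not reproduced in this notebook, so the
# real implementations may differ. One plausible shape for the positional column selector
# used above is a small scikit-learn transformer like this:
from sklearn.base import BaseEstimator, TransformerMixin

class PositionalSelectorSketch(BaseEstimator, TransformerMixin):
    """Select a subset of columns from a 2D array by positional index."""
    def __init__(self, positions):
        self.positions = positions

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.asarray(X)[:, self.positions]

# e.g. PositionalSelectorSketch(numerical_indices).fit_transform(raw_train_features)[:3]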
pipeline = make_pipeline(
p3,
xgb.sklearn.XGBClassifier(max_depth=4)
)
Explanation: To finalize the pipeline we attach an XGBoost classifier at the end. The complete pipeline object takes the raw data we loaded from csv files, processes the categorical features, processes the numerical features, concatenates the two, and then passes the result through the XGBoost classifier.
End of explanation
pipeline.fit(raw_train_features, train_labels)
Explanation: We train our model with one function call using the fit() method. We pass the fit() method our training data.
End of explanation
with open('model.pkl', 'wb') as model_file:
pickle.dump(pipeline, model_file)
Explanation: Let's go ahead and save our model as a pickle file. Executing the command below will save the trained model in the file model.pkl in the same directory as this notebook.
End of explanation
!gsutil mb gs://$QWIKLABS_PROJECT_ID
Explanation: Save Trained Model to AI Platform
We've got our model working locally, but it would be nice if we could make predictions on it from anywhere (not just this notebook!). In this step we'll deploy it to the cloud. For detailed instructions on how to do this visit the official documentation. Note that since we have custom components in our data pipeline we need to go through a few extra steps.
Create a Cloud Storage bucket for the model
We first need to create a storage bucket to store our pickled model file. We'll point Cloud AI Platform at this file when we deploy. Run this gsutil command to create a bucket. This will ensure the name of the cloud storage bucket you create will be globally unique.
End of explanation
%%bash
python setup.py sdist --formats=gztar
gsutil cp model.pkl gs://$QWIKLABS_PROJECT_ID/original/
gsutil cp dist/custom_transforms-0.1.tar.gz gs://$QWIKLABS_PROJECT_ID/
Explanation: Package custom transform code
Since we're using custom transformation code we need to package it up and direct AI Platform to it when we ask it to make predictions. To package our custom code we create a source distribution. The following code creates this distribution and then copies the distribution and the model file to the bucket we created. Ignore the warnings about missing metadata.
End of explanation
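# The sdist step above assumes a small setup.py (not reproduced in this notebook); the lab's
# actual file may differ, but it would look roughly like:
#
#     from setuptools import setup
#     setup(name='custom_transforms',
#           version='0.1',
#           py_modules=['custom_transforms', 'predictor'])
#
# Optional check: list what ended up inside the uploaded source distribution.
!tar -tzf dist/custom_transforms-0.1.tar.gz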
!gcloud ai-platform models create census_income_classifier --regions us-central1
Explanation: Create and Deploy Model
The following ai-platform gcloud command will create a new model in your project. We'll call this one census_income_classifier.
End of explanation
%%bash
MODEL_NAME="census_income_classifier"
VERSION_NAME="original"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/original/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--origin $MODEL_DIR \
--package-uris $CUSTOM_CODE_PATH \
--prediction-class predictor.MyPredictor \
--region=global
Explanation: Now it's time to deploy the model. We can do that with this gcloud command:
End of explanation
%%writefile predictions.json
[25, "Private", 226802, "11th", 7, "Never-married", "Machine-op-inspct", "Own-child", "Black", "Male", 0, 0, 40, "United-States"]
Explanation: While this is running, check the models section of your AI Platform console. You should see your new version deploying there. When the deploy completes successfully you'll see a green check mark where the loading spinner is. The deploy should take 2-3 minutes. You will need to click on the model name in order to see the spinner/checkmark. In the command above, notice we specify prediction-class. The reason we must specify a prediction class is that by default, AI Platform prediction will call a Scikit-Learn model's predict method, which in this case returns either 0 or 1. However, the What-If Tool requires output from a model in line with a Scikit-Learn model's predict_proba method. This is because WIT wants the probabilities of the negative and positive classes, not just the final determination on which class a person belongs to. Because that allows us to do more fine-grained exploration of the model. Consequently, we must write a custom prediction routine that basically renames predict_proba as predict. The custom prediction method can be found in the file predictor.py. This file was packaged in the section Package custom transform code. By specifying prediction-class we're telling AI Platform to call our custom prediction method--basically, predict_proba-- instead of the default predict method.
Test the deployed model
To make sure your deployed model is working, test it out using gcloud to make a prediction. First, save a JSON file with one test instance for prediction:
End of explanation
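# Rough sketch of what predictor.MyPredictor (referenced by --prediction-class above) might
# look like -- the predictor.py packaged with the lab may differ. It follows the AI Platform
# custom prediction routine interface (predict + from_path) and returns predict_proba output
# so the What-If Tool receives [P(<=50K), P(>50K)] for each instance.
class MyPredictorSketch(object):
    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        return self._model.predict_proba(instances).tolist()

    @classmethod
    def from_path(cls, model_dir):
        with open(os.path.join(model_dir, 'model.pkl'), 'rb') as f:
            return cls(pickle.load(f))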
!gcloud ai-platform predict --model=census_income_classifier --json-instances=predictions.json --version=original --region=global
Explanation: Test your model by running this code:
End of explanation
num_datapoints = 2000
test_examples = np.hstack(
(raw_test_features[:num_datapoints],
test_labels[:num_datapoints].reshape(-1,1)
)
)
Explanation: You should see your model's prediction in the output. The first entry in the output is the model's probability that the individual makes under \$50K while the second entry is the model's confidence that the individual makes over \$50k. The two entries sum to 1.
What-If Tool
To connect the What-if Tool to your AI Platform models, you need to pass it a subset of your test examples along with the ground truth values for those examples. Let's create a Numpy array of 2000 of our test examples.
End of explanation
config_builder = (
WitConfigBuilder(test_examples.tolist(), COLUMNS)
.set_ai_platform_model(os.environ['QWIKLABS_PROJECT_ID'], 'census_income_classifier', 'original')
.set_target_feature('income-level')
.set_model_type('classification')
.set_label_vocab(['Under 50K', 'Over 50K'])
)
WitWidget(config_builder, height=800)
Explanation: Instantiating the What-if Tool is as simple as creating a WitConfigBuilder object and passing it the AI Platform model we built. Note that it'll take a minute to load the visualization.
End of explanation
bal_data_path = 'https://storage.googleapis.com/cloud-training/dei/balanced_census_data.csv'
bal_data = pd.read_csv(bal_data_path, names=COLUMNS, skiprows=1)
bal_data.head()
Explanation: The default view on the What-if Tool is the Datapoint editor tab. Here, you can click on any individual data point to see its features and even change feature values. Navigate to the Performance & Fairness tab in the What-if Tool. By slicing on a feature you can view the model error for individual feature values. Finally, navigate to the Features tab in the What-if Tool. This shows you the distribution of values for each feature in your dataset. You can use this tab to make sure your dataset is balanced. For example, if we only had Asians in a population, the model's predictions wouldn't necessarily reflect real world data. This tab gives us a good opportunity to see where our dataset might fall short, so that we can go back and collect more data to make it balanced.
In the Features tab, we can look to see the distribution of values for each feature in the dataset. We can see that of the 2000 test datapoints, 1346 are from men and 1702 are from caucasions. Women and minorities seem under-represented in this dataset. That may lead to the model not learning an accurate representation of the world in which it is trying to make predictions (of course, even if it does learn an accurate representation, is that what we want the model to perpetuate? This is a much deeper question still falling under the ML fairness umbrella and worthy of discussion outside of WIT). Predictions on those under-represented groups are more likely to be inaccurate than predictions on the over-represented groups.
The features in this visualization can be sorted by a number of different metrics, including non-uniformity. With this sorting, the features that have the most non-uniform distributions are shown first. For numeric features, capital gain is very non-uniform, with most datapoints having it set to 0, but a small number having non-zero capital gains, all the way up to a maximum of 100k. For categorical features, country is the most non-uniform with most datapoints being from the USA, but there is a long tail of 40 other countries which are not well represented.
Back in the Performance & Fairness tab, we can set an input feature (or set of features) with which to slice the data. For example, setting this to sex allows us to see the breakdown of model performance on male datapoints versus female datapoints. We can see that the model is more accurate (has less false positives and false negatives) on females than males. We can also see that the model predicts high income for females much less than it does for males (8.0% of the time for females vs 27.1% of the time for males). Note, your numbers will be slightly different due to the random elements of model training.
Imagine a scenario where this simple income classifier was used to approve or reject loan applications (not a realistic example but it illustrates the point). In this case, 28% of men from the test dataset have their loans approved but only 10% of women have theirs approved. If we wished to ensure than men and women get their loans approved the same percentage of the time, that is a fairness concept called "demographic parity". One way to achieve demographic parity would be to have different classification thresholds for males and females in our model.
In this case, demographic parity can be found with both groups getting loans 16% of the time by having the male threshold at 0.67 and the female threshold at 0.31. Because of the vast difference in the properties of the male and female training data in this 1994 census dataset, we need quite different thresholds to achieve demographic parity. Notice how with the high male threshold there are many more false negatives than before, and with the low female threshold there are many more false positives than before. This is necessary to get the percentage of positive predictions to be equal between the two groups. WIT has buttons to optimize for other fairness constraints as well, such as "equal opportunity" and "equal accuracy". Note that the demographic parity numbers may be different from the ones in your text as the trained models are always a bit different.
The use of these features can help shed light on subsets of your data on which your classifier is performing very differently. Understanding biases in your datasets and data slices on which your model has disparate performance are very important parts of analyzing a model for fairness. There are many approaches to improving fairness, including augmenting training data, building fairness-related loss functions into your model training procedure, and post-training inference adjustments like those seen in WIT. We think that WIT provides a great interface for furthering ML fairness learning, but of course there is no silver bullet to improving ML fairness.
Training on a more balanced dataset
Using the What-If Tool we saw that the model we trained on the census dataset wouldn't be very considerate in a production environment. What if we retrained the model on a dataset that was more balanced? Fortunately, we have such a dataset. Let's train a new model on this balanced dataset and compare it to our original dataset using the What-If Tool.
First, let's load the balanced dataset into a Pandas dataframe.
End of explanation
bal_data['sex'].value_counts(normalize=True)
Explanation: Execute the command below to see the distribution of gender in the data.
End of explanation
bal_data.groupby(['sex', 'income-level'])['sex'].count()
Explanation: Unlike the original dataset, this dataset has an equal number of rows for both males and females. Execute the command below to see the distribution of rows in the dataset of both sex and income-level.
End of explanation
bal_data['income-level'] = bal_data['income-level'].isin(['>50K', '>50K.']).values.astype(int)
raw_bal_features = bal_data.drop('income-level', axis=1).values
bal_labels = bal_data['income-level'].values
pipeline_bal = clone(pipeline)
pipeline_bal.fit(raw_bal_features, bal_labels)
Explanation: We see that not only is the dataset balanced across gender, it's also balanced across income. Let's train a model on this data. We'll use exactly the same model pipeline as in the previous section. Scikit-Learn has a convenient utility function for copying model pipelines, clone. The clone function copies a pipeline architecture without saving learned parameter values.
End of explanation
with open('model.pkl', 'wb') as model_file:
pickle.dump(pipeline_bal, model_file)
Explanation: As before, we save our trained model to a pickle file. Note, when we version this model in AI Platform the model in this case must be named model.pkl. It's ok to overwrite the existing model.pkl file since we'll be uploading it to Cloud Storage anyway.
End of explanation
%%bash
gsutil cp model.pkl gs://$QWIKLABS_PROJECT_ID/balanced/
MODEL_NAME="census_income_classifier"
VERSION_NAME="balanced"
MODEL_DIR="gs://$QWIKLABS_PROJECT_ID/balanced/"
CUSTOM_CODE_PATH="gs://$QWIKLABS_PROJECT_ID/custom_transforms-0.1.tar.gz"
gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--origin $MODEL_DIR \
--package-uris $CUSTOM_CODE_PATH \
--prediction-class predictor.MyPredictor \
--region=global
Explanation: Deploy the model to AI Platform using the following bash script:
End of explanation
bal_test_csv_path = 'https://storage.googleapis.com/cloud-training/dei/balanced_census_data_test.csv'
bal_test_data = pd.read_csv(bal_test_csv_path, names=COLUMNS, skipinitialspace=True)
bal_test_data['income-level'] = (bal_test_data['income-level'] == '>50K').values.astype(int)
config_builder = (
WitConfigBuilder(bal_test_data.to_numpy()[1:].tolist(), COLUMNS)
.set_ai_platform_model(os.environ['QWIKLABS_PROJECT_ID'], 'census_income_classifier', 'original')
.set_compare_ai_platform_model(os.environ['QWIKLABS_PROJECT_ID'], 'census_income_classifier', 'balanced')
.set_target_feature('income-level')
.set_model_type('classification')
.set_label_vocab(['Under 50K', 'Over 50K'])
)
WitWidget(config_builder, height=800)
Explanation: Now let's instantiate the What-if Tool by configuring a WitConfigBuilder. Here, we want to compare the original model we built with the one trained on the balanced census dataset. To achieve this we utilize the set_compare_ai_platform_model method. We want to compare the models on a balanced test set. The balanced test is loaded and then input to WitConfigBuilder.
End of explanation |
2,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Analysis Tools
Assignment
Step1: Data management
Step2: First, the distribution of both the use of cannabis and the ethnicity will be shown.
Step3: Variance analysis
Now that the univariate distributions have been plotted and described, the bivariate graphics will be plotted in order to test our research hypothesis.
From the bivariate graphic below, it seems that there are some differences. For example American Indian versus Asian seems quite different.
Step4: The Chi-Square test will be applied on all the data to test the following hypotheses
Step5: The p-value of 3.7e-91 confirms that the null hypothesis can be safely rejected.
The next obvious questions is which ethnic groups have a statistically significant difference regarding the use of cannabis. For that, the Chi-Square test will be performed on each pair of group thanks to the following code.
Step6: If we put together all p-values results and test them against our threshold of 0.005, we got the table below.
The threshold is the standard 0.05 threshold divided by the number of pairs in the explanatory variables (here 10). | Python Code:
# Magic command to insert the graph directly in the notebook
%matplotlib inline
# Load useful Python libraries for handling data
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
import seaborn as sns
import scipy.stats as stats
import matplotlib.pyplot as plt
from IPython.display import Markdown, display
Explanation: Data Analysis Tools
Assignment: Running a Chi-Square Test of Independence
Following is the Python program I wrote to fulfill the second assignment of the Data Analysis Tools online course.
I decided to use Jupyter Notebook as it is a pretty way to write code and present results.
As the previous assignment led me to a conclusion on my initial research question, I will look at a possible relationship between ethnicity (explanatory variable) and use of cannabis (response variable) from the NESARC database. As both variables are categorical, the Chi-Square Test of Independence is the method to use.
End of explanation
nesarc = pd.read_csv('nesarc_pds.csv', low_memory=False)
races = {1 : 'White',
2 : 'Black',
         3 : 'American Indian \n Alaska',
4 : 'Asian \n Native Hawaiian \n Pacific',
5 : 'Hispanic or Latino'}
subnesarc = (nesarc[['S3BQ1A5', 'ETHRACE2A']]
.assign(S3BQ1A5=lambda x: pd.to_numeric(x['S3BQ1A5'].replace((2, 9), (0, np.nan)), errors='coerce'))
.assign(ethnicity=lambda x: pd.Categorical(x['ETHRACE2A'].map(races)),
use_cannabis=lambda x: pd.Categorical(x['S3BQ1A5']))
.dropna())
subnesarc.use_cannabis.cat.rename_categories(('No', 'Yes'), inplace=True)
Explanation: Data management
End of explanation
g = sns.countplot(subnesarc['ethnicity'])
_ = plt.title('Distribution of the ethnicity')
g = sns.countplot(subnesarc['use_cannabis'])
_ = plt.title('Distribution of ever use cannabis')
Explanation: First, the distribution of both the use of cannabis and the ethnicity will be shown.
End of explanation
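If you prefer exact counts to the bar charts above, the same distributions can be printed directly (an optional, purely illustrative check on the subnesarc frame):
print(subnesarc['ethnicity'].value_counts())
print(subnesarc['use_cannabis'].value_counts())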
g = sns.factorplot(x='ethnicity', y='S3BQ1A5', data=subnesarc,
kind="bar", ci=None)
g.set_xticklabels(rotation=90)
plt.ylabel('Ever use cannabis')
_ = plt.title('Average number of cannabis user depending on the ethnicity')
ct1 = pd.crosstab(subnesarc.use_cannabis, subnesarc.ethnicity)
display(Markdown("Contingency table of observed counts"))
ct1
# Note: normalize keyword is available starting from pandas version 0.18.1
ct2 = ct1/ct1.sum(axis=0)
display(Markdown("Contingency table of observed counts normalized over each column"))
ct2
Explanation: Variance analysis
Now that the univariate distribution has been plotted and described, the bivariate graphics will be plotted in order to test our research hypothesis.
From the bivariate graphic below, it seems that there are some differences. For example, American Indian versus Asian seems quite different.
End of explanation
stats.chi2_contingency(ct1)
Explanation: The Chi-Square test will be applied on all the data to test the following hypothesis:
The null hypothesis is There is no relationship between the use of cannabis and the ethnicity.
The alternate hypothesis is There is a relationship between the use of cannabis and the ethnicity.
End of explanation
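The call above returns four values; unpacking them makes the output easier to read (a small illustrative variant of the same call on the contingency table ct1):
chi2_stat, p_value, dof, expected = stats.chi2_contingency(ct1)
print("Chi-square statistic:", chi2_stat)
print("p-value:", p_value)
print("degrees of freedom:", dof)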
list_races = list(races.keys())
p_values = dict()
for i in range(len(list_races)):
for j in range(i+1, len(list_races)):
race1 = races[list_races[i]]
race2 = races[list_races[j]]
subethnicity = subnesarc.ETHRACE2A.map(dict(((list_races[i], race1),(list_races[j], race2))))
comparison = pd.crosstab(subnesarc.use_cannabis, subethnicity)
display(Markdown("Crosstable to compare {} and {}".format(race1, race2)))
display(comparison)
display(comparison/comparison.sum(axis=0))
chi_square, p, _, expected_counts = stats.chi2_contingency(comparison)
p_values[(race1, race2)] = p
Explanation: The p-value of 3.7e-91 confirms that the null hypothesis can be safely rejected.
The next obvious question is which ethnic groups have a statistically significant difference regarding the use of cannabis. For that, the Chi-Square test will be performed on each pair of groups using the following code.
End of explanation
df = pd.DataFrame(p_values, index=['p-value', ])
(df.stack(level=[0, 1])['p-value']
.rename('p-value')
.to_frame()
.assign(Ha=lambda x: x['p-value'] < 0.05 / len(p_values)))
Explanation: If we put together all p-value results and test them against our threshold of 0.005, we get the table below.
The threshold is the standard 0.05 threshold divided by the number of pairs in the explanatory variables (here 10).
End of explanation |
2,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
Using FloPy to simplify the use of the MT3DMS SSM package
A multi-component transport demonstration
Step1: First, we will create a simple model structure
Step2: Create the MODFLOW packages
Step3: We'll track the cell locations for the SSM data using the MODFLOW boundary conditions.
Get a dictionary (dict) that has the SSM itype for each of the boundary types.
Step4: Add a general head boundary (ghb). The general head boundary head (bhead) is 0.1 for the first 5 stress periods with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. Then bhead is increased to 0.25 and comp_1 concentration is reduced to 0.5 and comp_2 concentration is increased to 200.0
Step5: Add an injection well. The injection rate (flux) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING
Step6: Add the GHB and WEL packages to the mf MODFLOW object instance.
Step7: Create the MT3DMS packages
Step8: Let's verify that stress_period_data has the right dtype
Step9: Create the SEAWAT packages
Step10: And finally, modify the vdf package to fix indense. | Python Code:
import os
import numpy as np
from flopy import modflow, mt3d, seawat
Explanation: FloPy
Using FloPy to simplify the use of the MT3DMS SSM package
A multi-component transport demonstration
End of explanation
nlay, nrow, ncol = 10, 10, 10
perlen = np.zeros((10), dtype=float) + 10  # builtin float/int dtypes (np.float/np.int are deprecated aliases)
nper = len(perlen)
ibound = np.ones((nlay,nrow,ncol), dtype=int)
botm = np.arange(-1,-11,-1)
top = 0.
Explanation: First, we will create a simple model structure
End of explanation
model_ws = 'data'
modelname = 'ssmex'
mf = modflow.Modflow(modelname, model_ws=model_ws)
dis = modflow.ModflowDis(mf, nlay=nlay, nrow=nrow, ncol=ncol,
perlen=perlen, nper=nper, botm=botm, top=top,
steady=False)
bas = modflow.ModflowBas(mf, ibound=ibound, strt=top)
lpf = modflow.ModflowLpf(mf, hk=100, vka=100, ss=0.00001, sy=0.1)
oc = modflow.ModflowOc(mf)
pcg = modflow.ModflowPcg(mf)
rch = modflow.ModflowRch(mf)
Explanation: Create the MODFLOW packages
End of explanation
itype = mt3d.Mt3dSsm.itype_dict()
print(itype)
print(mt3d.Mt3dSsm.get_default_dtype())
ssm_data = {}
Explanation: We'll track the cell locations for the SSM data using the MODFLOW boundary conditions.
Get a dictionary (dict) that has the SSM itype for each of the boundary types.
End of explanation
ghb_data = {}
print(modflow.ModflowGhb.get_default_dtype())
ghb_data[0] = [(4, 4, 4, 0.1, 1.5)]
ssm_data[0] = [(4, 4, 4, 1.0, itype['GHB'], 1.0, 100.0)]
ghb_data[5] = [(4, 4, 4, 0.25, 1.5)]
ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[0].append((k, i, 0, 0.0, 100.0))
ssm_data[0].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))
ghb_data[5] = [(4, 4, 4, 0.25, 1.5)]
ssm_data[5] = [(4, 4, 4, 0.5, itype['GHB'], 0.5, 200.0)]
for k in range(nlay):
for i in range(nrow):
ghb_data[5].append((k, i, 0, -0.5, 100.0))
ssm_data[5].append((k, i, 0, 0.0, itype['GHB'], 0.0, 0.0))
Explanation: Add a general head boundary (ghb). The general head boundary head (bhead) is 0.1 for the first 5 stress periods with a component 1 (comp_1) concentration of 1.0 and a component 2 (comp_2) concentration of 100.0. Then bhead is increased to 0.25 and comp_1 concentration is reduced to 0.5 and comp_2 concentration is increased to 200.0
End of explanation
wel_data = {}
print(modflow.ModflowWel.get_default_dtype())
wel_data[0] = [(0, 4, 8, 10.0)]
ssm_data[0].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
ssm_data[5].append((0, 4, 8, 10.0, itype['WEL'], 10.0, 0.0))
Explanation: Add an injection well. The injection rate (flux) is 10.0 with a comp_1 concentration of 10.0 and a comp_2 concentration of 0.0 for all stress periods. WARNING: since we changed the SSM data in stress period 6, we need to add the well to the ssm_data for stress period 6.
End of explanation
ghb = modflow.ModflowGhb(mf, stress_period_data=ghb_data)
wel = modflow.ModflowWel(mf, stress_period_data=wel_data)
Explanation: Add the GHB and WEL packages to the mf MODFLOW object instance.
End of explanation
mt = mt3d.Mt3dms(modflowmodel=mf, modelname=modelname, model_ws=model_ws)
btn = mt3d.Mt3dBtn(mt, sconc=0, ncomp=2, sconc2=50.0)
adv = mt3d.Mt3dAdv(mt)
ssm = mt3d.Mt3dSsm(mt, stress_period_data=ssm_data)
gcg = mt3d.Mt3dGcg(mt)
Explanation: Create the MT3DMS packages
End of explanation
print(ssm.stress_period_data.dtype)
Explanation: Let's verify that stress_period_data has the right dtype
End of explanation
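As an optional sanity check (a small sketch that only touches the plain Python ssm_data dictionary built above), we can also count how many SSM point sources were defined for each stress period:
for kper in sorted(ssm_data):
    print('stress period', kper, '->', len(ssm_data[kper]), 'SSM entries')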
swt = seawat.Seawat(modflowmodel=mf, mt3dmodel=mt,
modelname=modelname, namefile_ext='nam_swt', model_ws=model_ws)
vdf = seawat.SeawatVdf(swt, mtdnconc=0, iwtable=0, indense=-1)
mf.write_input()
mt.write_input()
swt.write_input()
Explanation: Create the SEAWAT packages
End of explanation
fname = modelname + '.vdf'
f = open(os.path.join(model_ws, fname),'r')
lines = f.readlines()
f.close()
f = open(os.path.join(model_ws, fname),'w')
for line in lines:
f.write(line)
for kper in range(nper):
f.write("-1\n")
f.close()
Explanation: And finally, modify the vdf package to fix indense.
End of explanation |
2,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Web Scraping in Python
Source
In this appendix lecture we'll go over how to scrape information from the web using Python.
We'll go to a website, decide what information we want, see where and how it is stored, then scrape it and set it as a pandas DataFrame!
Some things you should consider before web scraping a website
Step1: For our quick web scraping tutorial, we'll look at some legislative reports from the University of California Web Page. Feel free to experiment with other webpages, but remember to be cautious and respectful in what you scrape and how often you do it. Always check the legality of a web scraping job.
Let's go ahead and set the url.
Step2: Now let's go ahead and set up requests to grab content from the url, and set it as a Beautiful Soup object.
Step3: Now we'll use Beautiful Soup to search for the table we want to grab!
Step4: Now we need to use Beautiful Soup to find the table entries. A 'td' tag defines a standard cell in an HTML table. The 'tr' tag defines a row in an HTML table.
We'll parse through our tables object and try to find each cell using the findAll('td') method.
There are tons of options to use with findAll in Beautiful Soup. You can read about them here.
Step5: Let's see what the data list looks like
Step6: Now we'll use a for loop to go through the list and grab only the cells with a pdf file in them, we'll also need to keep track of the index to set up the date of the report.
Step7: You'll notice a line to take care of '\xa0'. This is due to a unicode error that occurs if you don't do this. Web pages can be messy and inconsistent and it is very likely you'll have to do some research to take care of problems like these.
Here's the link I used to solve this particular issue
Step8: There are other less intense options for web scraping | Python Code:
from bs4 import BeautifulSoup
import requests
import pandas as pd
from pandas import Series,DataFrame
Explanation: Web Scraping in Python
Source
In this appendix lecture we'll go over how to scrape information from the web using Python.
We'll go to a website, decide what information we want, see where and how it is stored, then scrape it and set it as a pandas DataFrame!
Some things you should consider before web scraping a website:
1.) You should check a site's terms and conditions before you scrape them.
2.) Space out your requests so you don't overload the site's server, doing this could get you blocked.
3.) Scrapers break after time - web pages change their layout all the time, you'll more than likely have to rewrite your code.
4.) Web pages are usually inconsistent, more than likely you'll have to clean up the data after scraping it.
5.) Every web page and situation is different, you'll have to spend time configuring your scraper.
To learn more about HTML I suggest these two resources:
W3School
Codecademy
The three modules we'll need in addition to Python are:
1.) BeautifulSoup, which you can download by typing: pip install beautifulsoup4 or conda install beautifulsoup4 (for the Anaconda distribution of Python) in your command prompt.
2.) lxml, which you can download by typing: pip install lxml or conda install lxml (for the Anaconda distribution of Python) in your command prompt.
3.) requests, which you can download by typing: pip install requests or conda install requests (for the Anaconda distribution of Python) in your command prompt.
We'll start with our imports:
End of explanation
url = 'http://www.ucop.edu/operating-budget/budgets-and-reports/legislative-reports/2013-14-legislative-session.html'
Explanation: For our quick web scraping tutorial, we'll look at some legislative reports from the University of California Web Page. Feel free to experiment with other webpages, but remember to be cautious and respectful in what you scrape and how often you do it. Always check the legality of a web scraping job.
Let's go ahead and set the url.
End of explanation
# Request content from web page
result = requests.get(url)
c = result.content
# Set as Beautiful Soup Object
soup = BeautifulSoup(c, 'lxml')
Explanation: Now let's go ahead and set up requests to grab content from the url, and set it as a Beautiful Soup object.
End of explanation
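A small defensive check that is easy to add here (optional and purely illustrative; result is the response object created above) is to confirm the request actually succeeded before parsing:
print(result.status_code)   # 200 means the page was fetched successfully
result.raise_for_status()   # raises an HTTPError for 4xx/5xx responses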
# Go to the section of interest
summary = soup.find("div",{'class':'list-land','id':'content'})
# Find the tables in the HTML
tables = summary.find_all('table')
Explanation: Now we'll use Beautiful Soup to search for the table we want to grab!
End of explanation
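Because scrapers break whenever a page layout changes, it also helps to confirm that something was actually found before indexing into it (an optional illustrative check on the objects created above):
print('summary found:', summary is not None)
print('number of tables found:', len(tables))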
# Set up empty data list
data = []
# Set rows as first indexed object in tables with a row
rows = tables[0].findAll('tr')
# now grab every HTML cell in every row
for tr in rows:
cols = tr.findAll('td')
# Check to see if text is in the row
for td in cols:
text = td.find(text=True)
print text,
data.append(text)
Explanation: Now we need to use Beautiful Soup to find the table entries. A 'td' tag defines a standard cell in an HTML table. The 'tr' tag defines a row in an HTML table.
We'll parse through our tables object and try to find each cell using the findAll('td') method.
There are tons of options to use with findAll in Beautiful Soup. You can read about them here.
End of explanation
data
Explanation: Let's see what the data list looks like
End of explanation
# Set up empty lists
reports = []
date = []
# Set index counter
index = 0
# Go find the pdf cells
for item in data:
if 'pdf' in item:
# Add the date and reports
date.append(data[index-1])
# Get rid of \xa0
reports.append(item.replace(u'\xa0', u' '))
index += 1
Explanation: Now we'll use a for loop to go through the list and grab only the cells with a pdf file in them, we'll also need to keep track of the index to set up the date of the report.
End of explanation
# Set up Dates and Reports as Series
date = Series(date)
reports = Series(reports)
# Concatenate into a DataFrame
legislative_df = pd.concat([date,reports],axis=1)
# Set up the columns
legislative_df.columns = ['Date','Reports']
# Show the finished DataFrame
legislative_df
Explanation: You'll notice a line to take care of '\xa0'. This is due to a unicode error that occurs if you don't do this. Web pages can be messy and inconsistent and it is very likely you'll have to do some research to take care of problems like these.
Here's the link I used to solve this particular issue: StackOverflow Page
Now all that is left is to organize our data into a pandas DataFrame!
End of explanation
# http://docs.python-guide.org/en/latest/scenarios/scrape/
from lxml import html
import requests
page = requests.get('http://econpy.pythonanywhere.com/ex/001.html')
tree = html.fromstring(page.content)
# inspect element
# <div title="buyer-name">Carson Busses</div>
# <span class="item-price">$29.95</span>
#This will create a list of buyers:
buyers = tree.xpath('//div[@title="buyer-name"]/text()')
#This will create a list of prices
prices = tree.xpath('//span[@class="item-price"]/text()')
print 'Buyers: ', buyers
print 'Prices: ', prices
# https://www.flightradar24.com/56.16,-52.58/7
# http://stackoverflow.com/questions/39489168/how-to-scrape-real-time-streaming-data-with-python
# If you look at the network tab in the developer console in Chrome (for example), you'll see the requests to https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=59.09,52.64,-58.77,-47.71&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1
import requests
from bs4 import BeautifulSoup
import time
def get_count():
url = "https://data-live.flightradar24.com/zones/fcgi/feed.js?bounds=57.78,54.11,-56.40,-48.75&faa=1&mlat=1&flarm=1&adsb=1&gnd=1&air=1&vehicles=1&estimated=1&maxage=7200&gliders=1&stats=1"
# Request with fake header, otherwise you will get an 403 HTTP error
r = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'})
# Parse the JSON
data = r.json()
counter = 0
# Iterate over the elements to get the number of total flights
for element in data["stats"]["total"]:
counter += data["stats"]["total"][element]
return counter
while True:
print(get_count())
time.sleep(8)
# Hmm, that was just my first thought. As I wrote, the code is not meant as something final
Explanation: There are other less intense options for web scraping:
Check out these two companies:
https://import.io/
https://www.kimonolabs.com/
Aside
End of explanation |
2,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 4 - Current induced domain wall motion
In this tutorial we show how spin transfer torque (STT) can be included in micromagnetic simulations. To illustrate that, we will try to move a domain wall pair using spin-polarised current.
Let us simulate a two-dimensional sample with length $L = 500 \,\text{nm}$, width $w = 20 \,\text{nm}$ and discretisation cell $(2.5 \,\text{nm}, 2.5 \,\text{nm}, 2.5 \,\text{nm})$. The material parameters are
Step1: Because we want to move a DW pair, we need to initialise the magnetisation in an appropriate way before we relax the system.
Step2: Now, we can relax the magnetisation.
Step3: Now we can add the STT term to the dynamics equation.
Step4: And drive the system for half a nanosecond
Step5: We see that the DW pair has moved in the positive $x$ direction.
Exercise
Modify the code below (which is a copy of the example from above) to obtain one domain wall instead of a domain wall pair and move it using the same current. | Python Code:
import oommfc as oc            # OOMMF-based micromagnetic calculator (assumed import, used as oc.* throughout this code)
import discretisedfield as df  # finite-difference field objects (assumed import for df.Field below)

# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinskii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
mesh = oc.Mesh(p1=p1, p2=p2, cell=cell)
# Micromagnetic system definition
system = oc.System(name="domain_wall_pair")
system.hamiltonian = oc.Exchange(A=A) + \
oc.DMI(D=D, kind="interfacial") + \
oc.UniaxialAnisotropy(K=K, u=u)
system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)
Explanation: Tutorial 4 - Current induced domain wall motion
In this tutorial we show how spin transfer torque (STT) can be included in micromagnetic simulations. To illustrate that, we will try to move a domain wall pair using spin-polarised current.
Let us simulate a two-dimensional sample with length $L = 500 \,\text{nm}$, width $w = 20 \,\text{nm}$ and discretisation cell $(2.5 \,\text{nm}, 2.5 \,\text{nm}, 2.5 \,\text{nm})$. The material parameters are:
exchange energy constant $A = 15 \,\text{pJ}\,\text{m}^{-1}$,
Dzyaloshinskii-Moriya energy constant $D = 3 \,\text{mJ}\,\text{m}^{-2}$,
uniaxial anisotropy constant $K = 0.5 \,\text{MJ}\,\text{m}^{-3}$ with easy axis $\mathbf{u}$ in the out of plane direction $(0, 0, 1)$,
gyromagnetic ratio $\gamma = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$, and
Gilbert damping $\alpha=0.3$.
End of explanation
def m_value(pos):
x, y, z = pos
if 20e-9 < x < 40e-9:
return (0, 1e-8, -1)
else:
return (0, 1e-8, 1)
# We have added the y-component of 1e-8 to the magnetisation to be able to
# plot the vector field. This will not be necessary in the long run.
system.m = df.Field(mesh, value=m_value, norm=Ms)
system.m.plot_slice("z", 0);
Explanation: Because we want to move a DW pair, we need to initialise the magnetisation in an appropriate way before we relax the system.
End of explanation
md = oc.MinDriver()
md.drive(system)
system.m.plot_slice("z", 0);
Explanation: Now, we can relax the magnetisation.
End of explanation
ux = 400 # velocity in x direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += oc.STT(u=(ux, 0, 0), beta=beta) # please notice the use of `+=` operator
Explanation: Now we can add the STT term to the dynamics equation.
End of explanation
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.plot_slice("z", 0);
Explanation: And drive the system for half a nanosecond:
End of explanation
# Definition of parameters
L = 500e-9 # sample length (m)
w = 20e-9 # sample width (m)
d = 2.5e-9 # discretisation cell size (m)
Ms = 5.8e5 # saturation magnetisation (A/m)
A = 15e-12 # exchange energy constant (J/m)
D = 3e-3 # Dzyaloshinskii-Moriya energy constant (J/m**2)
K = 0.5e6 # uniaxial anisotropy constant (J/m**3)
u = (0, 0, 1) # easy axis
gamma = 2.211e5 # gyromagnetic ratio (m/As)
alpha = 0.3 # Gilbert damping
# Mesh definition
p1 = (0, 0, 0)
p2 = (L, w, d)
cell = (d, d, d)
mesh = oc.Mesh(p1=p1, p2=p2, cell=cell)
# Micromagnetic system definition
system = oc.System(name="domain_wall")
system.hamiltonian = oc.Exchange(A=A) + \
oc.DMI(D=D, kind="interfacial") + \
oc.UniaxialAnisotropy(K=K, u=u)
system.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)
def m_value(pos):
x, y, z = pos
if 20e-9 < x < 40e-9:
return (0, 1e-8, -1)
else:
return (0, 1e-8, 1)
# We have added the y-component of 1e-8 to the magnetisation to be able to
# plot the vector field. This will not be necessary in the long run.
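# One possible way to get a single domain wall for the exercise (an illustrative
# sketch, not the only valid answer): initialise the magnetisation with a single
# transition instead of the narrow reversed strip above, e.g.
#
# def m_value(pos):
#     x, y, z = pos
#     if x < 50e-9:
#         return (0, 1e-8, -1)
#     else:
#         return (0, 1e-8, 1)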
system.m = df.Field(mesh, value=m_value, norm=Ms)
system.m.plot_slice("z", 0);
md = oc.MinDriver()
md.drive(system)
system.m.plot_slice("z", 0);
ux = 400 # velocity in x direction (m/s)
beta = 0.5 # non-adiabatic STT parameter
system.dynamics += oc.STT(u=(ux, 0, 0), beta=beta)
td = oc.TimeDriver()
td.drive(system, t=0.5e-9, n=100)
system.m.plot_slice("z", 0);
Explanation: We see that the DW pair has moved in the positive $x$ direction.
Exercise
Modify the code below (which is a copy of the example from above) to obtain one domain wall instead of a domain wall pair and move it using the same current.
End of explanation |
2,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial of how to use scikit-criteria AHP extension module
Author
Step1: On the other hand, AHP uses 2 totally different inputs
$$
AHP(CvC, AvA)
$$
Where
Step2: The function ahp.t (from triangular) accepts the lower half of the matrix and returns a complete matrix with the reciprocal values
Step3: 2. Validating the data (optional)
You can validate if some matrix has the correct values for AHP with the function
python
ahp.validate_ahp_matrix(n, mtx)
Where
Step4: Example 2 - invalid Saaty values in the cell (0, 1)
Step5: Example 3
Step6: 3. Running AHP
First let's create a criteria vs criteria matrix
Step7: And let's assume we have 3 alternatives, and because we have 3 criteria
Step8: Now run the ahp.ahp() function. This function returns 6 values
The rank of the alternatives (rank).
The points of every alternative (points).
The criteria consistency index (crit_ci)
The alternative vs alternative by criteria consistency index (avabc_ci)
The criteria consistency ratio (crit_cr).
ranked, points, crit_ci, avabc_ci, crit_cr, avabc_cr
The alternative vs alternative by criteria consistency ratio (avabc_cr)
Step9: 4. Analysing the results
The rank vector
Step10: So our best alternative is the first one, and the worst is the second one
The final points of every alternative is | Python Code:
from skcriteria import Data, MIN, MAX
mtx = [
[1, 2, 3], # alternative 1
[4, 5, 6], # alternative 2
]
mtx
# let's say the first two criteria are
# for maximization and the last one for minimization
criteria = [MAX, MAX, MIN]
criteria
# et’s asume we know in our case, that the importance of
# the autonomy is the 50%, the confort only a 5% and
# the price is 45%
weights=[.5, .05, .45]
weights
data = Data(mtx, criteria, weights)
data
Explanation: Tutorial of how to use scikit-criteria AHP extension module
Author: Juan B Cabral jbc.develop@gmail.com
2018-feb-11
Considerations
This tutorial assumes that you know the AHP method
The full example is here (Spanish only)
Citation
If you use scikit-criteria or the AHP extension in a scientific publication or thesis, we would appreciate citations to the following paper:
Cabral, Juan B., Nadia Ayelen Luczywo, and José Luis Zanazzi 2016 Scikit-Criteria: Colección de Métodos de Análisis Multi-Criterio Integrado Al Stack Científico de Python. In XLV Jornadas Argentinas de Informática E Investigación Operativa (45JAIIO)-XIV Simposio Argentino de Investigación Operativa (SIO) (Buenos Aires, 2016) Pp. 59–66. http://45jaiio.sadio.org.ar/sites/default/files/Sio-23.pdf.
Bibtex entry:
bibtex
@inproceedings{scikit-criteria,
author={
Juan B Cabral and Nadia Ayelen Luczywo and Jos\'{e} Luis Zanazzi},
title={
Scikit-Criteria: Colecci\'{o}n de m\'{e}todos de an\'{a}lisis
multi-criterio integrado al stack cient\'{i}fico de {P}ython},
booktitle = {
XLV Jornadas Argentinas de Inform{\'a}tica
e Investigaci{\'o}n Operativa (45JAIIO)-
XIV Simposio Argentino de Investigaci\'{o}n Operativa (SIO)
(Buenos Aires, 2016)},
year={2016},
pages = {59--66},
url={http://45jaiio.sadio.org.ar/sites/default/files/Sio-23.pdf}
}
Installation
Installing Scikit-Criteria: http://scikit-criteria.org/en/latest/install.html
Download the ahp.py module.
Why AHP is not part of Scikit-Criteria
The main problem is how the data are fed to AHP. All the methods included in Scikit-Criteria use the classical
$$
SkC_{madm}(mtx, criteria, weights)
$$
Where
$SkC_{madm}$ is a Scikit-Criteria multi-attribute-decision-making method
$mtx$ is the alternative 2D array-like matrix, where every column is a criterion, and every row is an alternative.
$criteria$ is a 1D array-like with the same number of elements as columns in the alternative matrix (mtx), where every component represents the optimal sense of every criterion.
$weights$ is a 1D array-like of weights.
All these 3 components can be modeled as the single scikit-criteria DATA object:
End of explanation
import ahp
Explanation: On the other hand, AHP uses 2 totally different inputs
$$
AHP(CvC, AvA)
$$
Where:
$CvC$: A triangular matrix of criteria vs criteria with values from the Saaty scale
$AvA$: A collection of $n$ triangular matrices of alternative vs alternative with values from the Saaty scale
AHP Tutorial
1. Creating triangular matrices
First we need to import the ahp module
End of explanation
mtx = ahp.t(
[[1],
[1., 1],
[1/3.0, 1/6.0, 1]])
mtx
Explanation: The function ahp.t (from triangular) accepts the lower half of the matrix and returns a complete matrix with the reciprocal values
End of explanation
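As a quick numerical illustration of the reciprocal property (a sketch that only uses numpy and the mtx array returned above), every pair of entries should satisfy a_ij * a_ji = 1:
import numpy as np
print(np.allclose(mtx * mtx.T, np.ones_like(mtx)))  # True for a well-formed Saaty matrix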
# this validates the data
ahp.validate_ahp_matrix(3, mtx)
Explanation: 2. Validating the data (optional)
You can validate if some matrix has the correct values for AHP with the function
python
ahp.validate_ahp_matrix(n, mtx)
Where:
n: the number of rows and columns (remember all matrices in AHP have the same number of rows and columns).
mtx: The matrix to validate.
Example 1 - Correct Matrix
End of explanation
invalid_mtx = mtx.copy()
invalid_mtx[0, 1] = 89
invalid_mtx
ahp.validate_ahp_matrix(3, invalid_mtx)
Explanation: Example 2 - invalid Saaty values in the cell (0, 1)
End of explanation
invalid_mtx = mtx.copy()
invalid_mtx[0, 1] = 0.5
invalid_mtx
ahp.validate_ahp_matrix(3, invalid_mtx)
Explanation: Example 3: Matrix with non-reciprocal values
End of explanation
crit_vs_crit = ahp.t([
[1.],
[1./3., 1.],
[1./3., 1./2., 1.]
])
crit_vs_crit
Explanation: 3. Running AHP
First let's create a criteria vs criteria matrix
End of explanation
alt_vs_alt_by_crit = [
ahp.t([[1.],
[1./5., 1.],
[1./3., 3., 1.]]),
ahp.t([
[1.],
[9., 1.],
[3., 1./5., 1.]]),
ahp.t([[1.],
[1/2., 1.],
[5., 7., 1.]]),
]
alt_vs_alt_by_crit
Explanation: And let's assume we have 3 alternatives, and because we have 3 criteria, 3 alternative vs alternative matrices must be created
End of explanation
result = ahp.ahp(crit_vs_crit, alt_vs_alt_by_crit)
rank, points, crit_ci, avabc_ci, crit_cr, avabc_cr = result
Explanation: Now run the ahp.ahp() function. This function returns 6 values
The rank of the alternatives (rank).
The points of every alternative (points).
The criteria consistency index (crit_ci)
The alternative vs alternative by criteria consistency index (avabc_ci)
The criteria consistency ratio (crit_cr).
ranked, points, crit_ci, avabc_ci, crit_cr, avabc_cr
The alternative vs alternative by criteria consistency ratio (avabc_cr)
End of explanation
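Before looking at the ranking, it is good practice to glance at the consistency ratios returned above (a small illustrative check; the 0.1 cut-off is Saaty's usual rule of thumb, not something enforced by the module):
print('criteria consistency ratio:', crit_cr)
print('alternative vs alternative consistency ratios:', avabc_cr)
# judgements are commonly considered acceptably consistent when CR < 0.1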
rank
Explanation: 4. Analysing the results
The rank vector:
End of explanation
points
Explanation: So our best alternative is the first one, and the worst is the second one
The final points of every alternative is:
End of explanation |
2,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overfitting demo
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$
Step1: Create random values for x in interval [0,1)
Step2: Compute y
Step3: Add random Gaussian noise to y
Step4: Put data into an SFrame to manipulate later
Step5: Create a function to plot the data, since we'll do it many times
Step6: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree
Step7: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data"
Step8: Define function to plot data and predictions made, since we are going to use it many times.
Step9: Create a function that prints the polynomial coefficients in a pretty way
Step10: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above
Step11: Inspect learned parameters
Step12: Form and plot our predictions along a grid of x values
Step13: Fit a degree-4 polynomial
Step14: Fit a degree-16 polynomial
Step15: Woah!!!! Those coefficients are crazy! On the order of 10^6.
Step16: Above
Step17: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
Step18: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
Step19: Run LOO cross validation for "num" values of lambda, on a log scale
Step20: Plot results of estimating LOO for each value of lambda
Step21: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
Step22: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
Step23: Let's look at fits for a sequence of increasing lambda values
Step24: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients, $\|w\|_1$.
Define our function to solve the lasso objective for a polynomial regression model of any degree
Step25: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "L1_penalty" | Python Code:
import graphlab
import math
import random
import numpy
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Overfitting demo
Create a dataset based on a true sinusoidal relationship
Let's look at a synthetic dataset consisting of 30 points drawn from the sinusoid $y = \sin(4x)$:
End of explanation
random.seed(1)
n = 30
x = graphlab.SArray([random.random() for i in range(n)]).sort()
Explanation: Create random values for x in interval [0,1)
End of explanation
y = x.apply(lambda x: math.sin(4*x))
Explanation: Compute y
End of explanation
e = graphlab.SArray([random.gauss(0,1.0/3.0) for i in range(n)])
y = y + e
Explanation: Add random Gaussian noise to y
End of explanation
data = graphlab.SFrame({'X1':x,'Y':y})
data
Explanation: Put data into an SFrame to manipulate later
End of explanation
def plot_data(data):
plt.plot(data['X1'],data['Y'],'k.')
plt.xlabel('x')
plt.ylabel('y')
plot_data(data)
Explanation: Create a function to plot the data, since we'll do it many times
End of explanation
def polynomial_features(data, deg):
data_copy=data.copy()
for i in range(1,deg):
data_copy['X'+str(i+1)]=data_copy['X'+str(i)]*data_copy['X1']
return data_copy
Explanation: Define some useful polynomial regression functions
Define a function to create our features for a polynomial regression model of any degree:
End of explanation
def polynomial_regression(data, deg):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,l1_penalty=0.,
validation_set=None,verbose=False)
return model
Explanation: Define a function to fit a polynomial linear regression model of degree "deg" to the data in "data":
End of explanation
def plot_poly_predictions(data, model):
plot_data(data)
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Create 200 points in the x axis and compute the predicted value for each point
xs = graphlab.SFrame({'X1':[i/200.0 for i in range(200)]})
ys = model.predict(polynomial_features(xs,deg))
# plot predictions
plt.plot(xs['X1'], ys, 'g-', label='degree ' + str(deg) + ' fit')
plt.legend(loc='upper left')
plt.axis([0,1,-1.5,2])
Explanation: Define function to plot data and predictions made, since we are going to use it many times.
End of explanation
def print_coefficients(model):
# Get the degree of the polynomial
deg = len(model.coefficients['value'])-1
# Get learned parameters as a list
w = list(model.coefficients['value'])
# Numpy has a nifty function to print out polynomials in a pretty way
# (We'll use it, but it needs the parameters in the reverse order)
print 'Learned polynomial for degree ' + str(deg) + ':'
w.reverse()
print numpy.poly1d(w)
Explanation: Create a function that prints the polynomial coefficients in a pretty way :)
End of explanation
model = polynomial_regression(data, deg=2)
Explanation: Fit a degree-2 polynomial
Fit our degree-2 polynomial to the data generated above:
End of explanation
print_coefficients(model)
Explanation: Inspect learned parameters
End of explanation
plot_poly_predictions(data,model)
Explanation: Form and plot our predictions along a grid of x values:
End of explanation
model = polynomial_regression(data, deg=4)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Fit a degree-4 polynomial
End of explanation
model = polynomial_regression(data, deg=16)
print_coefficients(model)
Explanation: Fit a degree-16 polynomial
End of explanation
plot_poly_predictions(data,model)
Explanation: Woah!!!! Those coefficients are crazy! On the order of 10^6.
End of explanation
def polynomial_ridge_regression(data, deg, l2_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=l2_penalty,
validation_set=None,verbose=False)
return model
Explanation: Above: Fit looks pretty wild, too. Here's a clear example of how overfitting is associated with very large magnitude estimated coefficients.
#
#
Ridge Regression
Ridge regression aims to avoid overfitting by adding a cost to the RSS term of standard least squares that depends on the 2-norm of the coefficients, $\|w\|_2$. The result is penalizing fits with large coefficients. The strength of this penalty, and thus the fit vs. model complexity balance, is controlled by a parameter lambda (here called "L2_penalty").
Define our function to solve the ridge objective for a polynomial regression model of any degree:
End of explanation
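For reference, the objective this ridge fit minimizes can be written (up to how GraphLab scales the penalty internally) as
$$\hat{w}^{\text{ridge}} = \arg\min_{w}\ \text{RSS}(w) + \lambda\,\|w\|_2^2,$$
where RSS is the residual sum of squares on the training data and $\lambda$ is the l2_penalty argument.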
model = polynomial_ridge_regression(data, deg=16, l2_penalty=1e-25)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Perform a ridge fit of a degree-16 polynomial using a very small penalty strength
End of explanation
# LOO cross validation -- return the average MSE
def loo(data, deg, l2_penalty_values):
    # Create polynomial features
    data = polynomial_features(data, deg)
    # Create as many folds for cross validation as number of data points
num_folds = len(data)
folds = graphlab.cross_validation.KFold(data,num_folds)
# for each value of l2_penalty, fit a model for each fold and compute average MSE
l2_penalty_mse = []
min_mse = None
best_l2_penalty = None
for l2_penalty in l2_penalty_values:
next_mse = 0.0
for train_set, validation_set in folds:
# train model
model = graphlab.linear_regression.create(train_set,target='Y',
l2_penalty=l2_penalty,
validation_set=None,verbose=False)
# predict on validation set
y_test_predicted = model.predict(validation_set)
# compute squared error
next_mse += ((y_test_predicted-validation_set['Y'])**2).sum()
# save squared error in list of MSE for each l2_penalty
next_mse = next_mse/num_folds
l2_penalty_mse.append(next_mse)
if min_mse is None or next_mse < min_mse:
min_mse = next_mse
best_l2_penalty = l2_penalty
return l2_penalty_mse,best_l2_penalty
Explanation: Perform a ridge fit of a degree-16 polynomial using a "good" penalty strength
We will learn about cross validation later in this course as a way to select a good value of the tuning parameter (penalty strength) lambda. Here, we consider "leave one out" (LOO) cross validation, which one can show approximates average mean square error (MSE). As a result, choosing lambda to minimize the LOO error is equivalent to choosing lambda to minimize an approximation to average MSE.
End of explanation
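Concretely, with one data point per fold, the quantity computed by loo for each $\lambda$ is
$$\text{LOO}(\lambda) = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i^{(-i)}(\lambda)\right)^2,$$
where $\hat{y}_i^{(-i)}(\lambda)$ is the prediction for point $i$ from a model trained on the other $N-1$ points.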
l2_penalty_values = numpy.logspace(-4, 10, num=10)
l2_penalty_mse,best_l2_penalty = loo(data, 16, l2_penalty_values)
Explanation: Run LOO cross validation for "num" values of lambda, on a log scale
End of explanation
plt.plot(l2_penalty_values,l2_penalty_mse,'k-')
plt.xlabel(r'$\lambda$ (l2_penalty)')
plt.ylabel('LOO cross validation error')
plt.xscale('log')
plt.yscale('log')
Explanation: Plot results of estimating LOO for each value of lambda
End of explanation
best_l2_penalty
model = polynomial_ridge_regression(data, deg=16, l2_penalty=best_l2_penalty)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Find the value of lambda, $\lambda_{\mathrm{CV}}$, that minimizes the LOO cross validation error, and plot resulting fit
End of explanation
model = polynomial_ridge_regression(data, deg=16, l2_penalty=100)
print_coefficients(model)
plot_poly_predictions(data,model)
Explanation: Perform a ridge fit of a degree-16 polynomial using a very large penalty strength
End of explanation
for l2_penalty in [1e-25, 1e-20, 1e-8, 1e-3, 1e2]:
model = polynomial_ridge_regression(data, deg=16, l2_penalty=l2_penalty)
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
Explanation: Let's look at fits for a sequence of increasing lambda values
End of explanation
def polynomial_lasso_regression(data, deg, l1_penalty):
model = graphlab.linear_regression.create(polynomial_features(data,deg),
target='Y', l2_penalty=0.,
l1_penalty=l1_penalty,
validation_set=None,
solver='fista', verbose=False,
max_iterations=2000, convergence_threshold=1e-10)
return model
Explanation: Lasso Regression
Lasso regression jointly shrinks coefficients to avoid overfitting, and implicitly performs feature selection by setting some coefficients exactly to 0 for sufficiently large penalty strength lambda (here called "L1_penalty"). In particular, lasso takes the RSS term of standard least squares and adds a 1-norm cost of the coefficients, $\|w\|_1$.
Define our function to solve the lasso objective for a polynomial regression model of any degree:
End of explanation
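Analogously, the lasso fit below minimizes (again up to internal scaling of the penalty)
$$\hat{w}^{\text{lasso}} = \arg\min_{w}\ \text{RSS}(w) + \lambda\,\|w\|_1,$$
which is what allows some coefficients to be driven exactly to zero.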
for l1_penalty in [1e-10, 1e-2, 1e-01, 1, 1e1]:
model = polynomial_lasso_regression(data, deg=16, l1_penalty=l1_penalty)
print 'l1_penalty = %e' % l1_penalty
print 'number of nonzeros = %d' % (model.coefficients['value']).nnz()
print_coefficients(model)
print '\n'
plt.figure()
plot_poly_predictions(data,model)
plt.title('LASSO, lambda = %.2e, # nonzeros = %d' % (l1_penalty, (model.coefficients['value']).nnz()))
Explanation: Explore the lasso solution as a function of a few different penalty strengths
We refer to lambda in the lasso case below as "L1_penalty"
End of explanation |
2,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks With BatchFlow
Now it's time to talk about convolutional neural networks and in this notebook you will find out how to do
Step1: You don't need to implement a MNIST dataset. It is already done for you.
Step2: We can use deep learning frameworks such as TensorFlow or PyTorch to make a neural network. These frameworks have a lot of differences under the hood. Batchflow allows us not to dive deep into each of them and use the same model configuration, thereby allowing us to build framework-agnostic models.
But before that, we should define the model class 'model' and the channel positions 'channels' (for TensorFlow models - 'last', for PyTorch models - 'first') in the config.
There are also predefined models of both frameworks. You can use them without additional configuration.
Model configuration
Step3: As we already learned from the previous tutorials, first of all you have to define model configuration and create train and test pipelines.
A little bit about the structure of batchflow model
Step5: Train pipeline
We define our custom function for data augmentation.
Step6: When config is defined, next step is to create a pipeline. Note that rotate and scale are methods of the ImagesBatch class. You can see all avalible augmentations in images tutorial.
In contrast to them apply_transform is a function from Batch class. It is worth mentioning because it runs our function custom_filter in parallel. About parallel method read docs.
Step7: Validation pipeline
Testing on the augmented data
Step8: Training process
We introduce early stopping to terminate the model training when the average accuracy over the last few epochs exceeds 90 percent.
Step9: Take a look at the loss history during training.
Step10: Results
Our network is ready for inference. Now we don't use data augmentations. Let's take a look at the predictions.
Step11: It's always interesting to look at the images, so let's draw them. | Python Code:
import sys
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import PIL
import PIL.ImageFilter  # ensure the ImageFilter submodule is available for custom_filter below
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
# the following line is not required if BatchFlow is installed as a python package.
sys.path.append('../..')
from batchflow import D, B, V, C, R, P
from batchflow.utils import plot_images
from batchflow.opensets import MNIST
from batchflow.models.tf import TFModel
from batchflow.models.torch import TorchModel
from batchflow.models.metrics import ClassificationMetrics
plt.style.use('ggplot')
Explanation: Convolutional Neural Networks With BatchFlow
Now it's time to talk about convolutional neural networks and in this notebook you will find out how to do:
* data augmentation;
* early stopping.
End of explanation
mnist = MNIST()
Explanation: You don't need to implement a MNIST dataset. It is already done for you.
End of explanation
config = {
'model': TorchModel,
'channels': 'first'}
# or for TensorFlow model
# config = {
# 'model': TFModel,
# 'channels': 'last'}
Explanation: We can use deep learning frameworks such as TensorFlow or PyTorch to make a neural network. These frameworks have a lot of differences under the hood. Batchflow allows us not to dive deep into each of them and use the same model configuration, thereby allowing us to build framework-agnostic models.
But before that, we should define the model class 'model' and the channel positions 'channels' (for TensorFlow models - 'last', for PyTorch models - 'first') in the config.
There are also predefined models of both frameworks. You can use them without additional configuration.
Model configuration
End of explanation
model_config = {
'inputs/images/shape': B.image_shape,
'inputs/labels/classes': D.num_classes,
'initial_block/inputs': 'images',
'body': {'layout': 'cna cna cna',
'filters': [16, 32, 64],
'kernel_size': [7, 5, 3],
'strides': 2},
'head': {'layout': 'Pf',
'units': 10},
'loss': 'ce',
'optimizer': 'Adam',
'output': dict(predicted=['proba', 'labels'])
}
Explanation: As we already learned from the previous tutorials, first of all you have to define model configuration and create train and test pipelines.
A little bit about the structure of batchflow model:
* initial_block - block containing the input layers;
* body - the main part of the model;
* head - outputs layers, like global average pooling or dense layers.
Let's create a dict with configuration for our model — model_config. This dict is used when the model is initialized. You can override default parameters or add new parameters by specifying a model_config key like 'body/layout' and the parameters for that key. Use it in the same way for keys such as 'initial_block/inputs' or 'head/units'.
The main parameter of each architecture is 'layout'. It is a sequence of letters, each letter meaning operation. For example, operations in our model:
* c - convolution layer,
* b - batch normalization,
* a - activation,
* P - global pooling,
* f - dense layer (fully connected).
In our configuration 'body/filters' and 'body/kernel_size' are lists with a length equal to the number of convolutions, and they store individual parameters for each convolution. And 'body/strides' is an integer — therefore, the same value is used for all convolutional layers.
In docs you can read more.
End of explanation
def custom_filter(image, kernel_weights=None):
    """Apply filter with custom kernel to image.

    Parameters
    ----------
    kernel_weights: np.array
        Weights of kernel.

    Returns
    -------
    filtered image
    """
if kernel_weights is None:
kernel_weights = np.ones((3,3))
kernel_weights[1][1] = 10
kernel = PIL.ImageFilter.Kernel(kernel_weights.shape, kernel_weights.ravel())
return image.filter(kernel)
Explanation: Train pipeline
We define our custom function for data augmentation.
End of explanation
train_pipeline = (
mnist.train.p
.init_variable('loss_history', default=[])
.init_model('dynamic', C('model'), 'conv', config=model_config)
.apply_transform(custom_filter, src='images', p=0.8)
.shift(offset=P(R('randint', 8, size=2)), p=0.8)
.rotate(angle=P(R('uniform', -10, 10)), p=0.8)
.scale(factor=P(R('uniform', 0.8, 1.2, size=R([1, 2]))), preserve_shape=True, p=0.8)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.train_model('conv', fetches='loss', images=B('images'), targets=B('labels'),
save_to=V('loss_history', mode='a'))
) << config
Explanation: When the config is defined, the next step is to create a pipeline. Note that rotate and scale are methods of the ImagesBatch class. You can see all available augmentations in the images tutorial.
In contrast, apply_transform is a method of the Batch class. It is worth mentioning because it runs our custom_filter function in parallel. Read the docs about the parallel method.
End of explanation
validation_pipeline = (
mnist.test.p
.init_variable('predictions')
.init_variable('metrics', default=None)
.import_model('conv', train_pipeline)
.apply_transform(custom_filter, src='images', p=0.8)
.shift(offset=P(R('randint', 8, size=2)), p=0.8)
.rotate(angle=P(R('uniform', -10, 10)), p=0.8)
.scale(factor=P(R('uniform', 0.8, 1.2, size=R([1, 2]))), preserve_shape=True, p=0.8)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('conv', images=B('images'),
fetches='predictions', save_to=V('predictions'))
.gather_metrics(ClassificationMetrics, targets=B('labels'), predictions=V('predictions'),
fmt='logits', axis=-1, save_to=V('metrics', mode='a'))
) << config
Explanation: Validation pipeline
Testing on the augmented data
End of explanation
MAX_ITER = 500
FREQUENCY = N_LAST = 20
batch_size = 128
for curr_iter in tqdm(range(1, MAX_ITER + 1)):
train_pipeline.next_batch(batch_size)
validation_pipeline.next_batch(batch_size)
if curr_iter % FREQUENCY == 0:
metrics = validation_pipeline.v('metrics')
accuracy = metrics[-N_LAST:].evaluate('accuracy')
#Early stopping
if accuracy > 0.9:
print('Early stop on {} iteration. Accuracy: {}'.format(curr_iter, accuracy))
break
Explanation: Training process
We introduce early stopping to terminate the model training when the average accuracy over the last few epochs exceeds 90 percent.
End of explanation
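Once the loop stops, it can also be useful to look at accuracy aggregated over all the validation batches collected so far, not just the last few (illustrative, reusing the same metrics object and evaluate call as in the loop above):
all_metrics = validation_pipeline.v('metrics')
print('accuracy over all collected validation batches:', all_metrics.evaluate('accuracy'))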
plt.figure(figsize=(15, 5))
plt.plot(train_pipeline.v('loss_history'))
plt.xlabel("Iterations"), plt.ylabel("Loss")
plt.show()
Explanation: Take a look at the loss history during training.
End of explanation
inference_pipeline = (mnist.test.p
.init_variables('proba', 'labels')
.import_model('conv', train_pipeline)
.to_array(channels=C('channels'), dtype=np.float32)
.multiply(multiplier=1/255)
.predict_model('conv', images=B('images'),
fetches=['predicted_proba', 'predicted_labels'],
save_to=[V('proba'), V('labels')])) << config
Explanation: Results
Our network is ready for inference. Now we don't use data augmentations. Let's take a look at the predictions.
End of explanation
batch = inference_pipeline.next_batch(12, shuffle=True)
plot_images(np.squeeze(batch.images), batch.labels,
batch.pipeline.v('proba'), ncols=4, figsize=(30, 35))
Explanation: It's always interesting to look at the images, so let's draw them.
End of explanation |
2,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Preprocessing for Machine Learning
Learning Objectives
* Understand the different approaches for data preprocessing in developing ML models
* Use Dataflow to perform data preprocessing steps
Introduction
In the previous notebook we achieved an RMSE of 3.85. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.
Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into prepared data which is necessary for the model. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, BigQuery, Cloud Dataflow and Tensorflow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.
One perspective in which to categorize different types of data preprocessing operations is in terms of the granularity of the operation. Here, we will consider the following three types of operations
Step1: Next, set the environment variables related to your GCP Project.
Step6: Create data preprocessing job with Cloud Dataflow
The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that in this case we'd have to remember to repeat that preprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.
It's worth noting that while we could read from BQ directly from TensorFlow, it is quite convenient to export to CSV and do the training off CSV. We can do this at scale with Cloud Dataflow. Furthermore, because we are running this on the cloud, you should go to the GCP Console to view the status of the job. It will take several minutes for the preprocessing job to launch.
Define our query and pipeline functions
To start we'll copy over the create_query function we created in the 01_bigquery/c_extract_and_benchmark notebook.
Step7: Then, we'll write the csv we create to a Cloud Storage bucket. So, we'll look to see that the location is empty, and if not clear out its contents so that it is.
Step9: Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a to_csv function which takes a row dictionary (a dictionary created from a BigQuery reader representing each row of a dataset) and returns a comma separated string for each record
Step11: Next, we define our primary preprocessing function. Reading through the code this creates a pipeline to read data from BigQuery, use our to_csv function above to make a comma separated string, then write to a file in Google Cloud Storage.
Step12: Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the RUNNER variable as DirectRunner. To run our pipeline in the cloud, we set RUNNER to be DataflowRunner. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline.
As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset.
Run Beam pipeline locally
We'll start by testing our pipeline locally. This takes up to 5 minutes. You will see a message "Done" when it has finished.
Step13: Run Beam pipeline on Cloud Dataflow
Again, we'll clear out our bucket to GCS to ensure a fresh run.
Step14: The following step will take 15-20 minutes. Monitor job progress on the Dataflow section of GCP Console. Note, you can change the first argument to "None" to process the full dataset.
Step15: Once the job finishes, we can look at the files that have been created and have a look at what they contain. You will notice that the files have been sharded into many csv files.
Step16: Develop a model with new inputs
We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called sample so we can develop locally first.
Step17: To begin let's copy the model.py and task.py we developed in the previous notebooks here.
Step18: Let's have a look at the files contained within the taxifaremodel folder. Within model.py we see that feature_cols has three engineered features.
Step19: We can also see the engineered features that are created by the add_engineered_features function here.
Step20: We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about 5 minutes to complete.
Step21: We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job.
Step22: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
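As a rough illustration of that idea (this is only a sketch with assumed placeholder names, not the notebook's actual model.py code), the serving input function can call the same feature-engineering helper on raw placeholders so that serving and training share identical logic:
```
import tensorflow as tf

def add_engineered_features(features):
    # Same instance-level logic at serving time as at training time.
    features['latdiff'] = features['pickuplat'] - features['dropofflat']
    features['londiff'] = features['pickuplon'] - features['dropofflon']
    features['euclidean'] = tf.sqrt(features['latdiff']**2 + features['londiff']**2)
    return features

def serving_input_fn():
    feature_placeholders = {
        'dayofweek': tf.placeholder(tf.int32, [None]),
        'hourofday': tf.placeholder(tf.int32, [None]),
        'pickuplon': tf.placeholder(tf.float32, [None]),
        'pickuplat': tf.placeholder(tf.float32, [None]),
        'dropofflon': tf.placeholder(tf.float32, [None]),
        'dropofflat': tf.placeholder(tf.float32, [None]),
    }
    features = add_engineered_features(dict(feature_placeholders))
    return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
```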
Step23: To test out prediction with our model, we create a temporary json file containing the expected feature values.
Step24: Train on the Cloud
This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the ML Engine section of Cloud Console and wait for the training job to complete.
Step25: Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved.
Step26: As before, we can use the saved_model_cli to examine the exported signature.
Step27: And check out model's prediction with a local predict job on our test file.
Step28: Hyperparameter tuning
Recall the hyper-parameter tuning notebook. We can repeat the process there to decide the best parameters to use for model. Based on that run, I ended up choosing | Python Code:
#Ensure that we have the correct version of Apache Beam installed
!pip freeze | grep apache-beam || sudo pip install apache-beam[gcp]==2.12.0
import tensorflow as tf
import apache_beam as beam
import shutil
import os
print(tf.__version__)
Explanation: Data Preprocessing for Machine Learning
Learning Objectives
* Understand the different approaches for data preprocessing in developing ML models
* Use Dataflow to perform data preprocessing steps
Introduction
In the previous notebook we achieved an RMSE of 3.85. Let's see if we can improve upon that by creating a data preprocessing pipeline in Cloud Dataflow.
Preprocessing data for a machine learning model involves both data engineering and feature engineering. During data engineering, we convert raw data into prepared data which is necessary for the model. Feature engineering then takes that prepared data and creates the features expected by the model. We have already seen various ways we can engineer new features for a machine learning model and where those steps take place. We also have flexibility as to where data preprocessing steps can take place; for example, BigQuery, Cloud Dataflow and Tensorflow. In this lab, we'll explore different data preprocessing strategies and see how they can be accomplished with Cloud Dataflow.
One perspective in which to categorize different types of data preprocessing operations is in terms of the granularity of the operation. Here, we will consider the following three types of operations:
1. Instance-level transformations
2. Full-pass transformations
3. Time-windowed aggregations
Cloud Dataflow can perform each of these types of operations and is particularly useful when performing computationally expensive operations as it is an autoscaling service for batch and streaming data processing pipelines. We'll say a few words about each of these below. For more information, have a look at this article about data preprocessing for machine learning from Google Cloud.
1. Instance-level transformations
These are transformations which take place during training and prediction, looking only at values from a single data point. For example, they might include clipping the value of a feature, polynomially expand a feature, multiply two features, or compare two features to create a Boolean flag.
It is necessary to apply the same transformations at training time and at prediction time. Failure to do this results in training/serving skew and will negatively affect the performance of the model.
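As a toy sketch (not code from this notebook), an instance-level transform for this dataset needs only the values of a single row, so the identical function can be reused verbatim in the serving path:
```
def add_trip_deltas(row):
    # Instance-level: uses only this row's values, no dataset-wide statistics.
    row['londiff'] = row['dropofflon'] - row['pickuplon']
    row['latdiff'] = row['dropofflat'] - row['pickuplat']
    return row
```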
2. Full-pass transformations
These transformations occur during training, but occur as instance-level operations during prediction. That is, during training you must analyze the entirety of the training data to compute quantities such as maximum, minimum, mean or variance while at prediction time you need only use those values to rescale or normalize a single data point.
A good example to keep in mind is standard scaling (z-score normalization) of features for training. You need to compute the mean and standard deviation of that feature across the whole training data set, thus it is called a full-pass transformation. At prediction time you use those previously computed values to appropriately normalize the new data point. Failure to do so results in training/serving skew.
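For example (sketch only, assuming the training data sits in a pandas DataFrame called train_df), the statistics come from one full pass over the training set and are then reused unchanged on every new data point:
```
# Full pass over the training data happens once, at training time.
mean_lat = train_df['pickuplat'].mean()
std_lat = train_df['pickuplat'].std()

def scale_pickuplat(value):
    # Instance-level at prediction time: just reuse the stored statistics.
    return (value - mean_lat) / std_lat
```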
3. Time-windowed aggregations
These types of transformations occur during training and at prediction time. They involve creating a feature by summarizing real-time values by aggregating over some temporal window clause. For example, if we wanted our model to estimate the taxi trip time based on the traffic metrics for the route in the last 5 minutes, in the last 10 minutes or the last 30 minutes we would want to create a time-window to aggregate these values.
At prediction time these aggregations have to be computed in real-time from a data stream.
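A rough sketch of the training-time side of such a feature (assuming a pandas DataFrame of trips indexed by pickup timestamp) is a rolling aggregate; at prediction time the equivalent value would have to be maintained by the streaming pipeline itself:
```
trips = trips.sort_index()  # assumes a DatetimeIndex on pickup time
trips['trips_last_5min'] = trips['fare_amount'].rolling('5min').count()
```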
Set environment variables and load necessary libraries
Apache Beam only works in Python 2 at the moment, so switch to the Python 2 kernel in the upper right hand side. Then execute the following cells to install the necessary libraries if they have not been installed already.
End of explanation
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.13" # TF version for CMLE to use
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
## ensure we predict locally with our current Python environment
gcloud config set ml_engine/local_python `which python`
Explanation: Next, set the environment variables related to your GCP Project.
End of explanation
def create_query(phase, sample_size):
    basequery = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK from pickup_datetime) AS dayofweek,
EXTRACT(HOUR from pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
AND passenger_count > 0
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N)) = 1
    """
    if phase == 'TRAIN':
        subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 0)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 70)
        """
    elif phase == 'VALID':
        subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 70)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 85)
        """
    elif phase == 'TEST':
        subsample = """
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) >= (EVERY_N * 85)
AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), EVERY_N * 100)) < (EVERY_N * 100)
        """
    query = basequery + subsample
return query.replace("EVERY_N", sample_size)
Explanation: Create data preprocessing job with Cloud Dataflow
The following code reads from BigQuery and saves the data as-is on Google Cloud Storage. We could also do additional preprocessing and cleanup inside Dataflow. Note that, in this case we'd have to remember to repeat that preprocessing at prediction time to avoid training/serving skew. In general, it is better to use tf.transform which will do this book-keeping for you, or to do preprocessing within your TensorFlow model. We will look at how tf.transform works in another notebook. For now, we are simply moving data from BigQuery to CSV using Dataflow.
It's worth noting that while we could read from BQ directly from TensorFlow, it is quite convenient to export to CSV and do the training off CSV. We can do this at scale with Cloud Dataflow. Furthermore, because we are running this on the cloud, you should go to the GCP Console to view the status of the job. It will take several minutes for the preprocessing job to launch.
Define our query and pipeline functions
To start we'll copy over the create_query function we created in the 01_bigquery/c_extract_and_benchmark notebook.
End of explanation
%%bash
if gsutil ls | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://$BUCKET/taxifare/ch4/taxi_preproc/
fi
Explanation: Then, we'll write the csv we create to a Cloud Storage bucket. So, we'll look to see that the location is empty, and if not clear out its contents so that it is.
End of explanation
def to_csv(rowdict):
    """
    Arguments:
-rowdict: Dictionary. The beam bigquery reader returns a PCollection in
which each row is represented as a python dictionary
Returns:
        -rowstring: a comma separated string representation of the record
    """
days = ["null", "Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"]
CSV_COLUMNS = "fare_amount,dayofweek,hourofday,pickuplon,pickuplat,dropofflon,dropofflat".split(',')
rowstring = ','.join([str(rowdict[k]) for k in CSV_COLUMNS])
return rowstring
Explanation: Next, we'll create a function and pipeline for preprocessing the data. First, we'll define a to_csv function which takes a row dictionary (a dictionary created from a BigQuery reader representing each row of a dataset) and returns a comma separated string for each record
End of explanation
import datetime
def preprocess(EVERY_N, RUNNER):
    """
    Arguments:
-EVERY_N: Integer. Sample one out of every N rows from the full dataset.
Larger values will yield smaller sample
-RUNNER: "DirectRunner" or "DataflowRunner". Specfy to run the pipeline
locally or on Google Cloud respectively.
Side-effects:
-Creates and executes dataflow pipeline.
    See https://beam.apache.org/documentation/programming-guide/#creating-a-pipeline
    """
job_name = "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S")
print("Launching Dataflow job {} ... hang on".format(job_name))
OUTPUT_DIR = "gs://{0}/taxifare/ch4/taxi_preproc/".format(BUCKET)
#dictionary of pipeline options
options = {
"staging_location": os.path.join(OUTPUT_DIR, "tmp", "staging"),
"temp_location": os.path.join(OUTPUT_DIR, "tmp"),
"job_name": "preprocess-taxifeatures" + "-" + datetime.datetime.now().strftime("%y%m%d-%H%M%S"),
"project": PROJECT,
"runner": RUNNER
}
#instantiate PipelineOptions object using options dictionary
opts = beam.pipeline.PipelineOptions(flags = [], **options)
#instantantiate Pipeline object using PipelineOptions
with beam.Pipeline(options=opts) as p:
for phase in ["TRAIN", "VALID", "TEST"]:
query = create_query(phase, EVERY_N)
outfile = os.path.join(OUTPUT_DIR, "{}.csv".format(phase))
(
p | "read_{}".format(phase) >> beam.io.Read(beam.io.BigQuerySource(query = query, use_standard_sql = True))
| "tocsv_{}".format(phase) >> beam.Map(to_csv)
| "write_{}".format(phase) >> beam.io.Write(beam.io.WriteToText(outfile))
)
print("Done")
Explanation: Next, we define our primary preprocessing function. Reading through the code this creates a pipeline to read data from BigQuery, use our to_csv function above to make a comma separated string, then write to a file in Google Cloud Storage.
End of explanation
preprocess("50*10000", "DirectRunner")
Explanation: Now that we have the preprocessing pipeline function, we can execute the pipeline locally or on the cloud. To run our pipeline locally, we specify the RUNNER variable as DirectRunner. To run our pipeline in the cloud, we set RUNNER to be DataflowRunner. In either case, this variable is passed to the options dictionary that we use to instantiate the pipeline.
As with training a model, it is good practice to test your preprocessing pipeline locally with a subset of your data before running it against your entire dataset.
Run Beam pipeline locally
We'll start by testing our pipeline locally. This takes up to 5 minutes. You will see a message "Done" when it has finished.
End of explanation
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi
Explanation: Run Beam pipeline on Cloud Dataflow
Again, we'll clear out our bucket to GCS to ensure a fresh run.
End of explanation
preprocess("50*100", "DataflowRunner")
Explanation: The following step will take 15-20 minutes. Monitor job progress on the Dataflow section of GCP Console. Note, you can change the first argument to "None" to process the full dataset.
End of explanation
%%bash
gsutil ls -l gs://$BUCKET/taxifare/ch4/taxi_preproc/
%%bash
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" | head
Explanation: Once the job finishes, we can look at the files that have been created and have a look at what they contain. You will notice that the files have been sharded into many csv files.
End of explanation
%%bash
if [ -d sample ]; then
rm -rf sample
fi
mkdir sample
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/TRAIN.csv-00000-of-*" > sample/train.csv
gsutil cat "gs://$BUCKET/taxifare/ch4/taxi_preproc/VALID.csv-00000-of-*" > sample/valid.csv
Explanation: Develop a model with new inputs
We can now develop a model with these inputs. Download the first shard of the preprocessed data to a subfolder called sample so we can develop locally first.
End of explanation
%%bash
MODELDIR=./taxifaremodel
test -d $MODELDIR || mkdir $MODELDIR
cp -r ../03_model_performance/taxifaremodel/* $MODELDIR
Explanation: To begin let's copy the model.py and task.py we developed in the previous notebooks here.
End of explanation
%%bash
grep -A 15 "feature_cols =" taxifaremodel/model.py
Explanation: Let's have a look at the files contained within the taxifaremodel folder. Within model.py we see that feature_cols has three engineered features.
End of explanation
%%bash
grep -A 5 "add_engineered_features" taxifaremodel/model.py
Explanation: We can also see the engineered features that are created by the add_engineered_features function here.
End of explanation
%%bash
rm -rf taxifare.tar.gz taxi_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python -m taxifaremodel.task \
--train_data_path=${PWD}/sample/train.csv \
--eval_data_path=${PWD}/sample/valid.csv \
--output_dir=${PWD}/taxi_trained \
--train_steps=10 \
--job-dir=/tmp
Explanation: We can try out this model on the local sample we've created to make sure everything works as expected. Note, this takes about 5 minutes to complete.
End of explanation
%%bash
ls -R taxi_trained/export
Explanation: We've only done 10 training steps, so we don't expect the model to have good performance. Let's have a look at the exported files from our training job.
End of explanation
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${PWD}/taxi_trained/export/exporter/${model_dir} --all
Explanation: You can use saved_model_cli to look at the exported signature. Note that the model doesn't need any of the engineered features as inputs. It will compute latdiff, londiff, euclidean from the provided inputs, thanks to the add_engineered call in the serving_input_fn.
End of explanation
%%writefile /tmp/test.json
{"dayofweek": 0, "hourofday": 17, "pickuplon": -73.885262, "pickuplat": 40.773008, "dropofflon": -73.987232, "dropofflat": 40.732403}
%%bash
model_dir=$(ls ${PWD}/taxi_trained/export/exporter)
gcloud ml-engine local predict \
--model-dir=${PWD}/taxi_trained/export/exporter/${model_dir} \
--json-instances=/tmp/test.json
Explanation: To test out prediction with our model, we create a temporary json file containing the expected feature values.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/taxifare/ch4/taxi_trained
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=taxifaremodel.task \
--package-path=${PWD}/taxifaremodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
--eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*" \
--train_steps=5000 \
--output_dir=$OUTDIR
Explanation: Train on the Cloud
This will take 10-15 minutes even though the prompt immediately returns after the job is submitted. Monitor job progress on the ML Engine section of Cloud Console and wait for the training job to complete.
End of explanation
%%bash
gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1
Explanation: Once the model has finished training on the cloud, we can check the export folder to see that a model has been correctly saved.
End of explanation
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
saved_model_cli show --dir ${model_dir} --all
Explanation: As before, we can use the saved_model_cli to examine the exported signature.
End of explanation
%%bash
model_dir=$(gsutil ls gs://${BUCKET}/taxifare/ch4/taxi_trained/export/exporter | tail -1)
gcloud ml-engine local predict \
--model-dir=${model_dir} \
--json-instances=/tmp/test.json
Explanation: And check out model's prediction with a local predict job on our test file.
End of explanation
%%bash
if gsutil ls -r gs://${BUCKET} | grep -q gs://${BUCKET}/taxifare/ch4/taxi_preproc/; then
gsutil -m rm -rf gs://${BUCKET}/taxifare/ch4/taxi_preproc/
fi
# Preprocess the entire dataset
preprocess(None, "DataflowRunner")
%%bash
WARNING -- this uses significant resources and is optional. Remove this line to run the block.
OUTDIR=gs://${BUCKET}/taxifare/feateng2m
JOBNAME=lab4a_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
TIER=STANDARD_1
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=taxifaremodel.task \
--package-path=${PWD}/taxifaremodel \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=$TIER \
--runtime-version=$TFVERSION \
-- \
--train_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/TRAIN*" \
--eval_data_path="gs://${BUCKET}/taxifare/ch4/taxi_preproc/VALID*" \
--output_dir=$OUTDIR \
--train_steps=418168 \
--hidden_units="64,64,64,8"
Explanation: Hyperparameter tuning
Recall the hyper-parameter tuning notebook. We can repeat the process there to decide the best parameters to use for model. Based on that run, I ended up choosing:
train_batch_size: 512
hidden_units: "64 64 64 8"
Let's now try a training job over a larger dataset.
(Optional) Run Cloud training on 2 million row dataset
This run uses as input 2 million rows and takes ~20 minutes with 10 workers (STANDARD_1 pricing tier). The model is exactly the same as above. The only changes are to the input (to use the larger dataset) and to the Cloud MLE tier (to use STANDARD_1 instead of BASIC -- STANDARD_1 is approximately 10x more powerful than BASIC). Because the Dataflow preprocessing takes about 15 minutes, we train here using csv files in a public bucket.
When doing distributed training, use train_steps instead of num_epochs. The distributed workers don't know how many rows there are, but we can calculate train_steps = num_rows * num_epochs / train_batch_size. In this case, we have 2141023 * 100 / 512 = 418168 train steps.
End of explanation |
2,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing PDF function in dmdd over time for SI and Anapole
On the colormaps near Q = 0, the pdf function seems to be predicting the wrong number of events for the SI and anapole models. SI should have more events in the end of the graph (days 200-350) and anapole should have more events in the beginning of the graph (days 50-150).
These graphs agree with the histogram plots that have annual modulation as well as the line graphs that I made over the summer. The SI model experiences a lower probability from days t=50-150 and a higher probability from t=200-350, and the anapole model is opposite. However this disagrees with the values
Note: I had to do this with an unnormalized PDF, as the integral function wouldn't work from the notebook for some reason.
Step1: Testing graphing without a normalized PDF function
The following cell tests graphing a theory plot without normalizing the PDF function. If it displays correctly now, then the problem is likely the normalization, not the PDF function or the colormap graph. If it still displays incorrectly, the problem is with the colormap. | Python Code:
pdf_list = []
times = np.linspace(0, 365, 366) #365 days to test
#test all days at same energies, where energy = 3
for i,time in enumerate(times):
value = dmdd.PDF(Q=[5.], time=time, element = 'xenon', mass = 50.,
sigma_si= 75.5, sigma_anapole = 0.,
Qmin = np.asarray([5.]), Qmax = np.asarray([10.]),
Tmin = 0, Tmax = 365)
pdf_list.append(value)
plt.plot(times, pdf_list)
plt.xlabel("Time in Days")
plt.ylabel("Predicted PDF SI Model")
plt.title("Predicted PDF over Time for SI Model")
#PDF values show correct annual modulation for events over time (not normalized)
pdf_list = []
times = np.linspace(0, 365, 366) #365 days to test
#test all days at same energies, where energy = 3
for i,time in enumerate(times):
value = dmdd.PDF(Q=[5.], time=time, element = 'xenon', mass = 50.,
sigma_si= 0., sigma_anapole = 44.25,
Qmin = np.asarray([5.]), Qmax = np.asarray([10.]),
Tmin = 0, Tmax = 365)
pdf_list.append(value)
plt.plot(times, pdf_list)
plt.xlabel("Time in Days")
plt.ylabel("Predicted PDF")
plt.title("Predicted PDF over Time for Anapole Model")
Explanation: Testing PDF function in dmdd over time for SI and Anapole
On the colormaps near Q = 0, the pdf function seems to be predicting the wrong number of events for the SI and anapole models. SI should have more events in the end of the graph (days 200-350) and anapole should have more events in the beginning of the graph (days 50-150).
These graphs agree with the histogram plots that have annual modulation as well as the line graphs that I made over the summer. The SI model experiences a lower probability from days t=50-150 and a higher probability from t=200-350, and the anapole model is opposite. However this disagrees with the values
Note: I had to do this with an unnormalized PDF, as the integral function wouldn't work from the notebook for some reason.
End of explanation
# shortcut for scattering models corresponding to rates coded in rate_UV:
anapole_model = dmdd.UV_Model('Anapole', ['mass','sigma_anapole'])
SI_model = dmdd.UV_Model('SI', ['mass','sigma_si'])
print 'model: {}, parameters: {}.'.format(anapole_model.name, anapole_model.param_names)
print 'model: {}, parameters: {}.'.format(SI_model.name, SI_model.param_names)
# intialize an Experiment with XENON target, to be passed to Simulation_AM:
xe = dmdd.Experiment('1xe', 'xenon', 5, 80, 1000, dmdd.eff.efficiency_unit, energy_resolution=True)
xe_lowQ = dmdd.Experiment('1xe', 'xenon', 5, 10, 1000, dmdd.eff.efficiency_unit, energy_resolution=True)
si_PDF = dmdd.Simulation_AM('SI', xe_lowQ, SI_model,
{'mass':50.,'sigma_si':75.5}, Qmin = np.asarray([5.]),
Qmax = np.asarray([10.]),
Tmin = 0, Tmax = 365, sigma_si = 75.5,
element = 'xenon', force_sim = True)
#make cross section particularly large to generate more events
#max pdf value is about 1.74
pdf_list = []
times = np.linspace(0, 365, 366) #365 days to test
#test all days at same energies, where energy = 3
for i,time in enumerate(times):
value = dmdd.PDF(Q=[5.], time=time, element = 'xenon', mass = 50.,
sigma_si= 75.5, sigma_anapole = 0.,
Qmin = np.asarray([5.]), Qmax = np.asarray([10.]),
Tmin = 0, Tmax = 365)
pdf_list.append(value)
plt.figure(2)
plt.plot(times, pdf_list)
plt.xlabel("Time in Days")
plt.ylabel("Predicted PDF SI Model")
plt.title("Predicted PDF over Time for SI Model")
#PDF values show correct annual modulation for events over time (not normalized)
anapole_PDF = dmdd.Simulation_AM('Anapole', xe_lowQ, anapole_model,
{'mass':50.,'sigma_anapole':44.25}, Qmin = np.asarray([5.]),
Qmax = np.asarray([10.]),
Tmin = 0, Tmax = 365, sigma_anapole = 44.25,
element = 'xenon', force_sim = True)
#make cross section particularly large to generate more events
#max pdf value is about 1.74
pdf_list = []
times = np.linspace(0, 365, 366) #365 days to test
#test all days at same energies, where energy = 3
for i,time in enumerate(times):
value = dmdd.PDF(Q=[5.], time=time, element = 'xenon', mass = 50.,
sigma_si= 0., sigma_anapole = 44.25,
Qmin = np.asarray([5.]), Qmax = np.asarray([10.]),
Tmin = 0, Tmax = 365)
pdf_list.append(value)
plt.figure(2)
plt.plot(times, pdf_list)
plt.xlabel("Time in Days")
plt.ylabel("Predicted PDF")
plt.title("Predicted PDF over Time for Anapole Model")
Explanation: Testing graphing without a normalized PDF function
The following cell tests graphing a theory plot without normalizing the PDF function. If it displays correctly now, then the problem is likely the normalization, not the PDF function or the colormap graph. If it still displays incorrectly, the problem is with the colormap.
End of explanation |
2,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectrometer accuracy assessment using validation tarps
Background
In this lesson we will be examining the accuracy of the NEON Imaging Spectrometer (NIS) against targets with known reflectance. The targets consist of two 10 x 10 m tarps which have been specially designed to have 3% reflectance (black tarp) and 48% reflectance (white tarp) across all of the wavelengths collected by the NIS (see images below). During the Sept. 12 2016 flight over the Chequamegon-Nicolet National Forest, an area in D05 which is part of the Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the airborne overflight, observations were also taken over the tarps with an ASD field spectrometer. The ASD measurements provide a validation source against the airborne measurements.
To test the accuracy, we will utilize reflectance curves from the tarps as well as from the associated flight line and execute absolute and relative comparisons. The major error sources in the NIS can be generally categorized into the following sources
1) Calibration of the sensor
2) Quality of ortho-rectification
3) Accuracy of radiative transfer code and subsequent ATCOR interpolation
4) Selection of atmospheric input parameters
5) Terrain relief
6) Terrain cover
Note that the manual for ATCOR, the atmospheric correction software used by AOP, specifies the accuracy of reflectance retrievals to be between 3 and 5% of total reflectance. The tarps are located in a flat area, therefore, influences by terrain relief should be minimal. We will have to keep the remaining errors in mind as we analyze the data.
Objective
In this lesson we will learn how to retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file, learn how to read a tab-delimited text file, retrieve bad band window indexes and mask portions of a reflectance curve, plot reflectance curves on a graph and save the file, and gain an understanding of some sources of uncertainty in NIS data.
Suggested pre-requisites
Working with NEON AOP Hyperspectral Data in Python Jupyter Notebooks
Learn to Efficiently Process NEON Hyperspectral Data
We'll start by adding all of the necessary libraries to our python script
Step1: As well as our function to read the hdf5 reflectance files and associated metadata
Step2: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
Step3: We want to pull the spectra from the airborne data from the center of the tarp to minimize any errors introduced by infiltrating light in adjacent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp, which are as follows
Step4: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
Step5: Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure details in the plots due to their anomalous values. The metadata associated with these band locations is contained in the metadata gathered by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
Step6: Now join the list of indexes together into a single variable
Step7: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate tabs are used.
Step8: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
Step9: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
Step10: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
Step11: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, therefore we input a standard parameter for all sites.
Given the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine what the absolute difference between the two curves is and plot that as well.
Step12: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
Let's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance | Python Code:
import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
Explanation: Spectrometer accuracy assessment using validation tarps
Background
In this lesson we will be examining the accuracy of the NEON Imaging Spectrometer (NIS) against targets with known reflectance. The targets consist of two 10 x 10 m tarps which have been specially designed to have 3% reflectance (black tarp) and 48% reflectance (white tarp) across all of the wavelengths collected by the NIS (see images below). During the Sept. 12 2016 flight over the Chequamegon-Nicolet National Forest, an area in D05 which is part of the Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the airborne overflight, observations were also taken over the tarps with an ASD field spectrometer. The ASD measurements provide a validation source against the airborne measurements.
To test the accuracy, we will utilize reflectance curves from the tarps as well as from the associated flight line and execute absolute and relative comparisons. The major error sources in the NIS can be generally categorized into the following sources
1) Calibration of the sensor
2) Quality of ortho-rectification
3) Accuracy of radiative transfer code and subsequent ATCOR interpolation
4) Selection of atmospheric input parameters
5) Terrain relief
6) Terrain cover
Note that the manual for ATCOR, the atmospheric correction software used by AOP, specifies the accuracy of reflectance retrievals to be between 3 and 5% of total reflectance. The tarps are located in a flat area, therefore, influences by terrain relief should be minimal. We will have to keep the remaining errors in mind as we analyze the data.
Objective
In this lesson we will learn how to retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file, learn how to read a tab-delimited text file, retrieve bad band window indexes and mask portions of a reflectance curve, plot reflectance curves on a graph and save the file, and gain an understanding of some sources of uncertainty in NIS data.
Suggested pre-requisites
Working with NEON AOP Hyperspectral Data in Python Jupyter Notebooks
Learn to Efficiently Process NEON Hyperspectral Data
We'll start by adding all of the necessary libraries to our python script
End of explanation
def h5refl2array(h5_filename):
hdf5_file = h5py.File(h5_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
refl = hdf5_file[sitename]['Reflectance']
reflArray = refl['Reflectance_Data']
refl_shape = reflArray.shape
wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
#Create dictionary containing relevant metadata information
metadata = {}
metadata['shape'] = reflArray.shape
metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
    #Extract no data value & set no data value to NaN
metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])
metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
    mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)
mapInfo_split = mapInfo_string.split(",")
#Extract the resolution & convert to floating decimal number
metadata['res'] = {}
metadata['res']['pixelWidth'] = mapInfo_split[5]
metadata['res']['pixelHeight'] = mapInfo_split[6]
    #Extract the upper left-hand corner coordinates from mapInfo
    xMin = float(mapInfo_split[3]) #convert from string to floating point number
yMax = float(mapInfo_split[4])
    #Calculate the xMax and yMin values from the dimensions
    xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)
    yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)
    metadata['extent'] = (xMin,xMax,yMin,yMax)
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = xMin
metadata['ext_dict']['xMax'] = xMax
metadata['ext_dict']['yMin'] = yMin
metadata['ext_dict']['yMax'] = yMax
    hdf5_file.close()
return reflArray, metadata, wavelengths
Explanation: As well as our function to read the hdf5 reflectance files and associated metadata
End of explanation
print('Start CHEQ tarp uncertainty script')
h5_filename = 'C:/RSDI_2017/data/CHEQ/H5/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'
tarp_48_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_48_01_refl_bavg.txt'
tarp_03_filename = 'C:/RSDI_2017/data/CHEQ/H5/CHEQ_Tarp_03_02_refl_bavg.txt'
Explanation: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
End of explanation
tarp_48_center = np.array([727487,5078970])
tarp_03_center = np.array([727497,5078970])
Explanation: We want to pull the spectra from the airborne data from the center of the tarp to minimize any errors introduced by infiltrating light in adjacent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp which are as follows:
48% reflectance tarp UTMx: 727487, UTMy: 5078970
3% reflectance tarp UTMx: 727497, UTMy: 5078970
Let's define these coordinates
End of explanation
[reflArray,metadata,wavelengths] = h5refl2array(h5_filename)
Explanation: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
End of explanation
bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])
index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
Explanation: Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure details in the plots due to their anomalous values. The metadata associated with these band locations is contained in the metadata gathered by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
End of explanation
index_bad_windows = index_bad_window1+index_bad_window2
Explanation: Now join the list of indexes together into a single variable
End of explanation
tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter = '\t')
tarp_03_data = np.genfromtxt(tarp_03_filename, delimiter = '\t')
Explanation: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate tabs are used.
End of explanation
tarp_48_data[index_bad_windows] = np.nan
tarp_03_data[index_bad_windows] = np.nan
Explanation: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
End of explanation
x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1])/float(metadata['res']['pixelHeight']))
x_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_03_index = int((metadata['ext_dict']['yMax'] - tarp_03_center[1])/float(metadata['res']['pixelHeight']))
Explanation: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
End of explanation
plt.figure(1)
tarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index,x_tarp_48_index,:], dtype=np.float32)/metadata['scaleFactor']
tarp_48_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_48_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_48_data[:,1], label = 'ASD Reflectance')
plt.title('CHEQ 20160912 48% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')
plt.legend()
plt.savefig('CHEQ_20160912_48_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(2)
tarp_03_reflectance = np.asarray(reflArray[y_tarp_03_index,x_tarp_03_index,:], dtype=np.float32)/ metadata['scaleFactor']
tarp_03_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_03_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_03_data[:,1],label = 'ASD Reflectance')
plt.title('CHEQ 20160912 3% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')
plt.legend()
plt.savefig('CHEQ_20160912_3_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
End of explanation
plt.figure(3)
plt.plot(wavelengths,tarp_48_reflectance-tarp_48_data[:,1])
plt.title('CHEQ 20160912 48% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Refelctance Difference (%)')
plt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(4)
plt.plot(wavelengths,tarp_03_reflectance-tarp_03_data[:,1])
plt.title('CHEQ 20160912 3% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Refelctance Difference (%)')
plt.savefig('CHEQ_20160912_3_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, therefore we input a standard parameter for all sites.
Given the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine what the absolute difference between the two curves is and plot that as well.
End of explanation
plt.figure(5)
plt.plot(wavelengths,100*np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1]))
plt.title('CHEQ 20160912 48% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Refelctance Difference')
plt.ylim((-100,100))
plt.savefig('CHEQ_20160912_48_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(6)
plt.plot(wavelengths,100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1]))
plt.title('CHEQ 20160912 3% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Refelctance Difference')
plt.ylim((-100,150))
plt.savefig('CHEQ_20160912_3_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
Let's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance
End of explanation |
2,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Units and unit conversions are BIG in engineering. Engineers solve the world's problems in teams. Any problem solved has to have a context. How heavy can a rocket be and still make it off the ground? What thickness body panels keep occupants safe during a crash? In engineering, a number without a unit is like a fish without water. It just flops around hopelessly; without context it is useless.
How can we get help using units? Programming is one way. We are going to complete some unit conversion problems using Python and Pint. Pint is a Python package used for unit conversions.
See the (Pint documentation) for more examples.
I recommend that undergraduate engineers use Python 3 (Python 2.7 is legacy Python) and the Anaconda distribution. To use Pint, we need to install pint in our working version of Python. Open up the Anaconda Prompt
Step1: Before we can complete a unit conversion with the Pint package, we need to import the Pint module and instantiate a UnitRegistry object. The new ureg object contains all the units used in the examples below.
Step2: For our first problem, we will complete the following conversion
Step3: To convert power to Btu/day, we use Pint's .to() method. The .to() method does not change the units of power in place. We need to assign the output of the .to() method to another variable power_in_Btu_per_day
Step4: Another problem
Step5: Next problem
Step6: This time we will use the .ito() method. Using .ito() will convert the units of accel in place.
Step7: Convert 14.31 x 10<sup>8</sup> kJ kg mm<sup>-3</sup> to cal lb<sub>m</sub> / in<sup>3</sup> | Python Code:
import platform
print('Operating System: ' + platform.system() + platform.release())
print('Python Version: '+ platform.python_version())
Explanation: Units and unit conversions are BIG in engineering. Engineers solve the world's problems in teams. Any problem solved has to have a context. How heavy can a rocket be and still make it off the ground? What thickness body panels keep occupants safe during a crash? In engineering, a number without a unit is like a fish without water. It just flops around hopelessly; without context it is useless.
How can we get help using units? Programming is one way. We are going to complete some unit conversion problems using Python and Pint. Pint is a Python package used for unit conversions.
See the (Pint documentation) for more examples.
I recommend that undergraduate engineers use Python 3 (Python 2.7 is legacy Python) and the Anaconda distribution. To use Pint, we need to install pint in our working version of Python. Open up the Anaconda Prompt:
```
pip install pint
```
I am working on a Windows 10 machine. You can check your operating system and Python version using the code below:
End of explanation
import pint
ureg = pint.UnitRegistry()
Explanation: Before we can complete a unit conversion with the Pint package, we need to import the Pint module and instantiate a UnitRegistry object. The new ureg object contains all the units used in the examples below.
End of explanation
power = 252*ureg.kW
print(power)
Explanation: For our first problem, we will complete the following conversion:
Convert 252 kW to Btu/day
We'll create a variable called power with units of kilowatts (kW). To create the kW unit, we'll use our ureg object.
End of explanation
power_in_Btu_per_day = power.to(ureg.Btu / ureg.day)
print(power_in_Btu_per_day)
Explanation: To convert power to Btu/day, we use Pint's .to() method. The .to() method does not change the units of power in place. We need to assign the output of the .to() method to another variable power_in_Btu_per_day
End of explanation
stress = 722*ureg.MPa
stress_in_ksi = stress.to(ureg.ksi)
print(stress_in_ksi)
Explanation: Another problem:
Convert 722 MPa to ksi
End of explanation
accel = 1.620 *ureg.m/(ureg.s**2)
print(accel)
Explanation: Next problem:
Convert 1.620 m/s^2 to ft/min^2
End of explanation
accel.ito(ureg.ft/(ureg.min**2))
print(accel)
Explanation: This time we will use the .ito() method. Using .ito() will convert the units of accel in place.
End of explanation
quant = 14.31e8 * ureg.kJ * ureg.kg * ureg.mm**(-3)
print(quant)
quant.ito( ureg.cal*ureg.lb / (ureg.inch**3))
print(quant)
Explanation: Convert 14.31 x 10^8 kJ kg mm^-3 to cal lb_m / in^3
End of explanation |
2,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example: POS Tagging
Step4: The individual data instances come in chunks separated by blank lines. Each chunk consists of a few starting comments, and then lines of tab-separated fields. The fields we are interested in are the 1st and 3rd, which contain the tokenized word and POS tag respectively. An example chunk is shown below
Step7: Now we need to parse the .conllu files and extract the data needed for our model. The good news is that the file is only a few megabytes so we can store everything in memory. Rather than creating a generator from scratch like we did in the previous tutorial, we will instead showcase the torch.utils.data.Dataset class. There are two main things that a Dataset must have
Step8: And let's see how this is used in practice.
Step11: The main upshot of using the Dataset class is that it makes accessing training/test observations very simple. Accordingly, this makes batch generation easy since all we need to do is randomly choose numbers and then grab those observations from the dataset - PyTorch includes a torch.utils.data.DataLoader object which handles this for you. In fact, if we were not working with sequential data we would be able to proceed straight to the modeling step from here. However, since we are working with sequential data there is one last pesky issue we need to handle - padding.
The issue is that when we are given a batch of outputs from CoNLLDataset, the sequences in the batch are likely to all be of different length. To deal with this, we define a custom collate_annotations function which adds padding to the end of the sequences in the batch so that they are all the same length. In addition, we'll have this function take care of loading the data into tensors and ensuring that the tensor dimensions are in the order expected by PyTorch.
Oh and one last annoying thing - to deal with some of the issues caused by using padded data we will be using a function called torch.nn.utils.rnn.pack_padded_sequence in our model later on. All you need to know now is that this function expects our sequences in the batch to be sorted in terms of descending length, and that we know the lengths of each sequence. So we will make sure that the collate_annotations function performs this sorting for us and returns the sequence lengths in addition to the input and target tensors.
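For orientation, here is a rough sketch (hidden sizes are placeholders, and it assumes collate_annotations returns (inputs, targets, lengths)) of how the DataLoader, the collate function, and sequence packing fit together later on:
```
import torch.nn as nn
from torch.utils.data import DataLoader
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

embedding = nn.Embedding(len(dataset.token_vocab), 64, padding_idx=0)
lstm = nn.LSTM(64, 64)

loader = DataLoader(dataset, batch_size=16, shuffle=True, collate_fn=collate_annotations)
inputs, targets, lengths = next(iter(loader))   # inputs: (max_len, batch_size)
packed = pack_padded_sequence(embedding(inputs), lengths)
hidden, _ = lstm(packed)
hidden, _ = pad_packed_sequence(hidden)         # back to a padded tensor
```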
Step12: Again let's see how this is used in practice
Step15: Model
We will use the following architecture
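A minimal sketch of that kind of tagger (layer sizes here are placeholders, and this is not necessarily the exact module used below): an embedding layer, a bidirectional LSTM, and a linear projection to tag scores.
```
import torch.nn as nn

class Tagger(nn.Module):
    def __init__(self, vocab_size, n_tags, embedding_dim=64, hidden_size=64):
        super(Tagger, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_size, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_size, n_tags)

    def forward(self, x):
        embedded = self.embedding(x)      # (seq_len, batch, embedding_dim)
        hidden, _ = self.lstm(embedded)   # (seq_len, batch, 2 * hidden_size)
        return self.fc(hidden)            # unnormalized tag scores
```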
Step16: Training
Training is pretty much exactly the same as in the previous tutorial. There is one catch - we don't want to evaluate our loss function on pad tokens. This is easily fixed by setting the weight of the pad class to zero.
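In other words (a sketch, relying on the fact that '<pad>' has id 0 in our Vocab), the loss can be built like this:
```
import torch
import torch.nn as nn

weight = torch.ones(len(dataset.pos_vocab))
weight[0] = 0.0  # index 0 is the '<pad>' tag, so padded positions contribute nothing
loss_function = nn.CrossEntropyLoss(weight=weight)
```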
Step17: Evaluation
For tagging tasks the typical evaluation metrics are accuracy and f1-score (i.e. the harmonic mean of precision and recall)
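A sketch of computing those metrics with scikit-learn, assuming flat lists true_tags and pred_tags of tag ids with pad positions (id 0) removed first:
```
from sklearn.metrics import accuracy_score, f1_score

pairs = [(t, p) for t, p in zip(true_tags, pred_tags) if t != 0]  # drop pad positions
y_true, y_pred = zip(*pairs)
print('Accuracy: %0.3f' % accuracy_score(y_true, y_pred))
print('Macro F1: %0.3f' % f1_score(y_true, y_pred, average='macro'))
```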
Step18: Inference
Now let's look at some of the model's predictions.
Step23: Example
Step26: Model
The model architecture we will use for sentiment classification is almost exactly the same as the one we used for tagging. The only difference is that we want the model to produce a single output at the end, not a sequence of outputs. While there are many ways to do this, a simple approach is to just use the final hidden state of the recurrent layer as the input to the fully connected layer. This approach is particularly nice in PyTorch since the forward pass of the recurrent layer returns the final hidden states as its second output (see the note in the code below if this is unclear), so we do not need to do any fancy indexing tricks to get them.
Formally, the model architecture we will use is
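A minimal sketch of such a classifier (sizes are placeholders, and this is not necessarily the exact module used below): the second output of the LSTM is the tuple (h_n, c_n), and h_n supplies the final hidden state that feeds the output layer.
```
import torch.nn as nn

class SentimentClassifier(nn.Module):
    def __init__(self, vocab_size, embedding_dim=64, hidden_size=64, n_classes=2):
        super(SentimentClassifier, self).__init__()
        self.embedding = nn.Embedding(vocab_size, embedding_dim, padding_idx=0)
        self.lstm = nn.LSTM(embedding_dim, hidden_size)
        self.fc = nn.Linear(hidden_size, n_classes)

    def forward(self, x):
        embedded = self.embedding(x)
        _, (h_n, c_n) = self.lstm(embedded)  # h_n: final hidden state per sequence
        return self.fc(h_n[-1])              # (batch_size, n_classes)
```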
Step27: Training
This code should look pretty familiar by now...
Step28: Inference
Lastly, let's examine some model outputs | Python Code:
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-dev.conllu
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-test.conllu
!wget https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-train.conllu
Explanation: Example: POS Tagging
According to Wikipedia:
Part-of-speech tagging (POS tagging or PoS tagging or POST) is the process of marking up a word in a text (corpus) as corresponding to a particular part of speech, based on both its definition and its context—i.e., its relationship with adjacent and related words in a phrase, sentence, or paragraph.
Formally, given a sequence of words $\mathbf{x} = \left< x_1, x_2, \ldots, x_t \right>$ the goal is to learn a model $P(y_i \,|\, \mathbf{x})$ where $y_i$ is the POS tag associated with the $x_i$.
Note that the model is conditioned on all of $\mathbf{x}$ not just the words that occur earlier in the sentence - this is because we can assume that the entire sentence is known at the time of tagging.
Dataset
We will train our model on the English Dependencies Treebank.
You can download this dataset by running the following lines:
End of explanation
from collections import Counter
class Vocab(object):
def __init__(self, iter, max_size=None, sos_token=None, eos_token=None, unk_token=None):
        """Initialize the vocabulary.
Args:
iter: An iterable which produces sequences of tokens used to update
the vocabulary.
max_size: (Optional) Maximum number of tokens in the vocabulary.
sos_token: (Optional) Token denoting the start of a sequence.
eos_token: (Optional) Token denoting the end of a sequence.
unk_token: (Optional) Token denoting an unknown element in a
                sequence.
        """
self.max_size = max_size
self.pad_token = '<pad>'
self.sos_token = sos_token
self.eos_token = eos_token
self.unk_token = unk_token
# Add special tokens.
id2word = [self.pad_token]
if sos_token is not None:
id2word.append(self.sos_token)
if eos_token is not None:
id2word.append(self.eos_token)
if unk_token is not None:
id2word.append(self.unk_token)
# Update counter with token counts.
counter = Counter()
for x in iter:
counter.update(x)
# Extract lookup tables.
if max_size is not None:
counts = counter.most_common(max_size)
else:
counts = counter.items()
counts = sorted(counts, key=lambda x: x[1], reverse=True)
words = [x[0] for x in counts]
id2word.extend(words)
word2id = {x: i for i, x in enumerate(id2word)}
self._id2word = id2word
self._word2id = word2id
def __len__(self):
return len(self._id2word)
def word2id(self, word):
        """Map a word in the vocabulary to its unique integer id.
Args:
word: Word to lookup.
Returns:
            id: The integer id of the word being looked up.
        """
if word in self._word2id:
return self._word2id[word]
elif self.unk_token is not None:
return self._word2id[self.unk_token]
else:
raise KeyError('Word "%s" not in vocabulary.' % word)
def id2word(self, id):
        """Map an integer id to its corresponding word in the vocabulary.
Args:
id: Integer id of the word being looked up.
Returns:
            word: The corresponding word.
        """
return self._id2word[id]
Explanation: The individual data instances come in chunks separated by blank lines. Each chunk consists of a few starting comments, and then lines of tab-separated fields. The fields we are interested in are the 1st and 3rd, which contain the tokenized word and POS tag respectively. An example chunk is shown below:
```
sent_id = answers-20111107193044AAvUYBv_ans-0023
text = Hope you have a crapload of fun!
1 Hope hope VERB VBP Mood=Ind|Tense=Pres|VerbForm=Fin 0 root 0:root _
2 you you PRON PRP Case=Nom|Person=2|PronType=Prs 3 nsubj 3:nsubj _
3 have have VERB VBP Mood=Ind|Tense=Pres|VerbForm=Fin 1 ccomp 1:ccomp _
4 a a DET DT Definite=Ind|PronType=Art 5 det 5:det _
5 crapload crapload NOUN NN Number=Sing 3 obj 3:obj _
6 of of ADP IN _ 7 case 7:case _
7 fun fun NOUN NN Number=Sing 5 nmod 5:nmod SpaceAfter=No
8 ! ! PUNCT . _ 1 punct 1:punct _
```
As with most real world data, we are going to need to do some preprocessing before we can use it. The first thing we are going to need is a Vocabulary to map words/POS tags to integer ids. Here is a more full-featured implementation than what we used in the first tutorial:
End of explanation
import re
from torch.utils.data import Dataset
class Annotation(object):
def __init__(self):
A helper object for storing annotation data.
self.tokens = []
self.pos_tags = []
class CoNLLDataset(Dataset):
def __init__(self, fname):
Initializes the CoNLLDataset.
Args:
fname: The .conllu file to load data from.
self.fname = fname
self.annotations = self.process_conll_file(fname)
self.token_vocab = Vocab([x.tokens for x in self.annotations],
unk_token='<unk>')
self.pos_vocab = Vocab([x.pos_tags for x in self.annotations])
def __len__(self):
return len(self.annotations)
def __getitem__(self, idx):
annotation = self.annotations[idx]
input = [self.token_vocab.word2id(x) for x in annotation.tokens]
target = [self.pos_vocab.word2id(x) for x in annotation.pos_tags]
return input, target
def process_conll_file(self, fname):
# Read the entire file.
with open(fname, 'r') as f:
raw_text = f.read()
# Split into chunks on blank lines.
chunks = re.split(r'^\n', raw_text, flags=re.MULTILINE)
# Process each chunk into an annotation.
annotations = []
for chunk in chunks:
annotation = Annotation()
lines = chunk.split('\n')
# Iterate over all lines in the chunk.
for line in lines:
# If line is empty ignore it.
if len(line)==0:
continue
                # If line is a comment, ignore it.
if line[0] == '#':
continue
# Otherwise split on tabs and retrieve the token and the
# POS tag fields.
fields = line.split('\t')
annotation.tokens.append(fields[1])
annotation.pos_tags.append(fields[3])
if (len(annotation.tokens) > 0) and (len(annotation.pos_tags) > 0):
annotations.append(annotation)
return annotations
Explanation: Now we need to parse the .conllu files and extract the data needed for our model. The good news is that the file is only a few megabytes so we can store everything in memory. Rather than creating a generator from scratch like we did in the previous tutorial, we will instead showcase the torch.utils.data.Dataset class. There are two main things that a Dataset must have:
A __len__ method which lets you know how many data points are in the dataset.
A __getitem__ method which is used to support integer indexing.
Here's an example of how to define these methods for the English Dependencies Treebank data.
End of explanation
dataset = CoNLLDataset('en-ud-train.conllu')
input, target = dataset[0]
print('Example input: %s\n' % input)
print('Example target: %s\n' % target)
print('Translated input: %s\n' % ' '.join(dataset.token_vocab.id2word(x) for x in input))
print('Translated target: %s\n' % ' '.join(dataset.pos_vocab.id2word(x) for x in target))
Explanation: And let's see how this is used in practice.
End of explanation
import torch
from torch.autograd import Variable
def pad(sequences, max_length, pad_value=0):
Pads a list of sequences.
Args:
sequences: A list of sequences to be padded.
max_length: The length to pad to.
pad_value: The value used for padding.
Returns:
A list of padded sequences.
out = []
for sequence in sequences:
        padded = sequence + [pad_value]*(max_length - len(sequence))
out.append(padded)
return out
def collate_annotations(batch):
Function used to collate data returned by CoNLLDataset.
# Get inputs, targets, and lengths.
inputs, targets = zip(*batch)
lengths = [len(x) for x in inputs]
# Sort by length.
sort = sorted(zip(inputs, targets, lengths),
key=lambda x: x[2],
reverse=True)
inputs, targets, lengths = zip(*sort)
# Pad.
max_length = max(lengths)
inputs = pad(inputs, max_length)
targets = pad(targets, max_length)
# Transpose.
inputs = list(map(list, zip(*inputs)))
targets = list(map(list, zip(*targets)))
# Convert to PyTorch variables.
inputs = Variable(torch.LongTensor(inputs))
targets = Variable(torch.LongTensor(targets))
lengths = Variable(torch.LongTensor(lengths))
if torch.cuda.is_available():
inputs = inputs.cuda()
targets = targets.cuda()
lengths = lengths.cuda()
return inputs, targets, lengths
Explanation: The main upshot of using the Dataset class is that it makes accessing training/test observations very simple. Accordingly, this makes batch generation easy since all we need to do is randomly choose numbers and then grab those observations from the dataset - PyTorch includes a torch.utils.data.DataLoader object which handles this for you. In fact, if we were not working with sequential data we would be able to proceed straight to the modeling step from here. However, since we are working with sequential data there is one last pesky issue we need to handle - padding.
The issue is that when we are given a batch of outputs from CoNLLDataset, the sequences in the batch are likely to all be of different length. To deal with this, we define a custom collate_annotations function which adds padding to the end of the sequences in the batch so that they are all the same length. In addition, we'll have this function take care of loading the data into tensors and ensuring that the tensor dimensions are in the order expected by PyTorch.
Oh and one last annoying thing - to deal with some of the issues caused by using padded data we will be using a function called torch.nn.utils.rnn.pack_padded_sequence in our model later on. All you need to know now is that this function expects our sequences in the batch to be sorted in descending order of length, and that we know the lengths of each sequence. So we will make sure that the collate_annotations function performs this sorting for us and returns the sequence lengths in addition to the input and target tensors.
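As a tiny illustration of the padding helper defined above (values are made up):
```python
print(pad([[1, 2, 3], [4]], max_length=3))   # [[1, 2, 3], [4, 0, 0]]
```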
End of explanation
from torch.utils.data import DataLoader
for inputs, targets, lengths in DataLoader(dataset, batch_size=16, collate_fn=collate_annotations):
print('Inputs: %s\n' % inputs.data)
print('Targets: %s\n' % targets.data)
print('Lengths: %s\n' % lengths.data)
# Usually we'd keep sampling batches, but here we'll just break
break
Explanation: Again let's see how this is used in practice:
End of explanation
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class Tagger(nn.Module):
def __init__(self,
input_vocab_size,
output_vocab_size,
embedding_dim=64,
hidden_size=64,
bidirectional=True):
Initializes the tagger.
Args:
input_vocab_size: Size of the input vocabulary.
output_vocab_size: Size of the output vocabulary.
embedding_dim: Dimension of the word embeddings.
hidden_size: Number of units in each LSTM hidden layer.
bidirectional: Whether or not to use a bidirectional rnn.
# Always do this!!!
super(Tagger, self).__init__()
# Store parameters
self.input_vocab_size = input_vocab_size
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
self.hidden_size = hidden_size
self.bidirectional = bidirectional
# Define layers
self.word_embeddings = nn.Embedding(input_vocab_size, embedding_dim,
padding_idx=0)
self.rnn = nn.GRU(embedding_dim, hidden_size,
bidirectional=bidirectional,
dropout=0.9)
if bidirectional:
self.fc = nn.Linear(2*hidden_size, output_vocab_size)
else:
self.fc = nn.Linear(hidden_size, output_vocab_size)
self.activation = nn.LogSoftmax(dim=2)
def forward(self, x, lengths=None, hidden=None):
        Computes a forward pass of the tagger.
Args:
x: A LongTensor w/ dimension [seq_len, batch_size].
lengths: The lengths of the sequences in x.
hidden: Hidden state to be fed into the lstm.
Returns:
        net: Log probabilities of the POS tags for each token.
        hidden: Hidden state of the GRU.
seq_len, batch_size = x.size()
# If no hidden state is provided, then default to zeros.
if hidden is None:
if self.bidirectional:
num_directions = 2
else:
num_directions = 1
hidden = Variable(torch.zeros(num_directions, batch_size, self.hidden_size))
if torch.cuda.is_available():
hidden = hidden.cuda()
net = self.word_embeddings(x)
# Pack before feeding into the RNN.
if lengths is not None:
lengths = lengths.data.view(-1).tolist()
net = pack_padded_sequence(net, lengths)
net, hidden = self.rnn(net, hidden)
# Unpack after
if lengths is not None:
net, _ = pad_packed_sequence(net)
net = self.fc(net)
net = self.activation(net)
return net, hidden
Explanation: Model
We will use the following architecture:
Embed the input words into a 64-dimensional vector space (the embedding_dim default used when constructing the model).
Feed the word embeddings into a (bidirectional) GRU.
Feed the GRU outputs into a fully connected layer.
Use a softmax activation to get the probabilities of the different labels.
There is one complication which arises during the forward computation. As was noted in the dataset section, the input sequences are padded. This causes an issue since we do not want to waste computational resources feeding these pad tokens into the RNN. In PyTorch, we can deal with this issue by converting the sequence data into a torch.nn.utils.rnn.PackedSequence object before feeding it into the RNN. In essence, a PackedSequence flattens the sequence and batch dimensions of a tensor, and also contains metadata so that PyTorch knows when to re-initialize the hidden state when fed into a recurrent layer. If this seems confusing, do not worry. To use the PackedSequence in practice you will almost always perform the following steps:
Before feeding data into a recurrent layer, transform it into a PackedSequence by using the function torch.nn.utils.rnn.pack_padded_sequence().
Feed the PackedSequence into the recurrent layer.
Transform the output back into a regular tensor by using the function torch.nn.utils.rnn.pad_packed_sequence().
See the model implementation below for a working example:
End of explanation
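Before the training loop, here is a small self-contained sketch of the pack/unpack round trip; the tensor sizes are arbitrary and the two sequences (lengths 3 and 2) are already sorted by descending length:
```python
import torch
from torch import nn
from torch.autograd import Variable
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

# A padded batch of two sequences with lengths 3 and 2, shape [seq_len, batch_size].
padded = Variable(torch.LongTensor([[1, 4],
                                    [2, 5],
                                    [3, 0]]))
embedded = nn.Embedding(10, 8, padding_idx=0)(padded)   # [3, 2, 8]
packed = pack_padded_sequence(embedded, [3, 2])         # pad entries are dropped
output, hidden = nn.GRU(8, 16)(packed)                  # the RNN consumes the PackedSequence
unpacked, lengths = pad_packed_sequence(output)         # back to a padded tensor
print(unpacked.size())                                  # torch.Size([3, 2, 16])
print(lengths)                                          # the original lengths, [3, 2]
```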
import numpy as np
# Load datasets.
train_dataset = CoNLLDataset('en-ud-train.conllu')
dev_dataset = CoNLLDataset('en-ud-dev.conllu')
dev_dataset.token_vocab = train_dataset.token_vocab
dev_dataset.pos_vocab = train_dataset.pos_vocab
# Hyperparameters / constants.
input_vocab_size = len(train_dataset.token_vocab)
output_vocab_size = len(train_dataset.pos_vocab)
batch_size = 16
epochs = 6
# Initialize the model.
model = Tagger(input_vocab_size, output_vocab_size)
if torch.cuda.is_available():
model = model.cuda()
# Loss function weights.
weight = torch.ones(output_vocab_size)
weight[0] = 0
if torch.cuda.is_available():
weight = weight.cuda()
# Initialize loss function and optimizer.
loss_function = torch.nn.NLLLoss(weight)
optimizer = torch.optim.Adam(model.parameters())
# Main training loop.
data_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=True,
collate_fn=collate_annotations)
dev_loader = DataLoader(dev_dataset, batch_size=batch_size, shuffle=False,
collate_fn=collate_annotations)
losses = []
i = 0
for epoch in range(epochs):
for inputs, targets, lengths in data_loader:
optimizer.zero_grad()
outputs, _ = model(inputs, lengths=lengths)
outputs = outputs.view(-1, output_vocab_size)
targets = targets.view(-1)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
losses.append(loss.data[0])
if (i % 10) == 0:
# Compute dev loss over entire dev set.
# NOTE: This is expensive. In your work you may want to only use a
# subset of the dev set.
dev_losses = []
for inputs, targets, lengths in dev_loader:
outputs, _ = model(inputs, lengths=lengths)
outputs = outputs.view(-1, output_vocab_size)
targets = targets.view(-1)
loss = loss_function(outputs, targets)
dev_losses.append(loss.data[0])
avg_train_loss = np.mean(losses)
avg_dev_loss = np.mean(dev_losses)
losses = []
print('Iteration %i - Train Loss: %0.6f - Dev Loss: %0.6f' % (i, avg_train_loss, avg_dev_loss), end='\r')
torch.save(model, 'pos_tagger.pt')
i += 1
torch.save(model, 'pos_tagger.final.pt')
Explanation: Training
Training is pretty much exactly the same as in the previous tutorial. There is one catch - we don't want to evaluate our loss function on pad tokens. This is easily fixed by setting the weight of the pad class to zero.
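An alternative that has the same effect for this purpose (not used here) is the ignore_index argument of NLLLoss, which simply skips targets equal to the pad id:
```python
# Equivalent to zeroing the pad-class weight: ignore targets whose id is 0 (<pad>).
loss_function = torch.nn.NLLLoss(ignore_index=0)
```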
End of explanation
# Collect the predictions and targets
y_true = []
y_pred = []
for inputs, targets, lengths in dev_loader:
outputs, _ = model(inputs, lengths=lengths)
_, preds = torch.max(outputs, dim=2)
targets = targets.view(-1)
preds = preds.view(-1)
if torch.cuda.is_available():
targets = targets.cpu()
preds = preds.cpu()
y_true.append(targets.data.numpy())
y_pred.append(preds.data.numpy())
# Stack into numpy arrays
y_true = np.concatenate(y_true)
y_pred = np.concatenate(y_pred)
# Compute accuracy
acc = np.mean(y_true[y_true != 0] == y_pred[y_true != 0])
print('Accuracy - %0.6f\n' % acc)
# Evaluate f1-score
from sklearn.metrics import f1_score
score = f1_score(y_true, y_pred, average=None)
print('F1-scores:\n')
for label, score in zip(dev_dataset.pos_vocab._id2word[1:], score[1:]):
print('%s - %0.6f' % (label, score))
Explanation: Evaluation
For tagging tasks the typical evaluation metrics are accuracy and f1-score (i.e. the harmonic mean of precision and recall):
$$ \text{f1-score} = 2 \frac{\text{precision} * \text{recall}}{\text{precision} + \text{recall}} $$
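As a quick sanity check of the formula with made-up numbers:
```python
precision, recall = 0.75, 0.60
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))   # 0.667
```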
Here are the results for our final model:
End of explanation
model = torch.load('pos_tagger.final.pt')
def inference(sentence):
# Convert words to id tensor.
ids = [[dataset.token_vocab.word2id(x)] for x in sentence]
ids = Variable(torch.LongTensor(ids))
if torch.cuda.is_available():
ids = ids.cuda()
# Get model output.
output, _ = model(ids)
_, preds = torch.max(output, dim=2)
if torch.cuda.is_available():
preds = preds.cpu()
preds = preds.data.view(-1).numpy()
pos_tags = [dataset.pos_vocab.id2word(x) for x in preds]
for word, tag in zip(sentence, pos_tags):
print('%s - %s' % (word, tag))
sentence = "sdfgkj asd;glkjsdg ;lkj .".split()
inference(sentence)
Explanation: Inference
Now let's look at some of the model's predictions.
End of explanation
import torch
from collections import Counter
from torch.autograd import Variable
from torch.utils.data import Dataset
class Annotation(object):
def __init__(self):
A helper object for storing annotation data.
self.tokens = []
self.sentiment = None
class SentimentDataset(Dataset):
def __init__(self, fname):
Initializes the SentimentDataset.
Args:
fname: The .tsv file to load data from.
self.fname = fname
self.annotations = self.process_tsv_file(fname)
self.token_vocab = Vocab([x.tokens for x in self.annotations],
unk_token='<unk>')
def __len__(self):
return len(self.annotations)
def __getitem__(self, idx):
annotation = self.annotations[idx]
input = [self.token_vocab.word2id(x) for x in annotation.tokens]
target = annotation.sentiment
return input, target
def process_tsv_file(self, fname):
# Read the entire file.
with open(fname, 'r') as f:
lines = f.readlines()
annotations = []
observed_ids = set()
for line in lines[1:]:
annotation = Annotation()
_, sentence_id, sentence, sentiment = line.split('\t')
sentence_id = sentence_id
if sentence_id in observed_ids:
continue
else:
observed_ids.add(sentence_id)
annotation.tokens = sentence.split()
annotation.sentiment = int(sentiment)
if len(annotation.tokens) > 0:
annotations.append(annotation)
return annotations
def pad(sequences, max_length, pad_value=0):
Pads a list of sequences.
Args:
sequences: A list of sequences to be padded.
max_length: The length to pad to.
pad_value: The value used for padding.
Returns:
A list of padded sequences.
out = []
for sequence in sequences:
        padded = sequence + [pad_value]*(max_length - len(sequence))
out.append(padded)
return out
def collate_annotations(batch):
    Function used to collate data returned by SentimentDataset.
# Get inputs, targets, and lengths.
inputs, targets = zip(*batch)
lengths = [len(x) for x in inputs]
# Sort by length.
sort = sorted(zip(inputs, targets, lengths),
key=lambda x: x[2],
reverse=True)
inputs, targets, lengths = zip(*sort)
# Pad.
max_length = max(lengths)
inputs = pad(inputs, max_length)
# Transpose.
inputs = list(map(list, zip(*inputs)))
# Convert to PyTorch variables.
inputs = Variable(torch.LongTensor(inputs))
targets = Variable(torch.LongTensor(targets))
lengths = Variable(torch.LongTensor(lengths))
if torch.cuda.is_available():
inputs = inputs.cuda()
targets = targets.cuda()
lengths = lengths.cuda()
return inputs, targets, lengths
Explanation: Example: Sentiment Analysis
According to Wikipedia:
Opinion mining (sometimes known as sentiment analysis or emotion AI) refers to the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information.
Formally, given a sequence of words $\mathbf{x} = \left< x_1, x_2, \ldots, x_t \right>$ the goal is to learn a model $P(y \,|\, \mathbf{x})$ where $y$ is the sentiment associated with the sentence. This is very similar to the problem above, with the exception that we only want a single output for each sentence, not a sequence of outputs. Accordingly, we will only highlight the necessary changes that need to be made.
Dataset
We will be using the Kaggle 'Sentiment Analysis on Movie Reviews' dataset [link]. You will need to agree to the Kaggle terms of service in order to download this data. The following code can be used to process this data.
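The file is expected to have a header row and four tab-separated columns - a phrase id, a sentence id, the phrase itself and the sentiment label. An illustrative (made-up) snippet, shown space-aligned here for readability:
```
PhraseId  SentenceId  Phrase                                          Sentiment
1         1           A series of escapades demonstrating the adage   1
2         1           A series                                        2
```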
End of explanation
from torch import nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class SentimentClassifier(nn.Module):
def __init__(self,
input_vocab_size,
output_vocab_size,
embedding_dim=64,
hidden_size=64):
        Initializes the sentiment classifier.
Args:
input_vocab_size: Size of the input vocabulary.
output_vocab_size: Size of the output vocabulary.
embedding_dim: Dimension of the word embeddings.
hidden_size: Number of units in each LSTM hidden layer.
# Always do this!!!
super(SentimentClassifier, self).__init__()
# Store parameters
self.input_vocab_size = input_vocab_size
self.output_vocab_size = output_vocab_size
self.embedding_dim = embedding_dim
self.hidden_size = hidden_size
# Define layers
self.word_embeddings = nn.Embedding(input_vocab_size, embedding_dim,
padding_idx=0)
self.rnn = nn.GRU(embedding_dim, hidden_size, dropout=0.9)
self.fc = nn.Linear(hidden_size, output_vocab_size)
self.activation = nn.LogSoftmax(dim=2)
def forward(self, x, lengths=None, hidden=None):
        Computes a forward pass of the sentiment classifier.
Args:
x: A LongTensor w/ dimension [seq_len, batch_size].
lengths: The lengths of the sequences in x.
hidden: Hidden state to be fed into the lstm.
Returns:
        net: Log probabilities of the sentiment classes.
        hidden: Hidden state of the GRU.
seq_len, batch_size = x.size()
# If no hidden state is provided, then default to zeros.
if hidden is None:
hidden = Variable(torch.zeros(1, batch_size, self.hidden_size))
if torch.cuda.is_available():
hidden = hidden.cuda()
net = self.word_embeddings(x)
if lengths is not None:
lengths_list = lengths.data.view(-1).tolist()
net = pack_padded_sequence(net, lengths_list)
net, hidden = self.rnn(net, hidden)
# NOTE: we are using hidden as the input to the fully-connected layer, not net!!!
net = self.fc(hidden)
net = self.activation(net)
return net, hidden
Explanation: Model
The model architecture we will use for sentiment classification is almost exactly the same as the one we used for tagging. The only difference is that we want the model to produce a single output at the end, not a sequence of outputs. While there are many ways to do this, a simple approach is to just use the final hidden state of the recurrent layer as the input to the fully connected layer. This approach is particularly nice in PyTorch since the forward pass of the recurrent layer returns the final hidden states as its second output (see the note in the code below if this is unclear), so we do not need to do any fancy indexing tricks to get them.
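For example, a quick shape check (sizes chosen arbitrarily) shows the two things a GRU returns - the per-step outputs and the final hidden state:
```python
import torch
from torch import nn
from torch.autograd import Variable

rnn = nn.GRU(input_size=8, hidden_size=16)
x = Variable(torch.randn(5, 3, 8))   # [seq_len=5, batch=3, features=8]
output, hidden = rnn(x)
print(output.size())                 # torch.Size([5, 3, 16]), one vector per time step
print(hidden.size())                 # torch.Size([1, 3, 16]), the final hidden state only
```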
Formally, the model architecture we will use is:
Embed the input words into a 64-dimensional vector space (the embedding_dim default used when constructing the model).
Feed the word embeddings into a GRU.
Feed the final hidden state output by the GRU into a fully connected layer.
Use a softmax activation to get the probabilities of the different labels.
End of explanation
import numpy as np
from torch.utils.data import DataLoader
# Load dataset.
sentiment_dataset = SentimentDataset('train.tsv')
# Hyperparameters / constants.
input_vocab_size = len(sentiment_dataset.token_vocab)
output_vocab_size = 5
batch_size = 16
epochs = 7
# Initialize the model.
model = SentimentClassifier(input_vocab_size, output_vocab_size)
if torch.cuda.is_available():
model = model.cuda()
# Initialize loss function and optimizer.
loss_function = torch.nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters())
# Main training loop.
data_loader = DataLoader(sentiment_dataset, batch_size=batch_size, shuffle=True,
collate_fn=collate_annotations)
losses = []
i = 0
for epoch in range(epochs):
for inputs, targets, lengths in data_loader:
optimizer.zero_grad()
outputs, _ = model(inputs, lengths=lengths)
outputs = outputs.view(-1, output_vocab_size)
targets = targets.view(-1)
loss = loss_function(outputs, targets)
loss.backward()
optimizer.step()
losses.append(loss.data[0])
if (i % 100) == 0:
average_loss = np.mean(losses)
losses = []
print('Iteration %i - Loss: %0.6f' % (i, average_loss), end='\r')
if (i % 1000) == 0:
torch.save(model, 'sentiment_classifier.pt')
i += 1
torch.save(model, 'sentiment_classifier.final.pt')
Explanation: Training
This code should look pretty familiar by now...
End of explanation
model = torch.load('sentiment_classifier.final.pt')
def inference(sentence):
# Convert words to id tensor.
ids = [[sentiment_dataset.token_vocab.word2id(x)] for x in sentence]
ids = Variable(torch.LongTensor(ids))
if torch.cuda.is_available():
ids = ids.cuda()
# Get model output.
output, _ = model(ids)
_, pred = torch.max(output, dim=2)
if torch.cuda.is_available():
pred = pred.cpu()
pred = pred.data.view(-1).numpy()
print('Sentence: %s' % ' '.join(sentence))
print('Sentiment (0=negative, 4=positive): %i' % pred)
sentence = 'Zot zot .'.split()
inference(sentence)
Explanation: Inference
Lastly, let's examine some model outputs:
End of explanation |
2,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pick an example to test if load.cc works
Step1: Inspect the protobuf containing the model's architecture and logic | Python Code:
# -- inputs
X_test[0]
# -- predicted output (using Keras)
yhat[0]
Explanation: Pick an example to test if load.cc works
End of explanation
from tensorflow.core.framework import graph_pb2
# -- read in the graph
f = open("models/graph.pb", "rb")
graph_def = graph_pb2.GraphDef()
graph_def.ParseFromString(f.read())
import tensorflow as tf
# -- actually import the graph described by graph_def
tf.import_graph_def(graph_def, name = '')
for node in graph_def.node:
print node.name
Explanation: Inspect the protobuf containing the model's architecture and logic
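Once the input and output node names are known from the printout above, the imported graph can be evaluated and compared against the Keras prediction yhat[0]. The node names below are placeholders (they depend on how the graph was exported), so treat this only as a sketch:
```python
# Hypothetical node names -- replace them with the ones printed above.
with tf.Session() as sess:
    out = sess.run('output_node:0',
                   feed_dict={'input_node:0': X_test[0:1]})
print(out)   # should match yhat[0] if the export worked
```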
End of explanation |
2,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Demonstration of SMPS Calculations
The SMPS calculations require two main packages from atmPy - smps and dma. dma contains the DMA class and its children. The children of DMA simply contain the definition of the dimensions of the DMA used in the current SMPS instance. In this case, we use the definition of the NOAA wide DMA, which has the dimensions $r_i = 0.0312$, $r_o = 0.03613$ and $l = 0.34054$ where all units are in meters. The SMPS object provides a set of utilities for taking the scan data, applying a transfer function and correcting the distribution for multiple charges to produce a size distribution.
The import of the sizedistribution package allows us to manipulate the output of the DMA ($dN/d\log{D_p}$) such that we can pull out other representations of the size distribution. The remaining packages are simply used for data manipulation.
Step1: The first thing we do in the analysis is we create a new SMPS object with the DMA instance we wish to use. Here, we also set the initial directory to search for SMPS data. When a new SMPS object is created, an open file dialog window will be produced and the user may select one or many files to analyze. The file names will be stored in the SMPS attribute files.
Step2: Determining the Lag
In order to properly analyze a scan, we must first align the data such that the particle concentrations are consistent with the conditions in the DMA. Although conditions such as voltages and flows adjust to changes almost immediately, there will be a lag in the particle response due to instrument residence time. We can determine the lag either through a correlation or heuristically. The SMPS class provides a function getLag which takes an integer indicating which entry of the files attribute (the array of files selected by the user) to use. The optional input, delta, allows the user to offset the result of the correlation by some amount to provide a more reasonable estimate of the lag.
The SMPS
Step3: Processing the Files
Once the user has pointed to the files they wish to use, they can begin processing of the files using the function SMPS
Step4: Charge Correction
The SMPS scans mobilities which are a function of voltage. In truth, at each voltage setpoint, the DMA allows a range of particle mobilities through the instrument. This range is expressed by the DMA transfer function as
\begin{equation}
\Omega = 1/q_a\max{\left[q_a,q_s,\left[\frac{1}{2}\left(q_a+q_s\right)-\right]\right]}
\end{equation}
\begin{equation}
Z=\frac{neCc(D_p)}{3\pi\mu(T)*D_p}
\end{equation}
In any charge correction, we must assume that there are no particles beyond the topmost bin. This allows us to make the assumption that all particles in that bin are singly charged. In order to determine the total number of particles in the current bin, we can simply use Wiedensohler's equation for the charging efficiency for singly charged particles. Starting at the topmost bin, we can calculate the total number of particles as
\begin{equation}
\frac{N_1(D_p,i)}{f_1}=N(D_p)
\end{equation}
where $N_1$ is the number of particles of size $D_p$ having 1 charge, $f_1$ is the charging efficiency for singly charged particles and $N(D_p)$ is the total number of particles of diameter $D_p$. Using the initial number of particles, we can then calculate the number of multiply charged particles in a similar fashion.
Once these numbers have been calculated, we can determine the location of the multiply charged particles (i.e. the diameter bin with which they have been identified). To do
Finally, to get the total number of particles in the bin, we can apply the sum
\begin{equation}
N(D_p) = \frac{N_1(D_p)}{f_1}\sum_{i=0}^{\infty}{f_i}
\end{equation}
However, in each of these cases, only a finite number of particles may be available in each bin, so in the code, we will have to take the minimum of the following
Step5: Use of the SizeDistr Object
The sizedistribution package contains some classes and routines for ready manipulation of the data. But first, we will need to convert the data of interest to a pandas DataFrame with the time as index.
Step6: In addition, we will need to convert the bin centers produced by the SMPS object to bin edges. To do this, we will make the simple assumption that the bin edges are just the halfway points between the centers. For the edge cases, we will simply take the difference between the smallest bin center and the halfway point between the first and second bin centers and subtract this value from the smallest diameter. Similarly, for the largest diameter, we will take the difference between the halfway point between the largest and second largest bin centers and the largest bin center and add it to the largest bin center.
Step7: Once we have the corresponding SizeDistr object, we can now change the current distribution which is in $dN/d\log D_p$ space and change this to a surface area distribution in log space. This will produce a new object that we will call sfSD.
Step8: To get an overall view, we can further manipulate the data to produce average distributions from the entire time series. | Python Code:
from atmPy.instruments.DMA import smps
from atmPy.instruments.DMA import dma
from matplotlib import colors
import matplotlib.pyplot as plt
from numpy import meshgrid
import numpy as np
import pandas as pd
from matplotlib.dates import date2num
from matplotlib import dates
from atmPy import sizedistribution as sd
%matplotlib inline
Explanation: Demonstration of SMPS Calculations
The SMPS calculations require two main packages from atmPy - smps and dma. dma contains the DMA class and its children. The children of DMA simply contain the definition of the dimensions of the DMA used in the current SMPS instance. In this case, we use the definition of the NOAA wide DMA, which has the dimensions $r_i = 0.0312$, $r_o = 0.03613$ and $l = 0.34054$ where all units are in meters. The SMPS object provides a set of utilities for taking the scan data, applying a transfer function and correcting the distribution for multiple charges to produce a size distribution.
The import of the sizedistribution package allows us to manipulate the output of the DMA ($dN/d\log{D_p}$) such that we can pull out other representations of the size distribution. The remaining packages are simply used for data manipulation.
End of explanation
hagis = smps.SMPS(dma.NoaaWide(),scan_folder="C:/Users/mrichardson/Documents/HAGIS/SMPS/Scans")
Explanation: The first thing we do in the analysis is we create a new SMPS object with the DMA instance we wish to use. Here, we also set the initial directory to search for SMPS data. When a new SMPS object is created, an open file dialog window will be produced and the user may select one or many files to analyze. The file names will be stored in the SMPS attribute files.
End of explanation
hagis.getLag(10, delta=10)
Explanation: Determining the Lag
In order to properly analyze a scan, we must first align the data such that the particle concentrations are consistent with the conditions in the DMA. Although conditions such as voltages and flows adjust to changes almost immediately, there will be a lag in the particle response due to instrument residence time. We can determine the lag either through a correlation or heuristically. The SMPS class provides a function getLag which takes an integer indicating which entry of the files attribute (the array of files selected by the user) to use. The optional input, delta, allows the user to offset the result of the correlation by some amount to provide a more reasonable estimate of the lag.
The SMPS::getLag method will produce two plots. The first is the results from the attempted correlation and the second shows how the two scans align with the lag estimate, both the smoothed and raw data. This method will set the lag attribute in the SMPS instance which will be used in future calculations. This attribute is directly accessible if the user wishes to adjust it.
End of explanation
hagis.lag = 10
hagis.proc_files()
Explanation: Processing the Files
Once the user has pointed to the files they wish to use, they can begin processing of the files using the function SMPS::procFiles(). Each file is processed as follows:
The raw data concerning the conditions is truncated for both the up and down scans to the beginning and end of the respective scans. The important parameters here are the values $t_{scan}$ and $t_{dwell}$ from the header of the files. The range of the data from the upscan spans the indices 0 to $t_{scan}$ and the range for the down data is $t_{scan}+t_{dwell}$ to $2\times t_{scan}+t_{dwell}$.
The CN data is adjusted based on the lag. This data is truncated for the up scan as $t_{lag}$ to $t_{lag} + t_{scan}$ where $t_{lag}$ is the lag time determined by the user (possibly with the function SMPS::getLag(). In the downward scan, the data array is reversed and the data is truncated to the range $t_{dwell}-t_{lag}$ to $t_{scan}+t_{dwell}-t_{lag}$. In all cases, the CN concentration is calculated from the 1 second buffer and the CPC flow rate as $N_{1 s}/Q_{cpc}$.
The truncated [CN] is then smoothed using a Lowess smoothing function for both the up and down data.
Diameters for each of the corresponding [CN] are then calculated from the set point voltage (rather than the measured voltage).
The resulting diameters and smoothed [CN] are then run through a transfer function. In this case, the transfer function is a simple full width half max based off of the mobility range of the current voltage. This function allows us to produce a $d\log D_p$ for the ditribution.
The distribution is then corrected based on the algorithm described below.
The charge corrected distribution is then converted to a logarithmic distribution using the values from the FWHM function.
The resulting distribution is then interpolated onto a logarithmically distributed array that consists of bin ranging from 1 to 1000 nm.
End of explanation
hagis.date
index = []
for i,e in enumerate(hagis.date):
if e is None:
index.append(i)
print(index)
if index:
hagis.date = np.delete(hagis.date, index)
hagis.dn_interp = np.delete(hagis.dn_interp,index, axis=0)
xfmt = dates.DateFormatter('%m/%d %H:%M')
xi = date2num(hagis.date)
XI, YI = meshgrid(xi, hagis.diam_interp)
#XI = dates.datetime.datetime.fromtimestamp(XI)
Z = hagis.dn_interp.transpose()
Z[np.where(Z <= 0)] = np.nan
pmax = 1e6 # 10**np.ceil(np.log10(np.amax(Z[np.where(Z > 0)])))
pmin = 1 #10**np.floor(np.log10(np.amin(Z[np.where(Z > 0)])))
fig, ax = plt.subplots()
pc = ax.pcolor(XI, YI, Z, cmap=plt.cm.jet, norm=colors.LogNorm(pmin, pmax, clip=False), alpha=0.8)
plt.colorbar(pc)
plt.yscale('log')
plt.ylim(5, 1000)
ax.xaxis.set_major_formatter(xfmt)
fig.autofmt_xdate()
fig.tight_layout()
Explanation: Charge Correction
The SMPS scans mobilities which are a function of voltage. In truth, at each voltage setpoint, the DMA allows a range of particle mobilities through the instrument. This range is expressed by the DMA transfer function as
\begin{equation}
\Omega = 1/q_a\max{\left[q_a,q_s,\left[\frac{1}{2}\left(q_a+q_s\right)-\right]\right]}
\end{equation}
\begin{equation}
Z=\frac{neCc(D_p)}{3\pi\mu(T)*D_p}
\end{equation}
In any charge correction, we must assume that there are no particles beyond the topmost bin. This allows us to make the assumption that all particles in that bin are singly charged. In order to determine the total number of particles in the current bin, we can simply use Wiedensohler's equation for the charging efficiency for singly charged particles. Starting at the topmost bin, we can calculate the total number of particles as
\begin{equation}
\frac{N_1(D_p,i)}{f_1}=N(D_p)
\end{equation}
where $N_1$ is the number of particles of size $D_p$ having 1 charge, $f_1$ is the charging efficiency for singly charged particles and $N(D_p)$ is the total number of particles of diameter $D_p$. Using the initial number of particles, we can then calculate the number of multiply charged particles in a similar fashion.
Once these numbers have been calculated, we can determine the location of the multiply charged particles (i.e. the diameter bin with which they have been identified). To do
Finally, to get the total number of particles in the bin, we can apply the sum
\begin{equation}
N(D_p) = \frac{N_1(D_p)}{f_1}\sum_{i=0}^{\infty}{f_i}
\end{equation}
However, in each of these cases, only a finite number of particles may be available in each bin, so in the code, we will have to take the minimum of the following:
\begin{equation}
\delta{N(k)}=\min{\left(\frac{f_iN_1}{f_1},N_k\right)}
\end{equation}
where $\delta{N(k)}$ is the number of particles to remove from bin $k$ and $N_k$ is the number of particles in bin $k$.
Output
In the following, we take the results from the SMPS::procFiles() method and produce a color map of size distributions in $dN/d\log D_p$ space as a function of time. The attribute date from the instance of SMPS is a set of DateTime for each scan based on the start time of the file and the scan time collected from the header.
End of explanation
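A direct transcription of the mobility expression above, $Z = neC_c(D_p)/(3\pi\mu(T)D_p)$, is sketched below. The slip-correction constants and the viscosity law are common textbook choices and are assumptions here, not necessarily what atmPy uses internally:
```python
import numpy as np
from scipy import constants as const

def electrical_mobility(dp, n=1, temp=293.15, pres=101325.0):
    # Mean free path and Cunningham slip correction Cc (assumed standard forms).
    mfp = 67.3e-9 * (temp / 296.15) * (101325.0 / pres)
    kn = 2.0 * mfp / dp
    cc = 1.0 + kn * (1.257 + 0.4 * np.exp(-1.1 / kn))
    # Sutherland's law for the dynamic viscosity of air, mu(T).
    mu = 1.716e-5 * (temp / 273.15)**1.5 * (273.15 + 110.4) / (temp + 110.4)
    # Z = n*e*Cc(Dp) / (3*pi*mu(T)*Dp), everything in SI units.
    return n * const.e * cc / (3.0 * np.pi * mu * dp)

# e.g. a singly charged 100 nm particle, roughly 2.7e-8 m^2 V^-1 s^-1
print(electrical_mobility(100e-9))
```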
dataframe = pd.DataFrame(hagis.dn_interp)
dataframe.index = hagis.date
Explanation: Use of the SizeDistr Object
The sizedistribution package contains some classes and routines for ready manipulation of the data. But first, we will need to convert the data of interest to a pandas DataFrame with the time as index.
End of explanation
binedges = (hagis.diam_interp[1:]+hagis.diam_interp[:-1])/2
first = hagis.diam_interp[0] -(binedges[0]-hagis.diam_interp[0])
last = hagis.diam_interp[-1]+ (hagis.diam_interp[-1]-binedges[-1])
binedges = np.append([first],binedges)
binedges=np.append(binedges,[last])
sizeDistr = sd.SizeDist_TS(dataframe,binedges, 'dNdlogDp')
f,a,b,c = sizeDistr.plot(vmax = pmax, vmin = pmin, norm='log', showMinorTickLabels=False, cmap=plt.cm.jet)
a.set_ylim((5,1000))
Explanation: In addition, we will need to convert the bin centers produced by the SMPS object to bin edges. To do this, we will make the simple assumption that the bin edges are just the halfway points between the centers. For the edge cases, we will simply take the difference between the smallest bin center and the halfway point between the first and second bin centers and subtract this value from the smallest diameter. Similarly, for the largest diameter, we will take the difference between the halfway point between the largest and second largest bin centers and the largest bin center and add it to the largest bin center.
End of explanation
sfSD = sizeDistr.convert2dSdlogDp()
from imp import reload
reload(sd)
f,a,b,c = sfSD.plot(vmax = 1e10, vmin = 1e4, norm='log', showMinorTickLabels=False,removeTickLabels=['200','300','400',] ,cmap =plt.cm.jet)
a.set_ylim((5,1000))
Explanation: Once we have the corresponding SizeDistr object, we can convert the current distribution, which is in $dN/d\log D_p$ space, into a surface area distribution in log space. This will produce a new object that we will call sfSD.
End of explanation
avgAt = sizeDistr.average_overAllTime()
f,a = avgAt.plot(norm='log')
# a.set_yscale('log')
avgAtS = sfSD.average_overAllTime()
f,a= avgAtS.plot(norm='log')
a.set_yscale('log')
Explanation: To get an overall view, we can further manipulate the data to produce average distributions from the entire time series.
End of explanation |
2,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
====================================================================
Decoding in sensor space data using the Common Spatial Pattern (CSP)
====================================================================
Decoding applied to MEG data in sensor space decomposed using CSP.
Here the classifier is applied to features extracted on CSP filtered signals.
See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1]_.
Step1: Set parameters and read data
Step2: Decoding in sensor space using a linear SVM | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Romain Trachel <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: ====================================================================
Decoding in sensor space data using the Common Spatial Pattern (CSP)
====================================================================
Decoding applied to MEG data in sensor space decomposed using CSP.
Here the classifier is applied to features extracted on CSP filtered signals.
See http://en.wikipedia.org/wiki/Common_spatial_pattern and [1]_.
References
.. [1] Zoltan J. Koles. The quantitative extraction and topographic mapping
of the abnormal components in the clinical EEG. Electroencephalography
and Clinical Neurophysiology, 79(6):440--447, December 1991.
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, vis_l=3)
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(2, None, fir_design='firwin') # replace baselining with high-pass
events = mne.read_events(event_fname)
raw.info['bads'] = ['MEG 2443'] # set bad channels
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=False,
exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=None, preload=True)
labels = epochs.events[:, -1]
evoked = epochs.average()
Explanation: Set parameters and read data
End of explanation
from sklearn.svm import SVC # noqa
from sklearn.model_selection import ShuffleSplit # noqa
from mne.decoding import CSP # noqa
n_components = 3 # pick some components
svc = SVC(C=1, kernel='linear')
csp = CSP(n_components=n_components, norm_trace=False)
# Define a monte-carlo cross-validation generator (reduce variance):
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
scores = []
epochs_data = epochs.get_data()
for train_idx, test_idx in cv.split(labels):
y_train, y_test = labels[train_idx], labels[test_idx]
X_train = csp.fit_transform(epochs_data[train_idx], y_train)
X_test = csp.transform(epochs_data[test_idx])
# fit classifier
svc.fit(X_train, y_train)
scores.append(svc.score(X_test, y_test))
# Printing the results
class_balance = np.mean(labels == labels[0])
class_balance = max(class_balance, 1. - class_balance)
print("Classification accuracy: %f / Chance level: %f" % (np.mean(scores),
class_balance))
# Or use much more convenient scikit-learn cross_val_score function using
# a Pipeline
from sklearn.pipeline import Pipeline # noqa
from sklearn.model_selection import cross_val_score # noqa
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=42)
clf = Pipeline([('CSP', csp), ('SVC', svc)])
scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)
print(scores.mean()) # should match results above
# And using regularized CSP with Ledoit-Wolf estimator
csp = CSP(n_components=n_components, reg='ledoit_wolf', norm_trace=False)
clf = Pipeline([('CSP', csp), ('SVC', svc)])
scores = cross_val_score(clf, epochs_data, labels, cv=cv, n_jobs=1)
print(scores.mean()) # should get better results than above
# plot CSP patterns estimated on full data for visualization
csp.fit_transform(epochs_data, labels)
data = csp.patterns_
fig, axes = plt.subplots(1, 4)
for idx in range(4):
mne.viz.plot_topomap(data[idx], evoked.info, axes=axes[idx], show=False)
fig.suptitle('CSP patterns')
fig.tight_layout()
fig.show()
Explanation: Decoding in sensor space using a linear SVM
End of explanation |
2,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few). | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = 252*2 + 28
STARTING_DAYS_AHEAD = 20
POSSIBLE_FRACTIONS = np.arange(0.0, 1.1, 0.1).round(decimals=3).tolist()
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=0,
name='Agent_{}'.format(i)) for i in index]
POSSIBLE_FRACTIONS
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
End of explanation
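For context, value_eval presumably reports the usual annualized Sharpe ratio and cumulative return of the portfolio value series; a minimal sketch of how such metrics are typically computed (this is an assumption about utils.analysis, not its actual code):
```python
import numpy as np

def sharpe_and_cum_ret(prices, samples_per_year=252):
    # prices: pandas Series of portfolio (or closing) values.
    daily_ret = prices.pct_change().dropna()
    cum_ret = prices.iloc[-1] / prices.iloc[0] - 1.0
    sharpe = np.sqrt(samples_per_year) * daily_ret.mean() / daily_ret.std()
    return sharpe, cum_ret
```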
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 10
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
import pickle
with open('../../data/simple_q_learner_10_actions.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation |
2,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
Before starting here, all the instructions on the installation page should be completed!
Here you will learn how to
Step1: Make sure that your environment path is set to match the correct version of pandeia
Step2: Load blank exo dictionary
To start, load in a blank exoplanet dictionary with empty keys. You will fill these out for yourself in the next step.
Step3: Edit exoplanet observation inputs
Editing each key is annoying. But do this carefully, or it could result in nonsense runs
Step4: Edit exoplanet host star inputs
Note... If you select phoenix you do not have to provide a starpath, w_unit or f_unit, but you do have to provide a temp, metal and logg. If you select user you do not need to provide a temp, metal and logg, but you do need to provide units and starpath.
Option 1) Grab stellar model from database
Step5: Option 1) Input as dictionary or filename
Step6: Edit exoplanet inputs using one of three options
1) user specified
2) constant value
3) select from grid
1) Edit exoplanet planet inputs if using your own model
Step7: 2) Users can also add in a constant temperature or a constant transit depth
Step8: 3) Select from grid
NOTE: Currently only the fortney grid for hot Jupiters from Fortney+2010 is supported.
Step9: Load in instrument dictionary (OPTIONAL)
Step 2 is optional because PandExo has the functionality to automatically load in instrument dictionaries. Skip this if you plan on observing with one of the following and want to use the subarray with the smallest frame time and the readout mode with 1 frame/1 group (standard)
Step10: Don't know what instrument options there are?
Step11: Adjusting the Background Level
You may want to think about adjusting the background level of your observation, based on the position of your target. PandExo has two options and three levels for the position
Step12: Running NIRISS SOSS Order 2
PandExo will only extract a single order at a time. By default, it is set to extract Order 1. Below you can see how to extract the second order.
NOTE! Users should be careful with this calculation. Saturation will be limited by the first order. Therefore, I suggest running one calculation with ngroup='optimize' for Order 1. This will give you an idea of a good number of groups to use. Then, you can use that in this order 2 calculation.
Step13: Running PandExo
You have four options for running PandExo. All of them are accessed through attribute jdi.run_pandexo. See examples below.
jdi.run_pandexo(exo, inst, param_space = 0, param_range = 0,save_file = True,
output_path=os.getcwd(), output_file = '', verbose=True)
Option 1- Run single instrument mode, single planet
If you forget which instruments are available, run jdi.print_instruments() and pick one
Step14: Note, you can turn off print statements with verbose=False
Option 2- Run single instrument mode (with user dict), single planet
This is the same thing as option 1, but instead of feeding it a list of keys, you can feed it an instrument dictionary (this is for users who want to simulate something NOT predefined within pandexo)
Step15: Option 3- Run several modes, single planet
Use several modes from print_instruments() options.
Step16: Option 4- Run single mode, several planet cases
Use a single mode from print_instruments() options. But explore parameter space with respect to any parameter in the exo dict. The example below shows how to loop over several planet models
You can loop through anything in the exoplanet dictionary. It will be planet, star or observation followed by whatever you want to loop through in that set.
i.e. planet+exopath, star+temp, star+metal, star+logg, observation+sat_level.. etc | Python Code:
import warnings
warnings.filterwarnings('ignore')
import pandexo.engine.justdoit as jdi # THIS IS THE HOLY GRAIL OF PANDEXO
import numpy as np
import os
#pip install pandexo.engine --upgrade
Explanation: Getting Started
Before starting here, all the instructions on the installation page should be completed!
Here you will learn how to:
set planet properties
set stellar properties
run default instrument modes
adjust instrument modes
run pandexo
End of explanation
print(os.environ['pandeia_refdata'] )
import pandeia.engine
print(pandeia.engine.__version__)
Explanation: Make sure that your environment path is set to match the correct version of pandeia
End of explanation
exo_dict = jdi.load_exo_dict()
print(exo_dict.keys())
#print(exo_dict['star']['w_unit'])
Explanation: Load blank exo dictionary
To start, load in a blank exoplanet dictionary with empty keys. You will fill these out for yourself in the next step.
End of explanation
exo_dict['observation']['sat_level'] = 80 #saturation level in percent of full well
exo_dict['observation']['sat_unit'] = '%'
exo_dict['observation']['noccultations'] = 1 #number of transits
exo_dict['observation']['R'] = None #fixed binning. I usually suggest ZERO binning.. you can always bin later
#without having to redo the calcualtion
exo_dict['observation']['baseline_unit'] = 'total' #Defines how you specify out of transit observing time
#'frac' : fraction of time in transit versus out = in/out
#'total' : total observing time (seconds)
exo_dict['observation']['baseline'] = 4.0*60.0*60.0 #in accordance with what was specified above (total observing time)
exo_dict['observation']['noise_floor'] = 0 #this can be a fixed level or it can be a filepath
#to a wavelength dependent noise floor solution (units are ppm)
Explanation: Edit exoplanet observation inputs
Editing each key is annoying. But do this carefully, or it could result in nonsense runs
End of explanation
#OPTION 1 get star from database
exo_dict['star']['type'] = 'phoenix' #phoenix or user (if you have your own)
exo_dict['star']['mag'] = 8.0 #magnitude of the system
exo_dict['star']['ref_wave'] = 1.25 #For J mag = 1.25, H = 1.6, K =2.22.. etc (all in micron)
exo_dict['star']['temp'] = 5500 #in K
exo_dict['star']['metal'] = 0.0 # as log Fe/H
exo_dict['star']['logg'] = 4.0 #log surface gravity cgs
Explanation: Edit exoplanet host star inputs
Note... If you select phoenix you do not have to provide a starpath, w_unit or f_unit, but you do have to provide a temp, metal and logg. If you select user you do not need to provide a temp, metal and logg, but you do need to provide units and starpath.
Option 1) Grab stellar model from database
End of explanation
#Let's create a little fake stellar input
import scipy.constants as sc
wl = np.linspace(0.8, 5, 3000)
nu = sc.c/(wl*1e-6) # frequency in sec^-1
teff = 5500.0
planck_5500K = nu**3 / (np.exp(sc.h*nu/sc.k/teff) - 1)
#can either be dictionary input
starflux = {'f':planck_5500K, 'w':wl}
#or can be as a stellar file
#starflux = 'planck_5500K.dat'
#with open(starflux, 'w') as sf:
# for w,f in zip(wl, planck_5500K):
# sf.write(f'{w:.15f} {f:.15e}\n')
exo_dict['star']['type'] = 'user'
exo_dict['star']['mag'] = 8.0 #magnitude of the system
exo_dict['star']['ref_wave'] = 1.25
exo_dict['star']['starpath'] = starflux
exo_dict['star']['w_unit'] = 'um'
exo_dict['star']['f_unit'] = 'erg/cm2/s/Hz'
Explanation: Option 2) Input as dictionary or filename
End of explanation
exo_dict['planet']['type'] ='user' #tells pandexo you are uploading your own spectrum
exo_dict['planet']['exopath'] = 'wasp12b.txt'
#or as a dictionary
#exo_dict['planet']['exopath'] = {'f':spectrum, 'w':wavelength}
exo_dict['planet']['w_unit'] = 'cm' #other options include "um","nm" ,"Angs", "sec" (for phase curves)
exo_dict['planet']['f_unit'] = 'rp^2/r*^2' #other options are 'fp/f*'
exo_dict['planet']['transit_duration'] = 2.0*60.0*60.0 #transit duration
exo_dict['planet']['td_unit'] = 's' #Any unit of time in accordance with astropy.units can be added
Explanation: Edit exoplanet inputs using one of three options
1) user specified
2) constant value
3) select from grid
1) Edit exoplanet planet inputs if using your own model
End of explanation
exo_dict['planet']['type'] = 'constant' #tells pandexo you want a fixed transit depth
exo_dict['planet']['transit_duration'] = 2.0*60.0*60.0 #transit duration
exo_dict['planet']['td_unit'] = 's'
exo_dict['planet']['radius'] = 1
exo_dict['planet']['r_unit'] = 'R_jup' #Any unit of distance in accordance with astropy.units can be added here
exo_dict['star']['radius'] = 1
exo_dict['star']['r_unit'] = 'R_sun' #Same deal with astropy.units here
exo_dict['planet']['f_unit'] = 'rp^2/r*^2' #this is what you would do for primary transit
#ORRRRR....
#if you wanted to instead to secondary transit at constant temperature
#exo_dict['planet']['f_unit'] = 'fp/f*'
#exo_dict['planet']['temp'] = 1000
Explanation: 2) Users can also add in a constant temperature or a constant transit depth
End of explanation
exo_dict['planet']['type'] = 'grid' #tells pandexo you want to pull from the grid
exo_dict['planet']['temp'] = 1000 #grid: 500, 750, 1000, 1250, 1500, 1750, 2000, 2250, 2500
exo_dict['planet']['chem'] = 'noTiO' #options: 'noTiO' and 'eqchem', noTiO is chemical eq. without TiO
exo_dict['planet']['cloud'] = 'ray10' #options: nothing: '0',
# Weak, medium, strong scattering: ray10,ray100, ray1000
# Weak, medium, strong cloud: flat1,flat10, flat100
exo_dict['planet']['mass'] = 1
exo_dict['planet']['m_unit'] = 'M_jup' #Any unit of mass in accordance with astropy.units can be added here
exo_dict['planet']['radius'] = 1
exo_dict['planet']['r_unit'] = 'R_jup' #Any unit of distance in accordance with astropy.units can be added here
exo_dict['star']['radius'] = 1
exo_dict['star']['r_unit'] = 'R_sun' #Same deal with astropy.units here
Explanation: 3) Select from grid
NOTE: Currently only the fortney grid for hot Jupiters from Fortney+2010 is supported. Holler though, if you want another grid supported
End of explanation
jdi.print_instruments()
inst_dict = jdi.load_mode_dict('NIRSpec G140H')
#loading in instrument dictionaries allow you to personalize some of
#the fields that are predefined in the templates. The templates have
#the subarrays with the lowest frame times and the readmodes with 1 frame per group.
#If that is not what you want, change these fields
#Try printing this out to get a feel for how it is structured:
print(inst_dict['configuration'])
#Another way to display this is to print out the keys
inst_dict.keys()
Explanation: Load in instrument dictionary (OPTIONAL)
Step 2 is optional because PandExo has the functionality to automatically load in instrument dictionaries. Skip this if you plan on observing with one of the following and want to use the subarray with the smallest frame time and the readout mode with 1 frame/1 group (standard):
- NIRCam F444W
- NIRSpec Prism
- NIRSpec G395M
- NIRSpec G395H
- NIRSpec G235H
- NIRSpec G235M
- NIRCam F322W
- NIRSpec G140M
- NIRSpec G140H
- MIRI LRS
- NIRISS SOSS
End of explanation
print("SUBARRAYS")
print(jdi.subarrays('nirspec'))
print("FILTERS")
print(jdi.filters('nircam'))
print("DISPERSERS")
print(jdi.dispersers('nirspec'))
#you can try personalizing some of these fields
inst_dict["configuration"]["detector"]["ngroup"] = 'optimize' #running "optimize" will select the maximum
#possible groups before saturation.
#You can also write in any integer between 2-65536
inst_dict["configuration"]["detector"]["subarray"] = 'substrip256' #change the subbaray
Explanation: Don't know what instrument options there are?
End of explanation
inst_dict['background'] = 'ecliptic'
inst_dict['background_level'] = 'high'
Explanation: Adjusting the Background Level
You may want to think about adjusting the background level of your observation, based on the position of your target. PandExo offers two options for the position and three levels:
ecliptic or minzodi
low, medium, high
End of explanation
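# The other documented position option (an addition, not in the original notebook):
# 'minzodi' with a 'low' background level.
inst_dict['background'] = 'minzodi'
inst_dict['background_level'] = 'low'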
inst_dict = jdi.load_mode_dict('NIRISS SOSS')
inst_dict['strategy']['order'] = 2
inst_dict['configuration']['detector']['subarray'] = 'substrip256'
ngroup_from_order1_run = 2
inst_dict["configuration"]["detector"]["ngroup"] = ngroup_from_order1_run
Explanation: Running NIRISS SOSS Order 2
PandExo will only extract a single order at a time. By default, it is set to extract Order 1. Below you can see how to extract the second order.
NOTE! Users should be careful with this calculation. Saturation will be limited by the first order. Therefore, I suggest running one calculation with ngroup='optimize' for Order 1. This will give you an idea of a good number of groups to use. Then, you can use that in this order 2 calculation.
End of explanation
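# Sketch of the suggested two-step workflow (an addition, not in the original notebook):
# run Order 1 with ngroup='optimize' first, inspect the returned dictionary for the
# optimized number of groups, then plug that number into the Order 2 run above.
# The explicit order=1 setting and the location of the optimized ngroup in the output
# are assumptions -- check them against your PandExo version.
order1_inst = jdi.load_mode_dict('NIRISS SOSS')
order1_inst['strategy']['order'] = 1
order1_inst['configuration']['detector']['ngroup'] = 'optimize'
order1_result = jdi.run_pandexo(exo_dict, order1_inst)
print(order1_result.keys())  # look up the optimized ngroup in this output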
jdi.print_instruments()
result = jdi.run_pandexo(exo_dict,['NIRCam F322W2'], verbose=True)
Explanation: Running PandExo
You have four options for running PandExo. All of them are accessed through attribute jdi.run_pandexo. See examples below.
jdi.run_pandexo(exo, inst, param_space = 0, param_range = 0,save_file = True,
output_path=os.getcwd(), output_file = '', verbose=True)
Option 1- Run single instrument mode, single planet
If you forget which instruments are available run jdi.print_instruments() and pick one
End of explanation
inst_dict = jdi.load_mode_dict('NIRSpec G140H')
#personalize subarray
inst_dict["configuration"]["detector"]["subarray"] = 'sub2048'
result = jdi.run_pandexo(exo_dict, inst_dict)
Explanation: Note, you can turn off print statements with verbose=False
Option 2- Run single instrument mode (with user dict), single planet
This is the same thing as option 1, but instead of feeding it a list of keys, you can feed it an instrument dictionary (this is for users who want to simulate something NOT predefined within pandexo)
End of explanation
#choose select
result = jdi.run_pandexo(exo_dict,['NIRSpec G140M','NIRSpec G235M','NIRSpec G395M'],
output_file='three_nirspec_modes.p',verbose=True)
#run all
#result = jdi.run_pandexo(exo_dict, ['RUN ALL'], save_file = False)
Explanation: Option 3- Run several modes, single planet
Use several modes from print_instruments() options.
End of explanation
#looping over different exoplanet models
jdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'planet+exopath',
param_range = os.listdir('/path/to/location/of/models'),
output_path = '/path/to/output/simulations')
#looping over different stellar temperatures
jdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'star+temp',
param_range = np.linspace(5000,8000,2),
output_path = '/path/to/output/simulations')
#looping over different saturation levels
jdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'observation+sat_level',
param_range = np.linspace(.5,1,5),
output_path = '/path/to/output/simulations')
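#an additional illustration (not in the original notebook): the same pattern works
#for any other entry of the exo dict, e.g. looping over stellar metallicity
jdi.run_pandexo(exo_dict, ['NIRCam F444W'], param_space = 'star+metal',
                        param_range = np.linspace(-0.5, 0.5, 3),
                        output_path = '/path/to/output/simulations')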
Explanation: Option 4- Run single mode, several planet cases
Use a single mode from print_instruments() options. But explore parameter space with respect to any parameter in the exo dict. The example below shows how to loop over several planet models
You can loop through anything in the exoplanet dictionary. It will be planet, star or observation followed by whatever you want to loop through in that set.
i.e. planet+exopath, star+temp, star+metal, star+logg, observation+sat_level.. etc
End of explanation |
2,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading data
Simple stuff. We're loading in a CSV here, and we'll run the describe function over it to get the lay of the land.
Step1: In journalism, we're primarily concerned with using data analysis for two purposes
Step2: One record isn't super useful, so we'll do 10
Step3: If we want, we can keep drilling down. Maybe we should also limit our inquiry to, say, La Guardia.
Step4: Huh, does LGA struggle more than usual to get its planes to Atlanta on time? Let's live dangerously and make a boxplot.
(Spoiler alert
Step5: And so on.
Of course data journalists are also in the business of finding trends, so let's do some of that.
Describing trends
Being good, accountability-minded reporters, one thing we might be interested in is each airline's on-time performance throughout our sample. Here's one way to check that
Step6: Huh. Looks like the median flight from most of these carriers tends to show up pretty early. How does that change when we look at the mean?
Step7: A little less generous. We can spend some time debating which portrayal is more fair, but the large difference between the two is still worth noting.
We can, of course, also drill down by destination
Step8: And if we want a more user-friendly display ...
Step9: BONUS! Correlation
Up until now, we've spent a lot of time seeing how variables act in isolation -- mainly focusing on arrival delays. But sometimes we might also want to see how two variables interact. That's where correlation comes into play.
For example, let's test one of my personal suspicions that longer flights (measured in distance) tend to experience longer delays.
Step10: And now we'll make a crude visualization, just to show off | Python Code:
import pandas as pd
df = pd.read_csv('data/ontime_reports_may_2015_ny.csv')
df.describe()
Explanation: Loading data
Simple stuff. We're loading in a CSV here, and we'll run the describe function over it to get the lay of the land.
End of explanation
df.sort_values('ARR_DELAY', ascending=False).head(1)
Explanation: In journalism, we're primarily concerned with using data analysis for two purposes:
Finding needles in haystacks
And describing trends
We'll spend a little time looking at the first before we move on to the second.
Needles in haystacks
Let's start with the longest delays:
End of explanation
df.sort_values('ARR_DELAY', ascending=False).head(10)
Explanation: One record isn't super useful, so we'll do 10:
End of explanation
la_guardia_flights = df[df['ORIGIN'] == 'LGA']
la_guardia_flights.sort_values('ARR_DELAY', ascending=False).head(10)
Explanation: If we want, we can keep drilling down. Maybe we should also limit our inquiry to, say, La Guardia.
End of explanation
lga_to_atl = df[df['DEST'] == 'ATL']
lga_to_atl.boxplot('ACTUAL_ELAPSED_TIME', by='ORIGIN')
Explanation: Huh, does LGA struggle more than usual to get its planes to Atlanta on time? Let's live dangerously and make a boxplot.
(Spoiler alert: JFK is marginally worse)
End of explanation
df.groupby('CARRIER').median()['ARR_DELAY']
Explanation: And so on.
Of course data journalists are also in the business of finding trends, so let's do some of that.
Describing trends
Being good, accountability-minded reporters, one thing we might be interested in is each airline's on-time performance throughout our sample. Here's one way to check that:
End of explanation
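# Added context (not in the original notebook): how many flights each carrier has
# in the sample, to judge how much weight to give each median.
df.groupby('CARRIER').size()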
df.groupby('CARRIER').mean()['ARR_DELAY']
Explanation: Huh. Looks like the median flight from most of these carriers tends to show up pretty early. How does that change when we look at the mean?
End of explanation
df.groupby(['CARRIER', 'ORIGIN']).median()['ARR_DELAY']
Explanation: A little less generous. We can spend some time debating which portrayal is more fair, but the large difference between the two is still worth noting.
We can, of course, also drill down by destination:
End of explanation
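# A variant matching the text above (an addition): drilling down by destination
# instead of origin, using the 'DEST' column already used for the ATL filter.
df.groupby(['CARRIER', 'DEST']).median()['ARR_DELAY']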
df.boxplot('ARR_DELAY', by='CARRIER')
Explanation: And if we want a more user-friendly display ...
End of explanation
df.corr()
Explanation: BONUS! Correlation
Up until now, we've spent a lot of time seeing how variables act in isolation -- mainly focusing on arrival delays. But sometimes we might also want to see how two variables interact. That's where correlation comes into play.
For example, let's test one of my personal suspicions that longer flights (measured in distance) tend to experience longer delays.
End of explanation
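# A more targeted check of that suspicion (an addition): the pairwise correlation
# between flight distance and arrival delay, assuming the BTS 'DISTANCE' column is
# present in this CSV.
df['ARR_DELAY'].corr(df['DISTANCE'])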
import matplotlib.pyplot as plt
plt.matshow(df.corr())
Explanation: And now we'll make a crude visualization, just to show off:
End of explanation |
2,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Data augmentation
Step2: Download a dataset
This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. If you would like to learn about other ways of importing data, check out the load images tutorial.
Step3: The flowers dataset has five classes.
Step4: Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
Step5: Use Keras preprocessing layers
Resizing and rescaling
You can use the Keras preprocessing layers to resize your images to a consistent shape (with tf.keras.layers.Resizing), and to rescale pixel values (with tf.keras.layers.Rescaling).
Step6: Note
Step7: Verify that the pixels are in the [0, 1] range
Step8: Data augmentation
You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation.
Let's create a few preprocessing layers and apply them repeatedly to the same image.
Step9: There are a variety of preprocessing layers you can use for data augmentation including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, tf.keras.layers.RandomZoom, and others.
Two options to use the Keras preprocessing layers
There are two ways you can use these preprocessing layers, with important trade-offs.
Option 1
Step10: There are two important points to be aware of in this case
Step11: With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case
Step12: Train a model
For completeness, you will now train a model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal is to show you the mechanics).
Step13: Choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
Step14: Train for a few epochs
Step15: Custom data augmentation
You can also create custom data augmentation layers.
This section of the tutorial shows two ways of doing so
Step16: Next, implement a custom layer by subclassing
Step17: Both of these layers can be used as described in options 1 and 2 above.
Using tf.image
The above Keras preprocessing utilities are convenient. But, for finer control, you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out TensorFlow Addons Image
Step18: Retrieve an image to work with
Step19: Let's use the following function to visualize and compare the original and augmented images side-by-side
Step20: Data augmentation
Flip an image
Flip an image either vertically or horizontally with tf.image.flip_left_right
Step21: Grayscale an image
You can grayscale an image with tf.image.rgb_to_grayscale
Step22: Saturate an image
Saturate an image with tf.image.adjust_saturation by providing a saturation factor
Step23: Change image brightness
Change the brightness of image with tf.image.adjust_brightness by providing a brightness factor
Step24: Center crop an image
Crop the image from center up to the image part you desire using tf.image.central_crop
Step25: Rotate an image
Rotate an image by 90 degrees with tf.image.rot90
Step26: Random transformations
Warning
Step27: Randomly change image contrast
Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
Step28: Randomly crop an image
Randomly crop image using tf.image.stateless_random_crop by providing target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
Step29: Apply augmentation to a dataset
Let's first download the image dataset again in case they are modified in the previous sections.
Step30: Next, define a utility function for resizing and rescaling the images. This function will be used in unifying the size and scale of images in the dataset
Step31: Let's also define the augment function that can apply the random transformations to the images. This function will be used on the dataset in the next step.
Step32: Option 1
Step33: Map the augment function to the training dataset
Step34: Option 2
Step35: Map the wrapper function f to the training dataset, and the resize_and_rescale function—to the validation and test sets | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
Explanation: Data augmentation
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This tutorial demonstrates data augmentation: a technique to increase the diversity of your training set by applying random (but realistic) transformations, such as image rotation.
You will learn how to apply data augmentation in two ways:
Use the Keras preprocessing layers, such as tf.keras.layers.Resizing, tf.keras.layers.Rescaling, tf.keras.layers.RandomFlip, and tf.keras.layers.RandomRotation.
Use the tf.image methods, such as tf.image.flip_left_right, tf.image.rgb_to_grayscale, tf.image.adjust_brightness, tf.image.central_crop, and tf.image.stateless_random*.
Setup
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Download a dataset
This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. If you would like to learn about other ways of importing data, check out the load images tutorial.
End of explanation
num_classes = metadata.features['label'].num_classes
print(num_classes)
Explanation: The flowers dataset has five classes.
End of explanation
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
End of explanation
IMG_SIZE = 180
resize_and_rescale = tf.keras.Sequential([
layers.Resizing(IMG_SIZE, IMG_SIZE),
layers.Rescaling(1./255)
])
Explanation: Use Keras preprocessing layers
Resizing and rescaling
You can use the Keras preprocessing layers to resize your images to a consistent shape (with tf.keras.layers.Resizing), and to rescale pixel values (with tf.keras.layers.Rescaling).
End of explanation
result = resize_and_rescale(image)
_ = plt.imshow(result)
Explanation: Note: The rescaling layer above standardizes pixel values to the [0, 1] range. If instead you wanted it to be [-1, 1], you would write tf.keras.layers.Rescaling(1./127.5, offset=-1).
You can visualize the result of applying these layers to an image.
End of explanation
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
Explanation: Verify that the pixels are in the [0, 1] range:
End of explanation
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.2),
])
# Add the image to a batch.
image = tf.cast(tf.expand_dims(image, 0), tf.float32)
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = data_augmentation(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0])
plt.axis("off")
Explanation: Data augmentation
You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation.
Let's create a few preprocessing layers and apply them repeatedly to the same image.
End of explanation
model = tf.keras.Sequential([
# Add the preprocessing layers you created earlier.
resize_and_rescale,
data_augmentation,
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
# Rest of your model.
])
Explanation: There are a variety of preprocessing layers you can use for data augmentation including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, tf.keras.layers.RandomZoom, and others.
Two options to use the Keras preprocessing layers
There are two ways you can use these preprocessing layers, with important trade-offs.
Option 1: Make the preprocessing layers part of your model
End of explanation
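# A minimal sketch (an addition, not from the original tutorial) combining some of the
# other augmentation layers mentioned above; the 0.2 factors are arbitrary example values.
extra_augmentation = tf.keras.Sequential([
    layers.RandomContrast(0.2),
    layers.RandomZoom(0.2),
])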
aug_ds = train_ds.map(
lambda x, y: (resize_and_rescale(x, training=True), y))
Explanation: There are two important points to be aware of in this case:
Data augmentation will run on-device, synchronously with the rest of your layers, and benefit from GPU acceleration.
When you export your model using model.save, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers). This can save you from the effort of having to reimplement that logic server-side.
Note: Data augmentation is inactive at test time so input images will only be augmented during calls to Model.fit (not Model.evaluate or Model.predict).
Option 2: Apply the preprocessing layers to your dataset
End of explanation
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE
def prepare(ds, shuffle=False, augment=False):
# Resize and rescale all datasets.
ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1000)
# Batch all datasets.
ds = ds.batch(batch_size)
# Use data augmentation only on the training set.
if augment:
ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
num_parallel_calls=AUTOTUNE)
# Use buffered prefetching on all datasets.
return ds.prefetch(buffer_size=AUTOTUNE)
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
Explanation: With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case:
Data augmentation will happen asynchronously on the CPU, and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing, using Dataset.prefetch, shown below.
In this case the preprocessing layers will not be exported with the model when you call Model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export.
You can find an example of the first option in the Image classification tutorial. Let's demonstrate the second option here.
Apply the preprocessing layers to the datasets
Configure the training, validation, and test datasets with the Keras preprocessing layers you created earlier. You will also configure the datasets for performance, using parallel reads and buffered prefetching to yield batches from disk without I/O become blocking. (Learn more dataset performance in the Better performance with the tf.data API guide.)
Note: Data augmentation should only be applied to the training set.
End of explanation
model = tf.keras.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
Explanation: Train a model
For completeness, you will now train a model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top of it that is activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal is to show you the mechanics).
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Choose the tf.keras.optimizers.Adam optimizer and tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
End of explanation
epochs=5
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
Explanation: Train for a few epochs:
End of explanation
def random_invert_img(x, p=0.5):
if tf.random.uniform([]) < p:
x = (255-x)
else:
x
return x
def random_invert(factor=0.5):
return layers.Lambda(lambda x: random_invert_img(x, factor))
random_invert = random_invert()
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = random_invert(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0].numpy().astype("uint8"))
plt.axis("off")
Explanation: Custom data augmentation
You can also create custom data augmentation layers.
This section of the tutorial shows two ways of doing so:
First, you will create a tf.keras.layers.Lambda layer. This is a good way to write concise code.
Next, you will write a new layer via subclassing, which gives you more control.
Both layers will randomly invert the colors in an image, according to some probability.
End of explanation
class RandomInvert(layers.Layer):
def __init__(self, factor=0.5, **kwargs):
super().__init__(**kwargs)
self.factor = factor
def call(self, x):
return random_invert_img(x)
_ = plt.imshow(RandomInvert()(image)[0])
Explanation: Next, implement a custom layer by subclassing:
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Both of these layers can be used as described in options 1 and 2 above.
Using tf.image
The above Keras preprocessing utilities are convenient. But, for finer control, you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out TensorFlow Addons Image: Operations and TensorFlow I/O: Color Space Conversions.)
Since the flowers dataset was previously configured with data augmentation, let's reimport it to start fresh:
End of explanation
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Retrieve an image to work with:
End of explanation
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
Explanation: Let's use the following function to visualize and compare the original and augmented images side-by-side:
End of explanation
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
Explanation: Data augmentation
Flip an image
Flip an image either vertically or horizontally with tf.image.flip_left_right:
End of explanation
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
Explanation: Grayscale an image
You can grayscale an image with tf.image.rgb_to_grayscale:
End of explanation
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
Explanation: Saturate an image
Saturate an image with tf.image.adjust_saturation by providing a saturation factor:
End of explanation
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
Explanation: Change image brightness
Change the brightness of image with tf.image.adjust_brightness by providing a brightness factor:
End of explanation
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
Explanation: Center crop an image
Crop the image from center up to the image part you desire using tf.image.central_crop:
End of explanation
rotated = tf.image.rot90(image)
visualize(image, rotated)
Explanation: Rotate an image
Rotate an image by 90 degrees with tf.image.rot90:
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_brightness = tf.image.stateless_random_brightness(
image, max_delta=0.95, seed=seed)
visualize(image, stateless_random_brightness)
Explanation: Random transformations
Warning: There are two sets of random image operations: tf.image.random* and tf.image.stateless_random*. Using tf.image.random* operations is strongly discouraged as they use the old RNGs from TF 1.x. Instead, please use the random image operations introduced in this tutorial. For more information, refer to Random number generation.
Applying random transformations to the images can further help generalize and expand the dataset. The current tf.image API provides eight such random image operations (ops):
tf.image.stateless_random_brightness
tf.image.stateless_random_contrast
tf.image.stateless_random_crop
tf.image.stateless_random_flip_left_right
tf.image.stateless_random_flip_up_down
tf.image.stateless_random_hue
tf.image.stateless_random_jpeg_quality
tf.image.stateless_random_saturation
These random image ops are purely functional: the output only depends on the input. This makes them simple to use in high performance, deterministic input pipelines. They require a seed value be input each step. Given the same seed, they return the same results independent of how many times they are called.
Note: seed is a Tensor of shape (2,) whose values are any integers.
In the following sections, you will:
1. Go over examples of using random image operations to transform an image.
2. Demonstrate how to apply random transformations to a training dataset.
Randomly change image brightness
Randomly change the brightness of image using tf.image.stateless_random_brightness by providing a brightness factor and seed. The brightness factor is chosen randomly in the range [-max_delta, max_delta) and is associated with the given seed.
End of explanation
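# Quick determinism check (an addition, not from the original tutorial): with the same
# seed, a stateless op returns an identical result on every call.
seed = (1, 2)  # tuple of size (2,)
flip_a = tf.image.stateless_random_flip_left_right(image, seed=seed)
flip_b = tf.image.stateless_random_flip_left_right(image, seed=seed)
print(tf.reduce_all(flip_a == flip_b).numpy())  # expected: True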
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_contrast = tf.image.stateless_random_contrast(
image, lower=0.1, upper=0.9, seed=seed)
visualize(image, stateless_random_contrast)
Explanation: Randomly change image contrast
Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_crop = tf.image.stateless_random_crop(
image, size=[210, 300, 3], seed=seed)
visualize(image, stateless_random_crop)
Explanation: Randomly crop an image
Randomly crop image using tf.image.stateless_random_crop by providing target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
End of explanation
(train_datasets, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Apply augmentation to a dataset
Let's first download the image dataset again in case they are modified in the previous sections.
End of explanation
def resize_and_rescale(image, label):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = (image / 255.0)
return image, label
Explanation: Next, define a utility function for resizing and rescaling the images. This function will be used in unifying the size and scale of images in the dataset:
End of explanation
def augment(image_label, seed):
image, label = image_label
image, label = resize_and_rescale(image, label)
image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
# Make a new seed.
new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :]
# Random crop back to the original size.
image = tf.image.stateless_random_crop(
image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed)
# Random brightness.
image = tf.image.stateless_random_brightness(
image, max_delta=0.5, seed=new_seed)
image = tf.clip_by_value(image, 0, 1)
return image, label
Explanation: Let's also define the augment function that can apply the random transformations to the images. This function will be used on the dataset in the next step.
End of explanation
# Create a `Counter` object and `Dataset.zip` it together with the training set.
counter = tf.data.experimental.Counter()
train_ds = tf.data.Dataset.zip((train_datasets, (counter, counter)))
Explanation: Option 1: Using tf.data.experimental.Counter
Create a tf.data.experimental.Counter object (let's call it counter) and Dataset.zip the dataset with (counter, counter). This will ensure that each image in the dataset gets associated with a unique value (of shape (2,)) based on counter which later can get passed into the augment function as the seed value for random transformations.
End of explanation
train_ds = (
train_ds
.shuffle(1000)
.map(augment, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
Explanation: Map the augment function to the training dataset:
End of explanation
# Create a generator.
rng = tf.random.Generator.from_seed(123, alg='philox')
# Create a wrapper function for updating seeds.
def f(x, y):
seed = rng.make_seeds(2)[0]
image, label = augment((x, y), seed)
return image, label
Explanation: Option 2: Using tf.random.Generator
Create a tf.random.Generator object with an initial seed value. Calling the make_seeds function on the same generator object always returns a new, unique seed value.
Define a wrapper function that: 1) calls the make_seeds function; and 2) passes the newly generated seed value into the augment function for random transformations.
Note: tf.random.Generator objects store RNG state in a tf.Variable, which means it can be saved as a checkpoint or in a SavedModel. For more details, please refer to Random number generation.
End of explanation
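# Small demonstration (an addition): successive calls to make_seeds on the same
# generator return different seed values, so each mapped element gets fresh randomness.
print(rng.make_seeds(2)[0].numpy())
print(rng.make_seeds(2)[0].numpy())  # different from the line above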
train_ds = (
train_datasets
.shuffle(1000)
.map(f, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
Explanation: Map the wrapper function f to the training dataset, and the resize_and_rescale function—to the validation and test sets:
End of explanation |
2,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extracting the time series of activations in a label
We first apply a dSPM inverse operator to get signed activations in a label
(with positive and negative values) and we then compare different strategies
to average the time series in a label. We compare a simple average, an
average using the dipole normals (flip mode), and then a PCA,
also using a sign flip.
Step1: Compute inverse solution
Step2: View source activations
Step3: Using vector solutions
It's also possible to compute label time courses for a | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import matplotlib.pyplot as plt
import matplotlib.patheffects as path_effects
import mne
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, apply_inverse
print(__doc__)
data_path = sample.data_path()
label = 'Aud-lh'
label_fname = data_path + '/MEG/sample/labels/%s.label' % label
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))
inverse_operator = read_inverse_operator(fname_inv)
src = inverse_operator['src']
Explanation: Extracting the time series of activations in a label
We first apply a dSPM inverse operator to get signed activations in a label
(with positive and negative values) and we then compare different strategies
to average the time series in a label. We compare a simple average, an
average using the dipole normals (flip mode), and then a PCA,
also using a sign flip.
End of explanation
pick_ori = "normal" # Get signed values to see the effect of sign flip
stc = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
label = mne.read_label(label_fname)
stc_label = stc.in_label(label)
modes = ('mean', 'mean_flip', 'pca_flip')
tcs = dict()
for mode in modes:
tcs[mode] = stc.extract_label_time_course(label, src, mode=mode)
print("Number of vertices : %d" % len(stc_label.data))
Explanation: Compute inverse solution
End of explanation
fig, ax = plt.subplots(1)
t = 1e3 * stc_label.times
ax.plot(t, stc_label.data.T, 'k', linewidth=0.5, alpha=0.5)
pe = [path_effects.Stroke(linewidth=5, foreground='w', alpha=0.5),
path_effects.Normal()]
for mode, tc in tcs.items():
ax.plot(t, tc[0], linewidth=3, label=str(mode), path_effects=pe)
xlim = t[[0, -1]]
ylim = [-27, 22]
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Activations in Label %r' % (label.name),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
Explanation: View source activations
End of explanation
pick_ori = 'vector'
stc_vec = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori=pick_ori)
data = stc_vec.extract_label_time_course(label, src)
fig, ax = plt.subplots(1)
stc_vec_label = stc_vec.in_label(label)
colors = ['#EE6677', '#228833', '#4477AA']
for ii, name in enumerate('XYZ'):
color = colors[ii]
ax.plot(t, stc_vec_label.data[:, ii].T, color=color, lw=0.5, alpha=0.5,
zorder=5 - ii)
ax.plot(t, data[0, ii], lw=3, color=color, label='+' + name, zorder=8 - ii,
path_effects=pe)
ax.legend(loc='upper right')
ax.set(xlabel='Time (ms)', ylabel='Source amplitude',
title='Mean vector activations in Label %r' % (label.name,),
xlim=xlim, ylim=ylim)
mne.viz.tight_layout()
Explanation: Using vector solutions
It's also possible to compute label time courses for a
:class:mne.VectorSourceEstimate, but only with mode='mean'.
End of explanation |
2,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Input pipeline
The following strategy is used here
Step1: test queue
Step2: Testing
We can use feed_dict to run the graph of operations on the test data. | Python Code:
def read_data(filename_queue):
reader = tf.TFRecordReader()
_, se = reader.read(filename_queue)
f = tf.parse_single_example(se,features={'image/encoded':tf.FixedLenFeature([],tf.string),
'image/class/label':tf.FixedLenFeature([],tf.int64),
'image/height':tf.FixedLenFeature([],tf.int64),
'image/width':tf.FixedLenFeature([],tf.int64)})
image = tf.image.decode_png(f['image/encoded'],channels=3)
image.set_shape( (32,32,3) )
return image,f['image/class/label']
tf.reset_default_graph()
fq = tf.train.string_input_producer([dane_train])
image_data, label = read_data(filename_queue=fq)
batch_size = 128
images, sparse_labels = tf.train.shuffle_batch( [image_data,label],batch_size=batch_size,
num_threads=2,
capacity=1000+3*batch_size,
min_after_dequeue=1000
)
images = (tf.cast(images,tf.float32)-128.0)/33.0
Explanation: Input pipeline
The following strategy is used here:
for training, we use a queue managed by TensorFlow
for testing, we pull data from the queue as numpy arrays and pass them to TensorFlow using feed_dict
net
End of explanation
fq_test = tf.train.string_input_producer([dane])
test_image_data, test_label = read_data(filename_queue=fq_test)
batch_size = 128
test_images, test_sparse_labels = tf.train.batch( [test_image_data,test_label],batch_size=batch_size,
num_threads=2,
capacity=1000+3*batch_size,
)
test_images = (tf.cast(test_images,tf.float32)-128.0)/33.0
net = tf.contrib.layers.conv2d( images, 32, 3, padding='VALID')
net = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')
net = tf.contrib.layers.conv2d( net, 32, 3, padding='VALID')
net = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')
net = tf.contrib.layers.conv2d( net, 32, 3, padding='VALID')
net = tf.contrib.layers.max_pool2d( net, 2, 2, padding='VALID')
net = tf.contrib.layers.fully_connected(tf.reshape(net,[-1,2*2*32]), 32)
net = tf.contrib.layers.fully_connected(net, 10, activation_fn=None)
logits = net
xent = tf.losses.sparse_softmax_cross_entropy(sparse_labels,net)
loss = tf.reduce_mean( xent)
opt = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = opt.minimize(loss)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.InteractiveSession(config=config)
tf.global_variables_initializer().run()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess,coord=coord)
!ls cifar_convet.ckpt*
global_step = 0
if global_step>0:
saver=tf.train.Saver()
saver.restore(sess,'cifar_convet.ckpt-%d'%global_step)
%%time
lvals = []
for i in range(global_step,global_step+200000):
l, _ = sess.run([loss,train_op])
if i%10==0:
clear_output(wait=True)
print(l,i+1)
if i%100==0:
Images,Labels = sess.run([test_images,test_sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
r_test = np.sum(predicted==Labels)/Labels.size
Images,Labels = sess.run([images,sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
r = np.sum(predicted==Labels)/Labels.size
lvals.append([i,l,r,r_test])
global_step = i+1
global_step
lvals = np.array(lvals)
plt.plot(lvals[:,0],lvals[:,1])
plt.plot(lvals[:,0],lvals[:,3])
plt.plot(lvals[:,0],lvals[:,2])
saver.restore(sess,'cifar_convet.ckpt')
sess.run(test_sparse_labels).shape
sess.run(test_sparse_labels).shape
label2txt = ["airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck" ]
Explanation: test queue
End of explanation
Images,Labels = sess.run([test_images,test_sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
np.sum(predicted==Labels)/Labels.size
for ith in range(5):
print (label2txt[Labels[ith]],(label2txt[predicted[ith]]))
plt.imshow((Images[ith]*33+128).astype(np.uint8))
plt.show()
%%time
l_lst =[]
for i in range(1):
Images,Labels = sess.run([test_images,test_sparse_labels])
predicted = np.argmax(sess.run(logits,feed_dict={images: Images}),axis=1)
rlst = np.sum(predicted==Labels)/Labels.size
print(rlst)
saver = tf.train.Saver()
saver.save(sess,'cifar_convet.ckpt',global_step=global_step)
Explanation: Testing
We can use feed_dict to run the graph of operations on the test data.
End of explanation |
2,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hypsometric analysis of Mountain Ranges
Carlos H. Grohmann
Institute of Energy and Environment
University of São Paulo, São Paulo, Brazil
guano -at- usp -dot- br
Hypsometry
Hypsometric analysis as the study of land elevations about a given datum can be traced back to the works of German Geographer Albrecht Penck$^1$, although its modern implementation is usually related to a seminal paper by A.N.Strahler$^2$. The area-altitude distribution can be shown as hypsographic or hypsometric curves. The hypsographic curve uses absolute units of measure, where elevation is plotted on the ordinate and the area above a given elevation on the abscissa. The hypsometric curve uses adimensional axes to show the relation of area an elevation of a point about the total area and maximum elevation of a region$^{3,4,5}$ (Supplementary Figure 1A).
One important point is that both representations are cumulative curves, not simple histograms of elevation distribution. The Empirical Cumulative Distribution Function (ECDF) of a DEM can be used to calculate the accumulated area (or relative area) per elevation and used to construct a hypsometric curve (as in the R package hydroTSM$^6$). To plot the hypsographic curve, on the other hand, the real area of pixels is needed, and the ECDF cannot be used.
The area of the pixels of a DEM in a 'Latitude-Longitude' projection will decrease towards the pole as the length of an arc of longitude tends to zero at $90^\circ$, and this variation must be considered for a proper hypsometric analysis. This could be achieved by calculating the size of the pixels (as shown in the code below) or by using tools such as the R package raster$^7$, or the GRASS-GIS$^8$ module r.stats.
Natural Earth Data
Elsen & Tingley used a data set of "182 expert-delineated mountain ranges" available from Natural Earth. The authors misinterpreted the metadata and stated that the data set is "roughly accurate to 50m". That is not the case. Natural Earth distributes geographical data at three scales
Step1: Define functions
The haversine formula is used to calculate the distance between two points on a spherical approximation of the Earth. Adapted from http
Step2: Define variables for shapefiles and GeoTIFF
Step3: Import GeoTIFF
Step4: Get GeoTransformation parameters, calculate image extents
Step5: Load shapefiles (for plotting only)
Step6: Create basemap with shaded relief image and mountain range boundaries
Step7: Mask original raster with shapefiles
Uses external gdalwarp utility. Pixels outside the boundary polygon will be assigned a -9999 value.
Step8: Load clipped rasters
The -9999 values are masked out (treated as no-data) in a masked Numpy array.
Step9: Set yres to a positive value
Used to calculate the area of each pixel.
Step10: Calculate pixel size (in km) along the N-S direction
This value (dy) does not change with latitude
Step11: Calculate pixel size along the E-W direction, create array with area values
The E-W dimension (dx) of pixels changes with latitude. The haversine function is used to calculate it, and the area is approximated as (dx * dy).
Step12: Get base statistics for clipped rasters and calculate Elevation values used in hypsometric analysis
Step13: Make a masked array of cell area and calculate Area values used in hypsometric analysis
Step14: Plot hypsographic (absolute values) curve
Step15: Plot hypsometric (normalized values) curve
Step16: Make histograms
Histograms of DEM can be of frequency (cell count per elevation) or of area per elevation.
Step17: Simple frequency (cell count) histograms
Step18: Histograms of area per elevation
These can be calculated by
Step19: To calculate the area of pixels per elevation, we use the ndimage module from SciPy. It sums the values in one array (area) based on the occurrence of values in a second array (elevation). A third array is used as an index (from 0 to max+1).
Step20: Plot histograms
Step21: We can compare both methods and see that approximating the area of pixels by the mean cell size gives results very close to those obtained by calculating the area of each pixel. | Python Code:
import sys, os
import numpy as np
import math as math
import numpy.ma as ma
from matplotlib import cm
from matplotlib.colors import LightSource
from scipy import ndimage
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
%matplotlib inline
# import osgeo libs after basemap, so it
# won't cause conflicts (Assertion failed..)
# with mannualy-installed GEOS
import gdal, ogr
import shapefile as shpf
Explanation: Hypsometric analysis of Mountain Ranges
Carlos H. Grohmann
Institute of Energy and Environment
University of São Paulo, São Paulo, Brazil
guano -at- usp -dot- br
Hypsometry
Hypsometric analysis as the study of land elevations about a given datum can be traced back to the works of German Geographer Albrecht Penck$^1$, although its modern implementation is usually related to a seminal paper by A.N.Strahler$^2$. The area-altitude distribution can be shown as hypsographic or hypsometric curves. The hypsographic curve uses absolute units of measure, where elevation is plotted on the ordinate and the area above a given elevation on the abscissa. The hypsometric curve uses adimensional axes to show the relation of area an elevation of a point about the total area and maximum elevation of a region$^{3,4,5}$ (Supplementary Figure 1A).
One important point is that both representations are cumulative curves, not simple histograms of elevation distribution. The Empirical Cumulative Distribution Function (ECDF) of a DEM can be used to calculate the accumulated area (or relative area) per elevation and used to construct a hypsometric curve (as in the R package hydroTSM$^6$). To plot the hypsographic curve, on the other hand, the real area of pixels is needed, and the ECDF cannot be used.
The area of the pixels of a DEM in a 'Latitude-Longitude' projection will decrease towards the pole as the length of an arc of longitude tends to zero at $90^\circ$, and this variation must be considered for a proper hypsometric analysis. This could be achieved by calculating the size of the pixels (as shown in the code below) or by using tools such as the R package raster$^7$, or the GRASS-GIS$^8$ module r.stats.
Natural Earth Data
Elsen & Tingley used a data set of "182 expert-delineated mountain ranges" available from Natural Earth. The authors misinterpreted the metadata and stated that the data set is "roughly accurate to 50m". That is not the case. Natural Earth distributes geographical data at three scales: "1:10m" (1:10,000,000), "1:50m" (1:50,000,000) and "1:250m" (1:250,000,000). Despite the use of a lower case "m" to indicate the 1:1,000,000 scale, the documentation is clear:
"Primarily derived from Patterson’s Physical Map of the World. Polygons defined by international team of volunteers. The boundaries of physical regions should be taken with a grain of salt. They are roughly accurate to 50m scale, although the number of features included is to the 10m scale. Use these polygons to for map algebra operations at your own risk!"
The README file for this dataset is available at http://www.naturalearthdata.com/downloads/10m-physical-vectors/10m-physical-labels/ and Tom Patterson's Map can be accessed at http://www.shadedrelief.com/world/index.html.
The maps in Figure 1(B-F) show Natural Earth polygons (in black) and polygons delimiting the same mountain ranges at larger scales (in red) for five of the mountain ranges analysed by Elsen & Tingley:
Alps (range #09 of Elsen & Tingley)
Blue Ridge (range #30)
Ibiapaba (range #136)
Cachimbo (range #140)
Espinhaco (range #141)
The differences between the boundaries are considered to be large enough to influence the results obtained by Elsen & Tingley, as it can be seen in the graphics of Supplementary Figure 1(B-F).
Computer Code
In this Supplementary Information I intent to show how the "1:50m" boundaries used to delineate the mountain ranges will influence on the hypsometric analysis. Additionally, Python code is presented for the calculation of hypsographic and hypsometric curves, and for the "hypsographic histograms" used by Elsen & Tingley. The code is presented as an IPython (Jupyter) Notebook, available at GitHub (https://github.com/CarlosGrohmann/hypsometric), where the data directory contains all necessary GeoTIFFs and shapefiles.
The plots shown in the code are low-resolution examples of the results obtained. The reader is referred to the Supplementary Figure 1 for the final plots of each mountain range analysed here.
Data
The data used in this supplementary information was acquired from the following sources:
Natural Earth - Boundaries of mountain ranges derived from Patterson’s Physical Map of the World and used by Elsen & Tingley. Scale 1:50,000,000. Available at http://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/physical/ne_10m_geography_regions_polys.zip (Last access: 2015-06-17)
Alps - Boundary of the Alps from the "Eco-pedological Map for the Alpine Territory" project (ECALP). No scale indicated. Available at http://eusoils.jrc.ec.europa.eu/projects/alpsis/Ecalp_data.html (Last access: 2015-06-17)
Blue Ridge (USA) - Boundary of Blue Ridge range at 1:7,000,000. From: Fenneman, N.M., and Johnson, D.W., 1946, Physiographic Divisions of the United States, U.S. Geological Survey (USGS), Washington, D.C.; Available at http://water.usgs.gov/GIS/metadata/usgswrd/XML/physio.xml (Last access: 2015-06-17)
Cachimbo, Ibiapaba and Espinhaco Ranges (Brazil) - From: IBGE (Brazilian Institute of Geography and Statistics), 2006. Map of Landscape Units of Brazil at 1:5,000,000 (Instituto Brasileiro de Geografia e Estatística, 2006. Mapa de unidades de relevo do Brasil 1:5.000.000). Available at ftp://geoftp.ibge.gov.br/mapas_tematicos/mapas_murais/shapes/relevo/ (Last access: 2015-06-17)
Supplementary References
1 - Penck, A., 1894, Morphologie der Erdoberfläche, Stuttgart, J. Engelhorn, 2 vols.
2 - Strahler, A.N., 1952. Hypsometric (area-altitude) analysis of erosional topography. Bulletin of the Geological Society of America, 63, 1117-1142.
3 - Péguy, C.P., 1942. Principes de morphométrie alpine. Revue de Géographie Alpine, 30, 453-486.
4 - Langbein, W.B., 1947. Topographic characteristics of drainage basin. U.S. Geological Survey, Water Supply Paper 968-C, 125-157.
5 - Luo, W., 1998. Hypsometric analysis with a Geographic Information System. Computers & Geosciences, 24, 815-821.
6 - Zambrano-Bigiarini, M., 2014. hydroTSM: Time series management, analysis and interpolation for hydrological modelling. R package. Available at: http://cran.r-project.org/web/packages/hydroTSM/index.html. (Last access: 2015-06-17)
7 - Hijmans, R. J., 2015. raster: Geographic Data Analysis and Modeling. R package. Available at: http://cran.r-project.org/web/packages/raster/index.html. (Last access: 2015-06-17
8 - Neteler, M., Bowman, M.H., Landa, M., Metz, M., 2012. GRASS GIS: A multi-purpose open source GIS. Environmental Modelling & Software, 31, 124-130.
Python code
Import required packages
End of explanation
# auxiliar functions
def roundBase(x, base=5):
return int(base * round(float(x)/base))
def roundUp(x, base=50):
return int(base * np.ceil(float(x)/base))
def roundDown(x, base=50):
return int(base * np.floor(float(x)/base))
def haversine(lon1, lat1, lon2, lat2, r=6371.009):
R = r # Earth radius in kilometers
dLat = math.radians(lat2 - lat1)
dLon = math.radians(lon2 - lon1)
lat1 = math.radians(lat1)
lat2 = math.radians(lat2)
a = math.sin(dLat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dLon/2)**2
c = 2 * math.asin(math.sqrt(a))
return R * c
Explanation: Define functions
The haversine formula is used to calculate the distance between two points on a spherical approximation of the Earth. Adapted from http://rosettacode.org/wiki/Haversine_formula#Python. Latitude and Longitude must be in decimal degrees. The value used here is the Mean Radius for Earth as defined by the International Union of Geodesy and Geophysics (IUGG).
End of explanation
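# Quick sanity check (an addition, not in the original notebook): one degree of latitude
# along a meridian should be close to 111.2 km for the IUGG mean radius used above.
print(haversine(0.0, 0.0, 0.0, 1.0))  # ~111.19 km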
# files
dataDir = './data/'
mountain = 'cachimbo' # 'alps', 'blueRidge', 'espinhaco', 'cachimbo', 'ibiapaba'
mtn = mountain + '.shp'
mtn_NE = mountain + '_NEarth.shp'
tiff = mountain + '.tif'
# label for 5M/7M boundaries
source = 'IBGE'# brazilian maps
# source = 'ECALP' # Alps
# source = 'Fenneman & Johnson 1946' # USA Physiographic Provinces
Explanation: Define variables for shapefiles and GeoTIFF
End of explanation
rast = gdal.Open(tiff)
rast_band = rast.GetRasterBand(1)
rast_array = rast.ReadAsArray()
rast_stats = rast_band.GetStatistics( True, True )
rast_min = rast_stats[0]
rast_max = rast_stats[1]
Explanation: Import GeoTIFF
End of explanation
w_lon, xdim, rot1, n_lat, rot2, ydim = rast.GetGeoTransform()
e_lon = w_lon + xdim * rast.RasterXSize
s_lat = n_lat + ydim * rast.RasterYSize
Explanation: Get GeoTransformation parameters, calculate image extents
End of explanation
bound_5M = shpf.Reader(mtn)
bound_5M_lonlat = np.array(bound_5M.shape().points)
bound_NE = shpf.Reader(mtn_NE)
bound_NE_lonlat = np.array(bound_NE.shape().points)
Explanation: Load shapefiles (for plotting only)
End of explanation
m = Basemap(projection='merc', llcrnrlat=s_lat, urcrnrlat=n_lat, llcrnrlon=w_lon, \
urcrnrlon=e_lon, resolution='c')
ls = LightSource(azdeg=135,altdeg=25)
rgb = ls.shade(rast_array,plt.cm.Greys)
m_shade = m.imshow(rgb, origin='upper')
m_color = m.imshow(rast_array, origin='upper',cmap=plt.cm.terrain, alpha=0.8, vmin=-150)
bounds = range(0, roundUp(rast_max), 50)
cbar = m.colorbar(size='3%', boundaries=bounds)
cbar.ax.tick_params(labelsize=8)
m.drawmapscale(lon=e_lon-0.8, lat=s_lat+0.5, lon0=e_lon, lat0=s_lat, length=100)
xticks = np.arange(roundBase(w_lon), roundBase(e_lon), 2)
yticks = np.arange(roundBase(s_lat), roundBase(n_lat), 2)
m.drawparallels(yticks, linewidth=0.2, labels=[1,0,0,0], fontsize=9) # draw parallels
m.drawmeridians(xticks, linewidth=0.2, labels=[0,0,1,0], fontsize=9) # draw meridians
m.plot(bound_NE_lonlat[:,0], bound_NE_lonlat[:,1], c='k', label='Natural Earth', latlon=True)
m.plot(bound_5M_lonlat[:,0], bound_5M_lonlat[:,1], c='r', label=source, latlon=True)
lg = plt.legend(loc='upper right', fontsize=9)
lg.get_frame().set_alpha(.8) # A little transparency
# plt.show()
# plt.savefig(mtn + '.pdf', dpi=600, bbox_inches='tight')
# plt.clf()
Explanation: Create basemap with shaded relief image and mountain range boundaries
End of explanation
# 5M limits
out_mtn = dataDir + mountain + '_clip_5M.tif'
os.system('gdalwarp -overwrite -dstnodata -9999 -cutline %s %s %s' %(mtn, tiff, out_mtn))
# Natural Earth
out_NE = dataDir + mountain + '_clip_NE.tif'
os.system('gdalwarp -overwrite -dstnodata -9999 -cutline %s %s %s' %(mtn_NE, tiff, out_NE))
Explanation: Mask original raster with shapefiles
Uses external gdalwarp utility. Pixels outside the boundary polygon will be assigned a -9999 value.
End of explanation
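Because os.system() ignores failures silently, a slightly more defensive variant (an optional sketch, not the original workflow) would check the exit status of the same gdalwarp call:
# Optional: the same gdalwarp call, but with the exit status checked.
import subprocess
def clip_raster(cutline_shp, in_tiff, out_tiff):
    cmd = ['gdalwarp', '-overwrite', '-dstnodata', '-9999',
           '-cutline', cutline_shp, in_tiff, out_tiff]
    if subprocess.call(cmd) != 0:
        raise RuntimeError('gdalwarp failed for %s' % cutline_shp)
    return out_tiff
# e.g. clip_raster(mtn, tiff, out_mtn) reproduces the first call above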
# 5M
rast_clip = gdal.Open(out_mtn)
clip_bd = rast_clip.GetRasterBand(1)
clip_array = rast_clip.ReadAsArray()
clip_mask = ma.masked_where(clip_array == -9999, clip_array)
# NatEarth
rast_clip_NE = gdal.Open(out_NE)
clip_NE_bd = rast_clip_NE.GetRasterBand(1)
clip_NE_array = rast_clip_NE.ReadAsArray()
clip_NE_mask = ma.masked_where(clip_NE_array == -9999, clip_NE_array)
Explanation: Load clipped rasters
The -9999 nodata value is masked out in a NumPy masked array, so it is excluded from later statistics and plots (rather than being converted to NaN).
End of explanation
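A toy illustration of how the masked array behaves (made-up values, not from the DEM):
# Toy example: masked cells are ignored by reductions and dropped by compressed().
demo = np.array([[-9999., 120., 250.],
                 [300., -9999., 410.]])
demo_mask = ma.masked_where(demo == -9999, demo)
print(demo_mask.min())           # 120.0 - the -9999 cells are ignored
print(ma.compressed(demo_mask))  # [ 120.  250.  300.  410.]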
yres = abs(ydim)  # the N-S pixel size returned by GetGeoTransform is usually negative
Explanation: Set yres to a positive value
Used to calculate the area of each pixel.
End of explanation
dy = haversine(0, 0, 0, ydim, r=6371.009)
Explanation: Calculate pixel size (in km) along the N-S direction
This value (dy) does not change with latitude.
End of explanation
# array with indices
rows, cols = np.indices(rast_array.shape)
nrows = rast_array.shape[0]
ncols = rast_array.shape[1]
# new array for area values
area_array = np.empty(rast_array.shape)
# nested loop to create array with area values
for row in range(nrows):
for col in range(ncols):
y = row
        lat = n_lat - ((y + 0.5) * yres)  # centre latitude of row y (n_lat is the top edge of the raster)
dx = haversine(0, lat, xdim, lat, r=6371.009)
area_array[row,col] = dx * dy
Explanation: Calculate pixel size along the E-W direction, create array with area values
The E-W dimension (dx) of the pixels changes with latitude. The haversine function is used to calculate it, and the pixel area is approximated as (dx * dy).
End of explanation
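The nested loop above is easy to follow but slow for large rasters; an equivalent vectorised sketch, assuming the same pixel-centre convention, would be:
# Optional vectorised equivalent: dx depends only on the row (latitude),
# so it can be computed once per row and broadcast across the columns.
lats = n_lat - (np.arange(nrows) + 0.5) * yres
dx_rows = np.array([haversine(0, lat, xdim, lat) for lat in lats])
area_array_vec = np.repeat((dx_rows * dy)[:, np.newaxis], ncols, axis=1)
# area_array_vec should agree with area_array above up to floating-point rounding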
# elevation 5M
stats_clip = clip_bd.GetStatistics( True, True )
clip_min = stats_clip[0]
clip_max = stats_clip[1]
# height of point/contour above base of basin
clip_array_comp = ma.compressed(clip_mask)
h_clip = clip_array_comp - clip_min
# total height of basin
H_clip = clip_max - clip_min
# normalize elev for hypsometric curve
elevNorm_clip = h_clip / H_clip
# elevation NatEarth
stats_clip_NE = clip_NE_bd.GetStatistics( True, True )
clip_NE_min = stats_clip_NE[0]
clip_NE_max = stats_clip_NE[1]
clip_array_NE_comp = ma.compressed(clip_NE_mask)
h_clip_NE = clip_array_NE_comp - clip_NE_min  # height above the Natural Earth boundary's own minimum
H_clip_NE = clip_NE_max - clip_NE_min
elevNorm_clip_NE = h_clip_NE / H_clip_NE
Explanation: Get base statistics for clipped rasters and calculate Elevation values used in hypsometric analysis
End of explanation
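As an optional summary that is not computed in the original notebook, the elevation-relief ratio, a standard approximation to the hypsometric integral, can be read directly from these values:
# Elevation-relief ratio (mean relative height), a common approximation
# to the hypsometric integral.
print('Approx. hypsometric integral (5M boundary): %.3f' % (np.mean(h_clip) / H_clip))
print('Approx. hypsometric integral (Natural Earth): %.3f' % (np.mean(h_clip_NE) / H_clip_NE))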
# cell area 5M
area_clip = ma.masked_where(clip_array == -9999, area_array)
# total area of basin/area
area_clip_sum = np.sum(area_clip)
# cumulative area for hypsographic curve
area_clip_csum = np.cumsum(ma.compressed(area_clip))
# normalized area for hypsometric curve
area_norm_clip = area_clip / area_clip_sum
area_norm_csum = np.cumsum(ma.compressed(area_norm_clip))
# cell area NatEarth
area_clip_NE = ma.masked_where(clip_NE_array == -9999, area_array)
area_clip_sum_NE = np.sum(area_clip_NE)
area_clip_csum_NE = np.cumsum(ma.compressed(area_clip_NE))
area_norm_clip_NE = area_clip_NE / area_clip_sum_NE
area_norm_csum_NE = np.cumsum(ma.compressed(area_norm_clip_NE))
Explanation: Make a masked array of cell area and calculate Area values used in hypsometric analysis
End of explanation
# 5M
plt.plot(area_clip_csum[::-1], np.sort(ma.compressed(clip_mask)), c='r', label=source)
# NatEarth
plt.plot(area_clip_csum_NE[::-1], np.sort(ma.compressed(clip_NE_mask)), c='k', \
label='Natural Earth')
# decorations
plt.ylabel('Elevation')
plt.xlabel('Area km^2')
plt.title('Hypsographic curve for ' + mountain)
# plt.ylim(0.0, 5000.0)
lg = plt.legend(loc='upper right', fontsize=9)
# fighist = mountain + '_hypsographic.pdf'
# plt.savefig(fighist)
# plt.clf()
Explanation: Plot hypsographic (absolute values) curve
End of explanation
# 5M
plt.plot(area_norm_csum[::-1], np.sort(ma.compressed(elevNorm_clip)), c='r', label=source)
# NatEarth
plt.plot(area_norm_csum_NE[::-1], np.sort(ma.compressed(elevNorm_clip_NE)), c='k', \
label='Natural Earth')
# decorations
plt.xlim(0.0,1.0)
plt.ylim(0.0,1.0)
plt.ylabel('Elevation: h/H')
plt.xlabel('Area: a/A')
plt.title('Hypsometric curve for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# fighist = mountain + '_hypsometric.pdf'
# plt.savefig(fighist)
# plt.clf()
Explanation: Plot hypsometric (normalized values) curve
End of explanation
# define bins for all histograms
binsize = 50
# 5M
bins_clip = range(0, roundUp(clip_max), binsize)
bincenters = [i + binsize // 2 for i in bins_clip]  # integer division keeps the bin centres usable as indices
# Nat Earth
bins_clip_NE = range(0, roundUp(clip_NE_max), binsize)
bincenters_NE = [i + binsize // 2 for i in bins_clip_NE]
Explanation: Make histograms
Histograms of DEM can be of frequency (cell count per elevation) or of area per elevation.
End of explanation
# 5M
vals, edges = np.histogram(clip_array_comp, bins=bins_clip)
plt.plot(bincenters[:-1], vals, c='r', label='IBGE')
# NatEarth
vals_NE, edges_NE = np.histogram(clip_array_NE_comp, bins=bins_clip_NE)
plt.plot(bincenters_NE[:-1], vals_NE, c='k', label='Natural Earth')
# decorations
plt.ylabel('Elevation frequency (counts)')
plt.xlabel('Elevation (m)')
plt.title('Frequency histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# plt.show()
# fighist = mountain + '_histogram_frequency.pdf'
# plt.savefig(fighist)
# plt.clf()
Explanation: Simple frequency (cell count) histograms
End of explanation
# i) approximating area by mean cell size
mean_area_clip = np.mean(area_clip)
mean_area_clip_NE = np.mean(area_clip_NE)
# 5M
vals, edges = np.histogram(clip_array_comp, bins=bins_clip)
plt.plot(bincenters[:-1], vals * mean_area_clip, c='r', label='IBGE')
# NatEarth
vals_NE, edges_NE = np.histogram(clip_array_NE_comp, bins=bins_clip_NE)
plt.plot(bincenters_NE[:-1], vals_NE * mean_area_clip_NE, c='k', label='Natural Earth')
# decorations
plt.ylabel('Area km2 (approx)')
plt.xlabel('Elevation (m)')
plt.title('Area (approx) histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# plt.show()
# fighist = mountain + '_histogram_area_approx.pdf'
# plt.savefig(fighist)
# plt.clf()
Explanation: Histograms of area per elevation
These can be calculated by:
Approximating the area by the mean cell size, where total area = cell count * mean area of pixels
Calculating area per elevation
End of explanation
# ii) calculating area per elevation
# 5M data
clip_range = np.arange(0, int(clip_max)+1)
sum_area_clip = ndimage.sum(area_array, clip_array, clip_range)
# sum the values of areas in each bin
bins_sum = []
for i in bincenters:
    low = i - binsize // 2
    up = i + binsize // 2
    b_sum = np.sum(sum_area_clip[low:up])
    bins_sum.append(b_sum)
# Natural Earth
clip_range_NE = np.arange(0, int(clip_NE_max)+1)
sum_area_clip = ndimage.sum(area_array, clip_NE_array, clip_range_NE)
bins_sum_NE = []
for i in bincenters_NE:
    low = i - binsize // 2
    up = i + binsize // 2
    b_sum = np.sum(sum_area_clip[low:up])
    bins_sum_NE.append(b_sum)
Explanation: To calculate the area of pixels per elevation, we use the ndimage.sum function from SciPy. It sums the values of one array (area) according to the labels in a second array (elevation); a third array provides the label indices to evaluate (from 0 to max+1).
End of explanation
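A toy illustration of the ndimage.sum call, using made-up numbers:
# Toy example of ndimage.sum: sum 'values' wherever 'labels' equals each index.
values = np.array([1.0, 2.0, 3.0, 4.0])   # e.g. pixel areas
labels = np.array([10, 11, 10, 12])       # e.g. pixel elevations
print(ndimage.sum(values, labels, [10, 11, 12]))  # [ 4.  2.  4.]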
# 5M
plt.plot(bincenters, bins_sum, c='r', label='IBGE')
# Natural Earth
plt.plot(bincenters_NE, bins_sum_NE, c='k', label='Natural Earth')
# decorations
plt.ylabel('Area km2 (calc)')
plt.xlabel('Elevation (m)')
plt.title('Area (calc) histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
# plt.show()
# fighist = mountain + '_histogram_area_calc.pdf'
# plt.savefig(fighist)
# plt.clf()
Explanation: Plot histograms
End of explanation
# 5M area - calculated
plt.plot(bincenters, bins_sum, c='r', label='calculated')
#5M area - approximated
plt.plot(bincenters[:-1], vals * mean_area_clip, 'o', c='k', ms=4, label='approximated')
# plt.plot(bins_sum[:-1],vals * mean_area_clip, 'ko-')
# decorations
plt.ylabel('Area km2')
plt.xlabel('Elevation (m)')
plt.title('Area histograms for ' + mountain)
lg = plt.legend(loc='upper right', fontsize=9)
Explanation: We can compare both methods and see that approximating the area of pixels by the mean cell size gives results very close to those obtained by calculating the area of each pixel.
End of explanation |
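To put a number on that agreement, an optional check (not part of the original notebook):
# Relative difference between the calculated and approximated area histograms.
approx = vals * mean_area_clip
calc = np.array(bins_sum[:-1])
nonzero = calc > 0
rel_diff = np.abs(calc[nonzero] - approx[nonzero]) / calc[nonzero]
print('Maximum relative difference: %.2f%%' % (100 * rel_diff.max()))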
2,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project
Step1: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation
Step3: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset
Step4: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable
Step5: Answer
Step6: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint
Step7: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint
Step9: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint
Step10: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
Step11: Answer
Step12: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
#from sklearn.cross_validation import ShuffleSplit
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the Boston housing dataset
data = pd.read_csv('housing.csv')
prices = data['MEDV']
features = data.drop('MEDV', axis = 1)
# Success
print(("Boston housing dataset has {} data points with {} variables each.").format(*data.shape))
Explanation: Machine Learning Engineer Nanodegree
Model Evaluation & Validation
Project: Predicting Boston Housing Prices
Welcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.
The dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:
- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.
- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.
- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.
- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.
Run the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
prices_a = prices.as_matrix()
# TODO: Minimum price of the data
minimum_price = np.amin(prices_a)
# TODO: Maximum price of the data
maximum_price = np.amax(prices_a)
# TODO: Mean price of the data
mean_price = np.mean(prices_a)
# TODO: Median price of the data
median_price = np.median(prices_a)
# TODO: Standard deviation of prices of the data
std_price = np.std(prices_a)
# Show the calculated statistics
print("Statistics for Boston housing dataset:\n")
print(("Minimum price: ${:,.2f}").format(minimum_price))
print("Maximum price: ${:,.2f}".format(maximum_price))
print("Mean price: ${:,.2f}".format(mean_price))
print("Median price ${:,.2f}".format(median_price))
print("Standard deviation of prices: ${:,.2f}".format(std_price))
Explanation: Data Exploration
In this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.
Since the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.
Implementation: Calculate Statistics
For your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.
In the code cell below, you will need to implement the following:
- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.
- Store each calculation in their respective variable.
End of explanation
# TODO: Import 'r2_score'
from sklearn.metrics import r2_score
def performance_metric(y_true, y_predict):
    """ Calculates and returns the performance score between
    true and predicted values based on the metric chosen. """
# TODO: Calculate the performance score between 'y_true' and 'y_predict'
score = r2_score(y_true, y_predict)
# Return the score
return score
Explanation: Question 1 - Feature Observation
As a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):
- 'RM' is the average number of rooms among homes in the neighborhood.
- 'LSTAT' is the percentage of homeowners in the neighborhood considered "lower class" (working poor).
- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.
Using your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.
Hint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?
Answer: RM should increase the MEDV as it increases its value (directly proportional) because it impacts with the house size (more rooms). LSTAT should be reversely proportional to MEDV, because as we have more "lower class" in the neighborhood, we probably will have a simpler house (not well finished) and face more security problems, so lower price. PTRATIO, it is hard to guess, because more students could be a family area that is good, but we need to analyze the LSTAT in conjunction, also if we have more teachers, could be a more quite region (good) or a not residential area that will be (not good). I think there's a correlation with the other classes that gives us a better hint.
Developing a Model
In this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.
Implementation: Define a Performance Metric
It is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how "good" that model is at making predictions.
The values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.
For the performance_metric function in the code cell below, you will need to implement the following:
- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.
- Assign the performance score to the score variable.
End of explanation
# Calculate the performance of this model
score = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])
print(("Model has a coefficient of determination, R^2, of {:.3f}.").format(score))
Explanation: Question 2 - Goodness of Fit
Assume that a dataset contains five data points and a model made the following predictions for the target variable:
| True Value | Prediction |
| :-------------: | :--------: |
| 3.0 | 2.5 |
| -0.5 | 0.0 |
| 2.0 | 2.1 |
| 7.0 | 7.8 |
| 4.2 | 5.3 |
Would you consider this model to have successfully captured the variation of the target variable? Why or why not?
Run the code cell below to use the performance_metric function and calculate this model's coefficient of determination.
End of explanation
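For intuition, the same number can be reproduced by hand from the definition R^2 = 1 - SS_res/SS_tot (an optional check, not required by the project):
# Optional hand calculation of R^2 for the same five points.
y_true = np.array([3.0, -0.5, 2.0, 7.0, 4.2])
y_pred = np.array([2.5, 0.0, 2.1, 7.8, 5.3])
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
print("Manual R^2: {:.3f}".format(1 - ss_res / ss_tot))  # 0.923, matching r2_score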
# TODO: Import 'train_test_split'
from sklearn.model_selection import train_test_split
prices_a = prices.as_matrix()
features_a = features.as_matrix()
# TODO: Shuffle and split the data into training and testing subsets
X_train, X_test, y_train, y_test = train_test_split(features_a, prices_a, test_size=0.2, random_state=42)
# Success
print("Training and testing split was successful.")
Explanation: Answer: The estimator achieves an excellent R2 value; 0.923 is very close to 1, so it predicts the target variable very well. That said, with only five data points it is hard to be confident that the model would capture the variation of a real dataset. For the given data, however, the predictions track the true values closely, so in that sense it does capture the variation.
Implementation: Shuffle and Split Data
Your next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.
For the code cell below, you will need to implement the following:
- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.
- Split the data into 80% training and 20% testing.
- Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.
- Assign the train and testing splits to X_train, X_test, y_train, and y_test.
End of explanation
# Produce learning curves for varying training set sizes and maximum depths
vs.ModelLearning(features, prices)
Explanation: Question 3 - Training and Testing
What is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?
Hint: What could go wrong with not having a way to test your model?
Answer: Without a test subset you cannot estimate how well the predictor generalises to new data, that is, data not in the training set, so you will probably overfit the model and have no way to verify it until new data arrives. The predictor may work very well on the training set but be a poor estimator for real data, and it also becomes hard to analyse the predictor's performance.
Analyzing Model Performance
In this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.
Learning Curves
The following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination.
Run the code cell below and use these graphs to answer the following question.
End of explanation
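For reference, a similar curve for a single depth can be produced directly with scikit-learn's learning_curve utility; this is only a sketch, it assumes sklearn.model_selection is available (as the train_test_split import above suggests), and it need not match the helper's exact settings:
# Optional sketch: one learning curve (max_depth = 3) built directly with scikit-learn.
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import learning_curve, ShuffleSplit
sizes, train_scores, test_scores = learning_curve(
    DecisionTreeRegressor(max_depth=3), features, prices,
    train_sizes=np.linspace(0.1, 1.0, 9),
    cv=ShuffleSplit(n_splits=10, test_size=0.2, random_state=0), scoring='r2')
plt.plot(sizes, np.mean(train_scores, axis=1), label='Training score')
plt.plot(sizes, np.mean(test_scores, axis=1), label='Testing score')
plt.xlabel('Number of training points')
plt.ylabel('r2 score')
plt.legend(loc='lower right')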
vs.ModelComplexity(X_train, y_train)
Explanation: Question 4 - Learning the Data
Choose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?
Hint: Are the learning curves converging to particular scores?
Answer: Looking at max_depth = 10, adding training points improves the model at first, but at some point the training score stops increasing. The testing score also improves as points are added, because the model starts to generalise, but it converges as well, so beyond a certain point more training points no longer change the test score. Up to that stagnation point the added points help; after it, they no longer improve the model.
Complexity Curves
The following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function.
Run the code cell below and use this graph to answer the following two questions.
End of explanation
# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import make_scorer
from sklearn.grid_search import GridSearchCV
#from sklearn.model_selection import ShuffleSplit
from sklearn.cross_validation import ShuffleSplit
def fit_model(X, y):
    """ Performs grid search over the 'max_depth' parameter for a
    decision tree regressor trained on the input data [X, y]. """
# Create cross-validation sets from the training data
cv_sets = ShuffleSplit(X.shape[0], n_iter = 10, test_size = 0.20, random_state = 0)
#cv_sets = ShuffleSplit(n_splits = 10, test_size = 0.20, random_state = 0)
# TODO: Create a decision tree regressor object
regressor = DecisionTreeRegressor(random_state=0)
# TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10
#params = {'max_depth': range(1,11)}
params = {'max_depth': [1,2,3,4,5,6,7,8,9,10]}
# TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'
scoring_fnc = make_scorer(performance_metric, greater_is_better = True)
# TODO: Create the grid search object
#grid = GridSearchCV(regressor, param_grid=params, scoring=scoring_fnc, cv=cv_sets.get_n_splits())
grid = GridSearchCV(regressor, param_grid=params, scoring=scoring_fnc, cv=cv_sets)
# Fit the grid search object to the data to compute the optimal model
grid = grid.fit(X, y)
# Return the optimal model after fitting the data
return grid.best_estimator_
Explanation: Question 5 - Bias-Variance Tradeoff
When the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?
Hint: How do you know when a model is suffering from high bias or high variance?
Answer: With a maximum depth of 1 the model is oversimplified and suffers from high bias. In the figure above the performance (R2) is low even for the training score, which means the model is not capturing the correlation between the features and the target.
With a maximum depth of 10 the model is probably overfitting (not generalising well), so it suffers from high variance. This shows up as a validation score that is much lower than the training score: the model starts to capture behaviour specific to the training set rather than generalised behaviour.
Question 6 - Best-Guess Optimal Model
Which maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?
Answer: We can use 3 or 4 as the maximum depth (I would go with 3). The validation score has an inflection around a depth of 4, after which it starts to decrease (the slope of the curve changes direction).
Evaluating Model Performance
In this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.
Question 7 - Grid Search
What is the grid search technique and how it can be applied to optimize a learning algorithm?
Answer: Grid search is a systematic way to optimise one or more algorithm parameters. We manually specify which parameters to vary and over which range of values; grid search then evaluates every combination and selects the one that achieves the best performance, as measured on the training data with the chosen scoring function.
Question 8 - Cross-Validation
What is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?
Hint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?
Answer: The k-fold cross-validation technique splits the data into k equal-sized subsamples. The model is trained k times: each time one subsample is held out for validation and the remaining k - 1 are used for training, and the k validation errors are then averaged. Every data point is used k - 1 times for training and exactly once for validation, so the result is more accurate and depends much less on how the data happens to be divided. This benefits grid search: with a single train/test split a parameter setting only has to do well on one particular split, whereas with k-fold it has to do well across many train/validation partitions, which pushes the chosen estimator to be more generalised.
Implementation: Fitting a Model
Your final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.
In addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful!. The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.
For the fit_model function in the code cell below, you will need to implement the following:
- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.
- Assign this object to the 'regressor' variable.
- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.
- Use make_scorer from sklearn.metrics to create a scoring function object.
- Pass the performance_metric function as a parameter to the object.
- Assign this scoring function to the 'scoring_fnc' variable.
- Use GridSearchCV from sklearn.grid_search to create a grid search object.
- Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object.
- Assign the GridSearchCV object to the 'grid' variable.
End of explanation
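As an aside, a minimal sketch of the k-fold idea described in Question 8, kept separate from the ShuffleSplit-based fit_model implementation and again assuming sklearn.model_selection is available:
# Optional k-fold sketch: average R^2 of a fixed-depth tree over 10 folds.
from sklearn.model_selection import KFold, cross_val_score
kfold_scores = cross_val_score(DecisionTreeRegressor(max_depth=3, random_state=0),
                               features_a, prices_a,
                               scoring=make_scorer(performance_metric),
                               cv=KFold(n_splits=10, shuffle=True, random_state=0))
print("Mean R^2 over 10 folds: {:.3f}".format(kfold_scores.mean()))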
# Fit the training data to the model using grid search
reg = fit_model(X_train, y_train)
# Produce the value for 'max_depth'
print(("Parameter 'max_depth' is {} for the optimal model.").format(reg.get_params()['max_depth']))
Explanation: Making Predictions
Once a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.
Question 9 - Optimal Model
What maximum depth does the optimal model have? How does this result compare to your guess in Question 6?
Run the code block below to fit the decision tree regressor to the training data and produce an optimal model.
End of explanation
# Produce a matrix for client data
client_data = [[5, 17, 15], # Client 1
[4, 32, 22], # Client 2
[8, 3, 12]] # Client 3
# Show predictions
for i, price in enumerate(reg.predict(client_data)):
print(("Predicted selling price for Client {}'s home: ${:,.2f}").format(i+1, price))
features.describe()
Explanation: Answer: The optimal maximum depth is 4, which falls in the range we guessed in Question 6. This is not surprising: we are optimising a single parameter, and the complexity graphs already show exactly how the estimator behaves as that parameter changes.
Question 10 - Predicting Selling Prices
Imagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:
| Feature | Client 1 | Client 2 | Client 3 |
| :---: | :---: | :---: | :---: |
| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |
| Neighborhood poverty level (as %) | 17% | 32% | 3% |
| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |
What price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?
Hint: Use the statistics you calculated in the Data Exploration section to help justify your response.
Run the code block below to have your optimized model make predictions for each client's home.
End of explanation
vs.PredictTrials(features, prices, fit_model, client_data)
Explanation: Answer:
Client 1: 403025.00
This value looks reasonable: it is close to the median price, and the client's features are all near the middle of each feature's range, so the predictor ends up near the middle of the price distribution.
Client 2: 237478.72
This also looks reasonable: the home has an RM close to the minimum (which lowers the price), an LSTAT close to the maximum (which also lowers the price) and a PTRATIO at the maximum, so all three features push the estimator towards a lower price.
Client 3: 931636.36
This client is the opposite of Client 2: RM is very close to the maximum, LSTAT is close to the minimum and PTRATIO is close to the minimum, so this combination drives the estimator towards a higher value.
Sensitivity
An optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.
End of explanation |
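The helper above hides the details; the same sensitivity check can be sketched by hand for Client 1 only (illustrative, and slow because the grid search is refit for every trial):
# Optional sketch of the sensitivity trial for Client 1 only.
trial_prices = []
for k in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(features_a, prices_a,
                                               test_size=0.2, random_state=k)
    trial_prices.append(fit_model(X_tr, y_tr).predict([client_data[0]])[0])
print("Client 1 price range over 10 trials: ${:,.2f}".format(max(trial_prices) - min(trial_prices)))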
2,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccma', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCMA
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:47
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specificed for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but assume a distribution and compute fluxes accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
2,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Second midterm exam 2011/2012, solutions
Problem 1
Find the largest and smallest value attained by the function
$$f(x) = x^4 + 2x^3 - 2x^2 + 1.$$
Step1: The candidates for extrema are the stationary points and the endpoints of our interval. We find the stationary points by solving the equation $f'(x)=0$.
Step2: Also determine the intervals on which the function is increasing and decreasing.
The function is increasing where its first derivative is greater than zero.
Step3: The function is decreasing where its first derivative is less than zero.
Step4: Problem 2
A coffee house called Kava is opening. The owners want to sell their coffee blend in neat tin boxes shaped like a triangular prism with volume 1. The base is an equilateral triangle with side $a$, and the height of the prism is $b$. Help them find the optimal size of the box
Step5: We substitute the expression for $b$ into the formula for the surface area, differentiate it, and solve the equation $p'(a) = 0.$
Step6: Problem 3
Compute the indefinite integrals
$$\int \frac{(1+\log(x))^2}{x}dx$$
and
$$ \int (x^2 -2)e^xdx .$$
Step7: Problem 4
Step8: The intersection points in the figure above are at $x_1=1$ and $x_2=2$.
The area of the region is $ \log(4) - 0.5 $. It is easiest to compute by splitting the region into two parts according to the $x$-coordinate
Step9: Sketch the graph of the derivative $F'$ and then, on the same figure, the graph of $F$.
The derivative is drawn in red, the graph of the function $F$ in green. | Python Code:
import sympy
from sympy import Eq, solve  # Eq and solve are used unqualified in later cells
f = lambda x: x**4 + 2*x**3 - 2*x**2 + 1
x = sympy.Symbol('x', real=True)
Explanation: Second midterm exam 2011/2012, solutions
Problem 1
Find the largest and smallest value attained by the function
$$f(x) = x^4 + 2x^3 - 2x^2 + 1.$$
End of explanation
eq = Eq(f(x).diff(), 0)
eq
critical_points = sympy.solve(eq)
critical_points
end_points = [-3, 1]
points = [(y, f(y)) for y in critical_points + end_points]
points
min(points, key=lambda point: point[1]), max(points, key=lambda point: point[1])
Explanation: The candidates for extrema are the stationary points and the endpoints of our interval. We find the stationary points by solving the equation $f'(x)=0$.
End of explanation
sympy.solvers.reduce_inequalities([f(x).diff(x) > 0])
Explanation: Also determine the intervals on which the function is increasing and decreasing.
The function is increasing where its first derivative is greater than zero.
End of explanation
sympy.solvers.reduce_inequalities([f(x).diff(x) < 0])
Explanation: The function is decreasing where its first derivative is less than zero.
End of explanation
a = sympy.Symbol('a', real=True, positive=True)
b = sympy.Symbol('b', real=True, positive=True)
v = lambda a, b: a**2*b*sympy.sqrt(3)/2
p = lambda a, b: a**2*sympy.sqrt(3) + 3*a*b
b = solve(Eq(v(a,b), 1), b)[0]
b
Explanation: Problem 2
A coffee house called Kava is opening. The owners want to sell their coffee blend in neat tin boxes shaped like a triangular prism with volume 1. The base is an equilateral triangle with side $a$, and the height of the prism is $b$. Help them find the optimal size of the box: how large should $a$ and $b$ be so that as little tin as possible is used?
End of explanation
val_a = sympy.solve(p(a, b).diff())[0]
val_b = b.subs(a, val_a)
val_a, val_b
Explanation: We substitute the expression for $b$ into the formula for the surface area, differentiate it, and solve the equation $p'(a) = 0.$
End of explanation
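For a quick numerical look at the symbolic optimum found above (an illustrative check, not part of the original solution):
# Decimal approximations of the optimal side a and height b computed above.
print(val_a.evalf(), val_b.evalf())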
x = sympy.Symbol('x')
f = lambda x: (1+sympy.log(x))**2/x
g = lambda x: (x**2 - 2)*sympy.exp(x)
sympy.integrate(f(x))
sympy.integrate(g(x))
Explanation: Problem 3
Compute the indefinite integrals
$$\int \frac{(1+\log(x))^2}{x}dx$$
and
$$ \int (x^2 -2)e^xdx .$$
End of explanation
pyplot.ylim([-1, 5])
pyplot.xlim([-1, 5])
x = numpy.linspace(0, 2, 100)
pyplot.plot([-1, 5], [0, 6], color='g')
[xs, ys] = sample_function(lambda x: 2.0/x + 2, 0.01, 5, 0.01)
pyplot.plot(xs, ys, color='r')
pyplot.axvline(0, color='y')
pyplot.show()
Explanation: Problem 4
End of explanation
x = sympy.Symbol('x', real=True)
t = sympy.Symbol('t', real=True)  # integration variable used in F below
f = lambda x: sympy.log(1+x**2)/x
F = lambda x: sympy.integrate(f(t), (t, -1, x))
f(x)
Explanation: The intersection points in the figure above are at $x_1=1$ and $x_2=2$.
The area of the region is $ \log(4) - 0.5 $. It is easiest to compute by splitting the region into two parts according to the $x$-coordinate: the part between 0 and 1 and the part between 1 and 2.
Problem 5
The function $F$ is defined by
$$F(x) = \int_{-1}^x \frac{\log(1+t^2)}{t}dt.$$
Find the derivative of the function $F$.
Its derivative is simply the integrand; a quick symbolic check is sketched below.
End of explanation
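A minimal symbolic check of the last claim, using an unevaluated integral so that differentiation with respect to the upper limit is applied directly; this is only a sketch and assumes SymPy is imported as above.
t = sympy.Symbol('t', real=True)
# Differentiating F with respect to its upper limit returns the integrand log(1 + x**2)/x.
sympy.diff(sympy.Integral(sympy.log(1 + t**2)/t, (t, -1, x)), x)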
%matplotlib inline
from math import log
pylab.rcParams['figure.figsize'] = (10.0, 8.0)
pylab.ylim([-1, 3])
pylab.xlim([-4, 4])
[xs, ys] = sample_function(f, -5, 5, 0.1)
pyplot.plot(xs, ys, color='r')
[xs, ys] = sample_function(lambda x: F(x), -5, 5, 0.2)
pyplot.plot(xs, ys, color='g')
pyplot.show()
Explanation: Sketch the graph of the derivative $F'$ and then, on the same figure, the graph of $F$.
The derivative is drawn in red, the graph of the function $F$ in green.
End of explanation |
2,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Viscoelastic wave equation implementation on a staggered grid
This is a first attempt at implementing the viscoelastic wave equation as described in [1]. See also the FDELMODC implementation by Jan Thorbecke [2].
In the following example, a three dimensional toy problem will be introduced consisting of a single Ricker source located at (100, 50, 35) in a 200 m $\times$ 100 m $\times$ 100 m domain.
Step1: The model domain is now constructed. It consists of an upper layer of water, 50 m in depth, and a lower rock layer separated by a 4 m thick sediment layer.
Step2: Now create a Devito vsicoelastic model generating an appropriate computational grid along with absorbing boundary layers
Step3: The source frequency is now set along with the required model parameters
Step4: Generate Devito time functions for the velocity, stress and memory variables appearing in the viscoelastic model equations. By default, the initial data of each field will be set to zero.
Step5: And now the source and PDE's are constructed
Step6: We now create and then run the operator
Step7: Before plotting some results, let us first look at the shape of the data stored in one of our time functions
Step8: Since our functions are first order in time, the time dimension is of length 2. The spatial extent of the data includes the absorbing boundary layers in each dimension (i.e. each spatial dimension is padded by 20 grid points to the left and to the right).
The total number of instances in time considered is obtained from
Step9: Hence 223 time steps were executed. Thus the final time step will be stored in index given by
Step10: Now, let us plot some 2D slices of the fields vx and szz at the final time step | Python Code:
# Required imports:
import numpy as np
import sympy as sp
from devito import *
from examples.seismic.source import RickerSource, TimeAxis
from examples.seismic import ModelViscoelastic, plot_image
Explanation: Viscoelastic wave equation implementation on a staggered grid
This is a first attempt at implementing the viscoelastic wave equation as described in [1]. See also the FDELMODC implementation by Jan Thorbecke [2].
In the following example, a three dimensional toy problem will be introduced consisting of a single Ricker source located at (100, 50, 35) in a 200 m $\times$ 100 m $\times$ 100 m domain.
End of explanation
# Domain size:
extent = (200., 100., 100.) # 200 x 100 x 100 m domain
h = 1.0 # Desired grid spacing
shape = (int(extent[0]/h+1), int(extent[1]/h+1), int(extent[2]/h+1))
# Model physical parameters:
vp = np.zeros(shape)
qp = np.zeros(shape)
vs = np.zeros(shape)
qs = np.zeros(shape)
rho = np.zeros(shape)
# Set up three horizontally separated layers:
vp[:,:,:int(0.5*shape[2])+1] = 1.52
qp[:,:,:int(0.5*shape[2])+1] = 10000.
vs[:,:,:int(0.5*shape[2])+1] = 0.
qs[:,:,:int(0.5*shape[2])+1] = 0.
rho[:,:,:int(0.5*shape[2])+1] = 1.05
vp[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 1.6
qp[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 40.
vs[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 0.4
qs[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 30.
rho[:,:,int(0.5*shape[2])+1:int(0.5*shape[2])+1+int(4/h)] = 1.3
vp[:,:,int(0.5*shape[2])+1+int(4/h):] = 2.2
qp[:,:,int(0.5*shape[2])+1+int(4/h):] = 100.
vs[:,:,int(0.5*shape[2])+1+int(4/h):] = 1.2
qs[:,:,int(0.5*shape[2])+1+int(4/h):] = 70.
rho[:,:,int(0.5*shape[2])+1+int(4/h):] = 2.
Explanation: The model domain is now constructed. It consists of an upper layer of water, 50 m in depth, and a lower rock layer separated by a 4 m thick sediment layer.
End of explanation
# Create model
origin = (0, 0, 0)
spacing = (h, h, h)
so = 4 # FD space order (Note that the time order is by default 1).
nbl = 20 # Number of absorbing boundary layers cells
model = ModelViscoelastic(space_order=so, vp=vp, qp=qp, vs=vs, qs=qs,
b=1/rho, origin=origin, shape=shape, spacing=spacing,
nbl=nbl)
# As pointed out in Thorbecke's implementation and documentation, the viscoelastic wave equation is
# not always stable with the standard elastic CFL condition. We enforce a smaller critical dt here
# to ensure stability.
model.dt_scale = .9
Explanation: Now create a Devito viscoelastic model generating an appropriate computational grid along with absorbing boundary layers:
End of explanation
# Source freq. in MHz (note that the source is defined below):
f0 = 0.12
# Thorbecke's parameter notation
l = model.lam
mu = model.mu
ro = model.b
k = 1.0/(l + 2*mu)
pi = l + 2*mu
t_s = (sp.sqrt(1.+1./model.qp**2)-1./model.qp)/f0
t_ep = 1./(f0**2*t_s)
t_es = (1.+f0*model.qs*t_s)/(f0*model.qs-f0**2*t_s)
# Time step in ms and time range:
t0, tn = 0., 30.
dt = model.critical_dt
time_range = TimeAxis(start=t0, stop=tn, step=dt)
Explanation: The source frequency is now set along with the required model parameters:
End of explanation
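An optional sanity check that is not part of the original notebook; it only prints quantities already defined above.
# Inspect the enforced stable time step (in ms) and the resulting number of time steps.
print("critical dt [ms]:", model.critical_dt)
print("number of time steps:", time_range.num)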
# PDE fn's:
x, y, z = model.grid.dimensions
damp = model.damp
# Staggered grid setup:
# Velocity:
v = VectorTimeFunction(name="v", grid=model.grid, time_order=1, space_order=so)
# Stress:
tau = TensorTimeFunction(name='t', grid=model.grid, space_order=so, time_order=1)
# Memory variable:
r = TensorTimeFunction(name='r', grid=model.grid, space_order=so, time_order=1)
s = model.grid.stepping_dim.spacing # Symbolic representation of the model grid spacing
Explanation: Generate Devito time functions for the velocity, stress and memory variables appearing in the viscoelastic model equations. By default, the initial data of each field will be set to zero.
End of explanation
# Source
src = RickerSource(name='src', grid=model.grid, f0=f0, time_range=time_range)
src.coordinates.data[:] = np.array([100., 50., 35.])
# The source injection term
src_xx = src.inject(field=tau[0, 0].forward, expr=src*s)
src_yy = src.inject(field=tau[1, 1].forward, expr=src*s)
src_zz = src.inject(field=tau[2, 2].forward, expr=src*s)
# Particle velocity
u_v = Eq(v.forward, model.damp * (v + s*ro*div(tau)))
# Stress equations:
u_t = Eq(tau.forward, model.damp * (s*r.forward + tau +
s * (l * t_ep / t_s * diag(div(v.forward)) +
mu * t_es / t_s * (grad(v.forward) + grad(v.forward).T))))
# Memory variable equations:
u_r = Eq(r.forward, damp * (r - s / t_s * (r + l * (t_ep/t_s-1) * diag(div(v.forward)) +
mu * (t_es/t_s-1) * (grad(v.forward) + grad(v.forward).T) )))
Explanation: And now the source and PDE's are constructed:
End of explanation
# Create the operator:
op = Operator([u_v, u_r, u_t] + src_xx + src_yy + src_zz,
subs=model.spacing_map)
#NBVAL_IGNORE_OUTPUT
# Execute the operator:
op(dt=dt)
Explanation: We now create and then run the operator:
End of explanation
v[0].data.shape
Explanation: Before plotting some results, let us first look at the shape of the data stored in one of our time functions:
End of explanation
time_range.num
Explanation: Since our functions are first order in time, the time dimension is of length 2. The spatial extent of the data includes the absorbing boundary layers in each dimension (i.e. each spatial dimension is padded by 20 grid points to the left and to the right).
The total number of instances in time considered is obtained from:
End of explanation
np.mod(time_range.num,2)
Explanation: Hence 223 time steps were executed. Thus the final time step will be stored in index given by:
End of explanation
#NBVAL_SKIP
# Mid-points:
mid_x = int(0.5*(v[0].data.shape[1]-1))+1
mid_y = int(0.5*(v[0].data.shape[2]-1))+1
# Plot some selected results:
plot_image(v[0].data[1, :, mid_y, :], cmap="seismic")
plot_image(v[0].data[1, mid_x, :, :], cmap="seismic")
plot_image(tau[2, 2].data[1, :, mid_y, :], cmap="seismic")
plot_image(tau[2, 2].data[1, mid_x, :, :], cmap="seismic")
#NBVAL_IGNORE_OUTPUT
assert np.isclose(norm(v[0]), 0.102959, atol=1e-4, rtol=0)
Explanation: Now, let us plot some 2D slices of the fields vx and szz at the final time step:
End of explanation |
2,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Scatter plots
I'll start with the data from the BRFSS again.
Step2: The following function selects a random subset of a DataFrame.
Step3: I'll extract the height in cm and the weight in kg of the respondents in the sample.
Step4: Here's a simple scatter plot with alpha=1, so each data point is fully saturated.
Step5: The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noice to the data.
NOTE
Step6: Heights were probably rounded off to the nearest inch, which is 2.8 cm, so I'll add random values from -1.4 to 1.4.
Step7: And here's what the jittered data look like.
Step8: The columns are gone, but now we have a different problem
Step9: That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it.
An alternative to a scatter plot is something like a HexBin plot, which breaks the plane into bins, counts the number of respondents in each bin, and colors each bin in proportion to its count.
Step10: In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.
Exercise
Step11: Plotting percentiles
Sometimes a better way to get a sense of the relationship between variables is to divide the dataset into groups using one variable, and then plot percentiles of the other variable.
First I'll drop any rows that are missing height or weight.
Step12: Then I'll divide the dataset into groups by height.
Step13: Here are the number of respondents in each group
Step14: Now we can compute the CDF of weight within each group.
Step15: And then extract the 25th, 50th, and 75th percentile from each group.
Step16: Exercise
Step17: Correlation
The following function computes the covariance of two variables using NumPy's dot function.
Step18: And here's an example
Step19: Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.
Step20: The correlation of height and weight is about 0.51, which is a moderately strong correlation.
Step21: NumPy provides a function that computes correlations, too
Step22: The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).
Pearson's correlation is not robust in the presence of outliers, and it tends to underestimate the strength of non-linear relationships.
Spearman's correlation is more robust, and it can handle non-linear relationships as long as they are monotonic. Here's a function that computes Spearman's correlation
Step23: For heights and weights, Spearman's correlation is a little higher
Step24: A Pandas Series provides a method that computes correlations, and it offers spearman as one of the options.
Step25: The result is the same as for the one we wrote.
Step26: An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and the compute Pearson's correlation.
Step29: Exercises
Using data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. How would you characterize the relationship between these variables? | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import brfss
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
df = brfss.ReadBrfss(nrows=None)
Explanation: Scatter plots
I'll start with the data from the BRFSS again.
End of explanation
def SampleRows(df, nrows, replace=False):
indices = np.random.choice(df.index, nrows, replace=replace)
sample = df.loc[indices]
return sample
Explanation: The following function selects a random subset of a DataFrame.
End of explanation
sample = SampleRows(df, 5000)
heights, weights = sample.htm3, sample.wtkg2
Explanation: I'll extract the height in cm and the weight in kg of the respondents in the sample.
End of explanation
thinkplot.Scatter(heights, weights, alpha=1)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
Explanation: Here's a simple scatter plot with alpha=1, so each data point is fully saturated.
End of explanation
def Jitter(values, jitter=0.5):
n = len(values)
return np.random.normal(0, jitter, n) + values
Explanation: The data fall in obvious columns because they were rounded off. We can reduce this visual artifact by adding some random noise to the data.
NOTE: The version of Jitter in the book uses noise with a uniform distribution. Here I am using a normal distribution. The normal distribution does a better job of blurring artifacts, but the uniform distribution might be more true to the data.
End of explanation
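For comparison, a uniform-noise variant of Jitter, closer to the book's definition mentioned above; JitterUniform is a name introduced here for illustration, with jitter interpreted as the half-width of the interval.
def JitterUniform(values, jitter=0.5):
    # Add uniform noise in [-jitter, jitter] to each value.
    n = len(values)
    return np.random.uniform(-jitter, jitter, n) + values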
heights = Jitter(heights, 1.4)
weights = Jitter(weights, 0.5)
Explanation: Heights were probably rounded off to the nearest inch, which is 2.8 cm, so I'll add random values from -1.4 to 1.4.
End of explanation
thinkplot.Scatter(heights, weights, alpha=1.0)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
Explanation: And here's what the jittered data look like.
End of explanation
thinkplot.Scatter(heights, weights, alpha=0.1, s=10)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
Explanation: The columns are gone, but now we have a different problem: saturation. Where there are many overlapping points, the plot is not as dark as it should be, which means that the outliers are darker than they should be, which gives the impression that the data are more scattered than they actually are.
This is a surprisingly common problem, even in papers published in peer-reviewed journals.
We can usually solve the saturation problem by adjusting alpha and the size of the markers, s.
End of explanation
thinkplot.HexBin(heights, weights)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
Explanation: That's better. This version of the figure shows the location and shape of the distribution most accurately. There are still some apparent columns and rows where, most likely, people reported their height and weight using rounded values. If that effect is important, this figure makes it apparent; if it is not important, we could use more aggressive jittering to minimize it.
An alternative to a scatter plot is something like a HexBin plot, which breaks the plane into bins, counts the number of respondents in each bin, and colors each bin in proportion to its count.
End of explanation
# Solution
# With smaller markers, I needed more aggressive jittering to
# blur the measurement artifacts
# With this dataset, using all of the rows might be more trouble
# than it's worth. Visualizing a subset of the data might be
# more practical and more effective.
heights = Jitter(df.htm3, 2.8)
weights = Jitter(df.wtkg2, 1.0)
thinkplot.Scatter(heights, weights, alpha=0.01, s=2)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
Explanation: In this case the binned plot does a pretty good job of showing the location and shape of the distribution. It obscures the row and column effects, which may or may not be a good thing.
Exercise: So far we have been working with a subset of only 5000 respondents. When we include the entire dataset, making an effective scatterplot can be tricky. As an exercise, experiment with Scatter and HexBin to make a plot that represents the entire dataset well.
End of explanation
cleaned = df.dropna(subset=['htm3', 'wtkg2'])
Explanation: Plotting percentiles
Sometimes a better way to get a sense of the relationship between variables is to divide the dataset into groups using one variable, and then plot percentiles of the other variable.
First I'll drop any rows that are missing height or weight.
End of explanation
bins = np.arange(135, 210, 5)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
Explanation: Then I'll divide the dataset into groups by height.
End of explanation
for i, group in groups:
print(i, len(group))
Explanation: Here are the number of respondents in each group:
End of explanation
mean_heights = [group.htm3.mean() for i, group in groups]
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]
Explanation: Now we can compute the CDF of weight within each group.
End of explanation
for percent in [75, 50, 25]:
weight_percentiles = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(mean_heights, weight_percentiles, label=label)
thinkplot.Config(xlabel='Height (cm)',
ylabel='Weight (kg)',
axis=[140, 210, 20, 200],
legend=False)
Explanation: And then extract the 25th, 50th, and 75th percentile from each group.
End of explanation
# Solution
bins = np.arange(140, 210, 10)
indices = np.digitize(cleaned.htm3, bins)
groups = cleaned.groupby(indices)
cdfs = [thinkstats2.Cdf(group.wtkg2) for i, group in groups]
thinkplot.PrePlot(len(cdfs))
thinkplot.Cdfs(cdfs)
thinkplot.Config(xlabel='Weight (kg)',
ylabel='CDF',
axis=[20, 200, 0, 1],
legend=False)
Explanation: Exercise: Yet another option is to divide the dataset into groups and then plot the CDF for each group. As an exercise, divide the dataset into a smaller number of groups and plot the CDF for each group.
End of explanation
def Cov(xs, ys, meanx=None, meany=None):
xs = np.asarray(xs)
ys = np.asarray(ys)
if meanx is None:
meanx = np.mean(xs)
if meany is None:
meany = np.mean(ys)
cov = np.dot(xs-meanx, ys-meany) / len(xs)
return cov
Explanation: Correlation
The following function computes the covariance of two variables using NumPy's dot function.
End of explanation
heights, weights = cleaned.htm3, cleaned.wtkg2
Cov(heights, weights)
Explanation: And here's an example:
End of explanation
def Corr(xs, ys):
xs = np.asarray(xs)
ys = np.asarray(ys)
meanx, varx = thinkstats2.MeanVar(xs)
meany, vary = thinkstats2.MeanVar(ys)
corr = Cov(xs, ys, meanx, meany) / np.sqrt(varx * vary)
return corr
Explanation: Covariance is useful for some calculations, but it doesn't mean much by itself. The coefficient of correlation is a standardized version of covariance that is easier to interpret.
End of explanation
Corr(heights, weights)
Explanation: The correlation of height and weight is about 0.51, which is a moderately strong correlation.
End of explanation
np.corrcoef(heights, weights)
Explanation: NumPy provides a function that computes correlations, too:
End of explanation
import pandas as pd
def SpearmanCorr(xs, ys):
xranks = pd.Series(xs).rank()
yranks = pd.Series(ys).rank()
return Corr(xranks, yranks)
Explanation: The result is a matrix with self-correlations on the diagonal (which are always 1), and cross-correlations on the off-diagonals (which are always symmetric).
Pearson's correlation is not robust in the presence of outliers, and it tends to underestimate the strength of non-linear relationships.
Spearman's correlation is more robust, and it can handle non-linear relationships as long as they are monotonic. Here's a function that computes Spearman's correlation:
End of explanation
SpearmanCorr(heights, weights)
Explanation: For heights and weights, Spearman's correlation is a little higher:
End of explanation
def SpearmanCorr(xs, ys):
xs = pd.Series(xs)
ys = pd.Series(ys)
return xs.corr(ys, method='spearman')
Explanation: A Pandas Series provides a method that computes correlations, and it offers spearman as one of the options.
End of explanation
SpearmanCorr(heights, weights)
Explanation: The result is the same as for the one we wrote.
End of explanation
Corr(cleaned.htm3, np.log(cleaned.wtkg2))
Explanation: An alternative to Spearman's correlation is to transform one or both of the variables in a way that makes the relationship closer to linear, and then compute Pearson's correlation.
End of explanation
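A side-by-side look at the three estimates discussed above; this is purely illustrative and only reuses functions and data already defined in this notebook.
# Pearson on raw values, Pearson after a log transform of weight, and Spearman.
print('Pearson', Corr(heights, weights))
print('Pearson, log(weight)', Corr(cleaned.htm3, np.log(cleaned.wtkg2)))
print('Spearman', SpearmanCorr(heights, weights))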
import first
live, firsts, others = first.MakeFrames()
live = live.dropna(subset=['agepreg', 'totalwgt_lb'])
# Solution
ages = live.agepreg
weights = live.totalwgt_lb
print('Corr', Corr(ages, weights))
print('SpearmanCorr', SpearmanCorr(ages, weights))
# Solution
def BinnedPercentiles(df):
    """Bin the data by age and plot percentiles of weight for each bin.
    df: DataFrame
    """
bins = np.arange(10, 48, 3)
indices = np.digitize(df.agepreg, bins)
groups = df.groupby(indices)
ages = [group.agepreg.mean() for i, group in groups][1:-1]
cdfs = [thinkstats2.Cdf(group.totalwgt_lb) for i, group in groups][1:-1]
thinkplot.PrePlot(3)
for percent in [75, 50, 25]:
weights = [cdf.Percentile(percent) for cdf in cdfs]
label = '%dth' % percent
thinkplot.Plot(ages, weights, label=label)
thinkplot.Config(xlabel="Mother's age (years)",
ylabel='Birth weight (lbs)',
xlim=[14, 45], legend=True)
BinnedPercentiles(live)
# Solution
def ScatterPlot(ages, weights, alpha=1.0, s=20):
    """Make a scatter plot and save it.
    ages: sequence of float
    weights: sequence of float
    alpha: float
    """
thinkplot.Scatter(ages, weights, alpha=alpha)
thinkplot.Config(xlabel='Age (years)',
ylabel='Birth weight (lbs)',
xlim=[10, 45],
ylim=[0, 15],
legend=False)
ScatterPlot(ages, weights, alpha=0.05, s=10)
# Solution
# My conclusions:
# 1) The scatterplot shows a weak relationship between the variables but
# it is hard to see clearly.
# 2) The correlations support this. Pearson's is around 0.07, Spearman's
# is around 0.09. The difference between them suggests some influence
# of outliers or a non-linear relationship.
# 3) Plotting percentiles of weight versus age suggests that the
# relationship is non-linear. Birth weight increases more quickly
# in the range of mother's age from 15 to 25. After that, the effect
# is weaker.
Explanation: Exercises
Using data from the NSFG, make a scatter plot of birth weight versus mother’s age. Plot percentiles of birth weight versus mother’s age. Compute Pearson’s and Spearman’s correlations. How would you characterize the relationship between these variables?
End of explanation |
2,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Graphs
Step1: Introduction
A graph $G=(V,E)$ is a collection of vertices $V$ and edges $E$ between the vertices in $V$. Graphs often model interactions such as social networks, a network of computers, links on webpages, or pixels in a image. The set of vertices $V$ often represents a set of objects such as people, computers, and cats. Meanwhile the set of edges $E$ consists of vertex pairs, and usually, vertex pairs encode a relationship between objects. If graph $G$ represents a social network, the vertices could represent people, and the existence of an edge between two people could represent whether they mutually know each other. In fact, there are two types of edges
Step2: Basic Properties
Phase Transitions
Random graphs undergo a phase transition for certain properties suchs connectedness. For a random graph, there exist a $p(n)$ for all $p > p(n)$ and a fixed $n$ for which there are no isolated vertices with high probability.
It can be shown that for $$p(n) = \frac{\log n}{n}$$ the probability of isolated components goes to zero[6].
Step3: It's a Small World
In a fully connected random graph, the diameter of the graph becomes extremely small relative to the number vertices. The diameter is the longest shortest path between two nodes. The small world phenomenon was observed in social networks in Milgram's Small World Experiment[3]. Milgram's experiments are commonly refered in popular culture as the six degrees of seperation. The example below shows a Facebook social network from the SNAP dataset
Step4: In the social network above, the diameter is 8 and the number of people in the social network is 4039. Although the $G(n,p)$ random graph model has the property of a small diameter, social networks have a property not found in the $G(n,p)$. Social networks tend to have a higher clustering coefficient[4] than $G(n,p)$ model. The clustering cofficient captures the notion of triad closures. In simple terms, your friends are probably friends with your other friends. Other random graph models such as the Watts-Strogatz have been proposed to deal with this problem of clustering coefficient [5].
References | Python Code:
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore') #NetworkX has some deprecation warnings
Explanation: Random Graphs
End of explanation
params = [(10,0.1),(10,.5),(10,0.9),(20,0.1),(20,.5),(20,0.9)]
plt.figure(figsize=(15,10))
idx = 1
for (n,p) in params:
G = nx.gnp_random_graph(n,p)
vertex_colors = np.random.rand(n)
plt.subplot(2,3,idx)
nx.draw_circular(G,node_color = vertex_colors)
plt.title("G$(%d,%.2f)$" %(n,p))
idx+=1
plt.suptitle('Sample Random Graphs',fontsize=15)
plt.show()
Explanation: Introduction
A graph $G=(V,E)$ is a collection of vertices $V$ and edges $E$ between the vertices in $V$. Graphs often model interactions such as social networks, a network of computers, links on webpages, or pixels in a image. The set of vertices $V$ often represents a set of objects such as people, computers, and cats. Meanwhile the set of edges $E$ consists of vertex pairs, and usually, vertex pairs encode a relationship between objects. If graph $G$ represents a social network, the vertices could represent people, and the existence of an edge between two people could represent whether they mutually know each other. In fact, there are two types of edges: directed and undirected. An undirected edge represents mutual relation such as friendship in a social network, and a directed edge represents relation that is directional. For example, you have a crush on someone, but they don't have a crush on you. In short, a graph models pairwise relations/interaction(edges) between objects(vertices).
A random graph is a graph whose construction is a result of an underlying random process or distribution over a set of graphs. Random graph can help us model or infer properties of other graphs whose construction is random or appears to be random. Examples of random graphs may include the graph of the internet and a social network. The simplest model is the $G(n,p)$ model ,and it is due to Erdős and Rényi and independently Gilbert [1,2].
The $G(n,p)$ model
The $G(n,p)$ model consists of two parameters, where $n$ is the number of vertices and $p$ is the probability of forming an undirected edge between any pair of vertices. During the construction of a random graph, one visits each pair of vertices and adds an edge between them with probability $p$. Examples of realizations for different parameters are shown below:
End of explanation
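A minimal from-scratch sketch of the construction just described, independent of networkx; gnp_edges is a helper introduced here for illustration only.
import itertools, random

def gnp_edges(n, p):
    # Visit every unordered pair of vertices once and keep the edge with probability p.
    return [(u, v) for u, v in itertools.combinations(range(n), 2) if random.random() < p]

print(len(gnp_edges(10, 0.5)))  # expected number of edges is p*n*(n-1)/2 = 22.5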
num_samples = 1000
n = 20
num_steps = 51
p = np.linspace(0,1,num_steps)
prob_connected = np.zeros(num_steps)
for i in range(num_steps):
for j in range(num_samples):
G = nx.gnp_random_graph(n,p[i])
num_connected = nx.number_connected_components(G)
isFully_connected = float(num_connected==1)
prob_connected[i] = prob_connected[i] + (isFully_connected - prob_connected[i])/(j+1)
plt.figure(figsize=(15,10))
plt.plot(p,prob_connected)
plt.title('Empirical Phase Transition for $G(%d,p)$' % n)
plt.xlim([0,1])
plt.ylim([0,1])
plt.show()
Explanation: Basic Properties
Phase Transitions
Random graphs undergo a phase transition for certain properties such as connectedness. For a random graph with a fixed $n$, there exists a threshold $p(n)$ such that for all $p > p(n)$ there are no isolated vertices with high probability.
It can be shown that for $$p(n) = \frac{\log n}{n}$$ the probability of isolated vertices goes to zero[6].
End of explanation
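As a quick added sketch (not in the original analysis), the theoretical threshold $p(n) = \log n / n$ can be overlaid on the empirical curve computed above; for $n = 20$ it sits near $0.15$.
# overlay the theoretical connectivity threshold log(n)/n on the empirical curve
threshold = np.log(n) / n   # n = 20 from the simulation above
plt.figure(figsize=(15,10))
plt.plot(p, prob_connected, label='empirical P(connected)')
plt.axvline(threshold, color='r', linestyle='--', label=r'$\log n / n$ threshold')
plt.xlim([0,1])
plt.ylim([0,1])
plt.legend()
plt.show()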
G = nx.read_edgelist('facebook_combined.txt')
d = nx.diameter(G)
n = len(G.nodes())
vertex_colors = np.random.rand(n)
plt.figure(figsize=(15,10))
nx.draw_spring(G,node_color = vertex_colors)
plt.title('SNAP Facebook Ego Network with a diameter of %d and %d vertices' %(d,n),fontsize=15)
plt.show()
Explanation: It's a Small World
In a fully connected random graph, the diameter of the graph becomes extremely small relative to the number of vertices. The diameter is the longest shortest path between two nodes. The small world phenomenon was observed in social networks in Milgram's Small World Experiment[3]. Milgram's experiments are commonly referred to in popular culture as the six degrees of separation. The example below shows a Facebook social network from the SNAP dataset:
End of explanation
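As a rough added comparison (a sketch, not part of the original notebook), the Facebook graph can be contrasted with a $G(n,p)$ graph of matching density: the random graph keeps a small diameter but shows far less clustering, which is the point made below.
# compare the real network with a density-matched G(n,p) graph
p_match = nx.density(G)   # edge probability that matches the Facebook graph's density
R = nx.gnp_random_graph(n, p_match, seed=0)
print('Facebook average clustering:', nx.average_clustering(G))
print('G(n,p) average clustering: ', nx.average_clustering(R))
# use the largest connected component in case R is not fully connected (this may take a little while)
largest = max(nx.connected_components(R), key=len)
print('G(n,p) diameter:', nx.diameter(R.subgraph(largest)))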
len(G.nodes())
Explanation: In the social network above, the diameter is 8 and the number of people in the social network is 4039. Although the $G(n,p)$ random graph model has the property of a small diameter, social networks have a property not found in the $G(n,p)$ model: social networks tend to have a higher clustering coefficient[4] than the $G(n,p)$ model. The clustering coefficient captures the notion of triad closure; in simple terms, your friends are probably friends with your other friends. Other random graph models, such as the Watts-Strogatz model, have been proposed to address this mismatch in clustering coefficient [5].
References:
1. P. Erdős and A. Rényi, On Random Graphs, Publ. Math. 6, 290 (1959).
2. E. N. Gilbert, Random Graphs, Ann. Math. Stat., 30, 1141 (1959).
3. Milgram, Stanley (May 1967). "The Small World Problem". Psychology Today. Ziff-Davis Publishing Company.
4. M. Chiang, Networked Life: 20 Questions and Answers, Cambridge University Press, August 2012.
5. Watts, D. J.; Strogatz, S. H. (1998). "Collective dynamics of 'small-world' networks" (PDF). Nature. 393 (6684): 440–442.
6. Jackson, Matthew O. Social and Economic Networks. Princeton University Press, 2010.
End of explanation |
2,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python pandas Q&A video series by Data School
YouTube playlist and GitHub repository
Table of contents
<a href="#1.-What-is-pandas%3F-%28video%29">What is pandas?</a>
<a href="#2.-How-do-I-read-a-tabular-data-file-into-pandas%3F-%28video%29">How do I read a tabular data file into pandas?</a>
<a href="#3.-How-do-I-select-a-pandas-Series-from-a-DataFrame%3F-%28video%29">How do I select a pandas Series from a DataFrame?</a>
<a href="#4.-Why-do-some-pandas-commands-end-with-parentheses-%28and-others-don't%29%3F-%28video%29">Why do some pandas commands end with parentheses (and others don't)?</a>
<a href="#5.-How-do-I-rename-columns-in-a-pandas-DataFrame%3F-%28video%29">How do I rename columns in a pandas DataFrame?</a>
<a href="#6.-How-do-I-remove-columns-from-a-pandas-DataFrame%3F-%28video%29">How do I remove columns from a pandas DataFrame?</a>
<a href="#7.-How-do-I-sort-a-pandas-DataFrame-or-a-Series%3F-%28video%29">How do I sort a pandas DataFrame or a Series?</a>
<a href="#8.-How-do-I-filter-rows-of-a-pandas-DataFrame-by-column-value%3F-%28video%29">How do I filter rows of a pandas DataFrame by column value?</a>
<a href="#9.-How-do-I-apply-multiple-filter-criteria-to-a-pandas-DataFrame%3F-%28video%29">How do I apply multiple filter criteria to a pandas DataFrame?</a>
<a href="#10.-Your-pandas-questions-answered%21-%28video%29">Your pandas questions answered!</a>
<a href="#11.-How-do-I-use-the-%22axis%22-parameter-in-pandas%3F-%28video%29">How do I use the "axis" parameter in pandas?</a>
<a href="#12.-How-do-I-use-string-methods-in-pandas%3F-%28video%29">How do I use string methods in pandas?</a>
<a href="#13.-How-do-I-change-the-data-type-of-a-pandas-Series%3F-%28video%29">How do I change the data type of a pandas Series?</a>
<a href="#14.-When-should-I-use-a-%22groupby%22-in-pandas%3F-%28video%29">When should I use a "groupby" in pandas?</a>
<a href="#15.-How-do-I-explore-a-pandas-Series%3F-%28video%29">How do I explore a pandas Series?</a>
<a href="#16.-How-do-I-handle-missing-values-in-pandas%3F-%28video%29">How do I handle missing values in pandas?</a>
<a href="#17.-What-do-I-need-to-know-about-the-pandas-index%3F-%28Part-1%29-%28video%29">What do I need to know about the pandas index? (Part 1)</a>
<a href="#18.-What-do-I-need-to-know-about-the-pandas-index%3F-%28Part-2%29-%28video%29">What do I need to know about the pandas index? (Part 2)</a>
<a href="#19.-How-do-I-select-multiple-rows-and-columns-from-a-pandas-DataFrame%3F-%28video%29">How do I select multiple rows and columns from a pandas DataFrame?</a>
<a href="#20.-When-should-I-use-the-%22inplace%22-parameter-in-pandas%3F-%28video%29">When should I use the "inplace" parameter in pandas?</a>
<a href="#21.-How-do-I-make-my-pandas-DataFrame-smaller-and-faster%3F-%28video%29">How do I make my pandas DataFrame smaller and faster?</a>
<a href="#22.-How-do-I-use-pandas-with-scikit-learn-to-create-Kaggle-submissions%3F-%28video%29">How do I use pandas with scikit-learn to create Kaggle submissions?</a>
<a href="#23.-More-of-your-pandas-questions-answered%21-%28video%29">More of your pandas questions answered!</a>
<a href="#24.-How-do-I-create-dummy-variables-in-pandas%3F-%28video%29">How do I create dummy variables in pandas?</a>
<a href="#25.-How-do-I-work-with-dates-and-times-in-pandas%3F-%28video%29">How do I work with dates and times in pandas?</a>
<a href="#26.-How-do-I-find-and-remove-duplicate-rows-in-pandas%3F-%28video%29">How do I find and remove duplicate rows in pandas?</a>
<a href="#27.-How-do-I-avoid-a-SettingWithCopyWarning-in-pandas%3F-%28video%29">How do I avoid a SettingWithCopyWarning in pandas?</a>
Step1: 1. What is pandas?
pandas main page
pandas installation instructions
Anaconda distribution of Python (includes pandas)
How to use the IPython/Jupyter notebook (video)
2. How do I read a tabular data file into pandas?
Step2: Documentation for read_table
Step3: 3. How do I select a pandas Series from a DataFrame?
Step4: OR
Step5: Bracket notation will always work, whereas dot notation has limitations
Step6: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
4. Why do some pandas commands end with parentheses (and others don't)?
Step7: Methods end with parentheses, while attributes don't
Step8: Documentation for describe
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
5. How do I rename columns in a pandas DataFrame?
Step9: Documentation for rename
Step10: Documentation for read_csv
Step11: Documentation for str.replace
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
6. How do I remove columns from a pandas DataFrame?
Step12: Documentation for drop
Step13: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
7. How do I sort a pandas DataFrame or a Series?
Step14: Note
Step15: Documentation for sort_values for a Series. (Prior to version 0.17, use order instead.)
Step16: Documentation for sort_values for a DataFrame. (Prior to version 0.17, use sort instead.)
Step17: Summary of changes to the sorting API in pandas 0.17
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
8. How do I filter rows of a pandas DataFrame by column value?
Step18: Goal
Step19: Documentation for loc
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
9. How do I apply multiple filter criteria to a pandas DataFrame?
Step20: Understanding logical operators
Step21: Rules for specifying multiple filter criteria in pandas
Step22: Goal
Step23: Documentation for isin
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
10. Your pandas questions answered!
Question
Step24: Question
Step25: Documentation for read_csv
Question
Step26: Question
Step27: Documentation for iterrows
Question
Step28: Documentation for select_dtypes
Question
Step29: Documentation for describe
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
11. How do I use the "axis" parameter in pandas?
Step30: Documentation for drop
Step31: When referring to rows or columns with the axis parameter
Step32: Documentation for mean
Step33: When performing a mathematical operation with the axis parameter
Step34: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
12. How do I use string methods in pandas?
Step35: String handling section of the pandas API reference
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
13. How do I change the data type of a pandas Series?
Step36: Documentation for astype
Step37: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
14. When should I use a "groupby" in pandas?
Step38: Documentation for groupby
Step39: Documentation for agg
Step40: Documentation for plot
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
15. How do I explore a pandas Series?
Step41: Exploring a non-numeric Series
Step42: Documentation for describe
Step43: Documentation for value_counts
Step44: Documentation for unique and nunique
Step45: Documentation for crosstab
Exploring a numeric Series
Step46: Documentation for mean
Step47: Documentation for plot
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
16. How do I handle missing values in pandas?
Step48: What does "NaN" mean?
"NaN" is not a string, rather it's a special value
Step49: Documentation for isnull and notnull
Step50: This calculation works because
Step51: How to handle missing values depends on the dataset as well as the nature of your analysis. Here are some options
Step52: Documentation for dropna
Step53: Documentation for value_counts
Step54: Documentation for fillna
Step55: Working with missing data in pandas
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
17. What do I need to know about the pandas index?
Step56: What is the index used for?
identification
selection
alignment (covered in the next video)
Step57: Documentation for loc
Step58: Documentation for set_index
Step59: Documentation for reset_index
Step60: Indexing and selecting data
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
18. What do I need to know about the pandas index?
Step61: Documentation for set_index
Step62: Documentation for value_counts
Step63: Documentation for sort_values and sort_index
What is the index used for?
identification (covered in the previous video)
selection (covered in the previous video)
alignment
Step64: Documentation for Series
Step65: The two Series were aligned by their indexes.
If a value is missing in either Series, the result is marked as NaN.
Alignment enables us to easily work with incomplete data.
Step66: Documentation for concat
Indexing and selecting data
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
19. How do I select multiple rows and columns from a pandas DataFrame?
Step67: The loc method is used to select rows and columns by label. You can pass it
Step68: The iloc method is used to select rows and columns by integer position. You can pass it
Step69: The ix method is used to select rows and columns by label or integer position, and should only be used when you need to mix label-based and integer-based selection in the same call.
Step70: Rules for using numbers with ix
Step71: Summary of the pandas API for selection
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
20. When should I use the "inplace" parameter in pandas?
Step72: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
21. How do I make my pandas DataFrame smaller and faster?
Step73: Documentation for info and memory_usage
Step74: The category data type should only be used with a string Series that has a small number of possible values.
Step75: Overview of categorical data in pandas
API reference for categorical methods
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
22. How do I use pandas with scikit-learn to create Kaggle submissions?
Step76: Goal
Step77: Note
Step78: Video series
Step79: Documentation for the DataFrame constructor
Step80: Documentation for to_csv
Step81: Documentation for to_pickle and read_pickle
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
23. More of your pandas questions answered!
Question
Step82: Documentation for isnull
Question
Step83: Documentation for loc and iloc
Step84: Question
Step85: Documentation for sample
Step86: Documentation for isin
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
24. How do I create dummy variables in pandas?
Step87: Documentation for map
Step88: Generally speaking
Step89: How to translate these values back to the original 'Embarked' value
Step90: Documentation for concat
Step91: Documentation for get_dummies
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
25. How do I work with dates and times in pandas?
Step92: Documentation for to_datetime
Step93: API reference for datetime properties and methods
Step94: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
26. How do I find and remove duplicate rows in pandas?
Step95: Logic for duplicated
Step96: Documentation for drop_duplicates
Step97: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
27. How do I avoid a SettingWithCopyWarning in pandas?
Step98: Goal
Step99: Problem
Step100: Solution
Step101: Summary
Step102: Goal
Step103: Problem
Step104: Solution | Python Code:
# conventional way to import pandas
import pandas as pd
# check the installed pandas version
print('pandas version:', pd.__version__)
Explanation: Python pandas Q&A video series by Data School
YouTube playlist and GitHub repository
Table of contents
<a href="#1.-What-is-pandas%3F-%28video%29">What is pandas?</a>
<a href="#2.-How-do-I-read-a-tabular-data-file-into-pandas%3F-%28video%29">How do I read a tabular data file into pandas?</a>
<a href="#3.-How-do-I-select-a-pandas-Series-from-a-DataFrame%3F-%28video%29">How do I select a pandas Series from a DataFrame?</a>
<a href="#4.-Why-do-some-pandas-commands-end-with-parentheses-%28and-others-don't%29%3F-%28video%29">Why do some pandas commands end with parentheses (and others don't)?</a>
<a href="#5.-How-do-I-rename-columns-in-a-pandas-DataFrame%3F-%28video%29">How do I rename columns in a pandas DataFrame?</a>
<a href="#6.-How-do-I-remove-columns-from-a-pandas-DataFrame%3F-%28video%29">How do I remove columns from a pandas DataFrame?</a>
<a href="#7.-How-do-I-sort-a-pandas-DataFrame-or-a-Series%3F-%28video%29">How do I sort a pandas DataFrame or a Series?</a>
<a href="#8.-How-do-I-filter-rows-of-a-pandas-DataFrame-by-column-value%3F-%28video%29">How do I filter rows of a pandas DataFrame by column value?</a>
<a href="#9.-How-do-I-apply-multiple-filter-criteria-to-a-pandas-DataFrame%3F-%28video%29">How do I apply multiple filter criteria to a pandas DataFrame?</a>
<a href="#10.-Your-pandas-questions-answered%21-%28video%29">Your pandas questions answered!</a>
<a href="#11.-How-do-I-use-the-%22axis%22-parameter-in-pandas%3F-%28video%29">How do I use the "axis" parameter in pandas?</a>
<a href="#12.-How-do-I-use-string-methods-in-pandas%3F-%28video%29">How do I use string methods in pandas?</a>
<a href="#13.-How-do-I-change-the-data-type-of-a-pandas-Series%3F-%28video%29">How do I change the data type of a pandas Series?</a>
<a href="#14.-When-should-I-use-a-%22groupby%22-in-pandas%3F-%28video%29">When should I use a "groupby" in pandas?</a>
<a href="#15.-How-do-I-explore-a-pandas-Series%3F-%28video%29">How do I explore a pandas Series?</a>
<a href="#16.-How-do-I-handle-missing-values-in-pandas%3F-%28video%29">How do I handle missing values in pandas?</a>
<a href="#17.-What-do-I-need-to-know-about-the-pandas-index%3F-%28Part-1%29-%28video%29">What do I need to know about the pandas index? (Part 1)</a>
<a href="#18.-What-do-I-need-to-know-about-the-pandas-index%3F-%28Part-2%29-%28video%29">What do I need to know about the pandas index? (Part 2)</a>
<a href="#19.-How-do-I-select-multiple-rows-and-columns-from-a-pandas-DataFrame%3F-%28video%29">How do I select multiple rows and columns from a pandas DataFrame?</a>
<a href="#20.-When-should-I-use-the-%22inplace%22-parameter-in-pandas%3F-%28video%29">When should I use the "inplace" parameter in pandas?</a>
<a href="#21.-How-do-I-make-my-pandas-DataFrame-smaller-and-faster%3F-%28video%29">How do I make my pandas DataFrame smaller and faster?</a>
<a href="#22.-How-do-I-use-pandas-with-scikit-learn-to-create-Kaggle-submissions%3F-%28video%29">How do I use pandas with scikit-learn to create Kaggle submissions?</a>
<a href="#23.-More-of-your-pandas-questions-answered%21-%28video%29">More of your pandas questions answered!</a>
<a href="#24.-How-do-I-create-dummy-variables-in-pandas%3F-%28video%29">How do I create dummy variables in pandas?</a>
<a href="#25.-How-do-I-work-with-dates-and-times-in-pandas%3F-%28video%29">How do I work with dates and times in pandas?</a>
<a href="#26.-How-do-I-find-and-remove-duplicate-rows-in-pandas%3F-%28video%29">How do I find and remove duplicate rows in pandas?</a>
<a href="#27.-How-do-I-avoid-a-SettingWithCopyWarning-in-pandas%3F-%28video%29">How do I avoid a SettingWithCopyWarning in pandas?</a>
End of explanation
# read a dataset of Chipotle orders from a local TSV file and store the results in a DataFrame
orders = pd.read_table('data/chipotle.tsv')
# examine the first 5 rows
orders.head()
Explanation: 1. What is pandas?
pandas main page
pandas installation instructions
Anaconda distribution of Python (includes pandas)
How to use the IPython/Jupyter notebook (video)
2. How do I read a tabular data file into pandas?
End of explanation
users = pd.read_table('data/u.user')
# examine the first 5 rows
users.head()
users = pd.read_table('data/u.user', sep='|')
# examine the first 5 rows
users.head()
users = pd.read_table('data/u.user', sep='|', header=None)
# examine the first 5 rows
users.head()
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_table('data/u.user', sep='|', header=None, names=user_cols)
# examine the first 5 rows
users.head()
Explanation: Documentation for read_table
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_table('data/ufo.csv', sep=',')
Explanation: 3. How do I select a pandas Series from a DataFrame?
End of explanation
# read_csv is equivalent to read_table, except it assumes a comma separator
ufo = pd.read_csv('data/ufo.csv')
type(ufo)
# examine the first 5 rows
ufo.head()
# select the 'City' Series using bracket notation
ufo['City']
type(ufo['City'])
# or equivalently, use dot notation - see notes below
ufo.City
Explanation: OR
End of explanation
# create a new 'Location' Series (must use bracket notation to define the Series name)
ufo['Location'] = ufo.City + ', ' + ufo.State
ufo.head()
Explanation: Bracket notation will always work, whereas dot notation has limitations:
Dot notation doesn't work if there are spaces in the Series name
Dot notation doesn't work if the Series has the same name as a DataFrame method or attribute (like 'head' or 'shape')
Dot notation can't be used to define the name of a new Series (see below)
End of explanation
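A small added illustration of those limitations (a sketch, not from the original video): a column name containing a space must be selected with brackets, and a name that collides with a DataFrame attribute is shadowed by dot notation.
# bracket notation handles a Series name that contains a space
ufo['Shape Reported'].head()
# 'shape' is a DataFrame attribute, so dot notation returns the (rows, columns) tuple rather than a column
ufo.shape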
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('data/imdb_1000.csv')
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
4. Why do some pandas commands end with parentheses (and others don't)?
End of explanation
# example method: show the first 5 rows
movies.head()
# example method: calculate summary statistics
movies.describe()
# example attribute: number of rows and columns
movies.shape
# example attribute: data type of each column
movies.dtypes
# use an optional parameter to the describe method to summarize only 'object' columns
movies.describe(include=['object'])
Explanation: Methods end with parentheses, while attributes don't:
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('data/ufo.csv')
# examine the column names
ufo.columns
# rename two of the columns by using the 'rename' method
ufo.rename(columns={'Colors Reported':'Colors_Reported', 'Shape Reported':'Shape_Reported'}, inplace=True)
ufo.columns
Explanation: Documentation for describe
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
5. How do I rename columns in a pandas DataFrame?
End of explanation
# replace all of the column names by overwriting the 'columns' attribute
ufo_cols = ['city', 'colors reported', 'shape reported', 'state', 'time']
ufo.columns = ufo_cols
ufo.columns
# replace the column names during the file reading process by using the 'names' parameter
ufo = pd.read_csv('data/ufo.csv', header=0, names=ufo_cols)
ufo.head()
Explanation: Documentation for rename
End of explanation
# replace all spaces with underscores in the column names by using the 'str.replace' method
ufo.columns = ufo.columns.str.replace(' ', '_')
ufo.columns
Explanation: Documentation for read_csv
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('data/ufo.csv')
ufo.head()
# remove a single column (axis=1 refers to columns)
ufo.drop('Colors Reported', axis=1, inplace=True)
ufo.head()
Explanation: Documentation for str.replace
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
6. How do I remove columns from a pandas DataFrame?
End of explanation
# remove multiple columns at once
ufo.drop(['City', 'State'], axis=1, inplace=True)
ufo.head()
# remove multiple rows at once (axis=0 refers to rows)
ufo.drop([0, 1], axis=0, inplace=True)
ufo.head()
Explanation: Documentation for drop
End of explanation
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('data/imdb_1000.csv')
movies.head()
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
7. How do I sort a pandas DataFrame or a Series?
End of explanation
# sort the 'title' Series in ascending order (returns a Series)
movies.title.sort_values()
# sort in descending order instead
movies.title.sort_values(ascending=False).head()
Explanation: Note: None of the sorting methods below affect the underlying data. (In other words, the sorting is temporary).
End of explanation
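As a quick added check (a sketch, not in the original notebook), the DataFrame itself is untouched by the sort calls above, because each call returned a new object rather than modifying 'movies' in place.
# the original (unsorted) order is still intact
movies.title.head()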
# sort the entire DataFrame by the 'title' Series (returns a DataFrame)
movies.sort_values('title').head()
# sort in descending order instead
movies.sort_values('title', ascending=False).head()
Explanation: Documentation for sort_values for a Series. (Prior to version 0.17, use order instead.)
End of explanation
# sort the DataFrame first by 'content_rating', then by 'duration'
movies.sort_values(['content_rating', 'duration']).head()
Explanation: Documentation for sort_values for a DataFrame. (Prior to version 0.17, use sort instead.)
End of explanation
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('data/imdb_1000.csv')
movies.head()
# examine the number of rows and columns
movies.shape
Explanation: Summary of changes to the sorting API in pandas 0.17
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
8. How do I filter rows of a pandas DataFrame by column value?
End of explanation
# create a list in which each element refers to a DataFrame row: True if the row satisfies the condition, False otherwise
booleans = []
for length in movies.duration:
if length >= 200:
booleans.append(True)
else:
booleans.append(False)
# confirm that the list has the same length as the DataFrame
len(booleans)
# examine the first five list elements
booleans[0:5]
# convert the list to a Series
is_long = pd.Series(booleans)
is_long.head()
# use bracket notation with the boolean Series to tell the DataFrame which rows to display
movies[is_long]
# simplify the steps above: no need to write a for loop to create 'is_long' since pandas will broadcast the comparison
is_long = movies.duration >= 200
movies[is_long]
# or equivalently, write it in one line (no need to create the 'is_long' object)
movies[movies.duration >= 200]
# select the 'genre' Series from the filtered DataFrame
is_long = movies.duration >= 200
movies[is_long].genre
# or equivalently, use the 'loc' method
movies.loc[movies.duration >= 200, 'genre']
Explanation: Goal: Filter the DataFrame rows to only show movies with a 'duration' of at least 200 minutes.
End of explanation
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('data/imdb_1000.csv')
movies.head()
# filter the DataFrame to only show movies with a 'duration' of at least 200 minutes
movies[movies.duration >= 200]
Explanation: Documentation for loc
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
9. How do I apply multiple filter criteria to a pandas DataFrame?
End of explanation
# demonstration of the 'and' operator
print(True and True)
print(True and False)
print(False and False)
# demonstration of the 'or' operator
print(True or True)
print(True or False)
print(False or False)
Explanation: Understanding logical operators:
and: True only if both sides of the operator are True
or: True if either side of the operator is True
End of explanation
# CORRECT: use the '&' operator to specify that both conditions are required
movies[(movies.duration >=200) & (movies.genre == 'Drama')]
# INCORRECT: using the '|' operator would have shown movies that are either long or dramas (or both)
movies[(movies.duration >=200) | (movies.genre == 'Drama')].head()
Explanation: Rules for specifying multiple filter criteria in pandas:
use & instead of and
use | instead of or
add parentheses around each condition to specify evaluation order
Goal: Further filter the DataFrame of long movies (duration >= 200) to only show movies which also have a 'genre' of 'Drama'
End of explanation
# use the '|' operator to specify that a row can match any of the three criteria
movies[(movies.genre == 'Crime') | (movies.genre == 'Drama') | (movies.genre == 'Action')].tail(20)
# or equivalently, use the 'isin' method
movies[movies.genre.isin(['Crime', 'Drama', 'Action'])].tail(20)
Explanation: Goal: Filter the original DataFrame to show movies with a 'genre' of 'Crime' or 'Drama' or 'Action'
End of explanation
# read a dataset of UFO reports into a DataFrame, and check the columns
ufo = pd.read_csv('data/ufo.csv')
ufo.columns
# specify which columns to include by name
ufo = pd.read_csv('data/ufo.csv', usecols=['City', 'State'])
# or equivalently, specify columns by position
ufo = pd.read_csv('data/ufo.csv', usecols=[0, 4])
ufo.columns
Explanation: Documentation for isin
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
10. Your pandas questions answered!
Question: When reading from a file, how do I read in only a subset of the columns?
End of explanation
# specify how many rows to read
ufo = pd.read_csv('data/ufo.csv', nrows=3)
ufo
Explanation: Question: When reading from a file, how do I read in only a subset of the rows?
End of explanation
# Series are directly iterable (like a list)
for c in ufo.City:
print(c)
Explanation: Documentation for read_csv
Question: How do I iterate through a Series?
End of explanation
# various methods are available to iterate through a DataFrame
for index, row in ufo.iterrows():
print(index, row.City, row.State)
Explanation: Question: How do I iterate through a DataFrame?
End of explanation
# read a dataset of alcohol consumption into a DataFrame, and check the data types
drinks = pd.read_csv('data/drinks.csv')
drinks.dtypes
# only include numeric columns in the DataFrame
import numpy as np
drinks.select_dtypes(include=[np.number]).dtypes
Explanation: Documentation for iterrows
Question: How do I drop all non-numeric columns from a DataFrame?
End of explanation
# describe all of the numeric columns
drinks.describe()
# pass the string 'all' to describe all columns
drinks.describe(include='all')
# pass a list of data types to only describe certain types
drinks.describe(include=['object', 'float64'])
# pass a list even if you only want to describe a single data type
drinks.describe(include=['object'])
Explanation: Documentation for select_dtypes
Question: How do I know whether I should pass an argument as a string or a list?
End of explanation
# read a dataset of alcohol consumption into a DataFrame
drinks = pd.read_csv('data/drinks.csv')
drinks.head()
# drop a column (temporarily)
drinks.drop('continent', axis=1).head()
Explanation: Documentation for describe
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
11. How do I use the "axis" parameter in pandas?
End of explanation
# drop a row (temporarily)
drinks.drop(2, axis=0).head()
Explanation: Documentation for drop
End of explanation
# calculate the mean of each numeric column
drinks.mean()
# or equivalently, specify the axis explicitly
drinks.mean(axis=0)
Explanation: When referring to rows or columns with the axis parameter:
axis 0 refers to rows
axis 1 refers to columns
End of explanation
# calculate the mean of each row
drinks.mean(axis=1).head()
Explanation: Documentation for mean
End of explanation
# 'index' is an alias for axis 0
drinks.mean(axis='index')
# 'columns' is an alias for axis 1
drinks.mean(axis='columns').head()
Explanation: When performing a mathematical operation with the axis parameter:
axis 0 means the operation should "move down" the row axis
axis 1 means the operation should "move across" the column axis
End of explanation
# read a dataset of Chipotle orders into a DataFrame
orders = pd.read_table('data/chipotle.tsv')
orders.head()
# normal way to access string methods in Python
'hello'.upper()
# string methods for pandas Series are accessed via 'str'
orders.item_name.str.upper().head()
# string method 'contains' checks for a substring and returns a boolean Series
orders.item_name.str.contains('Chicken').head()
# use the boolean Series to filter the DataFrame
orders[orders.item_name.str.contains('Chicken')].head()
# string methods can be chained together
orders.choice_description.str.replace('[', '').str.replace(']', '').head()
# many pandas string methods support regular expressions (regex)
orders.choice_description.str.replace('[\[\]]', '').head()
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
12. How do I use string methods in pandas?
End of explanation
# read a dataset of alcohol consumption into a DataFrame
drinks = pd.read_csv('data/drinks.csv')
drinks.head()
# examine the data type of each Series
drinks.dtypes
# change the data type of an existing Series
drinks['beer_servings'] = drinks.beer_servings.astype(float)
drinks.dtypes
Explanation: String handling section of the pandas API reference
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
13. How do I change the data type of a pandas Series?
End of explanation
# alternatively, change the data type of a Series while reading in a file
drinks = pd.read_csv('data/drinks.csv', dtype={'beer_servings':float})
drinks.dtypes
# read a dataset of Chipotle orders into a DataFrame
orders = pd.read_table('data/chipotle.tsv')
orders.head()
# examine the data type of each Series
orders.dtypes
# convert a string to a number in order to do math
orders.item_price.str.replace('$', '').astype(float).mean()
# string method 'contains' checks for a substring and returns a boolean Series
orders.item_name.str.contains('Chicken').head()
# convert a boolean Series to an integer (False = 0, True = 1)
orders.item_name.str.contains('Chicken').astype(int).head()
Explanation: Documentation for astype
End of explanation
# read a dataset of alcohol consumption into a DataFrame
drinks = pd.read_csv('data/drinks.csv')
drinks.head()
# calculate the mean beer servings across the entire dataset
drinks.beer_servings.mean()
# calculate the mean beer servings just for countries in Africa
drinks[drinks.continent=='Africa'].beer_servings.mean()
# calculate the mean beer servings for each continent
drinks.groupby('continent').beer_servings.mean()
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
14. When should I use a "groupby" in pandas?
End of explanation
# other aggregation functions (such as 'max') can also be used with groupby
drinks.groupby('continent').beer_servings.max()
# multiple aggregation functions can be applied simultaneously
drinks.groupby('continent').beer_servings.agg(['count', 'mean', 'min', 'max'])
Explanation: Documentation for groupby
End of explanation
# specifying a column to which the aggregation function should be applied is not required
drinks.groupby('continent').mean()
# allow plots to appear in the notebook
%matplotlib inline
# side-by-side bar plot of the DataFrame directly above
drinks.groupby('continent').mean().plot(kind='bar')
Explanation: Documentation for agg
End of explanation
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('data/imdb_1000.csv')
movies.head()
# examine the data type of each Series
movies.dtypes
Explanation: Documentation for plot
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
15. How do I explore a pandas Series?
End of explanation
# count the non-null values, unique values, and frequency of the most common value
movies.genre.describe()
Explanation: Exploring a non-numeric Series:
End of explanation
# count how many times each value in the Series occurs
movies.genre.value_counts()
Explanation: Documentation for describe
End of explanation
# display percentages instead of raw counts
movies.genre.value_counts(normalize=True)
# 'value_counts' (like many pandas methods) outputs a Series
type(movies.genre.value_counts())
# thus, you can add another Series method on the end
movies.genre.value_counts().head()
# display the unique values in the Series
movies.genre.unique()
# count the number of unique values in the Series
movies.genre.nunique()
Explanation: Documentation for value_counts
End of explanation
# compute a cross-tabulation of two Series
pd.crosstab(movies.genre, movies.content_rating)
Explanation: Documentation for unique and nunique
End of explanation
# calculate various summary statistics
movies.duration.describe()
# many statistics are implemented as Series methods
movies.duration.mean()
Explanation: Documentation for crosstab
Exploring a numeric Series:
End of explanation
# 'value_counts' is primarily useful for categorical data, not numerical data
movies.duration.value_counts().head()
# allow plots to appear in the notebook
%matplotlib inline
# histogram of the 'duration' Series (shows the distribution of a numerical variable)
movies.duration.plot(kind='hist')
# bar plot of the 'value_counts' for the 'genre' Series
movies.genre.value_counts().plot(kind='bar')
Explanation: Documentation for mean
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('data/ufo.csv')
ufo.tail()
Explanation: Documentation for plot
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
16. How do I handle missing values in pandas?
End of explanation
# 'isnull' returns a DataFrame of booleans (True if missing, False if not missing)
ufo.isnull().tail()
# 'notnull' returns the opposite of 'isnull' (True if not missing, False if missing)
ufo.notnull().tail()
Explanation: What does "NaN" mean?
"NaN" is not a string, rather it's a special value: numpy.nan.
It stands for "Not a Number" and indicates a missing value.
read_csv detects missing values (by default) when reading the file, and replaces them with this special value.
Documentation for read_csv
End of explanation
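A small added illustration (a sketch, not from the original video): 'NaN' is the special float numpy.nan, which is why pandas provides isnull/notnull instead of relying on equality comparisons.
# NaN is a float value, not the string 'NaN'
type(np.nan)
# NaN is not even equal to itself, so use isnull/notnull rather than ==
np.nan == np.nan
pd.isnull(np.nan)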
# count the number of missing values in each Series
ufo.isnull().sum()
Explanation: Documentation for isnull and notnull
End of explanation
# use the 'isnull' Series method to filter the DataFrame rows
ufo[ufo.City.isnull()].head()
Explanation: This calculation works because:
The sum method for a DataFrame operates on axis=0 by default (and thus produces column sums).
In order to add boolean values, pandas converts True to 1 and False to 0.
End of explanation
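A tiny added example of that conversion (not from the original notebook): summing a boolean Series simply counts its True values.
# True counts as 1 and False as 0, so the sum is the number of True values
pd.Series([True, False, True, True]).sum()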
# examine the number of rows and columns
ufo.shape
# if 'any' values are missing in a row, then drop that row
ufo.dropna(how='any').shape
Explanation: How to handle missing values depends on the dataset as well as the nature of your analysis. Here are some options:
End of explanation
# 'inplace' parameter for 'dropna' is False by default, thus rows were only dropped temporarily
ufo.shape
# if 'all' values are missing in a row, then drop that row (none are dropped in this case)
ufo.dropna(how='all').shape
# if 'any' values are missing in a row (considering only 'City' and 'Shape Reported'), then drop that row
ufo.dropna(subset=['City', 'Shape Reported'], how='any').shape
# if 'all' values are missing in a row (considering only 'City' and 'Shape Reported'), then drop that row
ufo.dropna(subset=['City', 'Shape Reported'], how='all').shape
# 'value_counts' does not include missing values by default
ufo['Shape Reported'].value_counts().head()
# explicitly include missing values
ufo['Shape Reported'].value_counts(dropna=False).head()
Explanation: Documentation for dropna
End of explanation
# fill in missing values with a specified value
ufo['Shape Reported'].fillna(value='VARIOUS', inplace=True)
Explanation: Documentation for value_counts
End of explanation
# confirm that the missing values were filled in
ufo['Shape Reported'].value_counts().head()
Explanation: Documentation for fillna
End of explanation
# read a dataset of alcohol consumption into a DataFrame
drinks = pd.read_csv('data/drinks.csv')
drinks.head()
# every DataFrame has an index (sometimes called the "row labels")
drinks.index
# column names are also stored in a special "index" object
drinks.columns
# neither the index nor the columns are included in the shape
drinks.shape
# index and columns both default to integers if you don't define them
pd.read_table('data/u.user', header=None, sep='|').head()
Explanation: Working with missing data in pandas
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
17. What do I need to know about the pandas index?
End of explanation
# identification: index remains with each row when filtering the DataFrame
drinks[drinks.continent=='South America']
# selection: select a portion of the DataFrame using the index
drinks.loc[23, 'beer_servings']
Explanation: What is the index used for?
identification
selection
alignment (covered in the next video)
End of explanation
# set an existing column as the index
drinks.set_index('country', inplace=True)
drinks.head()
Explanation: Documentation for loc
End of explanation
# 'country' is now the index
drinks.index
# 'country' is no longer a column
drinks.columns
# 'country' data is no longer part of the DataFrame contents
drinks.shape
# country name can now be used for selection
drinks.loc['Brazil', 'beer_servings']
# index name is optional
drinks.index.name = None
drinks.head()
# restore the index name, and move the index back to a column
drinks.index.name = 'country'
drinks.reset_index(inplace=True)
drinks.head()
Explanation: Documentation for set_index
End of explanation
# many DataFrame methods output a DataFrame
drinks.describe()
# you can interact with any DataFrame using its index and columns
drinks.describe().loc['25%', 'beer_servings']
Explanation: Documentation for reset_index
End of explanation
# read a dataset of alcohol consumption into a DataFrame
drinks = pd.read_csv('data/drinks.csv')
drinks.head()
# every DataFrame has an index
drinks.index
# every Series also has an index (which carries over from the DataFrame)
drinks.continent.head()
# set 'country' as the index
drinks.set_index('country', inplace=True)
Explanation: Indexing and selecting data
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
18. What do I need to know about the pandas index?
End of explanation
# Series index is on the left, values are on the right
drinks.continent.head()
# another example of a Series (output from the 'value_counts' method)
drinks.continent.value_counts()
Explanation: Documentation for set_index
End of explanation
# access the Series index
drinks.continent.value_counts().index
# access the Series values
drinks.continent.value_counts().values
# elements in a Series can be selected by index (using bracket notation)
drinks.continent.value_counts()['Africa']
# any Series can be sorted by its values
drinks.continent.value_counts().sort_values()
# any Series can also be sorted by its index
drinks.continent.value_counts().sort_index()
Explanation: Documentation for value_counts
End of explanation
# 'beer_servings' Series contains the average annual beer servings per person
drinks.beer_servings.head()
# create a Series containing the population of two countries
people = pd.Series([3000000, 85000], index=['Albania', 'Andorra'], name='population')
people
Explanation: Documentation for sort_values and sort_index
What is the index used for?
identification (covered in the previous video)
selection (covered in the previous video)
alignment
End of explanation
# calculate the total annual beer servings for each country
(drinks.beer_servings * people).head()
Explanation: Documentation for Series
End of explanation
# concatenate the 'drinks' DataFrame with the 'population' Series (aligns by the index)
pd.concat([drinks, people], axis=1).head()
Explanation: The two Series were aligned by their indexes.
If a value is missing in either Series, the result is marked as NaN.
Alignment enables us to easily work with incomplete data.
End of explanation
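A minimal added example of alignment (a sketch with made-up labels, not from the original data): only the shared label is combined, and the unmatched labels become NaN.
# 'a' and 'c' each appear in only one Series, so their results are NaN
s1 = pd.Series([1, 2], index=['a', 'b'])
s2 = pd.Series([10, 20], index=['b', 'c'])
s1 + s2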
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('data/ufo.csv')
ufo.head(3)
Explanation: Documentation for concat
Indexing and selecting data
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
19. How do I select multiple rows and columns from a pandas DataFrame?
End of explanation
# row 0, all columns
ufo.loc[0, :]
# rows 0 and 1 and 2, all columns
ufo.loc[[0, 1, 2], :]
# rows 0 through 2 (inclusive), all columns
ufo.loc[0:2, :]
# this implies "all columns", but explicitly stating "all columns" is better
ufo.loc[0:2]
# rows 0 through 2 (inclusive), column 'City'
ufo.loc[0:2, 'City']
# rows 0 through 2 (inclusive), columns 'City' and 'State'
ufo.loc[0:2, ['City', 'State']]
# accomplish the same thing using double brackets - but using 'loc' is preferred since it's more explicit
ufo[['City', 'State']].head(3)
# rows 0 through 2 (inclusive), columns 'City' through 'State' (inclusive)
ufo.loc[0:2, 'City':'State']
# accomplish the same thing using 'head' and 'drop'
ufo.head(3).drop('Time', axis=1)
# rows in which the 'City' is 'Oakland', column 'State'
ufo.loc[ufo.City=='Oakland', 'State']
# accomplish the same thing using "chained indexing" - but using 'loc' is preferred since chained indexing can cause problems
ufo[ufo.City=='Oakland'].State
Explanation: The loc method is used to select rows and columns by label. You can pass it:
A single label
A list of labels
A slice of labels
A boolean Series
A colon (which indicates "all labels")
End of explanation
# rows in positions 0 and 1, columns in positions 0 and 3
ufo.iloc[[0, 1], [0, 3]]
# rows in positions 0 through 2 (exclusive), columns in positions 0 through 4 (exclusive)
ufo.iloc[0:2, 0:4]
# rows in positions 0 through 2 (exclusive), all columns
ufo.iloc[0:2, :]
# accomplish the same thing - but using 'iloc' is preferred since it's more explicit
ufo[0:2]
Explanation: The iloc method is used to select rows and columns by integer position. You can pass it:
A single integer position
A list of integer positions
A slice of integer positions
A colon (which indicates "all integer positions")
End of explanation
# read a dataset of alcohol consumption into a DataFrame and set 'country' as the index
drinks = pd.read_csv('data/drinks.csv', index_col='country')
drinks.head()
# row with label 'Albania', column in position 0
drinks.ix['Albania', 0]
# row in position 1, column with label 'beer_servings'
drinks.ix[1, 'beer_servings']
Explanation: The ix method is used to select rows and columns by label or integer position, and should only be used when you need to mix label-based and integer-based selection in the same call.
End of explanation
# rows 'Albania' through 'Andorra' (inclusive), columns in positions 0 through 2 (exclusive)
drinks.ix['Albania':'Andorra', 0:2]
# rows 0 through 2 (inclusive), columns in positions 0 through 2 (exclusive)
ufo.ix[0:2, 0:2]
Explanation: Rules for using numbers with ix:
If the index is strings, numbers are treated as integer positions, and thus slices are exclusive on the right.
If the index is integers, numbers are treated as labels, and thus slices are inclusive.
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('data/ufo.csv')
ufo.head()
ufo.shape
# remove the 'City' column (doesn't affect the DataFrame since inplace=False)
ufo.drop('City', axis=1).head()
# confirm that the 'City' column was not actually removed
ufo.head()
# remove the 'City' column (does affect the DataFrame since inplace=True)
ufo.drop('City', axis=1, inplace=True)
# confirm that the 'City' column was actually removed
ufo.head()
# drop a row if any value is missing from that row (doesn't affect the DataFrame since inplace=False)
ufo.dropna(how='any').shape
# confirm that no rows were actually removed
ufo.shape
# use an assignment statement instead of the 'inplace' parameter
ufo = ufo.set_index('Time')
ufo.tail()
# fill missing values using "backward fill" strategy (doesn't affect the DataFrame since inplace=False)
ufo.fillna(method='bfill').tail()
# compare with "forward fill" strategy (doesn't affect the DataFrame since inplace=False)
ufo.fillna(method='ffill').tail()
Explanation: Summary of the pandas API for selection
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
20. When should I use the "inplace" parameter in pandas?
End of explanation
# read a dataset of alcohol consumption into a DataFrame
drinks = pd.read_csv('data/drinks.csv')
drinks.head()
# exact memory usage is unknown because object columns are references elsewhere
drinks.info()
# force pandas to calculate the true memory usage
drinks.info(memory_usage='deep')
# calculate the memory usage for each Series (in bytes)
drinks.memory_usage(deep=True)
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
21. How do I make my pandas DataFrame smaller and faster?
End of explanation
# use the 'category' data type (new in pandas 0.15) to store the 'continent' strings as integers
drinks['continent'] = drinks.continent.astype('category')
drinks.dtypes
# 'continent' Series appears to be unchanged
drinks.continent.head()
# strings are now encoded (0 means 'Africa', 1 means 'Asia', 2 means 'Europe', etc.)
drinks.continent.cat.codes.head()
# memory usage has been drastically reduced
drinks.memory_usage(deep=True)
# repeat this process for the 'country' Series
drinks['country'] = drinks.country.astype('category')
drinks.memory_usage(deep=True)
# memory usage increased because we created 193 categories
drinks.country.cat.categories
Explanation: Documentation for info and memory_usage
End of explanation
# create a small DataFrame from a dictionary
df = pd.DataFrame({'ID':[100, 101, 102, 103], 'quality':['good', 'very good', 'good', 'excellent']})
df
# sort the DataFrame by the 'quality' Series (alphabetical order)
df.sort_values('quality')
# define a logical ordering for the categories
df['quality'] = df.quality.astype('category', categories=['good', 'very good', 'excellent'], ordered=True)
df.quality
# sort the DataFrame by the 'quality' Series (logical order)
df.sort_values('quality')
# comparison operators work with ordered categories
df.loc[df.quality > 'good', :]
Explanation: The category data type should only be used with a string Series that has a small number of possible values.
End of explanation
# read the training dataset from Kaggle's Titanic competition into a DataFrame
train = pd.read_csv('http://bit.ly/kaggletrain')
train.head()
Explanation: Overview of categorical data in pandas
API reference for categorical methods
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
22. How do I use pandas with scikit-learn to create Kaggle submissions?
End of explanation
# create a feature matrix 'X' by selecting two DataFrame columns
feature_cols = ['Pclass', 'Parch']
X = train.loc[:, feature_cols]
X.shape
# create a response vector 'y' by selecting a Series
y = train.Survived
y.shape
Explanation: Goal: Predict passenger survival aboard the Titanic based on passenger attributes
Video: What is machine learning, and how does it work?
End of explanation
# fit a classification model to the training data
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression()
logreg.fit(X, y)
Explanation: Note: There is no need to convert these pandas objects to NumPy arrays. scikit-learn will understand these objects as long as they are entirely numeric and the proper shapes.
End of explanation
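As a quick added sanity check (a sketch, not part of the original workflow), the fitted model can consume the pandas objects directly, for example to report accuracy on the training data (not a proper evaluation).
# mean accuracy on the training set, computed directly from the DataFrame and Series
logreg.score(X, y)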
# read the testing dataset from Kaggle's Titanic competition into a DataFrame
test = pd.read_csv('http://bit.ly/kaggletest')
test.head()
# create a feature matrix from the testing data that matches the training data
X_new = test.loc[:, feature_cols]
X_new.shape
# use the fitted model to make predictions for the testing set observations
new_pred_class = logreg.predict(X_new)
# create a DataFrame of passenger IDs and testing set predictions
pd.DataFrame({'PassengerId':test.PassengerId, 'Survived':new_pred_class}).head()
Explanation: Video series: Introduction to machine learning with scikit-learn
End of explanation
# ensure that PassengerID is the first column by setting it as the index
pd.DataFrame({'PassengerId':test.PassengerId, 'Survived':new_pred_class}).set_index('PassengerId').head()
# write the DataFrame to a CSV file that can be submitted to Kaggle
pd.DataFrame({'PassengerId':test.PassengerId, 'Survived':new_pred_class}).set_index('PassengerId').to_csv('sub.csv')
Explanation: Documentation for the DataFrame constructor
End of explanation
# save a DataFrame to disk ("pickle it")
train.to_pickle('train.pkl')
# read a pickled object from disk ("unpickle it")
pd.read_pickle('train.pkl').head()
Explanation: Documentation for to_csv
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
# use 'isnull' as a top-level function
pd.isnull(ufo).head()
# equivalent: use 'isnull' as a DataFrame method
ufo.isnull().head()
Explanation: Documentation for to_pickle and read_pickle
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
23. More of your pandas questions answered!
Question: Could you explain how to read the pandas documentation?
pandas API reference
Question: What is the difference between ufo.isnull() and pd.isnull(ufo)?
End of explanation
# label-based slicing is inclusive of the start and stop
ufo.loc[0:4, :]
# position-based slicing is inclusive of the start and exclusive of the stop
ufo.iloc[0:4, :]
Explanation: Documentation for isnull
Question: Why are DataFrame slices inclusive when using .loc, but exclusive when using .iloc?
End of explanation
# 'iloc' is simply following NumPy's slicing convention...
ufo.values[0:4, :]
# ...and NumPy is simply following Python's slicing convention
'python'[0:4]
# 'loc' is inclusive of the stopping label because you don't necessarily know what label will come after it
ufo.loc[0:4, 'City':'State']
Explanation: Documentation for loc and iloc
End of explanation
# sample 3 rows from the DataFrame without replacement (new in pandas 0.16.1)
ufo.sample(n=3)
Explanation: Question: How do I randomly sample rows from a DataFrame?
End of explanation
# use the 'random_state' parameter for reproducibility
ufo.sample(n=3, random_state=42)
# sample 75% of the DataFrame's rows without replacement
train = ufo.sample(frac=0.75, random_state=99)
# store the remaining 25% of the rows in another DataFrame
test = ufo.loc[~ufo.index.isin(train.index), :]
Explanation: Documentation for sample
End of explanation
# read the training dataset from Kaggle's Titanic competition
train = pd.read_csv('http://bit.ly/kaggletrain')
train.head()
# create the 'Sex_male' dummy variable using the 'map' method
train['Sex_male'] = train.Sex.map({'female':0, 'male':1})
train.head()
Explanation: Documentation for isin
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
24. How do I create dummy variables in pandas?
End of explanation
# alternative: use 'get_dummies' to create one column for every possible value
pd.get_dummies(train.Sex).head()
Explanation: Documentation for map
End of explanation
# drop the first dummy variable ('female') using the 'iloc' method
pd.get_dummies(train.Sex).iloc[:, 1:].head()
# add a prefix to identify the source of the dummy variables
pd.get_dummies(train.Sex, prefix='Sex').iloc[:, 1:].head()
# use 'get_dummies' with a feature that has 3 possible values
pd.get_dummies(train.Embarked, prefix='Embarked').head(10)
# drop the first dummy variable ('C')
pd.get_dummies(train.Embarked, prefix='Embarked').iloc[:, 1:].head(10)
Explanation: Generally speaking:
If you have "K" possible values for a categorical feature, you only need "K-1" dummy variables to capture all of the information about that feature.
One convention is to drop the first dummy variable, which defines that level as the "baseline".
End of explanation
# save the DataFrame of dummy variables and concatenate them to the original DataFrame
embarked_dummies = pd.get_dummies(train.Embarked, prefix='Embarked').iloc[:, 1:]
train = pd.concat([train, embarked_dummies], axis=1)
train.head()
Explanation: How to translate these values back to the original 'Embarked' value:
0, 0 means C
1, 0 means Q
0, 1 means S
End of explanation
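A small added check (not in the original notebook) that the mapping above holds: line the dummy columns up next to the original 'Embarked' values.
# side-by-side view of the original values and their dummy encoding
pd.concat([train.Embarked, pd.get_dummies(train.Embarked, prefix='Embarked').iloc[:, 1:]], axis=1).head(10)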
# reset the DataFrame
train = pd.read_csv('http://bit.ly/kaggletrain')
train.head()
# pass the DataFrame to 'get_dummies' and specify which columns to dummy (it drops the original columns)
pd.get_dummies(train, columns=['Sex', 'Embarked']).head()
# use the 'drop_first' parameter (new in pandas 0.18) to drop the first dummy variable for each feature
pd.get_dummies(train, columns=['Sex', 'Embarked'], drop_first=True).head()
Explanation: Documentation for concat
End of explanation
# read a dataset of UFO reports into a DataFrame
ufo = pd.read_csv('http://bit.ly/uforeports')
ufo.head()
# 'Time' is currently stored as a string
ufo.dtypes
# hour could be accessed using string slicing, but this approach breaks too easily
ufo.Time.str.slice(-5, -3).astype(int).head()
# convert 'Time' to datetime format
ufo['Time'] = pd.to_datetime(ufo.Time)
ufo.head()
ufo.dtypes
Explanation: Documentation for get_dummies
[<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
25. How do I work with dates and times in pandas?
End of explanation
# convenient Series attributes are now available
ufo.Time.dt.hour.head()
ufo.Time.dt.weekday_name.head()
ufo.Time.dt.dayofyear.head()
Explanation: Documentation for to_datetime
End of explanation
# convert a single string to datetime format (outputs a timestamp object)
ts = pd.to_datetime('1/1/1999')
ts
# compare a datetime Series with a timestamp
ufo.loc[ufo.Time >= ts, :].head()
# perform mathematical operations with timestamps (outputs a timedelta object)
ufo.Time.max() - ufo.Time.min()
# timedelta objects also have attributes you can access
(ufo.Time.max() - ufo.Time.min()).days
# allow plots to appear in the notebook
%matplotlib inline
# count the number of UFO reports per year
ufo['Year'] = ufo.Time.dt.year
ufo.Year.value_counts().sort_index().head()
# plot the number of UFO reports per year (line plot is the default)
ufo.Year.value_counts().sort_index().plot()
Explanation: API reference for datetime properties and methods
End of explanation
# read a dataset of movie reviewers into a DataFrame
user_cols = ['user_id', 'age', 'gender', 'occupation', 'zip_code']
users = pd.read_table('http://bit.ly/movieusers', sep='|', header=None, names=user_cols, index_col='user_id')
users.head()
users.shape
# detect duplicate zip codes: True if an item is identical to a previous item
users.zip_code.duplicated().tail()
# count the duplicate items (True becomes 1, False becomes 0)
users.zip_code.duplicated().sum()
# detect duplicate DataFrame rows: True if an entire row is identical to a previous row
users.duplicated().tail()
# count the duplicate rows
users.duplicated().sum()
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
26. How do I find and remove duplicate rows in pandas?
End of explanation
# examine the duplicate rows (ignoring the first occurrence)
users.loc[users.duplicated(keep='first'), :]
# examine the duplicate rows (ignoring the last occurrence)
users.loc[users.duplicated(keep='last'), :]
# examine the duplicate rows (including all duplicates)
users.loc[users.duplicated(keep=False), :]
# drop the duplicate rows (inplace=False by default)
users.drop_duplicates(keep='first').shape
users.drop_duplicates(keep='last').shape
users.drop_duplicates(keep=False).shape
Explanation: Logic for duplicated:
keep='first' (default): Mark duplicates as True except for the first occurrence.
keep='last': Mark duplicates as True except for the last occurrence.
keep=False: Mark all duplicates as True.
End of explanation
# only consider a subset of columns when identifying duplicates
users.duplicated(subset=['age', 'zip_code']).sum()
users.drop_duplicates(subset=['age', 'zip_code']).shape
Explanation: Documentation for drop_duplicates
End of explanation
# read a dataset of top-rated IMDb movies into a DataFrame
movies = pd.read_csv('http://bit.ly/imdbratings')
movies.head()
# count the missing values in the 'content_rating' Series
movies.content_rating.isnull().sum()
# examine the DataFrame rows that contain those missing values
movies[movies.content_rating.isnull()]
# examine the unique values in the 'content_rating' Series
movies.content_rating.value_counts()
Explanation: [<a href="#Python-pandas-Q&A-video-series-by-Data-School">Back to top</a>]
27. How do I avoid a SettingWithCopyWarning in pandas?
End of explanation
# first, locate the relevant rows
movies[movies.content_rating=='NOT RATED'].head()
# then, select the 'content_rating' Series from those rows
movies[movies.content_rating=='NOT RATED'].content_rating.head()
# finally, replace the 'NOT RATED' values with 'NaN' (imported from NumPy)
import numpy as np
movies[movies.content_rating=='NOT RATED'].content_rating = np.nan
Explanation: Goal: Mark the 'NOT RATED' values as missing values, represented by 'NaN'.
End of explanation
# the 'content_rating' Series has not changed
movies.content_rating.isnull().sum()
Explanation: Problem: That statement involves two operations, a __getitem__ and a __setitem__. pandas can't guarantee whether the __getitem__ operation returns a view or a copy of the data.
If __getitem__ returns a view of the data, __setitem__ will affect the 'movies' DataFrame.
But if __getitem__ returns a copy of the data, __setitem__ will not affect the 'movies' DataFrame.
End of explanation
# replace the 'NOT RATED' values with 'NaN' (does not cause a SettingWithCopyWarning)
movies.loc[movies.content_rating=='NOT RATED', 'content_rating'] = np.nan
# this time, the 'content_rating' Series has changed
movies.content_rating.isnull().sum()
Explanation: Solution: Use the loc method, which replaces the 'NOT RATED' values in a single __setitem__ operation.
End of explanation
# create a DataFrame only containing movies with a high 'star_rating'
top_movies = movies.loc[movies.star_rating >= 9, :]
top_movies
Explanation: Summary: Use the loc method any time you are selecting rows and columns in the same statement.
More information: Modern Pandas (Part 1)
End of explanation
# overwrite the relevant cell with the correct duration
top_movies.loc[0, 'duration'] = 150
Explanation: Goal: Fix the 'duration' for 'The Shawshank Redemption'.
End of explanation
# 'top_movies' DataFrame has been updated
top_movies
# 'movies' DataFrame has not been updated
movies.head(1)
Explanation: Problem: pandas isn't sure whether 'top_movies' is a view or a copy of 'movies'.
End of explanation
# explicitly create a copy of 'movies'
top_movies = movies.loc[movies.star_rating >= 9, :].copy()
# pandas now knows that you are updating a copy instead of a view (does not cause a SettingWithCopyWarning)
top_movies.loc[0, 'duration'] = 150
# 'top_movies' DataFrame has been updated
top_movies
Explanation: Solution: Any time you are attempting to create a DataFrame copy, use the copy method.
End of explanation |
2,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two implementations of heterodyne detection
Step1: Introduction
Homodyne and heterodyne detection are techniques for measuring the quadratures of a field using photocounters. Homodyne detection (on-resonant) measures one quadrature, and with heterodyne detection (off-resonant) both quadratures can be detected simultaneously.
The evolution of a quantum system that is coupled to a field that is monitored with homodyne and heterodyne detectors can be described with stochastic master equations. This notebook compares two different ways to implement the heterodyne detection stochastic master equation in QuTiP.
Deterministic reference
Step2: Heterodyne implementation #1
Stochastic master equation for heterodyne in Milburn's formulation
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \gamma\mathcal{D}[a]\rho(t) dt + \frac{1}{\sqrt{2}} dW_1(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t) + \frac{1}{\sqrt{2}} dW_2(t) \sqrt{\gamma} \mathcal{H}[-ia] \rho(t)$
where $\mathcal{D}$ is the standard Lindblad dissipator superoperator, and $\mathcal{H}$ is defined as above,
and $dW_i(t)$ is a normally distributed Wiener increment with $E[dW_i(t)^2] = dt$.
In QuTiP format we have
Step3: $D_{2}^{(1)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[a] \rho =
\frac{1}{\sqrt{2}} \mathcal{H}[A] \rho =
\frac{1}{\sqrt{2}}(A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho)
\rightarrow \frac{1}{\sqrt{2}} \left\{(A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v\right\}$
$D_{2}^{(2)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[-ia] \rho
= \frac{1}{\sqrt{2}} \mathcal{H}[-iA] \rho =
\frac{-i}{\sqrt{2}}(A\rho - \rho A^\dagger - \mathrm{Tr}[A\rho - \rho A^\dagger] \rho)
\rightarrow \frac{-i}{\sqrt{2}} \left\{(A_L - A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L - A_R^\dagger)\rho_v] \rho_v\right\}$
Step4: The heterodyne currents for the $x$ and $y$ quadratures are
$J_x(t) = \sqrt{\gamma}\left<x\right> + \sqrt{2} \xi(t)$
$J_y(t) = \sqrt{\gamma}\left<y\right> + \sqrt{2} \xi(t)$
where $\xi(t) = \frac{dW}{dt}$.
In qutip we define these measurement operators using the m_ops = [[x, y]] and the coefficients to the noise terms dW_factor = [sqrt(2/gamma), sqrt(2/gamma)].
Step5: Heterodyne implementation #2
Step6: Implementation #3
Step7: Common problem
For some systems, the resulting density matrix can become unphysical due to the accumulation of computation error.
Step8: Using smaller integration steps by increasing the nsubstep will lower the numerical errors.
The solver algorithm used affects the convergence and numerical error.
Notable solvers are
Step9: Versions | Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
from qutip import *
Explanation: Two implementations of heterodyne detection: direct heterodyne and as two homodyne measurements
Copyright (C) 2011 and later, Paul D. Nation & Robert J. Johansson
End of explanation
N = 15
w0 = 1.0 * 2 * np.pi
A = 0.1 * 2 * np.pi
times = np.linspace(0, 15, 301)
gamma = 0.25
ntraj = 150
nsubsteps = 50
a = destroy(N)
x = a + a.dag()
y = -1.0j*(a - a.dag())
H = w0 * a.dag() * a + A * (a + a.dag())
rho0 = coherent(N, np.sqrt(5.0), method='analytic')
c_ops = [np.sqrt(gamma) * a]
e_ops = [a.dag() * a, x, y]
result_ref = mesolve(H, rho0, times, c_ops, e_ops)
plot_expectation_values(result_ref);
Explanation: Introduction
Homodyne and heterodyne detection are techniques for measuring the quadratures of a field using photocounters. Homodyne detection (on-resonant) measures one quadrature, and with heterodyne detection (off-resonant) both quadratures can be detected simultaneously.
The evolution of a quantum system that is coupled to a field that is monitored with homodyne and heterodyne detectors can be described with stochastic master equations. This notebook compares two different ways to implement the heterodyne detection stochastic master equation in QuTiP.
Deterministic reference
End of explanation
from qutip.expect import expect_rho_vec
L = liouvillian(H)
D = lindblad_dissipator(c_ops[0])
d1_operator = L + D
def d1_rho_func(t, rho_vec):
return d1_operator * rho_vec
Explanation: Heterodyne implementation #1
Stochastic master equation for heterodyne in Milburn's formulation
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \gamma\mathcal{D}[a]\rho(t) dt + \frac{1}{\sqrt{2}} dW_1(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t) + \frac{1}{\sqrt{2}} dW_2(t) \sqrt{\gamma} \mathcal{H}[-ia] \rho(t)$
where $\mathcal{D}$ is the standard Lindblad dissipator superoperator, and $\mathcal{H}$ is defined as above,
and $dW_i(t)$ is a normally distributed Wiener increment with $E[dW_i(t)^2] = dt$.
In QuTiP format we have:
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + D_{1}[A]\rho(t) dt + D_{2}^{(1)}[A]\rho(t) dW_1 + D_{2}^{(2)}[A]\rho(t) dW_2$
where $A = \sqrt{\gamma} a$, so we can identify
$\displaystyle D_{1}[A]\rho = \gamma \mathcal{D}[a]\rho = \mathcal{D}[A]\rho$
End of explanation
B1 = spre(c_ops[0]) + spost(c_ops[0].dag())
# note the minus sign: B2 implements H[-iA], i.e. (A_L - A_R^dagger), as derived above
B2 = spre(c_ops[0]) - spost(c_ops[0].dag())
def d2_rho_func(t, rho_vec):
    e1 = expect_rho_vec(B1.data, rho_vec, False)
    drho1 = B1 * rho_vec - e1 * rho_vec
    e2 = expect_rho_vec(B2.data, rho_vec, False)
    drho2 = B2 * rho_vec - e2 * rho_vec
    return np.vstack([1.0/np.sqrt(2) * drho1, -1.0j/np.sqrt(2) * drho2])
Explanation: $D_{2}^{(1)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[a] \rho =
\frac{1}{\sqrt{2}} \mathcal{H}[A] \rho =
\frac{1}{\sqrt{2}}(A\rho + \rho A^\dagger - \mathrm{Tr}[A\rho + \rho A^\dagger] \rho)
\rightarrow \frac{1}{\sqrt{2}} \left\{(A_L + A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L + A_R^\dagger)\rho_v] \rho_v\right\}$
$D_{2}^{(2)}[A]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[-ia] \rho
= \frac{1}{\sqrt{2}} \mathcal{H}[-iA] \rho =
\frac{-i}{\sqrt{2}}(A\rho - \rho A^\dagger - \mathrm{Tr}[A\rho - \rho A^\dagger] \rho)
\rightarrow \frac{-i}{\sqrt{2}} \left\{(A_L - A_R^\dagger)\rho_v - \mathrm{Tr}[(A_L - A_R^\dagger)\rho_v] \rho_v\right\}$
End of explanation
result = general_stochastic(ket2dm(rho0), times, d1_rho_func, d2_rho_func,
e_ops=[spre(op) for op in e_ops],
len_d2=2, ntraj=ntraj, nsubsteps=nsubsteps*2, solver="platen",
dW_factors=[np.sqrt(2/gamma), np.sqrt(2/gamma)],
m_ops=[spre(x), spre(y)],
store_measurement=True, map_func=parallel_map)
plot_expectation_values([result, result_ref]);
fig, ax = plt.subplots(figsize=(8,4))
for m in result.measurement:
ax.plot(times, m[:, 0].real, 'b', alpha=0.05)
ax.plot(times, m[:, 1].real, 'r', alpha=0.05)
ax.plot(times, result_ref.expect[1], 'b', lw=2);
ax.plot(times, result_ref.expect[2], 'r', lw=2);
ax.set_ylim(-10, 10)
ax.set_xlim(0, times.max())
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'k', lw=2);
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,1].real, 'k', lw=2);
Explanation: The heterodyne currents for the $x$ and $y$ quadratures are
$J_x(t) = \sqrt{\gamma}\left<x\right> + \sqrt{2} \xi(t)$
$J_y(t) = \sqrt{\gamma}\left<y\right> + \sqrt{2} \xi(t)$
where $\xi(t) = \frac{dW}{dt}$.
In qutip we define these measurement operators using m_ops = [spre(x), spre(y)] and the coefficients of the noise terms dW_factors = [sqrt(2/gamma), sqrt(2/gamma)].
End of explanation
opt = Options()
opt.store_states = True
result = smesolve(H, rho0, times, [], [np.sqrt(gamma/2) * a, -1.0j * np.sqrt(gamma/2) * a],
e_ops, ntraj=100, nsubsteps=nsubsteps, solver="taylor15",
m_ops=[x, y], dW_factors=[np.sqrt(2/gamma), np.sqrt(2/gamma)],
method='homodyne', store_measurement=True,
map_func=parallel_map)
plot_expectation_values([result, result_ref])
fig, ax = plt.subplots(figsize=(8,4))
for m in result.measurement:
ax.plot(times, m[:, 0].real, 'b', alpha=0.05)
ax.plot(times, m[:, 1].real, 'r', alpha=0.05)
ax.plot(times, result_ref.expect[1], 'b', lw=2);
ax.plot(times, result_ref.expect[2], 'r', lw=2);
ax.set_xlim(0, times.max())
ax.set_ylim(-25, 25)
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,0].real, 'k', lw=2);
ax.plot(times, np.array(result.measurement).mean(axis=0)[:,1].real, 'k', lw=2);
Explanation: Heterodyne implementation #2: using two homodyne measurements
We can also write the heterodyne equation as
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \frac{1}{2}\gamma\mathcal{D}[a]\rho(t) dt + \frac{1}{\sqrt{2}} dW_1(t) \sqrt{\gamma} \mathcal{H}[a] \rho(t) + \frac{1}{2}\gamma\mathcal{D}[a]\rho(t) dt + \frac{1}{\sqrt{2}} dW_2(t) \sqrt{\gamma} \mathcal{H}[-ia] \rho(t)$
And using the QuTiP format for two stochastic collapse operators, we have:
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + D_{1}[A_1]\rho(t) dt + D_{2}[A_1]\rho(t) dW_1 + D_{1}[A_2]\rho(t) dt + D_{2}[A_2]\rho(t) dW_2$
so we can also identify
$\displaystyle D_{1}[A_1]\rho = \frac{1}{2}\gamma \mathcal{D}[a]\rho = \mathcal{D}[\sqrt{\gamma}a/\sqrt{2}]\rho = \mathcal{D}[A_1]\rho$
$\displaystyle D_{1}[A_2]\rho = \frac{1}{2}\gamma \mathcal{D}[a]\rho = \mathcal{D}[-i\sqrt{\gamma}a/\sqrt{2}]\rho = \mathcal{D}[A_2]\rho$
$D_{2}[A_1]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[a] \rho = \mathcal{H}[A_1] \rho$
$D_{2}[A_2]\rho = \frac{1}{\sqrt{2}} \sqrt{\gamma} \mathcal{H}[-ia] \rho = \mathcal{H}[A_2] \rho $
where $A_1 = \sqrt{\gamma} a / \sqrt{2}$ and $A_2 = -i \sqrt{\gamma} a / \sqrt{2}$.
In summary we have
$\displaystyle d\rho(t) = -i[H, \rho(t)]dt + \sum_i\left\{\mathcal{D}[A_i]\rho(t) dt + \mathcal{H}[A_i]\rho(t) dW_i\right\}$
which is a simultaneous homodyne detection with $A_1 = \sqrt{\gamma}a/\sqrt{2}$ and $A_2 = -i\sqrt{\gamma}a/\sqrt{2}$
Here the two heterodyne currents for the $x$ and $y$ quadratures are
$J_x(t) = \sqrt{\gamma/2}\left<x\right> + \xi(t)$
$J_y(t) = \sqrt{\gamma/2}\left<y\right> + \xi(t)$
where $\xi(t) = \frac{dW}{dt}$.
In qutip we can use the predefined homodyne solver for solving this problem.
End of explanation
result = smesolve(H, rho0, times, [], [np.sqrt(gamma) * a],
e_ops, ntraj=ntraj, nsubsteps=nsubsteps, solver="taylor15",
method='heterodyne', store_measurement=True,
map_func=parallel_map)
plot_expectation_values([result, result_ref]);
fig, ax = plt.subplots(figsize=(8,4))
for m in result.measurement:
ax.plot(times, m[:, 0, 0].real / np.sqrt(gamma), 'b', alpha=0.05)
ax.plot(times, m[:, 0, 1].real / np.sqrt(gamma), 'r', alpha=0.05)
ax.plot(times, result_ref.expect[1], 'b', lw=2);
ax.plot(times, result_ref.expect[2], 'r', lw=2);
ax.set_xlim(0, times.max())
ax.set_ylim(-15, 15)
ax.set_xlabel('time', fontsize=12)
ax.plot(times, np.array(result.measurement).mean(axis=0)[:, 0, 0].real / np.sqrt(gamma), 'k', lw=2);
ax.plot(times, np.array(result.measurement).mean(axis=0)[:, 0, 1].real / np.sqrt(gamma), 'k', lw=2);
Explanation: Implementation #3: builtin function for heterodyne
End of explanation
N = 5
w0 = 1.0 * 2 * np.pi
A = 0.1 * 2 * np.pi
times = np.linspace(0, 15, 301)
gamma = 0.25
ntraj = 150
nsubsteps = 50
a = destroy(N)
x = a + a.dag()
y = -1.0j*(a - a.dag())
H = w0 * a.dag() * a + A * (a + a.dag())
rho0 = coherent(N, np.sqrt(5.0), method='analytic')
c_ops = [np.sqrt(gamma) * a]
e_ops = [a.dag() * a, x, y]
opt = Options()
opt.store_states = True
result = smesolve(H, rho0, times, [], [np.sqrt(gamma) * a],
e_ops, ntraj=1, nsubsteps=5, solver="euler",
method='heterodyne', store_measurement=True,
map_func=parallel_map, options=opt, normalize=False)
result.states[0][100]
sp.linalg.eigh(result.states[0][10].full())
Explanation: Common problem
For some systems, the resulting density matrix can become unphysical due to the accumulation of computation error.
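A quick way to spot this (a sketch added here, not part of the original notebook) is to check that a stored state has unit trace and no significantly negative eigenvalues:
rho = result.states[0][10]
print('trace =', rho.tr())
print('min eigenvalue =', min(rho.eigenenergies()))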
End of explanation
help(stochastic_solvers)
Explanation: Using smaller integration steps by increasing nsubsteps will lower the numerical errors.
The solver algorithm used affects the convergence and numerical error.
Notable solvers are:
- euler: order 0.5, fastest, but lowest order. The only solver that accepts non-commuting sc_ops
- rouchon: order 1.0?, built to keep the density matrix physical
- taylor1.5: order 1.5, default solver, reasonably fast for good convergence.
- taylor2.0: order 2.0, even better convergence but can only take 1 homodyne sc_ops.
To list all available solvers, use help(stochastic_solvers)
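For example (a sketch, not part of the original notebook), the rouchon scheme can be selected the same way as the other solvers:
result_rouchon = smesolve(H, rho0, times, [], [np.sqrt(gamma) * a], e_ops,
                          ntraj=1, nsubsteps=nsubsteps, solver="rouchon",
                          method='heterodyne')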
End of explanation
from qutip.ipynbtools import version_table
version_table()
Explanation: Versions
End of explanation |
2,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Epochs data
Step1: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
matplotlib figure instance.
Events used for constructing the epochs here are the triggers for subject
being presented a smiley face at the center of the visual field. More of the
paradigm at BABDHIFJ.
All plotting functions start with plot. Let's start with the most
obvious.
Step2: The numbers at the top refer to the event id of the epoch. The number at the
bottom is the running numbering for the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 1 seems to
be contaminated by a blink (scroll to the bottom to view the EOG channel).
This epoch can be marked for rejection by clicking on top of the browser
window. The epoch should turn red when you click it. This means that it will
be dropped as the browser window is closed.
It is possible to plot event markers on epoched data by passing events
keyword to the epochs plotter. The events are plotted as vertical lines and
they follow the same coloring scheme as
Step3: To plot individual channels as an image, where you see all the epochs at one
glance, you can use function
Step4: We can also give an overview of all channels by calculating the global
field power (or other aggregation methods). However, combining
multiple channel types (e.g., MEG and EEG) in this way is not sensible.
Instead, we can use the group_by parameter. Setting group_by to
'type' combines channels by type.
group_by can also be used to group channels into arbitrary groups, e.g.
regions of interest, by providing a dictionary containing
group name -> channel indices mappings.
Step5: You also have functions for plotting channelwise information arranged into a
shape of the channel array. The image plotting uses automatic scaling by
default, but noisy channels and different channel types can cause the scaling
to be a bit off. Here we define the limits by hand. | Python Code:
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(
op.join(data_path, 'sample_audvis_raw.fif'), preload=True)
raw.load_data().filter(None, 9, fir_design='firwin')
raw.set_eeg_reference('average', projection=True) # set EEG average reference
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5)
Explanation: Visualize Epochs data
End of explanation
epochs.plot(block=True)
Explanation: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
matplotlib figure instance.
Events used for constructing the epochs here are the triggers for subject
being presented a smiley face at the center of the visual field. More of the
paradigm at BABDHIFJ.
All plotting functions start with plot. Let's start with the most
obvious. :func:mne.Epochs.plot offers an interactive browser that allows
rejection by hand when called in combination with a keyword block=True.
This blocks the execution of the script until the browser window is closed.
End of explanation
events = mne.pick_events(events, include=[5, 32])
mne.viz.plot_events(events)
epochs['smiley'].plot(events=events)
Explanation: The numbers at the top refer to the event id of the epoch. The number at the
bottom is the running numbering for the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 1 seems to
be contaminated by a blink (scroll to the bottom to view the EOG channel).
This epoch can be marked for rejection by clicking on top of the browser
window. The epoch should turn red when you click it. This means that it will
be dropped as the browser window is closed.
It is possible to plot event markers on epoched data by passing events
keyword to the epochs plotter. The events are plotted as vertical lines and
they follow the same coloring scheme as :func:mne.viz.plot_events. The
events plotter gives you all the events with a rough idea of the timing.
Since the colors are the same, the event plotter can also function as a
legend for the epochs plotter events. It is also possible to pass your own
colors via event_colors keyword. Here we can plot the reaction times
between seeing the smiley face and the button press (event 32).
When events are passed, the epoch numbering at the bottom is switched off by
default to avoid overlaps. You can turn it back on via settings dialog by
pressing o key. You should check out help at the lower left corner of the
window for more information about the interactive features.
End of explanation
epochs.plot_image(278, cmap='interactive', sigma=1., vmin=-250, vmax=250)
Explanation: To plot individual channels as an image, where you see all the epochs at one
glance, you can use function :func:mne.Epochs.plot_image. It shows the
amplitude of the signal over all the epochs plus an average (evoked response)
of the activation. We explicitly set interactive colorbar on (it is also on
by default for plotting functions with a colorbar except the topo plots). In
interactive mode you can scale and change the colormap with mouse scroll and
up/down arrow keys. You can also drag the colorbar with left/right mouse
button. Hitting space bar resets the scale.
End of explanation
epochs.plot_image(combine='gfp', group_by='type', sigma=2., cmap="YlGnBu_r")
Explanation: We can also give an overview of all channels by calculating the global
field power (or other aggregation methods). However, combining
multiple channel types (e.g., MEG and EEG) in this way is not sensible.
Instead, we can use the group_by parameter. Setting group_by to
'type' combines channels by type.
group_by can also be used to group channels into arbitrary groups, e.g.
regions of interest, by providing a dictionary containing
group name -> channel indices mappings.
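A minimal sketch of the dictionary form (the channel groups here are made up for illustration):
rois = {'first_ten': list(range(10)), 'next_ten': list(range(10, 20))}
epochs.plot_image(group_by=rois, combine='gfp', sigma=2., cmap="YlGnBu_r")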
End of explanation
epochs.plot_topo_image(vmin=-250, vmax=250, title='ERF images', sigma=2.)
Explanation: You also have functions for plotting channelwise information arranged into a
shape of the channel array. The image plotting uses automatic scaling by
default, but noisy channels and different channel types can cause the scaling
to be a bit off. Here we define the limits by hand.
End of explanation |
2,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Generate music with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the Maestro dataset
Step3: The dataset contains about 1,200 MIDI files.
Step4: Process a MIDI file
First, use pretty_midi to parse a single MIDI file and inspect the format of the notes. If you would like to download the MIDI file below to play on your computer, you can do so in colab by writing files.download(sample_file).
Step5: Generate a PrettyMIDI object for the sample MIDI file.
Step6: Play the sample file. The playback widget may take several seconds to load.
Step7: Do some inspection on the MIDI file. What kinds of instruments are used?
Step8: Extract notes
Step9: You will use three variables to represent a note when training the model
Step10: It may be easier to interpret the note names rather than the pitches, so you can use the function below to convert from the numeric pitch values to note names.
The note name shows the type of note, accidental and octave number
(e.g. C#4).
Step11: To visualize the musical piece, plot the note pitch, start and end across the length of the track (i.e. piano roll). Start with the first 100 notes
Step12: Plot the notes for the entire track.
Step13: Check the distribution of each note variable.
Step14: Create a MIDI file
You can generate your own MIDI file from a list of notes using the function below.
Step15: Play the generated MIDI file and see if there is any difference.
Step16: As before, you can write files.download(example_file) to download and play this file.
Create the training dataset
Create the training dataset by extracting notes from the MIDI files. You can start by using a small number of files, and experiment later with more. This may take a couple minutes.
Step17: Next, create a tf.data.Dataset from the parsed notes.
Step19: You will train the model on batches of sequences of notes. Each example will consist of a sequence of notes as the input features, and next note as the label. In this way, the model will be trained to predict the next note in a sequence. You can find a diagram explaining this process (and more details) in Text classification with an RNN.
You can use the handy window function with size seq_length to create the features and labels in this format.
Step20: Set the sequence length for each example. Experiment with different lengths (e.g. 50, 100, 150) to see which one works best for the data, or use hyperparameter tuning. The size of the vocabulary (vocab_size) is set to 128 representing all the pitches supported by pretty_midi.
Step21: The shape of the dataset is (100,1), meaning that the model will take 100 notes as input, and learn to predict the following note as output.
Step22: Batch the examples, and configure the dataset for performance.
Step23: Create and train the model
The model will have three outputs, one for each note variable. For step and duration, you will use a custom loss function based on mean squared error that encourages the model to output non-negative values.
Step24: Testing the model.evaluate function, you can see that the pitch loss is significantly greater than the step and duration losses.
Note that loss is the total loss computed by summing all the other losses and is currently dominated by the pitch loss.
Step25: One way balance this is to use the loss_weights argument to compile
Step26: The loss then becomes the weighted sum of the individual losses.
Step27: Train the model.
Step29: Generate notes
To use the model to generate notes, you will first need to provide a starting sequence of notes. The function below generates one note from a sequence of notes.
For note pitch, it draws a sample from softmax distribution of notes produced by the model, and does not simply pick the note with the highest probability.
Always picking the note with the highest probability would lead to repetitive sequences of notes being generated.
The temperature parameter can be used to control the randomness of notes generated. You can find more details on temperature in Text generation with an RNN.
Step30: Now generate some notes. You can play around with temperature and the starting sequence in next_notes and see what happens.
Step31: You can also download the audio file by adding the two lines below
Step32: Check the distributions of pitch, step and duration. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
!sudo apt install -y fluidsynth
!pip install --upgrade pyfluidsynth
!pip install pretty_midi
import collections
import datetime
import fluidsynth
import glob
import numpy as np
import pathlib
import pandas as pd
import pretty_midi
import seaborn as sns
import tensorflow as tf
from IPython import display
from matplotlib import pyplot as plt
from typing import Dict, List, Optional, Sequence, Tuple
seed = 42
tf.random.set_seed(seed)
np.random.seed(seed)
# Sampling rate for audio playback
_SAMPLING_RATE = 16000
Explanation: Generate music with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/audio/music_generation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/audio/music_generation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/audio/music_generation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/audio/music_generation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial shows you how to generate musical notes using a simple RNN. You will train a model using a collection of piano MIDI files from the MAESTRO dataset. Given a sequence of notes, your model will learn to predict the next note in the sequence. You can generate a longer sequences of notes by calling the model repeatedly.
This tutorial contains complete code to parse and create MIDI files. You can learn more about how RNNs work by visiting Text generation with an RNN.
Setup
This tutorial uses the pretty_midi library to create and parse MIDI files, and pyfluidsynth for generating audio playback in Colab.
End of explanation
data_dir = pathlib.Path('data/maestro-v2.0.0')
if not data_dir.exists():
tf.keras.utils.get_file(
'maestro-v2.0.0-midi.zip',
origin='https://storage.googleapis.com/magentadata/datasets/maestro/v2.0.0/maestro-v2.0.0-midi.zip',
extract=True,
cache_dir='.', cache_subdir='data',
)
Explanation: Download the Maestro dataset
End of explanation
filenames = glob.glob(str(data_dir/'**/*.mid*'))
print('Number of files:', len(filenames))
Explanation: The dataset contains about 1,200 MIDI files.
End of explanation
sample_file = filenames[1]
print(sample_file)
Explanation: Process a MIDI file
First, use pretty_midi to parse a single MIDI file and inspect the format of the notes. If you would like to download the MIDI file below to play on your computer, you can do so in colab by writing files.download(sample_file).
End of explanation
pm = pretty_midi.PrettyMIDI(sample_file)
Explanation: Generate a PrettyMIDI object for the sample MIDI file.
End of explanation
def display_audio(pm: pretty_midi.PrettyMIDI, seconds=30):
waveform = pm.fluidsynth(fs=_SAMPLING_RATE)
# Take a sample of the generated waveform to mitigate kernel resets
waveform_short = waveform[:seconds*_SAMPLING_RATE]
return display.Audio(waveform_short, rate=_SAMPLING_RATE)
display_audio(pm)
Explanation: Play the sample file. The playback widget may take several seconds to load.
End of explanation
print('Number of instruments:', len(pm.instruments))
instrument = pm.instruments[0]
instrument_name = pretty_midi.program_to_instrument_name(instrument.program)
print('Instrument name:', instrument_name)
Explanation: Do some inspection on the MIDI file. What kinds of instruments are used?
End of explanation
for i, note in enumerate(instrument.notes[:10]):
note_name = pretty_midi.note_number_to_name(note.pitch)
duration = note.end - note.start
print(f'{i}: pitch={note.pitch}, note_name={note_name},'
f' duration={duration:.4f}')
Explanation: Extract notes
End of explanation
def midi_to_notes(midi_file: str) -> pd.DataFrame:
pm = pretty_midi.PrettyMIDI(midi_file)
instrument = pm.instruments[0]
notes = collections.defaultdict(list)
# Sort the notes by start time
sorted_notes = sorted(instrument.notes, key=lambda note: note.start)
prev_start = sorted_notes[0].start
for note in sorted_notes:
start = note.start
end = note.end
notes['pitch'].append(note.pitch)
notes['start'].append(start)
notes['end'].append(end)
notes['step'].append(start - prev_start)
notes['duration'].append(end - start)
prev_start = start
return pd.DataFrame({name: np.array(value) for name, value in notes.items()})
raw_notes = midi_to_notes(sample_file)
raw_notes.head()
Explanation: You will use three variables to represent a note when training the model: pitch, step and duration. The pitch is the perceptual quality of the sound as a MIDI note number.
The step is the time elapsed from the previous note or start of the track.
The duration is how long the note will be playing in seconds and is the difference between the note end and note start times.
Extract the notes from the sample MIDI file.
End of explanation
get_note_names = np.vectorize(pretty_midi.note_number_to_name)
sample_note_names = get_note_names(raw_notes['pitch'])
sample_note_names[:10]
Explanation: It may be easier to interpret the note names rather than the pitches, so you can use the function below to convert from the numeric pitch values to note names.
The note name shows the type of note, accidental and octave number
(e.g. C#4).
End of explanation
def plot_piano_roll(notes: pd.DataFrame, count: Optional[int] = None):
if count:
title = f'First {count} notes'
else:
title = f'Whole track'
count = len(notes['pitch'])
plt.figure(figsize=(20, 4))
plot_pitch = np.stack([notes['pitch'], notes['pitch']], axis=0)
plot_start_stop = np.stack([notes['start'], notes['end']], axis=0)
plt.plot(
plot_start_stop[:, :count], plot_pitch[:, :count], color="b", marker=".")
plt.xlabel('Time [s]')
plt.ylabel('Pitch')
_ = plt.title(title)
plot_piano_roll(raw_notes, count=100)
Explanation: To visualize the musical piece, plot the note pitch, start and end across the length of the track (i.e. piano roll). Start with the first 100 notes
End of explanation
plot_piano_roll(raw_notes)
Explanation: Plot the notes for the entire track.
End of explanation
def plot_distributions(notes: pd.DataFrame, drop_percentile=2.5):
plt.figure(figsize=[15, 5])
plt.subplot(1, 3, 1)
sns.histplot(notes, x="pitch", bins=20)
plt.subplot(1, 3, 2)
max_step = np.percentile(notes['step'], 100 - drop_percentile)
sns.histplot(notes, x="step", bins=np.linspace(0, max_step, 21))
plt.subplot(1, 3, 3)
max_duration = np.percentile(notes['duration'], 100 - drop_percentile)
sns.histplot(notes, x="duration", bins=np.linspace(0, max_duration, 21))
plot_distributions(raw_notes)
Explanation: Check the distribution of each note variable.
End of explanation
def notes_to_midi(
notes: pd.DataFrame,
out_file: str,
instrument_name: str,
velocity: int = 100, # note loudness
) -> pretty_midi.PrettyMIDI:
pm = pretty_midi.PrettyMIDI()
instrument = pretty_midi.Instrument(
program=pretty_midi.instrument_name_to_program(
instrument_name))
prev_start = 0
for i, note in notes.iterrows():
start = float(prev_start + note['step'])
end = float(start + note['duration'])
note = pretty_midi.Note(
velocity=velocity,
pitch=int(note['pitch']),
start=start,
end=end,
)
instrument.notes.append(note)
prev_start = start
pm.instruments.append(instrument)
pm.write(out_file)
return pm
example_file = 'example.midi'
example_pm = notes_to_midi(
raw_notes, out_file=example_file, instrument_name=instrument_name)
Explanation: Create a MIDI file
You can generate your own MIDI file from a list of notes using the function below.
End of explanation
display_audio(example_pm)
Explanation: Play the generated MIDI file and see if there is any difference.
End of explanation
num_files = 5
all_notes = []
for f in filenames[:num_files]:
notes = midi_to_notes(f)
all_notes.append(notes)
all_notes = pd.concat(all_notes)
n_notes = len(all_notes)
print('Number of notes parsed:', n_notes)
Explanation: As before, you can write files.download(example_file) to download and play this file.
Create the training dataset
Create the training dataset by extracting notes from the MIDI files. You can start by using a small number of files, and experiment later with more. This may take a couple minutes.
End of explanation
key_order = ['pitch', 'step', 'duration']
train_notes = np.stack([all_notes[key] for key in key_order], axis=1)
notes_ds = tf.data.Dataset.from_tensor_slices(train_notes)
notes_ds.element_spec
Explanation: Next, create a tf.data.Dataset from the parsed notes.
End of explanation
def create_sequences(
dataset: tf.data.Dataset,
seq_length: int,
vocab_size = 128,
) -> tf.data.Dataset:
  """Returns TF Dataset of sequence and label examples."""
seq_length = seq_length+1
# Take 1 extra for the labels
windows = dataset.window(seq_length, shift=1, stride=1,
drop_remainder=True)
  # `flat_map` flattens the "dataset of datasets" into a dataset of tensors
flatten = lambda x: x.batch(seq_length, drop_remainder=True)
sequences = windows.flat_map(flatten)
# Normalize note pitch
def scale_pitch(x):
x = x/[vocab_size,1.0,1.0]
return x
# Split the labels
def split_labels(sequences):
inputs = sequences[:-1]
labels_dense = sequences[-1]
labels = {key:labels_dense[i] for i,key in enumerate(key_order)}
return scale_pitch(inputs), labels
return sequences.map(split_labels, num_parallel_calls=tf.data.AUTOTUNE)
Explanation: You will train the model on batches of sequences of notes. Each example will consist of a sequence of notes as the input features, and next note as the label. In this way, the model will be trained to predict the next note in a sequence. You can find a diagram explaining this process (and more details) in Text classification with an RNN.
You can use the handy window function with size seq_length to create the features and labels in this format.
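A toy illustration (not part of the tutorial) of how window plus flat_map turns a stream into overlapping fixed-length sequences:
toy = tf.data.Dataset.range(6)
toy_windows = toy.window(3, shift=1, drop_remainder=True)
toy_sequences = toy_windows.flat_map(lambda w: w.batch(3, drop_remainder=True))
print(list(toy_sequences.as_numpy_iterator()))   # [array([0, 1, 2]), array([1, 2, 3]), ...]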
End of explanation
seq_length = 25
vocab_size = 128
seq_ds = create_sequences(notes_ds, seq_length, vocab_size)
seq_ds.element_spec
Explanation: Set the sequence length for each example. Experiment with different lengths (e.g. 50, 100, 150) to see which one works best for the data, or use hyperparameter tuning. The size of the vocabulary (vocab_size) is set to 128 representing all the pitches supported by pretty_midi.
End of explanation
for seq, target in seq_ds.take(1):
print('sequence shape:', seq.shape)
print('sequence elements (first 10):', seq[0: 10])
print()
print('target:', target)
Explanation: Each example has shape (25, 3), meaning that the model takes seq_length = 25 notes (each with pitch, step and duration) as input, and learns to predict the following note as output.
End of explanation
batch_size = 64
buffer_size = n_notes - seq_length # the number of items in the dataset
train_ds = (seq_ds
.shuffle(buffer_size)
.batch(batch_size, drop_remainder=True)
.cache()
.prefetch(tf.data.experimental.AUTOTUNE))
train_ds.element_spec
Explanation: Batch the examples, and configure the dataset for performance.
End of explanation
def mse_with_positive_pressure(y_true: tf.Tensor, y_pred: tf.Tensor):
mse = (y_true - y_pred) ** 2
positive_pressure = 10 * tf.maximum(-y_pred, 0.0)
return tf.reduce_mean(mse + positive_pressure)
input_shape = (seq_length, 3)
learning_rate = 0.005
inputs = tf.keras.Input(input_shape)
x = tf.keras.layers.LSTM(128)(inputs)
outputs = {
'pitch': tf.keras.layers.Dense(128, name='pitch')(x),
'step': tf.keras.layers.Dense(1, name='step')(x),
'duration': tf.keras.layers.Dense(1, name='duration')(x),
}
model = tf.keras.Model(inputs, outputs)
loss = {
'pitch': tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True),
'step': mse_with_positive_pressure,
'duration': mse_with_positive_pressure,
}
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
model.compile(loss=loss, optimizer=optimizer)
model.summary()
Explanation: Create and train the model
The model will have three outputs, one for each note variable. For step and duration, you will use a custom loss function based on mean squared error that encourages the model to output non-negative values.
End of explanation
losses = model.evaluate(train_ds, return_dict=True)
losses
Explanation: Testing the model.evaluate function, you can see that the pitch loss is significantly greater than the step and duration losses.
Note that loss is the total loss computed by summing all the other losses and is currently dominated by the pitch loss.
End of explanation
model.compile(
loss=loss,
loss_weights={
'pitch': 0.05,
'step': 1.0,
'duration':1.0,
},
optimizer=optimizer,
)
Explanation: One way to balance this is to use the loss_weights argument to compile:
End of explanation
model.evaluate(train_ds, return_dict=True)
Explanation: The loss then becomes the weighted sum of the individual losses.
End of explanation
callbacks = [
tf.keras.callbacks.ModelCheckpoint(
filepath='./training_checkpoints/ckpt_{epoch}',
save_weights_only=True),
tf.keras.callbacks.EarlyStopping(
monitor='loss',
patience=5,
verbose=1,
restore_best_weights=True),
]
%%time
epochs = 50
history = model.fit(
train_ds,
epochs=epochs,
callbacks=callbacks,
)
plt.plot(history.epoch, history.history['loss'], label='total loss')
plt.show()
Explanation: Train the model.
End of explanation
def predict_next_note(
notes: np.ndarray,
keras_model: tf.keras.Model,
    temperature: float = 1.0) -> Tuple[int, float, float]:
  """Generates the next note (pitch, step, duration) using a trained sequence model."""
assert temperature > 0
# Add batch dimension
inputs = tf.expand_dims(notes, 0)
  predictions = keras_model.predict(inputs)  # use the model passed in as an argument
pitch_logits = predictions['pitch']
step = predictions['step']
duration = predictions['duration']
pitch_logits /= temperature
pitch = tf.random.categorical(pitch_logits, num_samples=1)
pitch = tf.squeeze(pitch, axis=-1)
duration = tf.squeeze(duration, axis=-1)
step = tf.squeeze(step, axis=-1)
# `step` and `duration` values should be non-negative
step = tf.maximum(0, step)
duration = tf.maximum(0, duration)
return int(pitch), float(step), float(duration)
Explanation: Generate notes
To use the model to generate notes, you will first need to provide a starting sequence of notes. The function below generates one note from a sequence of notes.
For note pitch, it draws a sample from softmax distribution of notes produced by the model, and does not simply pick the note with the highest probability.
Always picking the note with the highest probability would lead to repetitive sequences of notes being generated.
The temperature parameter can be used to control the randomness of notes generated. You can find more details on temperature in Text generation with an RNN.
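A small illustration (not from the tutorial) of the effect: dividing the logits by a temperature above 1 flattens the sampling distribution, while a temperature below 1 sharpens it:
example_logits = tf.constant([[2.0, 1.0, 0.1]])
for temp in [0.5, 1.0, 2.0]:
    print(temp, tf.nn.softmax(example_logits / temp).numpy().round(3))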
End of explanation
temperature = 2.0
num_predictions = 120
sample_notes = np.stack([raw_notes[key] for key in key_order], axis=1)
# The initial sequence of notes; pitch is normalized similar to training
# sequences
input_notes = (
sample_notes[:seq_length] / np.array([vocab_size, 1, 1]))
generated_notes = []
prev_start = 0
for _ in range(num_predictions):
pitch, step, duration = predict_next_note(input_notes, model, temperature)
start = prev_start + step
end = start + duration
input_note = (pitch, step, duration)
generated_notes.append((*input_note, start, end))
input_notes = np.delete(input_notes, 0, axis=0)
input_notes = np.append(input_notes, np.expand_dims(input_note, 0), axis=0)
prev_start = start
generated_notes = pd.DataFrame(
generated_notes, columns=(*key_order, 'start', 'end'))
generated_notes.head(10)
out_file = 'output.mid'
out_pm = notes_to_midi(
generated_notes, out_file=out_file, instrument_name=instrument_name)
display_audio(out_pm)
Explanation: Now generate some notes. You can play around with temperature and the starting sequence in next_notes and see what happens.
End of explanation
plot_piano_roll(generated_notes)
Explanation: You can also download the audio file by adding the two lines below:
from google.colab import files
files.download(out_file)
Visualize the generated notes.
End of explanation
plot_distributions(generated_notes)
Explanation: Check the distributions of pitch, step and duration.
End of explanation |
2,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
xbatch and batch
Step1: peek
Step2: bracket
Step3: <pre>
1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9
0----5----0----5----0----5----0----5----0----5----0----5----0----5----0----5----0----5----0
[0 [1 [2 [3 [4 [5 [6 [7 [8 [9 [10 [11 [12 [13 [14 [15 [16 [17
]0 ]1 ]2 ]3 ]4 ]5 ]6 ]7 ]8 ]9 ]10 ]11 ]12 ]13 ]14 ]15 ]16 ]17
^ ^^^^ ^ ^ ^ ^^^ ^ ^
| |||| | | | ||| | |
</pre> | Python Code:
import datetime as dt   # the dt.date / dt.datetime examples below rely on this
import numpy as np      # the np.array example below relies on this
# `utils` is assumed to be the project module that provides xbatch, batch, peek and bracket
for x in utils.xbatch(2, range(10)):
print(x)
for x in utils.xbatch(3, ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']):
print(x)
for x in utils.xbatch(3, ('Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec')):
print(x)
for x in utils.xbatch(2, np.array(range(10))):
print(x)
utils.xbatch(2, range(10))
utils.batch(2, range(10))
utils.batch(3, [429, 5, 2, 14, 42, 132, 1, 1])
utils.batch(4, range(10))
Explanation: xbatch and batch
End of explanation
it = utils.xbatch(2, range(10))
first_three, new_it = utils.peek(it, 3)
print('First three:', first_three)
print('Iterating through new_it:')
for x in new_it:
print(x)
print('Iterating through it:')
for x in it:
print(x)
it = utils.xbatch(2, range(10))
first_three, new_it = utils.peek(it, 3)
print('First three:', first_three)
print('Iterating through it:')
for x in it:
print(x)
Explanation: peek
End of explanation
data = [8, 11, 12, 13, 14, 27, 29, 37, 49, 50, 51, 79, 85]
Explanation: bracket
End of explanation
utils.bracket(data, 3, 5)
utils.bracket(data, 3, 5, intervals_right_closed=True)
utils.bracket(data, 3, 5, coalesce=True)
utils.bracket(data, 3, 5, intervals_right_closed=True, coalesce=True)
data = [dt.date(2017, 1, 31) + dt.timedelta(days=x) for x in [8, 11, 12, 13, 14, 27, 29, 37, 49, 50, 51, 79, 85]];
data
utils.bracket(data, dt.date(2017, 2, 3), dt.timedelta(days=5))
utils.bracket(data, dt.date(2017, 2, 3), dt.timedelta(days=5), intervals_right_closed=True)
utils.bracket(data, dt.date(2017, 2, 3), dt.timedelta(days=5), coalesce=True)
utils.bracket(data, dt.date(2017, 2, 3), dt.timedelta(days=5), intervals_right_closed=True, coalesce=True)
data = [dt.datetime(2017, 1, 31, 0, 0, 0) + dt.timedelta(minutes=x) for x in [8, 11, 12, 13, 14, 27, 29, 37, 49, 50, 51, 79, 85]];
data
utils.bracket(data, dt.datetime(2017, 1, 31, 0, 3, 0), dt.timedelta(minutes=5))
utils.bracket(data, dt.datetime(2017, 1, 31, 0, 3, 0), dt.timedelta(minutes=5), intervals_right_closed=True)
utils.bracket(data, dt.datetime(2017, 1, 31, 0, 3, 0), dt.timedelta(minutes=5), coalesce=True)
utils.bracket(data, dt.datetime(2017, 1, 31, 0, 3, 0), dt.timedelta(minutes=5), intervals_right_closed=True, coalesce=True)
Explanation: <pre>
1 1 2 2 3 3 4 4 5 5 6 6 7 7 8 8 9
0----5----0----5----0----5----0----5----0----5----0----5----0----5----0----5----0----5----0
[0 [1 [2 [3 [4 [5 [6 [7 [8 [9 [10 [11 [12 [13 [14 [15 [16 [17
]0 ]1 ]2 ]3 ]4 ]5 ]6 ]7 ]8 ]9 ]10 ]11 ]12 ]13 ]14 ]15 ]16 ]17
^ ^^^^ ^ ^ ^ ^^^ ^ ^
| |||| | | | ||| | |
</pre>
End of explanation |
2,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rozwiązywanie stochastycznych równań różniczkowych z CUDA
Równania stochastyczne są niezwykle pożytecznym narzędziem w modelowaniu zarówno procesów fizycznych, biolgicznych czy chemicznych a nawet ekonomicznych (wycena instrumentów pochodnych).
Klasycznym przykładem problemu z jakim się spotykami przy numerycznym rozwiązywaniu równań stochastycznych jest konieczność uśrednienia po wielu niezależnych od siebie realizacjach procesu losowego. Mówiąc wprost musimy rozwiązać numerycznie wiele razy to samo równanie różniczkowe, za każdym razem zmieniając "seed" generatora liczb losowych. Jest to idealny problem dla urządzenia GPU, gdzie generacja niezależnych trajektorii wielu kopii tego samego układu jest w stanie wykorzystać maksymalnie jego możliwości obliczeniowe.
Poniżej przedstawiamy implementację algorytmu, wg. pierwszego przykładu z pracy
Step1: lub
Step4: W pewnych bardziej zaawansowanych przypadkach, można zastosować system szablonów np. mako templates (w projekcie http
Step5: Mając gotowe jądro, można wykonac testowe uruchomienie
Step6: Wynikiem działania programu jest $N$ liczb określających końcowe położenie cząstki. Możemy zwizualizować je wykorzystując np. hostogram
Step7: Dane referencyjne dla walidacji
W tablicy hist_ref znajdują się dane referencyjne dla celów walidacji. Możemy sprawdzić czy program działa tak jak ten w pracy referencyjnej | Python Code:
print('%(language)04d a nawiasy {} ' % {"language": 1234, "number": 2})
Explanation: Rozwiązywanie stochastycznych równań różniczkowych z CUDA
Równania stochastyczne są niezwykle pożytecznym narzędziem w modelowaniu zarówno procesów fizycznych, biolgicznych czy chemicznych a nawet ekonomicznych (wycena instrumentów pochodnych).
Klasycznym przykładem problemu z jakim się spotykami przy numerycznym rozwiązywaniu równań stochastycznych jest konieczność uśrednienia po wielu niezależnych od siebie realizacjach procesu losowego. Mówiąc wprost musimy rozwiązać numerycznie wiele razy to samo równanie różniczkowe, za każdym razem zmieniając "seed" generatora liczb losowych. Jest to idealny problem dla urządzenia GPU, gdzie generacja niezależnych trajektorii wielu kopii tego samego układu jest w stanie wykorzystać maksymalnie jego możliwości obliczeniowe.
Poniżej przedstawiamy implementację algorytmu, wg. pierwszego przykładu z pracy: http://arxiv.org/abs/0903.3852
Róźnicą będzie skorzystanie z pycuda, zamiast C. Co ciekawe, taka modyfikacja jest w stanie przyśpieszyć kernel obliczniowe o ok 25%. Spowodowane jest to zastosowaniem metoprogramowania. Pewne parametry, które nie zmieniają się podczas wykonywania kodu są "klejane" do źródła jako konkrente wartości liczbowe, co ułatwia kompilatorowi nvcc optymalizacje.
W tym przykładzie wykorzystamy własny generator liczb losowych i transformację Boxa-Mullera (zamiast np. curand).
Przykład ten może być z łatwością zmodyfikowany na dowolny układ SDE, dlatego można do traktować jako szablon dla własnych zagadnień.
Struktura programu
Szablony
Niezwykle pomocne w programowaniu w pycuda jest zastosowanie metaprogramowania - to jest - piszemy program piszący nasz kernel. Tutaj mamy najprostszy wariant, po prostu pewne parametry równań, wpisujemy automatycznie do tekstu jądra. W pythonie jest przydatne formatowanie "stringów" np.:
End of explanation
print('{zmienna} a nawiasy: {{}}'.format( **{"zmienna": 123} ))
Explanation: lub:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pycuda.gpuarray as gpuarray
from pycuda.curandom import rand as curand
from pycuda.compiler import SourceModule
import pycuda.driver as cuda
cuda.init()
device = cuda.Device(0)
ctx = device.make_context()
print (device.name(), device.compute_capability(),device.total_memory()/1024.**3,"GB")
blocks = 2**11
block_size = 2**8
N = blocks*block_size
omega = 4.9
spp = 100
dt = 2.0*np.pi/omega/spp
pars = {'samples':spp,'N':N,'dt':dt,'gam':0.9,'d0':0.001,'omega':omega,'force':0.1,'amp':4.2}
rng_src =
#define PI 3.14159265358979f
/*
* Return a uniformly distributed random number from the
* [0;1] range.
*/
__device__ float rng_uni(unsigned int *state)
{
unsigned int x = *state;
x = x ^ (x >> 13);
x = x ^ (x << 17);
x = x ^ (x >> 5);
*state = x;
return x / 4294967296.0f;
}
/*
* Generate two normal variates given two uniform variates.
*/
__device__ void bm_trans(float& u1, float& u2)
{
float r = sqrtf(-2.0f * logf(u1));
float phi = 2.0f * PI * u2;
u1 = r * cosf(phi);
u2 = r * sinf(phi);
}
src =
__device__ inline void diffEq(float &nx, float &nv, float x, float v, float t)
{{
nx = v;
nv = -2.0f * PI * cosf(2.0f * PI * x) + {amp}f * cosf({omega}f * t) + {force}f - {gam}f * v;
}}
__global__ void SDE(float *cx,float *cv,unsigned int *rng_state, float ct)
{{
int idx = blockDim.x*blockIdx.x + threadIdx.x;
float n1, n2;
unsigned int lrng_state;
float xim, vim, xt1, vt1, xt2, vt2,t,x,v;
lrng_state = rng_state[idx];
t = ct;
x = cx[idx];
v = cv[idx];
for (int i=1;i<={samples};i++) {{
n1 = rng_uni(&lrng_state);
n2 = rng_uni(&lrng_state);
bm_trans(n1, n2);
diffEq(xt1, vt1, x, v, t);
xim = x + xt1 * {dt}f;
vim = v + vt1 * {dt}f + sqrtf({dt}f * {gam}f * {d0}f * 2.0f) * n1;
t = ct + i * {dt}f;
diffEq(xt2, vt2, xim, vim, t);
x += 0.5f * {dt}f * (xt1 + xt2);
v += 0.5f * {dt}f * (vt1 + vt2) + sqrtf(2.0f * {dt}f * {gam}f * {d0}f) * n2;
}}
cx[idx] = x;
cv[idx] = v;
rng_state[idx] = lrng_state;;
}}
.format(**pars)
mod = SourceModule(rng_src + src,options=["--use_fast_math"])
SDE = mod.get_function("SDE")
print( "kernel ready for ",block_size,"N =",N,N/1e6)
print(spp,N)
Explanation: In some more advanced cases, a templating system can be used, e.g. mako templates (see the project http://sailfish.us.edu.pl).
Kernel structure
The kernel:
__global__ void SDE(float *cx,float *cv,unsigned int *rng_state, float ct)
is a CUDA function of type __global__; as parameters it takes the arrays cx and cv, which are the state variables of the system of two differential equations:
$$ \dot x = v$$
$$ \dot v = -2\pi \cos(2.0\pi x) + A \cos(\omega t) + F - \gamma v$$
In addition, the call passes the time (by value) and a pointer to the state of the random number generator on the GPU.
The functions available to the kernel on the GPU are:
a uniform random number generator:
__device__ float rng_uni(unsigned int *state)
the Box-Muller transform:
__device__ void bm_trans(float& u1, float& u2)
and finally the function computing the right-hand sides of the system of equations:
__device__ inline void diffEq(float &nx, float &nv, float x, float v, float t)
Note that, to improve performance, each kernel invocation executes the iteration loop many times (the number of steps is set by the spp parameter).
End of explanation
import time
x = np.zeros(N,dtype=np.float32)
v = np.ones(N,dtype=np.float32)
rng_state = np.array(np.random.randint(1,2147483648,size=N),dtype=np.uint32)
x_g = gpuarray.to_gpu(x)
v_g = gpuarray.to_gpu(v)
rng_state_g = gpuarray.to_gpu(rng_state)
start = time.time()
for i in range(0,200000,spp):
t = i * 2.0 * np.pi /omega /spp;
SDE(x_g, v_g, rng_state_g, np.float32(t), block=(block_size,1,1), grid=(blocks,1))
ctx.synchronize()
elapsed = (time.time() - start)
x=x_g.get()
print (elapsed,N/1e6, 200000*N/elapsed/1024.**3,"Giter/sek")
Explanation: With the kernel ready, we can perform a test run:
End of explanation
h = np.histogram(x,bins=50,range=(-150, 100) )
plt.plot(h[1][1:],h[0])
Explanation: The program outputs $N$ numbers giving the final positions of the particles. We can visualize them using, for example, a histogram:
End of explanation
hist_ref = (np.array([ 46, 72, 134, 224, 341, 588, 917, 1504, 2235,\
3319, 4692, 6620, 8788, 11700, 15139, 18702, 22881, 26195,\
29852, 32700, 35289, 36232, 36541, 35561, 33386, 30638, 27267,\
23533, 19229, 16002, 12646, 9501, 7111, 5079, 3405, 2313,\
1515, 958, 573, 370, 213, 103, 81, 28, 15,\
7, 3, 2, 0, 0]),\
np.array([-150., -145., -140., -135., -130., -125., -120., -115., -110.,\
-105., -100., -95., -90., -85., -80., -75., -70., -65.,\
-60., -55., -50., -45., -40., -35., -30., -25., -20.,\
-15., -10., -5., 0., 5., 10., 15., 20., 25.,\
30., 35., 40., 45., 50., 55., 60., 65., 70.,\
75., 80., 85., 90., 95., 100.]) )
plt.hist(x,bins=50,range=(-150, 100) )
plt.plot((hist_ref[1][1:]+hist_ref[1][:-1])/2.0,hist_ref[0],'r')
Explanation: Reference data for validation
The hist_ref array holds reference data for validation purposes. We can check whether the program behaves like the one in the reference work:
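Beyond overlaying the two curves, a simple quantitative check is possible; the sketch below assumes x and hist_ref are defined as above and reports the total absolute deviation from the reference counts:
h = np.histogram(x, bins=50, range=(-150, 100))
rel_dev = np.abs(h[0] - hist_ref[0]).sum() / hist_ref[0].sum()
print("relative total deviation from the reference histogram:", rel_dev)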
End of explanation |
2,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Does Trivers-Willard apply to people?
This notebook contains a "one-day paper", my attempt to pose a research question, answer it, and publish the results in one work day.
Copyright 2016 Allen B. Downey
MIT License
Step2: Trivers-Willard
According to Wikipedia, the Trivers-Willard hypothesis
Step3: I have to recode sex as 0 or 1 to make logit happy.
Step4: All births are from 2014.
Step5: Mother's age
Step6: Residence status (1=resident)
Step7: Mother's race (1=White, 2=Black, 3=American Indian or Alaskan Native, 4=Asian or Pacific Islander)
Step8: Mother's Hispanic origin (0=Non-Hispanic)
Step9: Marital status (1=Married)
Step10: Paternity acknowledged, if unmarried (Y=yes, N=no, X=not applicable, U=unknown).
I recode X (not applicable because married) as Y (paternity acknowledged).
Step11: Mother's education level
Step12: Father's age, in 10 ranges
Step13: Father's race
Step14: Father's Hispanic origin (0=non-hispanic, other values indicate country of origin)
Step15: Father's education level
Step16: Live birth order.
Step17: Number of prenatal visits, in 11 ranges
Step18: Whether the mother is eligible for food stamps
Step19: Mother's height in inches
Step20: Mother's BMI in 6 ranges
Step21: Payment method (1=Medicaid, 2=Private insurance, 3=Self pay, 4=Other)
Step22: Sex of baby
Step25: Regression models
Here are some functions I'll use to interpret the results of logistic regression
Step26: Now I'll run models with each variable, one at a time.
Mother's age seems to have no predictive value
Step27: The estimated ratios for young mothers is higher, and the ratio for older mothers is lower, but neither is statistically significant.
Step28: Neither does residence status
Step29: Mother's race seems to have predictive value. Relative to whites, black and Native American mothers have more girls; Asians have more boys.
Step30: Hispanic mothers have more girls.
Step31: If the mother is married or unmarried but paternity is acknowledged, the sex ratio is higher (more boys)
Step32: Being unmarried predicts more girls.
Step33: Each level of mother's education predicts a small increase in the probability of a boy.
Step34: Older fathers are slightly more likely to have girls (but this apparent effect could be due to chance).
Step35: Predictions based on father's race are similar to those based on mother's race
Step36: If the father is Hispanic, that predicts more girls.
Step37: Father's education level might predict more boys, but the apparent effect could be due to chance.
Step38: Babies with high birth order are slightly more likely to be girls.
Step39: Strangely, prenatal visits are associated with an increased probability of girls.
Step40: The effect seems to be non-linear at zero, so I'm adding a boolean for no prenatal visits.
Step41: If the mother qualifies for food stamps, she is more likely to have a girl.
Step42: Mother's height seems to have no predictive value.
Step43: Mother's with higher BMI are more likely to have girls.
Step44: If payment was made by Medicaid, the baby is more likely to be a girl. Private insurance, self-payment, and other payment method are associated with more boys.
Step45: Adding controls
However, none of the previous results should be taken too seriously. We only tested one variable at a time, and many of these apparent effects disappear when we add control variables.
In particular, if we control for father's race and Hispanic origin, the mother's race has no additional predictive value.
Step46: In fact, once we control for father's race and Hispanic origin, almost every other variable becomes statistically insignificant, including acknowledged paternity.
Step47: Being married still predicts more boys.
Step48: The effect of education disappears.
Step49: The effect of birth order disappears.
Step50: WIC is no longer associated with more girls.
Step51: The effect of obesity disappears.
Step52: The effect of payment method is diminished, but self-payment is still associated with more boys.
Step53: But the effect of prenatal visits is still a strong predictor of more girls.
Step54: And the effect is even stronger if we add a boolean to capture the nonlinearity at 0 visits.
Step55: More controls
Now if we control for father's race and Hispanic origin as well as number of prenatal visits, the effect of marriage disappears.
Step56: The effect of payment method disappears.
Step57: Here's a version with the addition of a boolean for no prenatal visits.
Step58: Now, surprisingly, the mother's age has a small effect.
Step59: So does the father's age. But both age effects are small and borderline significant.
Step60: What's up with prenatal visits?
The predictive power of prenatal visits is still surprising to me. To make sure we're controlled for race, I'll select cases where both parents are white
Step61: And compute sex ratios for each level of previs
Step62: The effect holds up. People with fewer than average prenatal visits are substantially more likely to have boys. | Python Code:
from __future__ import print_function, division
import thinkstats2
import thinkplot
import pandas as pd
import numpy as np
import statsmodels.formula.api as smf
%matplotlib inline
Explanation: Does Trivers-Willard apply to people?
This notebook contains a "one-day paper", my attempt to pose a research question, answer it, and publish the results in one work day.
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
names = ['year', 'mager9', 'restatus', 'mbrace', 'mhisp_r',
'mar_p', 'dmar', 'meduc', 'fagerrec11', 'fbrace', 'fhisp_r', 'feduc',
'lbo_rec', 'previs_rec', 'wic', 'height', 'bmi_r', 'pay_rec', 'sex']
colspecs = [(15, 18),
(93, 93),
(138, 138),
(143, 143),
(148, 148),
(152, 152),
(153, 153),
(155, 155),
(186, 187),
(191, 191),
(195, 195),
(197, 197),
(212, 212),
(272, 273),
(281, 281),
(555, 556),
(533, 533),
(413, 413),
(436, 436),
]
colspecs = [(start-1, end) for start, end in colspecs]
df = None
filename = 'Nat2012PublicUS.r20131217.gz'
#df = pd.read_fwf(filename, compression='gzip', header=None, names=names, colspecs=colspecs)
#df.head()
# store the dataframe for faster loading
#store = pd.HDFStore('store.h5')
#store['births2013'] = df
#store.close()
# load the dataframe
store = pd.HDFStore('store.h5')
df = store['births2013']
store.close()
def series_to_ratio(series):
Takes a boolean series and computes sex ratio.
boys = np.mean(series)
return np.round(100 * boys / (1-boys)).astype(int)
Explanation: Trivers-Willard
According to Wikipedia, the Trivers-Willard hypothesis:
"...suggests that female mammals are able to adjust offspring sex ratio in response to their maternal condition. For example, it may predict greater parental investment in males by parents in 'good conditions' and greater investment in females by parents in 'poor conditions' (relative to parents in good condition)."
For humans, the hypothesis suggests that people with relatively high social status might be more likely to have boys. Some studies have shown evidence for this hypothesis, but based on my very casual survey, it is not persuasive.
To test whether the T-W hypothesis holds up in humans, I downloaded birth data for the nearly 4 million babies born in the U.S. in 2014.
I selected variables that seemed likely to be related to social status and used logistic regression to identify variables associated with sex ratio.
Summary of results
Running regression with one variable at a time, many of the variables have a statistically significant effect on sex ratio, with the sign of the effect generally in the direction predicted by T-W.
However, many of the variables are also correlated with race. If we control for either the mother's race or the father's race, or both, most other variables have no additional predictive power.
Contrary to other reports, the age of the parents seems to have no predictive power.
Strangely, the variable that shows the strongest and most consistent relationship with sex ratio is the number of prenatal visits. Although it seems obvious that prenatal visits are a proxy for quality of health care and general socioeconomic status, the sign of the effect is opposite what T-W predicts; that is, more prenatal visits is a strong predictor of lower sex ratio (more girls).
Following convention, I report sex ratio in terms of boys per 100 girls. The overall sex ratio at birth is about 105; that is, 105 boys are born for every 100 girls.
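As a quick illustration of that convention (the 0.512 below is an assumed, typical proportion of boys, not a value taken from this dataset):
p_boy = 0.512                      # assumed illustrative proportion of boys at birth
print(100 * p_boy / (1 - p_boy))   # roughly 105 boys per 100 girls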
Data cleaning
Here's how I loaded the data:
End of explanation
df['boy'] = (df.sex=='M').astype(int)
df.boy.value_counts().sort_index()
Explanation: I have to recode sex as 0 or 1 to make logit happy.
End of explanation
df.year.value_counts().sort_index()
Explanation: All births are from 2014.
End of explanation
df.mager9.value_counts().sort_index()
var = 'mager9'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df.mager9.isnull().mean()
df['youngm'] = df.mager9<=2
df['oldm'] = df.mager9>=7
df.youngm.mean(), df.oldm.mean()
Explanation: Mother's age:
End of explanation
df.restatus.value_counts().sort_index()
var = 'restatus'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Residence status (1=resident)
End of explanation
df.mbrace.value_counts().sort_index()
var = 'mbrace'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Mother's race (1=White, 2=Black, 3=American Indian or Alaskan Native, 4=Asian or Pacific Islander)
End of explanation
df.mhisp_r.replace([9], np.nan, inplace=True)
df.mhisp_r.value_counts().sort_index()
def copy_null(df, oldvar, newvar):
df.loc[df[oldvar].isnull(), newvar] = np.nan
df['mhisp'] = df.mhisp_r > 0
copy_null(df, 'mhisp_r', 'mhisp')
df.mhisp.isnull().mean(), df.mhisp.mean()
var = 'mhisp'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Mother's Hispanic origin (0=Non-Hispanic)
End of explanation
df.dmar.value_counts().sort_index()
var = 'dmar'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Marital status (1=Married)
End of explanation
df.mar_p.replace(['U'], np.nan, inplace=True)
df.mar_p.replace(['X'], 'Y', inplace=True)
df.mar_p.value_counts().sort_index()
var = 'mar_p'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Paternity acknowledged, if unmarried (Y=yes, N=no, X=not applicable, U=unknown).
I recode X (not applicable because married) as Y (paternity acknowledged).
End of explanation
df.meduc.replace([9], np.nan, inplace=True)
df.meduc.value_counts().sort_index()
var = 'meduc'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['lowed'] = df.meduc <= 2
copy_null(df, 'meduc', 'lowed')
df.lowed.isnull().mean(), df.lowed.mean()
Explanation: Mother's education level
End of explanation
df.fagerrec11.replace([11], np.nan, inplace=True)
df.fagerrec11.value_counts().sort_index()
var = 'fagerrec11'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['youngf'] = df.fagerrec11<=2
copy_null(df, 'fagerrec11', 'youngf')
df.youngf.isnull().mean(), df.youngf.mean()
df['oldf'] = df.fagerrec11>=8
copy_null(df, 'fagerrec11', 'oldf')
df.oldf.isnull().mean(), df.oldf.mean()
Explanation: Father's age, in 10 ranges
End of explanation
df.fbrace.replace([9], np.nan, inplace=True)
df.fbrace.value_counts().sort_index()
var = 'fbrace'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Father's race
End of explanation
df.fhisp_r.replace([9], np.nan, inplace=True)
df.fhisp_r.value_counts().sort_index()
df['fhisp'] = df.fhisp_r > 0
copy_null(df, 'fhisp_r', 'fhisp')
df.fhisp.isnull().mean(), df.fhisp.mean()
var = 'fhisp'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Father's Hispanic origin (0=non-hispanic, other values indicate country of origin)
End of explanation
df.feduc.replace([9], np.nan, inplace=True)
df.feduc.value_counts().sort_index()
var = 'feduc'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Father's education level
End of explanation
df.lbo_rec.replace([9], np.nan, inplace=True)
df.lbo_rec.value_counts().sort_index()
var = 'lbo_rec'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['highbo'] = df.lbo_rec >= 5
copy_null(df, 'lbo_rec', 'highbo')
df.highbo.isnull().mean(), df.highbo.mean()
Explanation: Live birth order.
End of explanation
df.previs_rec.replace([12], np.nan, inplace=True)
df.previs_rec.value_counts().sort_index()
df.previs_rec.mean()
df['previs'] = df.previs_rec - 7
var = 'previs'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['no_previs'] = df.previs_rec <= 1
copy_null(df, 'previs_rec', 'no_previs')
df.no_previs.isnull().mean(), df.no_previs.mean()
Explanation: Number of prenatal visits, in 11 ranges
End of explanation
df.wic.replace(['U'], np.nan, inplace=True)
df.wic.value_counts().sort_index()
var = 'wic'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Whether the mother is eligible for food stamps
End of explanation
df.height.replace([99], np.nan, inplace=True)
df.height.value_counts().sort_index()
df['mshort'] = df.height<60
copy_null(df, 'height', 'mshort')
df.mshort.isnull().mean(), df.mshort.mean()
df['mtall'] = df.height>=70
copy_null(df, 'height', 'mtall')
df.mtall.isnull().mean(), df.mtall.mean()
var = 'mshort'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
var = 'mtall'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Mother's height in inches
End of explanation
df.bmi_r.replace([9], np.nan, inplace=True)
df.bmi_r.value_counts().sort_index()
var = 'bmi_r'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
df['obese'] = df.bmi_r >= 4
copy_null(df, 'bmi_r', 'obese')
df.obese.isnull().mean(), df.obese.mean()
Explanation: Mother's BMI in 6 ranges
End of explanation
df.pay_rec.replace([9], np.nan, inplace=True)
df.pay_rec.value_counts().sort_index()
var = 'pay_rec'
df[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: Payment method (1=Medicaid, 2=Private insurance, 3=Self pay, 4=Other)
End of explanation
df.sex.value_counts().sort_index()
Explanation: Sex of baby
End of explanation
def logodds_to_ratio(logodds):
Convert from log odds to probability.
odds = np.exp(logodds)
return 100 * odds
def summarize(results):
Summarize parameters in terms of birth ratio.
inter_or = results.params['Intercept']
inter_rat = logodds_to_ratio(inter_or)
for value, lor in results.params.iteritems():
if value=='Intercept':
continue
rat = logodds_to_ratio(inter_or + lor)
code = '*' if results.pvalues[value] < 0.05 else ' '
print('%-20s %0.1f %0.1f' % (value, inter_rat, rat), code)
Explanation: Regression models
Here are some functions I'll use to interpret the results of logistic regression
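A one-line sanity check of the conversion (purely illustrative; the odds value of 1.05 is assumed):
logodds_to_ratio(np.log(1.05))   # returns about 105, i.e. 105 boys per 100 girls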
End of explanation
model = smf.logit('boy ~ mager9', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Now I'll run models with each variable, one at a time.
Mother's age seems to have no predictive value:
End of explanation
model = smf.logit('boy ~ youngm + oldm', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The estimated ratio for young mothers is higher, and the ratio for older mothers is lower, but neither is statistically significant.
End of explanation
model = smf.logit('boy ~ C(restatus)', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Neither does residence status
End of explanation
model = smf.logit('boy ~ C(mbrace)', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Mother's race seems to have predictive value. Relative to whites, black and Native American mothers have more girls; Asians have more boys.
End of explanation
model = smf.logit('boy ~ mhisp', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Hispanic mothers have more girls.
End of explanation
model = smf.logit('boy ~ C(mar_p)', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: If the mother is married or unmarried but paternity is acknowledged, the sex ratio is higher (more boys)
End of explanation
model = smf.logit('boy ~ C(dmar)', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Being unmarried predicts more girls.
End of explanation
model = smf.logit('boy ~ meduc', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ lowed', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Each level of mother's education predicts a small increase in the probability of a boy.
End of explanation
model = smf.logit('boy ~ fagerrec11', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ youngf + oldf', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Older fathers are slightly more likely to have girls (but this apparent effect could be due to chance).
End of explanation
model = smf.logit('boy ~ C(fbrace)', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Predictions based on father's race are similar to those based on mother's race: more girls for black and Native American fathers; more boys for Asian fathers.
End of explanation
model = smf.logit('boy ~ fhisp', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: If the father is Hispanic, that predicts more girls.
End of explanation
model = smf.logit('boy ~ feduc', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Father's education level might predict more boys, but the apparent effect could be due to chance.
End of explanation
model = smf.logit('boy ~ lbo_rec', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ highbo', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Babies with high birth order are slightly more likely to be girls.
End of explanation
model = smf.logit('boy ~ previs', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Strangely, prenatal visits are associated with an increased probability of girls.
End of explanation
model = smf.logit('boy ~ no_previs + previs', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect seems to be non-linear at zero, so I'm adding a boolean for no prenatal visits.
End of explanation
model = smf.logit('boy ~ wic', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: If the mother qualifies for food stamps, she is more likely to have a girl.
End of explanation
model = smf.logit('boy ~ height', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ mtall + mshort', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Mother's height seems to have no predictive value.
End of explanation
model = smf.logit('boy ~ bmi_r', data=df)
results = model.fit()
summarize(results)
results.summary()
model = smf.logit('boy ~ obese', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Mothers with higher BMI are more likely to have girls.
End of explanation
model = smf.logit('boy ~ C(pay_rec)', data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: If payment was made by Medicaid, the baby is more likely to be a girl. Private insurance, self-payment, and other payment method are associated with more boys.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + C(mbrace) + mhisp')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Adding controls
However, none of the previous results should be taken too seriously. We only tested one variable at a time, and many of these apparent effects disappear when we add control variables.
In particular, if we control for father's race and Hispanic origin, the mother's race has no additional predictive value.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + mar_p')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: In fact, once we control for father's race and Hispanic origin, almost every other variable becomes statistically insignificant, including acknowledged paternity.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + dmar')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Being married still predicts more boys.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + lowed')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect of education disappears.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + highbo')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect of birth order disappears.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + wic')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: WIC is no longer associated with more girls.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + obese')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect of obesity disappears.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + C(pay_rec)')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect of payment method is diminished, but self-payment is still associated with more boys.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: But the effect of prenatal visits is still a strong predictor of more girls.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: And the effect is even stronger if we add a boolean to capture the nonlinearity at 0 visits.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs + dmar')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: More controls
Now if we control for father's race and Hispanic origin as well as number of prenatal visits, the effect of marriage disappears.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs + C(pay_rec)')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect of payment method disappears.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Here's a version with the addition of a boolean for no prenatal visits.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs + mager9')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: Now, surprisingly, the mother's age has a small effect.
End of explanation
formula = ('boy ~ C(fbrace) + fhisp + previs + no_previs + fagerrec11')
model = smf.logit(formula, data=df)
results = model.fit()
summarize(results)
results.summary()
Explanation: So does the father's age. But both age effects are small and borderline significant.
End of explanation
white = df[(df.mbrace==1) & (df.fbrace==1)]
len(white)
Explanation: What's up with prenatal visits?
The predictive power of prenatal visits is still surprising to me. To make sure we've controlled for race, I'll select cases where both parents are white:
End of explanation
var = 'previs'
white[[var, 'boy']].groupby(var).aggregate(series_to_ratio)
Explanation: And compute sex ratios for each level of previs
End of explanation
formula = ('boy ~ previs + no_previs')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
inter = results.params['Intercept']
slope = results.params['previs']
inter, slope
previs = np.arange(-5, 5)
logodds = inter + slope * previs
odds = np.exp(logodds)
odds * 100
formula = ('boy ~ dmar')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ lowed')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ highbo')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ wic')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ obese')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ C(pay_rec)')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ mager9')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ youngm + oldm')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
formula = ('boy ~ youngf + oldf')
model = smf.logit(formula, data=white)
results = model.fit()
summarize(results)
results.summary()
Explanation: The effect holds up. People with fewer than average prenatal visits are substantially more likely to have boys.
End of explanation |
2,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Diagnostics for approximate likelihood ratios
Kyle Cranmer, Juan Pavez, Gilles Louppe, March 2016.
This is an extension of the example in Parameterized inference from multidimensional data.
To aid in visualization, we restrict to a 1-dimensional slice of
the likelihood along $\alpha$ with $\beta=-1$. We consider three situations
Step1: Create model and generate artificial data
Step2: Known likelihood setup
Step3: Likelihood-free setup
Here we create the data to train a parametrized classifier
Step4: Now we use a Bayesian optimization procedure to create a smooth surrogate of the approximate likelihood.
Step5: Plots for the first diagonstic
Step6: We show the posterior mean of a Gaussian processes resulting
from Bayesian optimization of the raw approximate likelihood. In addition, the standard deviation of the Gaussian process
is shown for one of the $\theta_1$ reference points to indicate the size of
these statistical fluctuations. It is clear that in the well calibrated cases
that these fluctuations are small, while in the poorly calibrated case these
fluctuations are large. Moreover, in the first case we see that in the poorly
trained, well calibrated case the classifier $\hat{s}(\mathbf{x}; \theta_0,
\theta_1)$ has a significant dependence on the $\theta_1$ reference point. In
contrast, in the second case the likelihood curves vary significantly, but
this is comparable to the fluctuations expected from the calibration procedure.
Finally, the third case shows that in the well trained, well calibrated case
that the likelihood curves are all consistent with the exact likelihood within
the estimated uncertainty band of the Gaussian process.
The second diagnostic - ROC curves for a discriminator | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import theano
from scipy.stats import chi2
from itertools import product
np.random.seed(314)
Explanation: Diagnostics for approximate likelihood ratios
Kyle Cranmer, Juan Pavez, Gilles Louppe, March 2016.
This is an extension of the example in Parameterized inference from multidimensional data.
To aid in visualization, we restrict to a 1-dimensional slice of
the likelihood along $\alpha$ with $\beta=-1$. We consider three situations: i)
a poorly trained, but well calibrated classifier; ii) a well trained, but poorly
calibrated classifier; and iii) a well trained, and well calibrated classifier.
For each case, we employ two diagnostic tests. The first checks for independence
of $-2\log\Lambda(\theta)$ with respect to changes in the reference value
$\theta_1$. The second uses a
classifier to distinguish between samples from $p(\mathbf{x}|\theta_0)$ and
samples from $p(\mathbf{x}|\theta_1)$ weighted according to $r(\mathbf{x};
\theta_0, \theta_1)$.
End of explanation
from carl.distributions import Join
from carl.distributions import Mixture
from carl.distributions import Normal
from carl.distributions import Exponential
from carl.distributions import LinearTransform
from sklearn.datasets import make_sparse_spd_matrix
# Parameters
true_A = 1.
A = theano.shared(true_A, name="A")
# Build simulator
R = make_sparse_spd_matrix(5, alpha=0.5, random_state=7)
p0 = LinearTransform(Join(components=[
Normal(mu=A, sigma=1),
Normal(mu=-1, sigma=3),
Mixture(components=[Normal(mu=-2, sigma=1),
Normal(mu=2, sigma=0.5)]),
Exponential(inverse_scale=3.0),
Exponential(inverse_scale=0.5)]), R)
# Define p1 at fixed arbitrary value A=0
p1s = []
p1_params = [(0,-1),(1,-1),(0,1)]
for p1_p in p1_params:
p1s.append(LinearTransform(Join(components=[
Normal(mu=p1_p[0], sigma=1),
Normal(mu=p1_p[1], sigma=3),
Mixture(components=[Normal(mu=-2, sigma=1),
Normal(mu=2, sigma=0.5)]),
Exponential(inverse_scale=3.0),
Exponential(inverse_scale=0.5)]), R))
p1 = p1s[0]
# Draw data
X_true = p0.rvs(500, random_state=314)
Explanation: Create model and generate artificial data
End of explanation
# Minimize the exact LR
from scipy.optimize import minimize
p1 = p1s[2]
def nll_exact(theta, X):
A.set_value(theta[0])
return (p0.nll(X) - p1.nll(X)).sum()
r = minimize(nll_exact, x0=[0], args=(X_true,))
exact_MLE = r.x
print("Exact MLE =", exact_MLE)
# Exact LR
A.set_value(true_A)
nlls = []
As_ = []
bounds = [(true_A - 0.30, true_A + 0.30)]
As = np.linspace(bounds[0][0],bounds[0][1], 100)
nll = [nll_exact([a], X_true) for a in As]
nll = np.array(nll)
nll = 2. * (nll - r.fun)
plt.plot(As, nll)
plt.xlabel(r"$\alpha$")
plt.title(r"$-2 \log L(\theta) / L(\theta_{MLE}))$")
plt.show()
Explanation: Known likelihood setup
End of explanation
# Build classification data
from carl.learning import make_parameterized_classification
bounds = [(-3, 3), (-3, 3)]
clf_parameters = [(1000, 100000), (1000000, 500), (1000000, 100000)]
X = [0]*3*3
y = [0]*3*3
for k,(param,p1) in enumerate(product(clf_parameters,p1s)):
X[k], y[k] = make_parameterized_classification(
p0, p1,
param[0],
[(A, np.linspace(bounds[0][0],bounds[0][1], num=30))],
random_state=0)
# Train parameterized classifier
from carl.learning import as_classifier
from carl.learning import make_parameterized_classification
from carl.learning import ParameterizedClassifier
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
clfs = []
for k, _ in enumerate(product(clf_parameters,p1s)):
clfs.append(ParameterizedClassifier(
make_pipeline(StandardScaler(),
as_classifier(MLPRegressor(learning_rate="adaptive",
hidden_layer_sizes=(40, 40),
tol=1e-6,
random_state=0))),
[A]))
clfs[k].fit(X[k], y[k])
from carl.learning import CalibratedClassifierCV
from carl.ratios import ClassifierRatio
def vectorize(func,n_samples,clf,p1):
def wrapper(X):
v = np.zeros(len(X))
for i, x_i in enumerate(X):
v[i] = func(x_i,n_samples=n_samples,clf=clf, p1=p1)
return v.reshape(-1, 1)
return wrapper
def objective(theta, random_state=0, n_samples=100000, clf=clfs[0],p1=p1s[0]):
# Set parameter values
A.set_value(theta[0])
# Fit ratio
ratio = ClassifierRatio(CalibratedClassifierCV(
base_estimator=clf,
cv="prefit", # keep the pre-trained classifier
method="histogram", bins=50))
X0 = p0.rvs(n_samples=n_samples)
X1 = p1.rvs(n_samples=n_samples, random_state=random_state)
X = np.vstack((X0, X1))
y = np.zeros(len(X))
y[len(X0):] = 1
ratio.fit(X, y)
# Evaluate log-likelihood ratio
r = ratio.predict(X_true, log=True)
value = -np.mean(r[np.isfinite(r)]) # optimization is more stable using mean
# this will need to be rescaled by len(X_true)
return value
Explanation: Likelihood-free setup
Here we create the data to train a parametrized classifier
End of explanation
from GPyOpt.methods import BayesianOptimization
solvers = []
for k, (param, p1) in enumerate(product(clf_parameters,p1s)):
clf = clfs[k]
n_samples = param[1]
bounds = [(-3, 3)]
solvers.append(BayesianOptimization(vectorize(objective,n_samples,clf,p1), bounds))
solvers[k].run_optimization(max_iter=50, true_gradients=False)
approx_MLEs = []
for k, _ in enumerate(product(clf_parameters,p1s)):
solver = solvers[k]
approx_MLE = solver.x_opt
approx_MLEs.append(approx_MLE)
print("Approx. MLE =", approx_MLE)
solver.plot_acquisition()
solver.plot_convergence()
# Minimize the surrogate GP approximate of the approximate LR
rs = []
solver = solvers[0]
for k, _ in enumerate(product(clf_parameters,p1s)):
def gp_objective(theta):
theta = theta.reshape(1, -1)
return solvers[k].model.predict(theta)[0][0]
r = minimize(gp_objective, x0=[0])
rs.append(r)
gp_MLE = r.x
print("GP MLE =", gp_MLE)
#bounds = [(exact_MLE[0] - 0.16, exact_MLE[0] + 0.16)]
approx_ratios = []
gp_ratios = []
gp_std = []
gp_q1 = []
gp_q2 = []
n_points = 30
for k,(param,p1) in enumerate(product(clf_parameters,p1s)):
clf = clfs[k]
n_samples = param[1]
solver = solvers[k]
#As = np.linspace(*bounds[0], 100)
nll_gp, var_gp = solvers[k].model.predict(As.reshape(-1, 1))
nll_gp = 2. * (nll_gp - rs[k].fun) * len(X_true)
gp_ratios.append(nll_gp)
# STD
std_gp = np.sqrt(4*var_gp*len(X_true)*len(X_true))
std_gp[np.isnan(std_gp)] = 0.
gp_std.append(std_gp)
# 95% CI
q1_gp, q2_gp = solvers[k].model.predict_quantiles(As.reshape(-1, 1))
q1_gp = 2. * (q1_gp - rs[k].fun) * len(X_true)
q2_gp = 2. * (q2_gp - rs[k].fun) * len(X_true)
gp_q1.append(q1_gp)
gp_q2.append(q2_gp)
Explanation: Now we use a Bayesian optimization procedure to create a smooth surrogate of the approximate likelihood.
End of explanation
bounds = [(true_A - 0.30, true_A + 0.30)]
for k, _ in enumerate(clf_parameters):
fig = plt.figure()
ax = fig.add_subplot(1,1,1)
ax.plot(As, nll, label="Exact")
#plt.plot(np.linspace(*bounds[0], n_points), nll_approx - , label="approx.")
#plt.plot(np.linspace(bounds[0][0],bounds[0][1], n_points), nll_approx , label="approx.")
ax.plot(As, gp_ratios[3*k], label=r"Approx., $\theta_1=(\alpha=0,\beta=-1)$")
ax.plot(As, gp_ratios[3*k+1], label=r"Approx., $\theta_1=(\alpha=1,\beta=-1)$")
ax.fill_between(As,(gp_ratios[3*k] - gp_std[3*k]).ravel(),(gp_ratios[3*k] + gp_std[3*k]).ravel(),
color='g',alpha=0.2)
ax.plot(As, gp_ratios[3*k+2], label=r"Approx., $\theta_1=(\alpha=0,\beta=1)$")
handles, labels = ax.get_legend_handles_labels()
ax.set_xlabel(r"$\alpha$")
ax.set_ylabel(r"$-2 \log \Lambda(\theta)$")
#plt.legend()
p5 = plt.Rectangle((0, 0), 0.2, 0.2, fc="green",alpha=0.2,edgecolor='none')
handles.insert(4,p5)
labels.insert(4,r"$\pm 1 \sigma$, $\theta_1=(\alpha=0,\beta=-1)$")
handles[1],handles[-2] = handles[-2],handles[1]
labels[1],labels[-2] = labels[-2],labels[1]
ax.legend(handles,labels)
ax.set_ylim(0, 14)
ax.set_xlim(bounds[0][0],bounds[0][1])
plt.savefig('likelihood_comp_{0}.pdf'.format(k))
plt.show()
Explanation: Plots for the first diagnostic: $\theta_1$-independence
End of explanation
from sklearn.metrics import roc_curve, auc
def makeROC(predictions ,targetdata):
fpr, tpr, _ = roc_curve(targetdata.ravel(),predictions.ravel())
roc_auc = auc(fpr, tpr)
return fpr,tpr,roc_auc
# I obtain data from r*p1 by resampling data from p1 using r as weights
def weight_data(x0,x1,weights):
x1_len = x1.shape[0]
weights = weights / weights.sum()
weighted_data = np.random.choice(range(x1_len), x1_len, p = weights)
w_x1 = x1.copy()[weighted_data]
y = np.zeros(x1_len * 2)
x_all = np.vstack((w_x1,x0))
y_all = np.zeros(x1_len * 2)
y_all[x1_len:] = 1
return (x_all,y_all)
p1 = p1s[0]
X0_roc = p0.rvs(500000,random_state=777)
X1_roc = p1.rvs(500000,random_state=777)
# Roc curve comparison for p0 - r*p1
for k,param in enumerate(clf_parameters):
#fig.add_subplot(3,2,(k+1)*2)
clf = clfs[3*k]
n_samples = param[1]
X0 = p0.rvs(n_samples)
X1 = p1.rvs(n_samples,random_state=777)
X_len = X1.shape[0]
ratio = ClassifierRatio(CalibratedClassifierCV(
base_estimator=clf,
cv="prefit", # keep the pre-trained classifier
method="histogram", bins=50))
X = np.vstack((X0, X1))
y = np.zeros(len(X))
y[len(X0):] = 1
ratio.fit(X, y)
# Weighted with true ratios
true_r = p0.pdf(X1_roc) / p1.pdf(X1_roc)
true_r[np.isinf(true_r)] = 0.
true_weighted = weight_data(X0_roc,X1_roc,true_r)
# Weighted with approximated ratios
app_r = ratio.predict(X1_roc,log=False)
app_r[np.isinf(app_r)] = 0.
app_weighted = weight_data(X0_roc,X1_roc,app_r)
clf_true = MLPRegressor(tol=1e-05, activation="logistic",
hidden_layer_sizes=(10, 10), learning_rate_init=1e-07,
learning_rate="constant", algorithm="l-bfgs", random_state=1,
max_iter=75)
clf_true.fit(true_weighted[0],true_weighted[1])
predicted_true = clf_true.predict(true_weighted[0])
fpr_t,tpr_t,roc_auc_t = makeROC(predicted_true, true_weighted[1])
plt.plot(fpr_t, tpr_t, label=r"$p(x|\theta_1)r(x|\theta_0,\theta_1)$ exact (AUC = %0.2f)" % roc_auc_t)
clf_true.fit(np.vstack((X0_roc,X1_roc)),true_weighted[1])
predicted_true = clf_true.predict(np.vstack((X0_roc,X1_roc)))
fpr_f,tpr_f,roc_auc_f = makeROC(predicted_true, true_weighted[1])
plt.plot(fpr_f, tpr_f, label=r"$p(x|\theta_1)$ no weights (AUC = %0.2f)" % roc_auc_f)
clf_true.fit(app_weighted[0],app_weighted[1])
predicted_true = clf_true.predict(app_weighted[0])
fpr_a,tpr_a,roc_auc_a = makeROC(predicted_true, app_weighted[1])
plt.plot(fpr_a, tpr_a, label=r"$p(x|\theta_1)r(x|\theta_0,\theta_1)$ approx. (AUC = %0.2f)" % roc_auc_a)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend(loc="lower right")
plt.tight_layout()
plt.savefig('ROC_comp{0}.pdf'.format(k))
plt.show()
#plt.tight_layout()
#plt.savefig('all_comp.pdf'.format(k))
Explanation: We show the posterior mean of a Gaussian process resulting
from Bayesian optimization of the raw approximate likelihood. In addition, the standard deviation of the Gaussian process
is shown for one of the $\theta_1$ reference points to indicate the size of
these statistical fluctuations. It is clear that in the well calibrated cases
that these fluctuations are small, while in the poorly calibrated case these
fluctuations are large. Moreover, in the first case we see that in the poorly
trained, well calibrated case the classifier $\hat{s}(\mathbf{x}; \theta_0,
\theta_1)$ has a significant dependence on the $\theta_1$ reference point. In
contrast, in the second case the likelihood curves vary significantly, but
this is comparable to the fluctuations expected from the calibration procedure.
Finally, the third case shows that in the well trained, well calibrated case
that the likelihood curves are all consistent with the exact likelihood within
the estimated uncertainty band of the Gaussian process.
The second diagnostic - ROC curves for a discriminator
End of explanation |
2,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plots the NINO Sea Surface Temperature indices (data from the Bureau of Meteorology) and the real-time Southern Oscillation Index (SOI) from LongPaddock
Nicolas Fauchereau
Step1: set up proxies here if needed
Step2: path where the figures will be saved
Step3: Get the SOI, set the datetime index
Step4: calculates 30 days and 90 days rolling averages
Step5: set up the matplotlib parameters for plotting
Step6: plots the Southern Oscillation Index
Step7: Plots the NINO SST Indices | Python Code:
%matplotlib inline
import os, sys
import pandas as pd
from datetime import datetime, timedelta
from cStringIO import StringIO
import requests
import matplotlib as mpl
from matplotlib import pyplot as plt
from IPython.display import Image
Explanation: Plots the NINO Sea Surface Temperature indices (data from the Bureau of Meteorology) and the real-time Southern Oscillation Index (SOI) from LongPaddock
Nicolas Fauchereau
End of explanation
proxies = {}
# proxies['http'] = 'url:port'
Explanation: set up proxies here if needed
End of explanation
dpath = os.path.join(os.environ['HOME'], 'operational/ICU/indices/figures')
today = datetime.utcnow() - timedelta(15)
Explanation: path where the figures will be saved
End of explanation
url = 'http://www.longpaddock.qld.gov.au/seasonalclimateoutlook/southernoscillationindex/soidatafiles/DailySOI1933-1992Base.txt'
r = requests.get(url, proxies=proxies)
soi = pd.read_table(StringIO(r.content), sep='\s*', engine='python')
index = [datetime(year,1,1) + timedelta(day-1) for year, day in soi.loc[:,['Year','Day']].values]
soi.index = index
soi = soi.loc[:,['SOI']]
soi.head()
Explanation: Get the SOI, set the datetime index
End of explanation
soi['soirm1'] = pd.rolling_mean(soi.SOI, 30)
soi['soirm3'] = pd.rolling_mean(soi.SOI, 90)
soi = soi.ix['2013':]
soi.tail()
Explanation: calculates 30 days and 90 days rolling averages
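For reference, pd.rolling_mean and .ix have since been removed from pandas; an equivalent of the cell above with the current API (assuming the same DataFrame) would be:
soi['soirm1'] = soi.SOI.rolling(30).mean()
soi['soirm3'] = soi.SOI.rolling(90).mean()
soi = soi.loc['2013':]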
End of explanation
from matplotlib.dates import YearLocator, MonthLocator, DateFormatter
years = YearLocator()
months = MonthLocator()
mFMT = DateFormatter('%b')
yFMT = DateFormatter('\n\n%Y')
mpl.rcParams['xtick.labelsize'] = 12
mpl.rcParams['ytick.labelsize'] = 12
mpl.rcParams['axes.titlesize'] = 14
mpl.rcParams['xtick.direction'] = 'out'
mpl.rcParams['ytick.direction'] = 'out'
mpl.rcParams['xtick.major.size'] = 5
mpl.rcParams['ytick.major.size'] = 5
mpl.rcParams['xtick.minor.size'] = 2
Explanation: set up the matplotlib parameters for plotting
End of explanation
f, ax = plt.subplots(figsize=(10,8))
ax.fill_between(soi.index, soi.soirm1, 0, (soi.soirm1 >= 0), color='b', alpha=0.7, interpolate=True)
ax.fill_between(soi.index, soi.soirm1, 0, (soi.soirm1 < 0), color='r', alpha=0.7, interpolate=True)
ax.plot(soi.index, soi.soirm1, c='k')
ax.plot(soi.index, soi.soirm3, c='w', lw=2.5)
ax.plot(soi.index, soi.soirm3, c='0.5', lw=2, label='90 days running mean')
ax.legend()
ax.axhline(0, color='k')
ax.set_ylim(-30,30)
ax.grid(linestyle='--')
ax.xaxis.set_minor_locator(months)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_minor_formatter(mFMT)
ax.xaxis.set_major_formatter(yFMT)
[label.set_fontsize(13) for label in ax.get_xminorticklabels()]
[label.set_rotation(90) for label in ax.get_xminorticklabels()]
[label.set_fontsize(18) for label in ax.get_xmajorticklabels()]
[label.set_fontsize(18) for label in ax.get_ymajorticklabels()]
ax.set_title("Southern Oscillation Index (30 days running mean)\
\n ending {0:}: latest 30 days (90 days) value = {1:<4.2f} ({2:<4.2f})".\
format(soi.index[-1].strftime("%Y-%m-%d"), soi.iloc[-1,1], soi.iloc[-1,2]))
f.savefig(os.path.join(dpath, 'SOI_LP_realtime_plot.png'), dpi=200)
Explanation: plots the Southern Oscillation Index
End of explanation
Image(url='http://www1.ncdc.noaa.gov/pub/data/cmb/teleconnections/nino-regions.gif')
for nino in ["3.4", "3", "4"]:
print("processing NINO{}".format(nino))
url = "http://www.bom.gov.au/climate/enso/nino_%s.txt" % (nino)
r = requests.get(url, proxies=proxies)
data = pd.read_table(StringIO(r.content), sep=',', header=None, index_col=1, \
parse_dates=True, names=['iDate','SST'])
data = data.ix['2013':]
lastmonth = data.loc[today.strftime("%Y-%m"),'SST'].mean()
f, ax = plt.subplots(figsize=(10, 8))
ax.fill_between(data.index, data.SST, 0, (data.SST >= 0), color='r', alpha=0.7, interpolate=True)
ax.fill_between(data.index, data.SST, 0, (data.SST < 0), color='b', alpha=0.7, interpolate=True)
ax.plot(data.index, data.SST, c='k')
ax.axhline(0, color='k')
ax.set_ylim(-2,2)
ax.grid(linestyle='--')
ax.xaxis.set_minor_locator(months)
ax.xaxis.set_major_locator(years)
ax.xaxis.set_minor_formatter(mFMT)
ax.xaxis.set_major_formatter(yFMT)
[label.set_fontsize(13) for label in ax.get_xminorticklabels()]
[label.set_rotation(90) for label in ax.get_xminorticklabels()]
[label.set_fontsize(18) for label in ax.get_xmajorticklabels()]
[label.set_fontsize(18) for label in ax.get_ymajorticklabels()]
ax.set_title("NINO {} (weekly) ending {}\nlatest weekly / monthly values: {:<4.2f} / {:<4.2f}"\
.format(nino, data.index[-1].strftime("%Y-%B-%d"), data.iloc[-1,-1], lastmonth))
f.savefig(os.path.join(dpath, 'NINO{}_realtime_plot.png'.format(nino)))
Explanation: Plots the NINO SST Indices
End of explanation |
2,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
De connectie met Lizard is gemaakt en bovenstaand zijn alle beschikbare endpoints.
Nu gaan we de metadata verzamelen van de timeseries met uuid 867b166a-fa39-457d-a9e9-4bcb2ff04f61
Step1: Download van timeseries met uuid 867b166a-fa39-457d-a9e9-4bcb2ff04f61 van 31 december 1999 tot 14 februari 2018.
Dit is de metadata
Step2: En dit is de data
Step3: Nu kunnen we de andere tijdseries van de zelfde locatie opzoeken op basis van de metadata. De neerslag, verdamping en windsnelheid zijn allen in meer of mindere mate gecorreleerd voor een en de zelfde locatie. We gaan hierna die data in een datafame zetten.
Step4: De data is van verschillende lengte, dat is niet te correleren. Resamplen naar maanden hadden we al gedaan uit Lizard. Daarbij kregen we per periode de min en max terug. Die gaan we correleren. We laten zien dat de tijdseries nu allen de zelfde lengte hadden. We hadden dit ook kunnen doen door eerst naar de metadata te kijken en dan pas de data op te halen.
Step5: De lengte is nu inderdaad voor allemaal hetzelfde. Nu selecteren we de max van de verschillende tijdseries
Step6: Vervolgens is het eenvoudig de correlatie te berekenen | Python Code:
result = cli.timeseries.get(uuid="867b166a-fa39-457d-a9e9-4bcb2ff04f61")
result.metadata
Explanation: The connection to Lizard has been made and all available endpoints are listed above.
Next we collect the metadata of the time series with uuid 867b166a-fa39-457d-a9e9-4bcb2ff04f61:
End of explanation
queryparams = {
"end":1518631200000,
"start":946681200000,
"window":"month"
}
result = cli.timeseries.get(uuid="867b166a-fa39-457d-a9e9-4bcb2ff04f61", **queryparams)
result.metadata
Explanation: Download of the time series with uuid 867b166a-fa39-457d-a9e9-4bcb2ff04f61 from 31 December 1999 to 14 February 2018.
This is the metadata:
End of explanation
result.data[0]
Explanation: And this is the data:
End of explanation
location__uuid = result.metadata['location__uuid'][0]
# a scientific result can also be split into metadata and events:
metadata_multiple, events_multiple = cli.timeseries.get(location__uuid=location__uuid, **queryparams)
columns = [x for x in metadata_multiple.columns if "observation_type" in x or "uuid" in x]
metadata_multiple[columns]
Explanation: Now we can look up the other time series of the same location based on the metadata. Precipitation, evaporation and wind speed are all correlated to a greater or lesser extent for one and the same location. Next we will put that data into a dataframe.
End of explanation
indexed_events = [e.set_index('timestamp') for e in events_multiple if 'timestamp' in e.columns]
first = max([indexed.index[0] for indexed in indexed_events])
last = min([indexed.index[-1] for indexed in indexed_events])
print(first, last)
indexed_events_ranged = [e[first:last] for e in indexed_events]
[e.shape for e in indexed_events_ranged]
Explanation: The series have different lengths, so they cannot be correlated as-is. We had already resampled to months in Lizard, which returned the min and max per period. Those are what we will correlate. We show that the time series now all have the same length. We could also have done this by first looking at the metadata and only then fetching the data.
End of explanation
observation_types = metadata_multiple['observation_type__parameter'].tolist()
max_weerdata = pd.DataFrame({observation_type: events['max'] for observation_type, events in zip(observation_types, indexed_events_ranged)})
max_weerdata
Explanation: The lengths are indeed now the same for all of them. Next we select the max of the different time series:
End of explanation
max_weerdata.corr()
Explanation: It is then straightforward to compute the correlation:
End of explanation |
2,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thermal equilibrium of a single particle
In a large ensemble of identical systems each member will have a different state due to thermal fluctuations, even if all the systems were initialised in the same initial state.
As we integrate the dynamics of the ensemble we will have a distribution of states (i.e. the states of each member of the system). However, as the ensemble evolves, the distribution over the states eventually reaches a stationary distribution
Step1: We use the quadrature rule to numerically evaluate the partition function $Z$.
Step2: Energy landscape
We can plot the energy landscape (energy as a function of the system variables)
Step3: We observe that
Step4: What does this mean? If we had an ensemble of single particles, the distribution of the states of those particles varies greatly depending on $\sigma$. Remember we can decrease $\sigma$ by reducing the anisotropy strength or particle size or by increasing the temperature.
- When $\sigma$ is high, most of the particles in the ensemble will be found closely aligned with the anisotropy axis.
- When $\sigma$ is low, the states of the particles are more evenly distributed.
Magpy equilibrium
Using Magpy, we can simulate the dynamics of the state of a single nanoparticle. If we simulate a large ensemble of these systems for 'long enough', the distribution of states will reach equilibrium. If Magpy is implemented correctly, we should recover the analytical distribution from above.
Set up the model
Select the parameters for the single particle
Step5: Create an ensemble
From the single particle we create an ensemble of 10,000 identical particles.
Step6: Simulate
Now we simulate! We don't need to simulate for very long because $\sigma$ is very high and the system will reach equilibrium quickly.
Step7: Check that we have equilibriated
Step8: We can see that the system has reached a local minima. We could let the simulation run until the ensemble relaxes into both minima but it would take a very long time because the energy barrier is so high in this example.
Compute theta
The results of the simulation are x,y,z coordinates of the magnetisation of each particle in the ensemble. We need to convert these into angles.
Step9: Compare to analytical solution
Now we compare our empirical distribution of states to the analytical distribution that we computed above. | Python Code:
import numpy as np
# anisotropy energy of the system
def anisotropy_e(theta, sigma):
return -sigma*np.cos(theta)**2
# numerator of the Boltzmann distribution
# (i.e. without the partition function Z)
def p_unorm(theta, sigma):
return np.sin(theta)*np.exp(-anisotropy_e(theta, sigma))
Explanation: Thermal equilibrium of a single particle
In a large ensemble of identical systems each member will have a different state due to thermal fluctuations, even if all the systems were initialised in the same initial state.
As we integrate the dynamics of the ensemble we will have a distribution of states (i.e. the states of each member of the system). However, as the ensemble evolves, the distribution over the states eventually reaches a stationary distribution: the Boltzmann distribution. Even though the state of each member in the ensemble contitues to fluctuate, the ensemble as a whole is in a stastistical equilibrium (thermal equilibrium).
For an ensemble of single particles, we can compute the Boltzmann distribution by hand. In this example, we compare the analytical solution with the result of simulating an ensemble with Magpy.
Problem setup
A single particle has a uniaxial anisotropy axis $K$ and a magnetic moment of three components (x,y,z components). The angle $\theta$ is the angle between the magnetic moment and the anisotropy axis.
Boltzmann distribution
The Boltzmann distribution represents of states over the ensemble; here the state is the solid angle $\phi=\sin(\theta)$ (i.e. the distribution over the surface of the sphere). The distribution is parameterised by the temperature of the system and the energy landscape of the problem.
$$p(\theta) = \frac{\sin(\theta)e^{-E(\theta)/(K_BT)}}{Z}$$
where $Z$ is called the partition function:
$$Z=\int_\theta \sin(\theta)e^{-E(\theta)/(K_BT)}\mathrm{d}\theta$$
Stoner-Wohlfarth model
The energy function for a single domain magnetic nanoparticle is given by the Stoner-Wohlfarth equation:
$$\frac{E\left(\theta\right)}{K_BT}=-\sigma\cos^2\theta$$
where $\sigma$ is called the normalised anisotropy strength:
$$\sigma=\frac{KV}{K_BT}$$
Functions for analytic solution
End of explanation
from scipy.integrate import quad
# The analytic Boltzmann distribution
def boltzmann(thetas, sigma):
Z = quad(lambda t: p_unorm(t, sigma), 0, thetas[-1])[0]
distribution = np.array([
p_unorm(t, sigma) / Z for t in thetas
])
return distribution
Explanation: We use the quadrature rule to numerically evaluate the partition function $Z$.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
thetas = np.linspace(0, np.pi, 1000)
sigmas = [1, 3, 5, 7]
e_landscape = [anisotropy_e(thetas, s) for s in sigmas]
for s, e in zip(sigmas, e_landscape):
plt.plot(thetas, e, label='$\sigma={}$'.format(s))
plt.legend(); plt.xlabel('Angle (radians)'); plt.ylabel('Energy')
plt.title('Energy landscape for a single particle');
Explanation: Energy landscape
We can plot the energy landscape (energy as a function of the system variables)
End of explanation
p_dist = [boltzmann(thetas, s) for s in sigmas]
for s, p in zip(sigmas, p_dist):
plt.plot(thetas, p, label='$\sigma={}$'.format(s))
plt.legend(); plt.xlabel('Angle (radians)')
plt.ylabel('Probability of angle')
plt.title('Probability distribution of angle');
Explanation: We observe that:
- The energy of the system has two minima: one alongside each direction of the anisotropy axis.
- The minima are separated by a maxima: perpendicular to the anisotropy axis.
- Stronger anisotropy increases the size of the energy barrier between the two minima.
Equilibrium distribution (Boltzmann)
We can also plot the equilibrium distribution of the system, which is the probability distribution over the system states in a large ensemble of systems.
End of explanation
import magpy as mp
# These parameters will determine the distribution
K = 1e5
r = 7e-9
T = 300
kdir = [0., 0., 1.]
# These parameters affect the dynamics but
# have no effect on the equilibrium
Ms = 400e3
location = [0., 0., 0.]
alpha=1.0
initial_direction = [0., 0., 1.]
# Normalised anisotropy strength KV/KB/T
V = 4./3 * np.pi * r**3
kb = mp.core.get_KB()
sigma = K * V / kb / T
print(sigma)
import magpy as mp
single_particle = mp.Model(
anisotropy=[K],
anisotropy_axis=[kdir],
damping=alpha,
location=[location],
magnetisation=Ms,
magnetisation_direction=[initial_direction],
radius=[r],
temperature=T
)
Explanation: What does this mean? If we had an ensemble of single particles, the distribution of the states of those particles varies greatly depending on $\sigma$. Remember we can decrease $\sigma$ by reducing the anisotropy strength or particle size or by increasing the temperature.
- When $\sigma$ is high, most of the particles in the ensemble will be found closely aligned with the anisotropy axis.
- When $\sigma$ is low, the states of the particles are more evenly distributed.
Magpy equilibrium
Using Magpy, we can simulate the dynamics of the state of a single nanoparticle. If we simulate a large ensemble of these systems for 'long enough', the distribution of states will reach equilibrium. If Magpy is implemented correctly, we should recover the analytical distribution from above.
Set up the model
Select the parameters for the single particle
End of explanation
particle_ensemble = mp.EnsembleModel(
base_model=single_particle, N=10000
)
Explanation: Create an ensemble
From the single particle we create an ensemble of 10,000 identical particles.
End of explanation
res = particle_ensemble.simulate(
end_time=1e-9, time_step=1e-12, max_samples=50,
random_state=1001, implicit_solve=True
)
Explanation: Simulate
Now we simulate! We don't need to simulate for very long because $\sigma$ is very high and the system will reach equilibrium quickly.
End of explanation
plt.plot(res.time, res.ensemble_magnetisation())
plt.title('10,000 single particles - ensemble magnetisation')
plt.xlabel('Time'); plt.ylabel('Magnetisation');
Explanation: Check that we have equilibrated
End of explanation
M_z = np.array([state['z'][0] for state in res.final_state()])
m_z = M_z / Ms
simulated_thetas = np.arccos(m_z)
Explanation: We can see that the system has reached a local minimum. We could let the simulation run until the ensemble relaxes into both minima, but it would take a very long time because the energy barrier is so high in this example.
Compute theta
The results of the simulation are x,y,z coordinates of the magnetisation of each particle in the ensemble. We need to convert these into angles.
End of explanation
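A small optional check (a sketch, not part of the original analysis): the reduced z-components should all lie in $[-1, 1]$ if the magnetisation magnitude really stayed at Ms, otherwise np.arccos would produce NaNs.
print('m_z range: [{:.4f}, {:.4f}]'.format(m_z.min(), m_z.max()))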
theta_grid = np.linspace(0.0, simulated_thetas.max(), 100)
analytical_probability = boltzmann(theta_grid, sigma)
plt.hist(simulated_thetas, normed=True, bins=80, label='Simulated');
plt.plot(theta_grid, analytical_probability, label='Analytical')
plt.title('Simulated and analytical distributions')
plt.xlabel('Angle (radians)'); plt.ylabel('Probability of angle');
Explanation: Compare to analytical solution
Now we compare our empirical distribution of states to the analytical distribution that we computed above.
End of explanation |
2,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cccr-iitm', 'iitm-esm', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CCCR-IITM
Source ID: IITM-ESM
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:48
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
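Purely for illustration, a filled-in call would look like the line below; the values are placeholders (hence commented out), not real author details:
# DOC.set_author("Jane Doe", "jane.doe@example.org")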
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
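Illustrative only: for a 1.N property, several selections would presumably be recorded by repeating the call once per value. The entries below are hypothetical examples taken from the valid choices listed above (and therefore left commented out), not the actual prognostic variables of this model:
# DOC.set_value("Sea ice concentration")
# DOC.set_value("Sea ice thickness")
# DOC.set_value("Sea ice temperature")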
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
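If a constant freezing point were used, the call would take a single float. The figure below is only a typical seawater freezing point in deg C, not this model's setting, so it is left commented out:
# DOC.set_value(-1.8)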
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
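For BOOLEAN properties the value is passed unquoted. The choice below is a placeholder, not a statement about this model:
# DOC.set_value(False)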
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but there is an assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
2,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 2
Step1: Evaluating a Heptagon Number
We are almost ready to look at Python code for drawing these figures. The last step is to provide a mapping from heptagon numbers to real numbers, since any drawing library requires us to provide points as $(x,y)$ pairs, where $x$ and $y$ are floating-point numbers. This is easy, of course
Step2: Python Code for Drawing
Although we have not yet discussed how we will accommodate the skew coordinate frame we've used, let's take a look at the Python code to render the skewed heptagon shown earlier. I won't go over the details of the matplotlib drawing library, since you can explore the online documentation. The thing to note is the drawPolygon function, which takes an array of heptagon number pairs as vertices, then renders them and draws a path connecting them.
Step3: Correcting the Skew
Now all that remains is to straighten up our heptagon by applying a shear transformation. This is simpler than it sounds, using the most basic technique of linear algebra, a change of basis.
The trick is to represent points as column vectors
# load the definitions from the previous notebook
%run HeptagonNumbers.py
# represent points or vertices as pairs of heptagon numbers
p0 = ( zero, zero )
p1 = ( sigma, zero )
p2 = ( sigma+1, rho )
p3 = ( sigma, rho*sigma )
p4 = ( zero, sigma*sigma )
p5 = ( -rho, rho*sigma )
p6 = ( -rho, rho )
heptagon = [ p0, p1, p2, p3, p4, p5, p6 ]
heptagram_rho = [ p0, p2, p4, p6, p1, p3, p5 ]
heptagram_sigma = [ p0, p3, p6, p2, p5, p1, p4 ]
Explanation: Part 2: Drawing the Heptagon
The heptagon numbers we defined in Part 1 are obviously useful for defining lengths of lines in a figure like the one below, since all of those line lengths are related to the $\rho$ and $\sigma$ ratios. But will heptagon numbers suffice for the coordinates we must give to a graphics library, to draw line segments on the screen? The answer turns out to be yes, though we'll need one little trick to make it work.
<img src="heptagonSampler.png",width=1100,height=600 />
Finding the Heptagon Vertices
To start with, we will try to construct coordinates for the seven vertices of the heptagon. We will label the points as in the figure below, $P0$ through $P6$.
<img src="heptagonVertices.png" width=500 height=500 />
For convenience, we can say that $P0$ is the origin of our coordinate system, so it will have coordinates $(0,0)$. If the heptagon edge length is one, then the coordinates of point $P1$ are clearly $(1,0)$. But now what? We can use the Pythogorean theorem to find the coordinates of point $P4$, but we end up with $(1/2,a)$, where
$$a = \sqrt{\sigma^2 - \frac{1}{4}}
= \sqrt{\frac{3}{4} + \rho + \sigma} $$
This is annoying, since we have no way to take the square root of a heptagon number! Fortunately, there is an easier way.
Suppose we abandon our usual Cartesian coordinate frame, and use one that works a little more naturally for us? We can use a different frame of reference, one where the "y" axis is defined as the line passing through points $P0$ and $P4$. We can then model all the heptagon vertices quite naturally, and adjust for the modified frame of reference when we get to the point of drawing.
For the sake of further convenience, let us scale everything up by a factor of $\sigma$. This makes it quite easy to write down the coordinates of all the points marked on the diagram above. Notice the three points I have included in the interior of the heptagon. Those points divide the horizontal and vertical diagonals into $\rho:\sigma:1$ and $\rho:\sigma$ proportions. So now we have:
|point|coordinates|
|-----|-----------|
|$P0$|$(0,0)$|
|$P1$|$(\sigma,0)$|
|$P2$|$(1+\sigma,\rho)$|
|$P3$|$(\sigma,\rho+\sigma)$|
|$P4$|$(0,1+\rho+\sigma)$|
|$P5$|$(-\rho,\rho+\sigma)$|
|$P6$|$(-\rho,\rho)$|
If we render our heptagon and heptagrams with these coordinates, ignoring the fact that we used an unusual coordinate frame, we get a skewed heptagon:
<img src="skewHeptagon.png" width=500 height=500 />
This figure is certainly not regular in any usual sense, since
the edge lengths and angles vary, but we could refer to it as an affine regular heptagon (and associated heptagrams). In linear algebra, an affine transformation is one that preserves parallel lines and ratios along lines. Those are exactly the properties that we took advantage of in capturing our coordinates.
Although we have not yet discussed how we will accommodate the skew coordinate frame we've used, let's take a look at the Python code to capture the vertices used above. Note that a point is represented simply as a pair of heptagon numbers, using Python's tuple notation.
End of explanation
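One small cross-check is worth doing here (a sketch only, relying on the float() conversion used by the render function defined below): the table writes $P4$ as $(0, 1+\rho+\sigma)$ while p4 above uses sigma*sigma for its second coordinate, so the two agree only if $\sigma^2 = 1 + \rho + \sigma$.
print "sigma^2 = " + str( float(sigma*sigma) )
print "1 + rho + sigma = " + str( 1 + float(rho) + float(sigma) )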
print "sigma = " + str( HeptagonNumber.sigma_real )
print "rho = " + str( HeptagonNumber.rho_real )
# This function maps from a pair of heptagon numbers to
# a pair of floating point numbers (approximating real numbers)
def render( v ):
x, y = v
return [ float(x), float(y) ]
Explanation: Evaluating a Heptagon Number
We are almost ready to look at Python code for drawing these figures. The last step is to provide a mapping from heptagon numbers to real numbers, since any drawing library requires us to provide points as $(x,y)$ pairs, where $x$ and $y$ are floating-point numbers. This is easy, of course: we just evaluate the expression $a+b\rho+c\sigma$, using predefined values for $\rho$ and $\sigma$. But what are those values?
We can easily derive them from a tiny bit of trigonometry, looking at our heptagon-plus-heptagrams figure again.
<img src="findingConstants.png" width=500 height=500 />
I have rotated the heptagon a bit, so we see angles presented in the traditional way, with zero radians corresponding to the X axis, to the right, and considering angles around the point $A$. We can find $\sigma$ using the right triangle $\triangle ABC$. If our heptagon has edge length of one, then line segment $AB$ has length $\sigma$, by definition, and line segment $BC$ has length $1/2$. Remembering the formula for the sine function (opposite over hypotenuse), we can see that
$$\sin \angle CAB = \frac{1/2}{\sigma} = \frac{1}{2\sigma}$$
which means that
$$\sigma = \frac{1}{2\sin \angle CAB}$$
Now we just need to know what angle $\angle CAB$ is. Here, you can have some fun by convincing yourself that $\angle CAB$ is equal to $\pi/14$ radians. As a hint, first use similar triangles to show that all those narrow angles at the heptagon vertices are equal to $\pi/7$. In any case, we have our value for $\sigma$:
$$\sigma = \frac{1}{2\sin{\frac{\pi}{14}}} $$
Computing $\rho$ is just as easy. It is convenient to use triangle $\triangle ADE$, with the following results:
$$\frac{\rho}{2} = \sin{\angle EAD} = \sin{\frac{5\pi}{14}}$$
$$\rho = 2\sin{\frac{5\pi}{14}}$$
These values for $\rho$ and $\sigma$ were already captured as constants in Part 1. Here we will just print out the values, and define a rendering function that produces a vector of floats from a vector of heptagon numbers.
End of explanation
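As a quick numeric cross-check of the two closed forms just derived (a sketch; the import is included in case math is not already in scope), these should reproduce the values of sigma and rho printed above:
import math
print "1/(2 sin(pi/14)) = " + str( 1.0 / ( 2.0 * math.sin( math.pi / 14 ) ) )
print "2 sin(5 pi/14)   = " + str( 2.0 * math.sin( 5 * math.pi / 14 ) )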
%pylab inline
import matplotlib.pyplot as plt
import matplotlib.path as mpath
import matplotlib.patches as mpatches
Path = mpath.Path
def drawPolygon( polygonVerts, color, mapping=render ):
n = len( polygonVerts )
codes = [ Path.MOVETO ]
verts = []
verts .append( mapping( polygonVerts[ 0 ] ) )
for i in range(1,n+1):
codes.append ( Path.LINETO )
verts.append ( mapping( polygonVerts[ i % n ] ) )
path = mpath.Path( verts, codes )
return mpatches.PathPatch( path, facecolor='none', edgecolor=color )
def drawHeptagrams( mapping=render ):
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
ax.add_patch( drawPolygon( heptagon,'#dd0000', mapping ) )
ax.add_patch( drawPolygon( heptagram_rho,'#990099', mapping ) )
ax.add_patch( drawPolygon( heptagram_sigma,'#0000dd', mapping ) )
ax.set_xlim(-3,4)
ax.set_ylim(-1,6)
drawHeptagrams()
Explanation: Python Code for Drawing
Although we have not yet discussed how we will accommodate the skew coordinate frame we've used, let's take a look at the Python code to render the skewed heptagon shown earlier. I won't go over the details of the matplotlib drawing library, since you can explore the online documentation. The thing to note is the drawPolygon function, which takes an array of heptagon number pairs as vertices, then renders them and draws a path connecting them.
End of explanation
def skewRender(v):
x = float( v[0] )
y = float( v[1] )
x = x + y/(2*HeptagonNumber.sigma_real)
y = math.sin( (3.0/7.0) * math.pi ) * y
return [ x, y ]
drawHeptagrams( skewRender )
Explanation: Correcting the Skew
Now all that remains is to straighten up our heptagon by applying a shear transformation. This is simpler than it sounds, using the most basic technique of linear algebra, a change of basis.
The trick is to represent points as column vectors:
$$(x,y) \Rightarrow \begin{bmatrix}x \\ y\end{bmatrix}$$
Now, consider the vectors corresponding to the natural basis vectors we used to construct the heptagon vertices:
$$ \begin{bmatrix}1 \\ 0\end{bmatrix}, \begin{bmatrix}0 \\ 1\end{bmatrix} $$
What should our transformation do to these two vectors? We don't need any change in the X-axis direction, so we'll leave that one alone.
$$ \begin{bmatrix}1 \\ 0\end{bmatrix} \Rightarrow \begin{bmatrix}1 \\ 0\end{bmatrix} $$
For the Y-axis, we must determine what point in a traditional Cartesian plane corresponds to our "vertical" basis vector. We can find that by applying a bit of trigonometry, as we did when deriving $\rho$ and $\sigma$ earlier. The result is:
$$ \begin{bmatrix}0 \\ 1\end{bmatrix} \Rightarrow \begin{bmatrix} \frac{1}{2\sigma} \\ \sin\frac{3}{7}\pi\end{bmatrix} $$
It turns out that specifying those two transformations is equivalent to specifying the transformation of any initial vector. We use our transformed basis vectors as the columns of a matrix, and write:
$$\begin{bmatrix}x' \\ y'\end{bmatrix} = \begin{bmatrix}1 & \frac{1}{2\sigma} \\ 0 & \sin\frac{3}{7}\pi\end{bmatrix}
\begin{bmatrix}x \\ y\end{bmatrix}$$
Again, this is simpler than it looks. It is just a way of writing one equation that represents two equations:
$$ x' = x + y \frac{1}{2\sigma} $$
$$ y' = y \sin\frac{3}{7}\pi $$
And now we finally have something that we can translate directly into Python, to render our heptagon and heptagrams with full symmetry. Since we defined drawHeptagrams to accept a rendering function already, we can just call it again with the new function.
End of explanation |
2,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scores
First let's take a look at the ratings users can give to images. This is just for warming up since this is a feature that's not so much used on Danbooru.
Step1: Status
Now we take a look at the status of the posts since we'll most likely not use deleted or banned posts. We have the following categories
Step2: Wow, there are a lot of deleted posts. I think I need to check them out manually and their reason for deletion.
But first create a view filtered_posts where flagged posts are removed. Furthermore remove children and remove posts where the aspect ration is not in 1
Step3: How much R18 content is there on danbooru?
There are three ratings
Step4: How much tags
This is the first interesting thing.
Step5: Tags in general
Step6: Very interesting! Even in log10 space these tags are not really linear distributed. Let's see if we can solve this with preprocessing or if there are some fancy tricks like weighting to take these differences in the distribution into account.
Step7: Artists
Step8: Popular anime characters
Step9: Popular series
Last but not least.
Step10: Filter out some more tags
There are some tags that cannot be easily inferred from just looking at the images, that do not correspond to the use cases I have in mind, or that would lead to training on useless features. Better to either remove such posts or just remove these tags.
I looked through the first 500 tags from the list above and took note of the following ones | Python Code:
scores = %sql SELECT score, COUNT(*) FROM posts GROUP BY score ORDER BY score DESC
worst_post = %sql SELECT id FROM posts WHERE score = (SELECT MIN(score) FROM posts)
best_post = %sql SELECT id FROM posts WHERE score = (SELECT MAX(score) FROM posts)
pd_count = scores.DataFrame()["count"]
pd_count.index = scores.DataFrame()["score"]
pd_count.plot(logy=True)
plt.ylabel("count")
plt.title("Post score distribution")
Explanation: Scores
First let's take a look at the ratings users can give to images. This is just for warming up since this is a feature that's not so much used on Danbooru.
End of explanation
pending_posts = %sql SELECT id FROM posts WHERE is_pending = true
flagged_posts = %sql SELECT id FROM posts WHERE is_flagged = true
deleted_posts = %sql SELECT id FROM posts WHERE is_deleted = true
banned_posts = %sql SELECT id FROM posts WHERE is_banned = true
def single_column_to_set(column):
return set(idx[0] for idx in column)
posts_with_status = map(single_column_to_set, (pending_posts, flagged_posts, deleted_posts, banned_posts))
posts_with_status = list(posts_with_status)
post_flag_count = np.array(list(map(len, posts_with_status)))
post_flag_count_log = post_flag_count.copy()
post_flag_count_log[post_flag_count == 0] = 1
post_flag_count_log = np.log10(post_flag_count_log)
rects = plt.bar(range(4), post_flag_count_log)
plt.xticks(range(4), ["pending", "flagged", "deleted", "banned"])
for idx, rect in enumerate(rects):
height = rect.get_height()
width = rect.get_width()
plt.text(rect.get_x() + width / 2, height, post_flag_count[idx], va="bottom", ha="center")
plt.ylabel("count")
yticks = plt.yticks()[0][1:-1]
plt.yticks(yticks, map(lambda x: "$10^{:d}$".format(int(x)), yticks))
plt.title("Post status distribution")
print("There is a small intersection of deleted and banned posts: %d" % len(posts_with_status[2] & posts_with_status[3]))
print("What is even the difference between these two?")
Explanation: Status
Now we take a look at the status of the posts since we'll most likely not use deleted or banned posts. We have the following categories:
is_pending, is_flagged, is_deleted, is_banned
End of explanation
%%sql
CREATE OR REPLACE TEMPORARY VIEW filtered_posts AS
(SELECT * FROM posts
WHERE is_pending = false
AND is_flagged = false
AND is_deleted = false
AND is_banned = false
AND parent_id = 0
AND (image_width >= 512 OR image_height >= 512)
AND cast(image_width as double precision) / cast(image_height as double precision) BETWEEN 0.5 AND 2.0)
filtered_count = %sql SELECT COUNT(*) FROM filtered_posts
filtered_ext = %sql SELECT DISTINCT file_ext FROM filtered_posts
print("Now after filtering we have %d posts left with the following file extensions:" % filtered_count[0][0])
print(reduce(list_to_string, (x[0] for x in filtered_ext)))
%%sql
CREATE OR REPLACE TEMPORARY VIEW filtered_images AS
(SELECT * FROM filtered_posts
WHERE file_ext = 'jpeg'
OR file_ext = 'jpg'
OR file_ext = 'png')
%sql SELECT COUNT(*) FROM filtered_images
Explanation: Wow, there are a lot of deleted posts. I think I need to check them out manually and their reason for deletion.
But first create a view filtered_posts where flagged posts are removed. Furthermore remove children and remove posts where the aspect ratio is not in the range 1:2 to 2:1.
End of explanation
rating_distribution = %sql SELECT date_trunc('month', created_at) AS month, rating, COUNT(*) FROM posts GROUP BY month, rating ORDER BY month ASC, rating DESC
rating_distribution = np.array(rating_distribution).reshape((150,3,3))
rating_distribution = pd.DataFrame(rating_distribution[:,:,2], columns=rating_distribution[0,:,1], index=rating_distribution[:,0,0], dtype=np.int)
rating_distribution.plot.area()
plt.xlim(rating_distribution.index[0], rating_distribution.index[-1])
plt.legend(["safe", "questionable", "explicit"], title="Rating")
plt.xlabel("Date of upload")
plt.ylabel("Uploads per month")
plt.title("Distribution of uploads over time, grouped by rating")
Explanation: How much R18 content is there on danbooru?
There are three ratings:
* s:safe
* q:questionable
* e:explicit
End of explanation
%%sql
CREATE OR REPLACE TEMPORARY VIEW filtered_tags AS
(SELECT tags.name, tag_count.count, tags.category
FROM
(SELECT tag_id, COUNT(post_id)
FROM tagged INNER JOIN filtered_images ON filtered_images.id = tagged.post_id
GROUP BY tag_id) tag_count
INNER JOIN
tags ON tags.id = tag_count.tag_id
ORDER BY tag_count.count DESC)
Explanation: How much tags
This is the first interesting thing.
End of explanation
tag_count = %sql SELECT * FROM filtered_tags LIMIT 1000
def list_count(most_popular_tags):
pop_tag_number = len(most_popular_tags)
rank_range = list(range(1, pop_tag_number + 1))
pop_tag_number_length = len(str(pop_tag_number))
formatter = "{: <30}│{: >10}│{: >9}"
formatter = "{: >%d} " % (pop_tag_number_length) + formatter
print(formatter.format("RANK", "NAME", "COUNT", "CATEGORY"))
print("─" * (pop_tag_number_length + 2) + "──────────────────────────────┼──────────┼─────────")
for rank, (name, count, category) in zip(rank_range, most_popular_tags):
print(formatter.format(rank, name, count, category))
list_count(tag_count)
def plot_count(most_popular_tags, steps, logx=True):
pop_tag_number = len(most_popular_tags)
_, pop_tag_count, _ = zip(*most_popular_tags)
rank_range = list(range(1, pop_tag_number + 1))
if logx:
plt.semilogy(rank_range, pop_tag_count)
else:
plt.plot(rank_range, pop_tag_count)
plt.xticks(rank_range[steps-1::steps], rank_range[steps-1::steps])
plt.xlim(0, pop_tag_number)
plt.ylabel("count")
plt.xlabel("rank")
if logx:
annotate_line = np.logspace(np.log10(pop_tag_count[steps // 4]), np.log10(pop_tag_count[-1]) * 1.1, 9)
else:
annotate_line = np.linspace(pop_tag_count[0] * 0.9, pop_tag_count[steps*9] * 1.1, 9)
for logy, i in zip(annotate_line, range(0, pop_tag_number - steps, steps)):
idx = random.randint(i, i + steps - 1)
random_rank = rank_range[idx]
random_count = pop_tag_count[idx]
random_tag = most_popular_tags[idx][0]
plt.annotate(random_tag, (random_rank, random_count), (i + steps / 2, logy),
arrowprops={"arrowstyle":"-|>"})
plt.title("Distribution of %d most popular tags" % pop_tag_number)
plot_count(tag_count, 100)
Explanation: Tags in general
End of explanation
print("Important! These are the top 1000 tags (in descending order) where the word breast is included:")
breast_list = [tag for tag, _, _ in tag_count if tag.find("breast") >= 0]
print(reduce(list_to_string, breast_list))
print("In total these are %d tags!" % len(breast_list))
def posts_with_tag(tagname):
tag_id = %sql SELECT id FROM tags WHERE name = :tagname
tag_id = tag_id[0][0]
post_id = %sql SELECT post_id FROM tagged WHERE tag_id = :tag_id
return post_id
posts_with_tag("headphones_on_breasts")
Explanation: Very interesting! Even in log10 space these tags are not really linear distributed. Let's see if we can solve this with preprocessing or if there are some fancy tricks like weighting to take these differences in the distribution into account.
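One simple possibility (a sketch only; nothing later in this notebook depends on it) is to give each tag a weight inversely proportional to its frequency, similar to balanced class weights:

counts = np.array([count for _, count, _ in tag_count], dtype=np.float64)
tag_weights = counts.sum() / (len(counts) * counts)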
End of explanation
artist_count = %sql SELECT * FROM filtered_tags WHERE category = 'a' LIMIT 100
list_count(artist_count)
plot_count(artist_count, 10, logx=False)
Explanation: Artists
End of explanation
character_count = %sql SELECT * FROM filtered_tags WHERE category = 'c' LIMIT 100
list_count(character_count)
plot_count(character_count, 10, logx=False)
Explanation: Popular anime characters
End of explanation
series_count = %sql SELECT * FROM filtered_tags WHERE category = 'y' LIMIT 100
list_count(series_count)
plot_count(series_count, 10)
Explanation: Popular series
Last but not least.
End of explanation
%%sql
CREATE OR REPLACE TEMPORARY VIEW final_posts AS
SELECT *
FROM filtered_images
WHERE id NOT IN
(SELECT DISTINCT post_id
FROM tagged
WHERE tag_id
IN
(SELECT id as tag_id FROM tags
WHERE name = 'comic'
OR name = 'no_humans'
OR name = '4koma'
OR name = 'photo'
OR name = '3d'))
ORDER BY id ASC
final_posts = %sql SELECT id, rating, file_ext FROM final_posts
%%sql
SELECT tags.id, tags.name, tag_count.count
FROM
(
SELECT tagged.tag_id, COUNT(tagged.post_id)
FROM
final_posts
INNER JOIN
tagged
ON final_posts.id = tagged.post_id
GROUP BY tagged.tag_id
HAVING COUNT(tagged.post_id) >= 10000
) tag_count
INNER JOIN
tags
ON tag_count.tag_id = tags.id
WHERE tags.category = 'g'
ORDER BY tag_count.count DESC
final_tag_count = _
final_posts = np.array(list(map(tuple, final_posts)),
dtype=[("id", np.int32), ("rating", "<U1"), ("file_ext", "<U4")])
final_tag_count = np.array(list(map(tuple, final_tag_count)),
dtype=[("id", np.int32), ("name", '<U29'), ("count", np.int32)])
pd.DataFrame(final_posts).to_hdf("metadata.h5", "posts", mode="a", complevel=9, complib="bzip2")
pd.DataFrame(final_tag_count).to_hdf("metadata.h5", "tag_count", mode="a", complevel=9, complib="bzip2")
Explanation: Filter out some more tags
There are some tags that cannot be easily inferred from just looking at the images, that do not correspond with the use cases I have in mind, or that train on useless features. Better to either remove such posts or just remove these tags.
I looked through the first 500 tags from the list above and took note of the following ones:
monochrome
comic
greyscale
alternate_costume
translated
artist_request
sketch
character_name
copyright_request
artist_name
cosplay
dated
signature
parody
twitter_username
copyright_name
alternate_hairstyle
no_humans
translation_request
english
gradient
crossover
doujinshi
genderswap
remodel_(kantai_collection)
game_cg
cover_page
official_art
scan
text
4koma
traditional_media
3d
photo
lowres
It might also be a good idea to remove the tags for series (category y) because I don't know why this would be useful. The same for artists but since these are not in the TOP 1000 it should be no problem either way.
Some more filtering
And now get a list of all the images without the bold marked tags from the last section.
Furthermore we need information about the distribution of the remaining tags etc. and we need to write them into some easily parsable format for further processing with our neural networks.
End of explanation |
2,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Unsupervised Learning
Project
Step1: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories
Step2: Implementation
Step3: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint
Step4: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint
Step5: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint
Step6: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
Step7: Implementation
Step8: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer
Step9: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint
Step10: Implementation
Step11: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
Step12: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
Step13: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer
Step14: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer
Step15: Implementation
Step16: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint
Step17: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Display a description of the dataset
display(data.describe())
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [338, 154, 181]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
test_label = 'Grocery'
new_data = data.drop(test_label, axis = 1)
test_feature = data[test_label]
# TODO: Split the data into training and testing sets using the given feature as the target
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(new_data, test_feature, test_size=0.25, random_state=777)
# TODO: Create a decision tree regressor and fit it to the training set
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor()
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
print score
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
Answer: I deliberately looked for the records with min(fresh), min(milk) and max(fresh) and it did not disappoint me to see that they seem to represent vastly different customer segments.
The first record is in the top 25% for 'Frozen' goods, top 50% for 'Grocery' and 'Delicatessen', and in the bottom 25% for the last 3 categories. This could be a small grocery store which specializes in frozen goods, but has a grocery and deli section as well. The lack of fresh goods (taken to mean produce), however, seems to suggest otherwise. Though the spending is fairly high, it's not incredibly so (I'm not convinced even a small grocery store only sells ~25,000 m.u. worth of goods in a year). Therefore, it's possible that this could also be a small group of individuals (such as college roommates) who primarily eat frozen foods (e.g. frozen pizza, fries).
The second record has very low spending all around (WAY below the 25th percentile). This customer is probably an individual, and one that shops at other places.
This customer exceeds the 75th percentile in all categories, although they only come close to the max value in one category (Fresh). This is likely a grocery store of some kind, which specializes in selling produce.
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
End of explanation
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer: I attempted to predict 'Grocery'. The reported prediction score ranged between 0.78-0.82 when run multiple times (even with a constant random_state). This feature seems to be pretty good for identifying customer spending habits
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
# TODO: Scale the data using the natural logarithm
log_data = np.log(data)
# TODO: Scale the sample data using the natural logarithm
log_samples = np.log(samples)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
Answer: Grocery seems to be mostly correlated with 'Milk' and 'Detergents_Paper'. The remaining 3 features are not quite as correlated (In fact, they aren't really correlated with anything else at all). This confirms my suspicion that the feature I chose (Grocery) is relevant. The data, however seems to be highly skewed to the left (a few very large outliers) across all features. This suggests that the company perhaps has a few very large (probably corporate) buyers.
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.
End of explanation
# Display the log-transformed sample data
display(log_samples)
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
from collections import defaultdict
outlier_indices = defaultdict(int)
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3 - Q1) * 1.5
# Display the outliers
print "Data points considered outliers for the feature '{}':".format(feature)
rows = log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]
feature_indices = rows.index
display(rows)
# Track all indices that are outliers
for index in feature_indices:
if (index not in indices):
outlier_indices[index] += 1
# OPTIONAL: Select the indices for data points you wish to remove
# If an index appeared at least once (was an outlier for at least one of the categories), drop the row
outliers = []
for index in outlier_indices:
if outlier_indices[index] >= 1:
outliers.append(index)
print(outliers)
print(len(outliers))
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
display(good_data.describe())
display(log_data.describe())
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
End of explanation
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
from sklearn.decomposition import PCA
pca = PCA(n_components=6, random_state=777).fit(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = vs.pca_results(good_data, pca)
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer:
There were 42 unique rows that had at least 1 outlier, 5 with 2+, 1 with 3+, and 0 with 4+.
Based on this, I have chosen to remove any data point with an outlier (which works out roughly 10% of the original data). I chose to remove any rows with at least 1 feature that is an outlier. I decided to do this because it seemed to lower the average distance between the mean and the median.
To determine this, I recorded the mean and median of each feature after removing data points with at least 1 feature that is an outlier, at least 2 features that are an outlier, and without removing any data points. Lastly I calculated the difference between the mean and median of each column and averaged the result. The results were:
|Min # of Outlier Features|Average Difference Between Mean and Median|
|---|---|
|None (Base)|0.123|
|1|0.0852|
|2|0.110|
There isn't much improvement when only removing the 5 data points, but there is a much larger improvement when removing all 42 outliers.
Consequently, the first 2 sample points I chose were removed. I have opted to not remove them (and thus, the averages will be slightly different from what I initially calculated, but the overall effect should be the same) by making them an exception and then re-running the code. It is quite surprising that the last sample point wasn't considered an outlier as it had that maximum value in the 'Fresh' category. As a matter of fact, outliers for 'Fresh' were only those that were lower than the 25th percentile - 1.5IQR.
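A rough sketch of that comparison (illustrative only; this is not necessarily the exact code used to produce the table above):

base_diff = (log_data.mean() - log_data.median()).abs().mean()
clean_diff = (good_data.mean() - good_data.median()).abs().mean()
print base_diff, clean_diff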
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after having a PCA transformation applied
display(log_samples)
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the indivdual feature weights.
Answer: 72.13% of the total variance is explained by the first 2 principal components. 92.95% is explained by the first 4 PCs.
Dimension 1: This dimension suggests that an increase in both 'Fresh' and 'Frozen' results in a moderate decrease in spending on 'Milk' and 'Grocery' and a large decrease in spending in 'Detergents_Paper'
Dimension 2: This dimension suggests that small purchases of 'Detergents_Paper' are correlated with a large decrease in spending on 'Fresh', 'Frozen', and 'Deli' (In fact, it is a decrease in spending in all other categories)
Dimension 3: This dimension suggests that large purchases of 'Frozen' and 'Deli' goods are correlated with a large decrease in spending on 'Fresh'
Dimension 4: This dimension suggests that very large purchases of 'Deli' is correlated with a large decrease in spending on 'Frozen' and a moderate decrease in spending on 'Detergents_Paper'
When comparing with the scatter plots from above, an interesting observation can be made. Previously we determined that 'Grocery', 'Milk' and 'Detergents_Paper' were correlated. In fact, according to the scatter plots, they are all positively correlated (that is, and increase in one results in an increase in the other). The correlation between 'Milk' and 'Detergents_Paper' is a bit weaker but the overall shape is there. However, from the PCA, we can see that asides from dimension 1, 'Grocery' and 'Milk' are negatively correlated with 'Detergents_Paper'. 'Grocery' and 'Milk' are positively correlated in all cases except for the last dimension, which only represents ~2.5% of the total variance and can be considered an edge case.
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2, random_state=777).fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform log_samples using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
End of explanation
# Create a biplot
vs.biplot(good_data, reduced_data, pca)
Explanation: Visualizing a Biplot
A biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.
Run the code cell below to produce a biplot of the reduced-dimension data.
End of explanation
# TODO: Apply your clustering algorithm of choice to the reduced data
# (K-Means with n_clusters=2 is used here as one possible choice; a GMM would work as well)
from sklearn.cluster import KMeans
clusterer = KMeans(n_clusters=2, random_state=777).fit(reduced_data)

# TODO: Predict the cluster for each data point
preds = clusterer.predict(reduced_data)

# TODO: Find the cluster centers
centers = clusterer.cluster_centers_

# TODO: Predict the cluster for each transformed sample data point
sample_preds = clusterer.predict(pca_samples)

# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
from sklearn.metrics import silhouette_score
score = silhouette_score(reduced_data, preds)
print score
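# (Sketch, not part of the original template) Question 7 asks for scores at
# several cluster counts; one rough way to compare them:
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
for n in [2, 3, 4, 5, 6]:
    labels = KMeans(n_clusters=n, random_state=777).fit_predict(reduced_data)
    print n, silhouette_score(reduced_data, labels)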
Explanation: Observation
Once we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories.
From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?
Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer:
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
End of explanation
# Display the results of the clustering from implementation
vs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer:
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)

# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
End of explanation
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
Answer:
Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
End of explanation
# Display the clustering results based on 'Channel' data
vs.channel_results(reduced_data, outliers, pca_samples)
Explanation: Answer:
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation |
2,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AXON
Step1: JSON is subset of AXON
Here is a well-known example of a JSON message
Step2: One can see that content of json_vals and axon_vals are equal.
Step3: AXON supports more readable and compact form. Any JSON message can be translated to this form by removing of all , and " around keys that are identifiers
Step4: We'l call these forms as JSON style of notation, which bases on compositions of dicts and lists.
For compatibility reasons there is a mode in pyaxon python library that allow using symbol ',' as a separator symbol between values in lists and dicts. In this case almost valid JSON message can be loaded as well as AXON message
Step5: There is compact dump
Step6: There is a dump into formatted form
Step7: There is also parameter hsize. It specifies maximum number of simple data items in a line
Step8: AXON also supports reference links between values in the message. Here is an simple example
Step9: AXON extends JSON and makes things that XML can
Mapping as sort of named JSON object
Let's consider short fragment of JSON
Step10: Here is it's XML version for comparison
Step11: Compact form of notation usually as small as many binary serialization formats in the cases when objects contains mostly strings and a few of numbers.
The formatted form is used for the readability of this expression-based representation format
Step12: Note that one free to use spaces and line breaks between tokens as he want.
AXON also supports YAML/Python inspired formatted form without braces | Python Code:
from __future__ import unicode_literals, print_function, division
from pprint import pprint
import axon
import json
import xml.etree as etree
from IPython.display import HTML, display, display_html
Explanation: AXON: Tutorial
Let's import the inventory for playing with AXON in Python.
End of explanation
!cat basic_sample.json
json_vals= json.load(open("basic_sample.json"))
vals = axon.load("basic_sample.json", json=1)
Explanation: JSON is subset of AXON
Here is a well-known example of a JSON message:
End of explanation
print(json_vals)
print(vals[0])
assert str(json_vals) == str(vals[0])
Explanation: One can see that content of json_vals and axon_vals are equal.
End of explanation
print(axon.dumps(vals, pretty=1))
Explanation: AXON supports a more readable and compact form. Any JSON message can be translated to this form by removing all commas and the quotes around keys that are identifiers:
End of explanation
!cat better_json.axon
vals = axon.load("better_json.axon")
pprint(vals)
Explanation: We'll call these forms the JSON style of notation, which is based on compositions of dicts and lists.
For compatibility reasons there is a mode in the pyaxon Python library that allows using the symbol ',' as a separator between values in lists and dicts. In this case an almost-valid JSON message can be loaded as well as an AXON message:
AXON as better JSON
AXON supports decimal numbers, date/time/datetime values and comments.
Let's consider AXON example – list of dicts containing decimal, date and time values:
End of explanation
print(axon.dumps(vals))
Explanation: There is compact dump:
End of explanation
print(axon.dumps(vals, pretty=1))
Explanation: There is a dump into formatted form:
End of explanation
print(axon.dumps(vals, pretty=1, hsize=2))
Explanation: There is also parameter hsize. It specifies maximum number of simple data items in a line:
End of explanation
!cat better_json_crossref.axon
vals = axon.load("better_json_crossref.axon")
assert vals[-1]['children'][0] is vals[0]
assert vals[-1]['children'][1] is vals[1]
pprint(vals)
print(axon.dumps(vals, pretty=1, crossref=1, hsize=2))
Explanation: AXON also supports reference links between values in the message. Here is a simple example:
End of explanation
vals = axon.load("basic_sample.axon")
print(axon.dumps(vals, pretty=1))
Explanation: AXON extends JSON and can do things that XML can
Mapping as a sort of named JSON object
Let's consider a short fragment of JSON:
```json
"name": {
    "key_1": "value_1",
    "key_2": "value_2"
}
```
It can be translated as a value of the attribute name of some object. But it also can be translated as an object that is constructed from the value
```json
{
    "key_1": "value_1",
    "key_2": "value_2"
}
```
using some factory function that corresponds to the tag name.
For this kind of use case there are mappings in AXON:
```javascript
name {
    key_1: "value_1"
    key_2: "value_2"
}
```
It's also useful for notation of an object whose type/class is mapped to the name.
This kind of notation may also be considered a direct translation of the XML notation:
```xml
<name
    key_1="value_1"
    key_2="value_2" />
```
Sequence as a sort of named JSON array
Let's consider another short fragment of JSON:
```json
"name": [
    «value_1»,
    «value_2»
]
```
Sometimes this form is used for notation of a container of some type that corresponds to the tag name.
For this kind of use case there are sequences in AXON:
```javascript
tag {
    «value_1»
    «value_2»
}
```
This kind of notation in AXON can be considered as translation of the following XML pattern:
```xml
<tag>
    «value_1»
    «value_2»
</tag>
```
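As a quick illustration (assuming pyaxon provides a loads counterpart to the dumps function used above — treat the exact call as an assumption), such a named mapping parses directly from a string:

```python
vals = axon.loads('person {name: "John" age: 25}')
print(vals)
```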
AXON and XML
The first basic example of JSON can be translated to AXON by following the XML style of data representation, in which anonymous structures become named and subelements are used instead of key:value or attribute:value pairs, for some good reasons:
End of explanation
print(axon.dumps(vals))
Explanation: Here is its XML version for comparison:
<person firstName="John" lastName="Smith" age="25">
<address
streetAddress="21 2nd Street"
city="New York"
state="NY"
postalCode=10021 />
<phoneNumber type="home" number="212 555-1234"/>
<phoneNumber type="fax" number="646 555-4567"/>
</person>
In this way one can support extensibility of the representation, as XML does when an element is used instead of an attribute. In the case of XML this kind of notation is verbose for representing a name/value pair:
<attr>value</attr>
But in case of AXON it isn't:
attr{value}
So any XML element
```xml
<ename
    attr_1="value_m"
    ...
    attr_m="value_m">
    <subename_1>...</subename_1>
    ...
    <subename_N>...</subename_N>
</ename>
```
can be easily translated to AXON notation as follows:
``` javascript
ename {
attr_1:"value_m"
...
attr_m:"value_m"
subename_1 {...}
...
subename_N {...}
}
```
AXON formatting with/without braces
The AXON messages presented above are in compact form and in formatted form with braces.
The compact form uses the minimum amount of space:
End of explanation
print(axon.dumps(vals, pretty=1, braces=1))
Explanation: The compact form of notation is usually as small as many binary serialization formats in cases when objects contain mostly strings and only a few numbers.
The formatted form is used for the readability of this expression-based representation format:
End of explanation
print(axon.dumps(vals, pretty=1))
Explanation: Note that one is free to use spaces and line breaks between tokens as desired.
AXON also supports YAML/Python inspired formatted form without braces:
End of explanation |
2,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
'rv' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Dataset Parameters
Let's create the ParameterSets which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach both these ParameterSets for us.
Step3: For information on these passband-dependent parameters, see the section on the lc dataset (these are used only to compute fluxes when rv_method=='flux-weighted')
times
Step4: rvs
Step5: sigmas
Step6: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the RV dataset.
Other compute options are covered elsewhere
Step7: rv_method
Step8: If rv_method is set to 'dynamical' then the computed radial velocities are simply the z-velocities of the centers of mass of each component. In this case, only the dynamical options are relevant. For more details on these, see the section on the orb dataset.
If rv_method is set to 'flux-weighted' then radial velocities are determined by the z-velocity of each visible surface element of the mesh, weighted by their respective intensities. Since the stars are placed in their orbits by the dynamic options, the section on the orb dataset is still applicable. So are the meshing options described in mesh dataset and the options for computing fluxes in lc dataset.
rv_grav
Step9: See the Gravitational Redshift Example Script for more details on the influence this parameter has on radial velocities.
Synthetics
Step10: Plotting
By default, RV datasets plot as 'rvs' vs 'times'.
Step11: Since these are the only two columns available in the synthetic model, the only other options is to plot in phase instead of time.
Step12: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
Step13: Mesh Fields
If a mesh dataset exists at any of the same times as the time array in the rv dataset, or if pbmesh is set to True in the compute options, then radial velocities for each surface element will be available in the model as well (only if mesh_method=='flux_weighted').
Since the radial velocities are flux-weighted, the flux-related quantities are also included. For a description of these, see the section on the lc dataset.
Let's add a single mesh at the first time of the rv dataset and re-call run_compute
Step14: These new columns are stored with the rv's dataset tag, but with the mesh model-kind.
Step15: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the MESH dataset), but since the mesh elements are stored with the 'mesh01' dataset tag, and the rv (including flux-related) quantities are stored with the 'rv01' dataset tag, it is important not to provide the 'mesh01' dataset tag before plotting.
Step16: rvs | Python Code:
!pip install -I "phoebe>=2.0,<2.1"
Explanation: 'rv' Datasets and Options
Setup
Let's first make sure we have the latest version of PHOEBE 2.0 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
End of explanation
ps, constraints = phoebe.dataset.rv()
print ps
ps_dep = phoebe.dataset.rv_dep()
print ps_dep
Explanation: Dataset Parameters
Let's create the ParameterSets which would be added to the Bundle when calling add_dataset. Later we'll call add_dataset, which will create and attach both these ParameterSets for us.
End of explanation
print ps['times']
Explanation: For information on these passband-dependent parameters, see the section on the lc dataset (these are used only to compute fluxes when rv_method=='flux-weighted')
times
End of explanation
print ps['rvs']
Explanation: rvs
End of explanation
print ps['sigmas']
Explanation: sigmas
End of explanation
ps_compute = phoebe.compute.phoebe()
print ps_compute
Explanation: Compute Options
Let's look at the compute options (for the default PHOEBE 2 backend) that relate to the RV dataset.
Other compute options are covered elsewhere:
* parameters related to dynamics are explained in the section on the orb dataset
* parameters related to meshing, eclipse detection, and subdivision (used if rv_method=='flux-weighted') are explained in the section on the mesh dataset
* parameters related to computing fluxes (used if rv_method=='flux-weighted') are explained in the section on the lc dataset
End of explanation
print ps_compute['rv_method']
Explanation: rv_method
End of explanation
print ps_compute['rv_grav']
Explanation: If rv_method is set to 'dynamical' then the computed radial velocities are simply the z-velocities of the centers of mass of each component. In this case, only the dynamical options are relevant. For more details on these, see the section on the orb dataset.
If rv_method is set to 'flux-weighted' then radial velocities are determined by the z-velocity of each visible surface element of the mesh, weighted by their respective intensities. Since the stars are placed in their orbits by the dynamic options, the section on the orb dataset is still applicable. So are the meshing options described in mesh dataset and the options for computing fluxes in lc dataset.
rv_grav
End of explanation
b.add_dataset('rv', times=np.linspace(0,1,101), dataset='rv01')
b.run_compute(irrad_method='none')
b['rv@model'].twigs
print b['times@primary@rv@model']
print b['rvs@primary@rv@model']
Explanation: See the Gravitational Redshift Example Script for more details on the influence this parameter has on radial velocities.
Synthetics
End of explanation
axs, artists = b['rv@model'].plot()
Explanation: Plotting
By default, RV datasets plot as 'rvs' vs 'times'.
End of explanation
axs, artists = b['rv@model'].plot(x='phases')
Explanation: Since these are the only two columns available in the synthetic model, the only other option is to plot in phase instead of time.
End of explanation
b['period'].components
axs, artists = b['rv@model'].plot(x='phases:binary')
Explanation: In system hierarchies where there may be multiple periods, it is also possible to determine whose period to use for phasing.
End of explanation
b.add_dataset('mesh', times=[0], dataset='mesh01')
b.run_compute(irrad_method='none')
print b['model'].datasets
Explanation: Mesh Fields
If a mesh dataset exists at any of the same times as the time array in the rv dataset, or if pbmesh is set to True in the compute options, then radial velocities for each surface element will be available in the model as well (only if mesh_method=='flux_weighted').
Since the radial velocities are flux-weighted, the flux-related quantities are also included. For a description of these, see the section on the lc dataset.
Let's add a single mesh at the first time of the rv dataset and re-call run_compute
End of explanation
b.filter(dataset='rv01', kind='mesh', context='model').twigs
Explanation: These new columns are stored with the rv's dataset tag, but with the mesh model-kind.
End of explanation
axs, artists = b['mesh@model'].plot(facecolor='rvs', edgecolor=None)
# NOT:
# axs, artists = b['mesh01@model'].plot(facecolor='rvs', edgecolor=None)
Explanation: Any of these columns are then available to use as edge or facecolors when plotting the mesh (see the section on the MESH dataset), but since the mesh elements are stored with the 'mesh01' dataset tag, and the rv (including flux-related) quantities are stored with the 'rv01' dataset tag, it is important not to provide the 'mesh01' dataset tag before plotting.
End of explanation
print b['rvs@primary@rv01@mesh@model']
Explanation: rvs
End of explanation |
2,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feedback on the homework projects
Can this code be written more simply while still doing exactly the same thing?
Step1: Yes, it can
Step2: And what about this one?
Step3: That one too
Step4: And a third one
Step5: And one that did not work out | Python Code:
for radek in range(4):
radek += 1
for value in range(radek):
print('X', end=' ')
print('')
Explanation: Feedback on the homework projects
Can this code be written more simply while still doing exactly the same thing?
End of explanation
for radek in range(1, 5):
print('X ' * radek)
Explanation: Yes, it can :-)
End of explanation
promenna = "X"
for j in range(5):
for i in promenna:
print(i, i, i, i, i)
Explanation: And what about this one?
End of explanation
for j in range(5):
print('X ' * 5)
Explanation: That one too
End of explanation
for X_sloupce in range (6):
print ('')
for X_radky in range (6):
if X_radky == 0 or X_radky == 5 or X_sloupce == 0 or X_sloupce == 5:
print ('X', end = ' ')
else:
print (' ', end = ' ')
for x in range(6):
if x % 5 == 0:
print('X ' * 6)
else:
print('X ', ' ' * 6, 'X')
Explanation: And a third one
End of explanation
ctverec = input("Když napíšeš podelně, vypíšes z x část čtverce")
if ctverec == "podelne":
print(" x"*5, sep=" ")
for i in range(5):
print(" x"," "," x")
print(" x"*5, sep=" ")
Explanation: And one that did not work out
End of explanation |
2,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing data is awesome. In this post, I decided to use D3 in iPython notebook to visualize the "network of frequent associations between 62 dolphins in a community living off Doubtful Sound, New Zealand".
There is something therapeutically beautiful about force directed layouts of which you can pull and push around.
The first thing -- after downloading the dolphins dataset -- was to wrangle the data to a workable format.
Step1: Initially, I thought JSON format (code to do this below) was the way to go, but then later realized that I wanted to keep this post simple (and because the D3 code was extrapolated from other code -- used in conjunction with PHP while pulling data from a MySQL database -- of which was not meant to take in JSON formatted data).
import json
with open('dolphins.json', 'w') as out
Step2: The next thing I did was to write to fdg-dolphins.html the D3 JavaScript code. I also added a <!--ADD-DATASET--> comment so that I can later replace this with the contents of dolphins.js.
Here, I would also like to mention the D3 code was inspired from the Force-Directed Graph of co-occurring character in Les Misérables. In addition, I also added a legend categorizing the quantity of neighbors a node has (i.e. is this dolphin friendly?)
Step3: In the following bit of code, <!--ADD-DATASET--> in fdg-dolphins.html was replaced with the contents of dolphins.js.
Step4: Finally, the D3 dolphins network in iPython notebook is visualized! | Python Code:
import networkx as nx
G = nx.read_gml('dolphins.gml') ##downloaded from above link
category = {}
for i,k in G.edge.iteritems():
if len(k) < 4:
category[i] = '< 4 neighbors'
elif len(k) < 11:
category[i] = '5-10 neighbors'
else:
category[i] = '> 10 neighbors'
_nodes = []
for i in range(0,62):
profile = G.node[i]
_nodes.append({'name':profile['label'].encode("utf-8"),
'group':category[i]})
_edges = [{'source':i[0], 'target':i[1]} for i in G.edges()]
Explanation: Visualizing data is awesome. In this post, I decided to use D3 in iPython notebook to visualize the "network of frequent associations between 62 dolphins in a community living off Doubtful Sound, New Zealand".
There is something therapeutically beautiful about force directed layouts of which you can pull and push around.
The first thing -- after downloading the dolphins dataset -- was to wrangle the data to a workable format.
End of explanation
import sys
datfile = 'dolphins.js'
def print_list_JavaScript_format(x, dat, out = sys.stdout):
out.write('var %s = [\n' % x)
for i in dat:
out.write('%s,\n' % i)
out.write('];\n')
with open(datfile, 'w') as out:
print_list_JavaScript_format('nodes', _nodes, out)
print_list_JavaScript_format('links', _edges, out)
Explanation: Initially, I thought JSON format (code to do this below) was the way to go, but then later realized that I wanted to keep this post simple (and because the D3 code was extrapolated from other code -- used in conjunction with PHP while pulling data from a MySQL database -- of which was not meant to take in JSON formatted data).
import json
with open('dolphins.json', 'w') as out:
dat = {"nodes":_nodes,
"links":_edges}
json.dump(dat, out)
Therefore, I pre-processed the nodes and links variables to JavaScript format and outputed this information into dolphins.js.
End of explanation
%%writefile fdg-dolphins.html
<!DOCTYPE html>
<html>
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.5/d3.min.js"></script>
<style>
.node {
stroke: #fff;
stroke-width: 1.5px;
}
.link {
stroke: #999;
stroke-opacity: .6;
}
</style>
<body>
<div class="chart">
<script>
<!--ADD-DATASET-->
var width = 640,
height = 480;
var color = d3.scale.category10()
.domain(['< 4 neighbors', '5-10 neighbors', '> 10 neighbors']);
var svg = d3.select('.chart').append('svg')
.attr('width', width)
.attr('height', height);
var force = d3.layout.force()
.size([width, height])
.charge(-120)
.linkDistance(50)
.nodes(nodes)
.links(links);
var link = svg.selectAll('.link')
.data(links)
.enter().append('line')
.attr('class', 'link')
.style("stroke-width", function(d) { return Math.sqrt(d.value); });
var node = svg.selectAll('.node')
.data(nodes)
.enter().append('circle')
.attr('class', 'node')
.attr("r", 5)
.style("fill", function(d) { return color(d.group); })
.call(force.drag);
node.append("title")
.text(function(d) { return d.name; });
force.on("tick", function() {
link.attr("x1", function(d) { return d.source.x; })
.attr("y1", function(d) { return d.source.y; })
.attr("x2", function(d) { return d.target.x; })
.attr("y2", function(d) { return d.target.y; });
node.attr("cx", function(d) { return d.x; })
.attr("cy", function(d) { return d.y; });
});
force.start();
//Legend
var legend = svg.selectAll(".legend")
.data(color.domain())
.enter().append("g")
.attr("class", "legend")
.attr("transform", function(d, i) { return "translate(0," + i * 20 + ")"; });
legend.append("rect")
.attr("x", width - 18)
.attr("width", 18)
.attr("height", 18)
.style("fill", color);
legend.append("text")
.attr("x", width - 24)
.attr("y", 9)
.attr("dy", ".35em")
.style("text-anchor", "end")
.text(function(d){return d});
</script>
</div>
</body>
</html>
Explanation: The next thing I did was to write to fdg-dolphins.html the D3 JavaScript code. I also added a <!--ADD-DATASET--> comment so that I can later replace this with the contents of dolphins.js.
Here, I would also like to mention the D3 code was inspired from the Force-Directed Graph of co-occurring character in Les Misérables. In addition, I also added a legend categorizing the quantity of neighbors a node has (i.e. is this dolphin friendly?)
End of explanation
import re
htmlfile = 'fdg-dolphins.html'
with open(datfile) as f:
dat = f.read()
with open(htmlfile) as f:
dat = re.sub('<!--ADD-DATASET-->', dat, f.read())
with open(htmlfile, 'w') as f:
f.write(dat)
Explanation: In the following bit of code, <!--ADD-DATASET--> in fdg-dolphins.html was replaced with the contents of dolphins.js.
End of explanation
from IPython.display import IFrame
IFrame(htmlfile,650,500)
Explanation: Finally, the D3 dolphins network in iPython notebook is visualized!
End of explanation |
2,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interaktives Übungsblatt
Vorgeplänkel
Step1: Systemmatrizen
Wir werden im Weiteren pyMG nutzen um die Systemmatrix für gegebene Parameter $ n$ und $\sigma$ für das Helmholtz-Problem in 1D aufzustellen.
Step2: Plotten Sie mithilfe von matrix_plot die Systemmatrizen für $\sigma = 0$ und $n=10$.
Step3: Aufgabe
Step4: Frage
Step5: Iterationsmatrizen des Glätters
Frage
Weitaus spannender sind die Spektralradiien der Iterationsmatrizen eines Glätters. Warum?
Step6: Aufgabe
Plotten Sie die Iterationsmatrix des gewichteten Jacobi für verschiedene $\sigma$. Zu welcher Klasse gehört diese Iterationsmatrix.
Step7: Frage
Step8: Frage
Ist die Iterationsmatrix zyklisch?
Step9: Frage
Step10: Frage
Step11: Frage
Verhält sich der gewichtete Jacobi Glätter wie in der Vorlesung vorraussgesagt?
Step12: Frage
Step13: Eine einfache Methode wäre
Step14: Aufgabe
Überlegt euch eigene Vergleichsmethoden und variiert $n$,$\sigma$ und die Periodizität, um herauszufinden, wann die
get_theta_eigvals Methode die Eigenwerte gut Schätzen kann.
Step15: Zweigitter-Iterationsmatrix
Step16: Im Folgenden werden wir nun mithilfe des pymg frameworks die Zweigitter-Iterationsmatrix für ein einfaches Multigrid
aufstellen. Wir beginnen mit der Grobgitterkorrektur.
Step17: Buntebilderaufgabe
Step18: Aufgabe
Step19: Nun verwenden wir die Grobgitterkorrektur und die Iterationsmatrizen der Glätter um die Zweigitteriterationsmatrix zu berrechnen.
Step20: Buntebilderaufgabe
Step21: Frage
Was sieht man in den beiden vorhergehenden Plots?
Als nächstes behandeln wir den periodischen Fall.
Step22: Buntebilderaufgabe
Step23: Frage
Was fällt auf? (Das sind die besten Fragen... zumindest für den Übungsleiter.)
Aufgabe
Step24: Aufgabe
Step25: Bonusbuntebilderaufgabe
Vergleicht analog zu den Eigenwertplots der Systemmatrizen, die Eigenwertplots der Zweigitteriterationsmatrizen.
Step26: Asymptotische Äquivalenz zwischen periodisch und nicht-periodisch
Wir sehen, dass die Spektralradiien auf den ersten Blick gut übereinstimmen. Wir wollen nun empirisch ergründen ob die Matrizenklassen der periodischen und nicht periodischen Fällen möglicherweise zueinander asymptotisch äquivalent sind.
Zur Erinnerung
Step27: Aufgabe
Step28: Glättung
Step29: Gauss-Seidel
Step30: Grobgitterkorrektur
Hier trifft man mal wieder auf das Problem, dass die Freiheitsgrade im periodischen und nicht periodischen Fall unterschiedlich verteilt sind.
Step31: Frage
Welcher Trick wird hier verwendet um mit den unterschiedlichen Dimensionen der Matrizen umzugehen?
Step32: Zweigitter | Python Code:
import sys
# Diese Zeile muss angepasst werden!
sys.path.append("/home/moser/MG_2016/pyMG-2016/")
import scipy as sp
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import pymg
from project.helmholtz1d import Helmholtz1D
from project.helmholtz1d_periodic import Helmholtz1D_Periodic
from project.gauss_seidel import GaussSeidel
from project.weighted_jacobi import WeightedJacobi
from project.pfasst.plot_tools import eigvalue_plot_list, matrix_plot, matrix_row_plot
from project.pfasst.transfer_tools import to_dense
from project.pfasst.matrix_method_tools import matrix_power
def plot_3_eigvalueplots(A_p,A_z,A_m):
eig_p.append(sp.linalg.eigvals(to_dense(A_p)))
eig_z.append(sp.linalg.eigvals(to_dense(A_z)))
eig_m.append(sp.linalg.eigvals(to_dense(A_m)))
real_part_p = np.real(eig_p[-1])
img_part_p = np.imag(eig_p[-1])
real_part_z = np.real(eig_z[-1])
img_part_z = np.imag(eig_z[-1])
real_part_m = np.real(eig_m[-1])
img_part_m = np.imag(eig_m[-1])
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,3))
ax1.plot(real_part_p,img_part_p,'ro')
ax1.set_xlabel("real part")
ax1.set_ylabel("img part")
ax1.set_title('eigenvalues')
ax2.plot(real_part_z,img_part_z,'bo')
ax2.set_xlabel("real part")
ax2.set_ylabel("img part")
ax2.set_title('eigenvalues')
ax3.plot(real_part_m,img_part_m,'go')
ax3.set_xlabel("real part")
ax3.set_ylabel("img part")
ax3.set_title('eigenvalues')
fig1.tight_layout()
plt.show()
def plot_2_eigvalueplots(A_p,A_z):
eig_p.append(sp.linalg.eigvals(to_dense(A_p)))
eig_z.append(sp.linalg.eigvals(to_dense(A_z)))
real_part_p = np.real(eig_p[-1])
img_part_p = np.imag(eig_p[-1])
real_part_z = np.real(eig_z[-1])
img_part_z = np.imag(eig_z[-1])
fig1, (ax1, ax2) = plt.subplots(ncols=2,figsize=(15,3))
ax1.plot(real_part_p,img_part_p,'ro')
ax1.set_xlabel("real part")
ax1.set_ylabel("img part")
ax1.set_title('eigenvalues')
ax2.plot(real_part_z,img_part_z,'bo')
ax2.set_xlabel("real part")
ax2.set_ylabel("img part")
ax2.set_title('eigenvalues')
fig1.tight_layout()
plt.show()
Explanation: Interaktives Übungsblatt
Vorgeplänkel
End of explanation
def system_matrix_hh1d(n,sig):
hh1d = Helmholtz1D(n, sig)
return hh1d.A
def system_matrix_hh1d_periodic(n,sig):
hh1d = Helmholtz1D_Periodic(n, sig)
return hh1d.A
def spec_rad(A):
return np.max(np.abs(sp.linalg.eigvals(to_dense(A))))
Explanation: Systemmatrizen
Wir werden im Weiteren pyMG nutzen um die Systemmatrix für gegebene Parameter $ n$ und $\sigma$ für das Helmholtz-Problem in 1D aufzustellen.
End of explanation
matrix_plot(to_dense(system_matrix_hh1d(10,0)))
matrix_plot(to_dense(system_matrix_hh1d_periodic(10,0)))
Explanation: Plotten Sie mithilfe von matrix_plot die Systemmatrizen für $\sigma = 0$ und $n=10$.
End of explanation
eig_p=[]
eig_m=[]
eig_z=[]
for n in [5,10,20]:
A_p = system_matrix_hh1d(n,100.0)
A_z = system_matrix_hh1d(n,0.0)
A_m = system_matrix_hh1d(n,-100.0)
plot_3_eigvalueplots(A_p, A_z, A_m)
Explanation: Aufgabe: Plotten Sie mithilfe von plot_3_eigvalueplots die Eigenwerte der Systemmatrix für $n \in [5,10,20]$ und $\sigma = 100$,$\sigma = -100$ und $\sigma = 0$.
End of explanation
n=30
for sigma in [1000,0,-1000]:
plot_2_eigvalueplots(system_matrix_hh1d(n,sigma),system_matrix_hh1d_periodic(n,sigma))
Explanation: Frage: Wie unterscheiden sich die Spektren der verschiedenen Systemmatrizen?
End of explanation
def iteration_matrix_wjac(n, sigma, periodic=True):
if periodic:
A = system_matrix_hh1d_periodic(n,sigma)
else:
A = system_matrix_hh1d(n,sigma)
wjac = WeightedJacobi(A, 2.0/3.0)
P_inv = wjac.Pinv
return np.eye(n) - P_inv.dot(A)
Explanation: Iterationsmatrizen des Glätters
Frage
Weitaus spannender sind die Spektralradiien der Iterationsmatrizen eines Glätters. Warum?
End of explanation
matrix_plot(iteration_matrix_wjac(10,-100))
n = 10
sigma_range = np.linspace(-100,100,100)
sr_wjac_periodic = map(lambda sig : spec_rad(iteration_matrix_wjac(n, sig,periodic=True)), sigma_range)
sr_wjac = map(lambda sig : spec_rad(iteration_matrix_wjac(n, sig,periodic=False)), sigma_range)
# Achsen festhalten
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,4))
ax1.plot(sigma_range, sr_wjac_periodic,'k-')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('periodic')
ax2.plot(sigma_range, sr_wjac,'k-')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('non-periodic')
ax3.plot(sigma_range, np.abs(np.asarray(sr_wjac) - np.asarray(sr_wjac_periodic)),'k-')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('difference')
fig1.tight_layout()
plt.show()
Explanation: Aufgabe
Plotten Sie die Iterationsmatrix des gewichteten Jacobi für verschiedene $\sigma$. Zu welcher Klasse gehört diese Iterationsmatrix.
End of explanation
def iteration_matrix_gs(n, sigma, periodic=True):
if periodic:
A = system_matrix_hh1d_periodic(n,sigma)
else:
A = system_matrix_hh1d(n,sigma)
gs = GaussSeidel(A)
P_inv = gs.Pinv
return np.eye(n) - P_inv.dot(A)
Explanation: Frage : Wie verhalten sich die Spektren für das periodische Problem zu den Problemen mit Dirichletrandbedingungen für verschiedene $\sigma$ und $n$? Erkenntnis durch ausprobieren!
Aufgabe
Nutzen Sie die folgende Funktion, um die Iterationsmatrix für Gauß-Seidel abhängig von $\sigma$ und $n$ zu berechne. Finden Sie heraus wie sich der Spektralradius für verschiedene $\sigma$ und den periodischen, sowie nicht periodischen Fall verhält.
End of explanation
matrix_plot(iteration_matrix_gs(10,0,True))
sr_gs_periodic = map(lambda sig : spec_rad(iteration_matrix_gs(n, sig,periodic=True)), sigma_range)
sr_gs = map(lambda sig : spec_rad(iteration_matrix_gs(n, sig,periodic=False)), sigma_range)
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,4))
ax1.plot(sigma_range, sr_gs_periodic,'k-')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('periodic')
ax2.plot(sigma_range, sr_gs,'k-')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('non-periodic')
ax3.plot(sigma_range, np.abs(np.asarray(sr_gs) - np.asarray(sr_gs_periodic)),'k-')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('difference')
fig1.tight_layout()
plt.show()
Explanation: Frage
Ist die Iterationsmatrix zyklisch?
End of explanation
def transformation_matrix_fourier_basis(N):
psi = np.zeros((N,N),dtype=np.complex128)
for i in range(N):
for j in range(N):
psi[i,j] = np.exp(2*np.pi*1.0j*j*i/N)
return psi/np.sqrt(N)
def plot_fourier_transformed(A):
A = to_dense(A)
n = A.shape[0]
PSI_trafo = transformation_matrix_fourier_basis(n)
PSI_trafo_inv = sp.linalg.inv(PSI_trafo)
A_traf = np.dot(PSI_trafo_inv, np.dot(A,PSI_trafo))
matrix_row_plot([A,np.abs(A_traf)])
plot_fourier_transformed(iteration_matrix_wjac(16,0))
plot_fourier_transformed(iteration_matrix_gs(16,0))
Explanation: Frage : Wie verhalten sich die Spektren für das periodische Problem zu den Problemen mit Dirichletrandbedingungen für verschiedene $\sigma$ und $n$? Erkenntnis durch ausprobieren!
Das Leben im Fourier-Raum
Wir können mithilfe der Moden
$$v^{(m)} = \frac{1}{\sqrt{n}} \begin{pmatrix}1 \ e^{-2\pi i m/n} \ \vdots \ e^{-2\pi i m(n-1)/n} \end{pmatrix} $$
eine Transformation definieren. Die uns den Operatoren/Matrizen in den Fourier-Raum übersetzt.
End of explanation
def get_theta_eigvals(A, plot=False,which='all'):
A = to_dense(A)
n = A.shape[0]
PSI_trafo = transformation_matrix_fourier_basis(n)
PSI_trafo_inv = sp.linalg.inv(PSI_trafo)
A_traf = np.dot(PSI_trafo_inv, np.dot(A,PSI_trafo))
if plot:
matrix_plot(np.abs(A_traf))
eigvals = np.asarray(map(lambda k : A_traf[k,k],range(n)))
if which is 'high':
return eigvals[np.ceil(n/4):np.floor(3.0*n/4)]
elif which is 'low':
return np.hstack([eigvals[:np.floor(n/4)],eigvals[np.ceil(3.0*n/4):]])
else:
return eigvals
Explanation: Frage: Was ist hier passiert? Und was passiert für unterschiedliche $\sigma$?
Die hohen Eigenwerte extrahiert man nun durch auspicken der richtigen Diagonalwerte nach der Transformation, falls die Matrix zyklisch ist.
End of explanation
print np.abs(get_theta_eigvals(iteration_matrix_wjac(16,0), plot=False,which='high'))
print np.abs(get_theta_eigvals(iteration_matrix_wjac(16,0), plot=False,which='low'))
Explanation: Frage
Verhält sich der gewichtete Jacobi Glätter wie in der Vorlesung vorraussgesagt?
End of explanation
It_gs = iteration_matrix_gs(16,0)
eigvals = sp.linalg.eigvals(It_gs)
diagonals = get_theta_eigvals(It_gs)
Explanation: Frage:
Wie gut passen eigentlich die Diagonalwerte der Fourier-transformierten Iterationsmatrix mit den Eigenwerten der Matrix für Gauss-Seidel zusammen? Was könnte man machen um Sie zu vergleichen.
End of explanation
sum_eig = np.sum(np.abs(eigvals))
sum_diag = np.sum(np.abs(diagonals))
print sum_eig
print sum_diag
Explanation: Eine einfache Methode wäre:
End of explanation
def spec_rad_estimate(A):
diagonals = get_theta_eigvals(A)
return np.max(np.abs(diagonals))
sr_gs_periodic = map(lambda sig : spec_rad(iteration_matrix_gs(16, sig,periodic=True)), sigma_range)
sr_gs = map(lambda sig : spec_rad(iteration_matrix_gs(16, sig,periodic=False)), sigma_range)
md_gs_periodic = map(lambda sig : spec_rad_estimate(iteration_matrix_gs(16, sig,periodic=True)), sigma_range)
md_gs = map(lambda sig : spec_rad_estimate(iteration_matrix_gs(16, sig,periodic=False)), sigma_range)
fig1, (ax1, ax2, ax3) = plt.subplots(ncols=3,figsize=(15,4))
ax1.plot(sigma_range, sr_gs,'k-',sigma_range, sr_gs_periodic,'k--')
ax1.set_xlabel('$\sigma$')
ax1.set_ylabel("spectral radius")
ax1.set_title('Computed')
ax2.plot(sigma_range, md_gs,'k-',sigma_range, md_gs_periodic,'k--')
ax2.set_xlabel('$\sigma$')
ax2.set_ylabel("spectral radius")
ax2.set_title('Estimated')
ax3.plot(sigma_range, np.abs(np.asarray(sr_gs) - np.asarray(md_gs)),'k-',
sigma_range, np.abs(np.asarray(sr_gs_periodic) - np.asarray(md_gs_periodic)),'k--')
ax3.set_xlabel('$\sigma$')
ax3.set_ylabel("spectral radius")
ax3.set_title('Difference')
fig1.tight_layout()
plt.show()
Explanation: Aufgabe
Überlegt euch eigene Vergleichsmethoden und variiert $n$,$\sigma$ und die Periodizität, um herauszufinden, wann die
get_theta_eigvals Methode die Eigenwerte gut Schätzen kann.
End of explanation
from project.linear_transfer import LinearTransfer
from project.linear_transfer_periodic import LinearTransferPeriodic
Explanation: Zweigitter-Iterationsmatrix
End of explanation
def coarse_grid_correction(n,nc, sigma):
A_fine = to_dense(system_matrix_hh1d(n,sigma))
A_coarse = to_dense(system_matrix_hh1d(nc,sigma))
A_coarse_inv = sp.linalg.inv(A_coarse)
lin_trans = LinearTransfer(n, nc)
prolong = to_dense(lin_trans.I_2htoh)
restrict = to_dense(lin_trans.I_hto2h)
return np.eye(n)- np.dot(prolong.dot(A_coarse_inv.dot(restrict)), A_fine)
Explanation: Im Folgenden werden wir nun mithilfe des pymg frameworks die Zweigitter-Iterationsmatrix für ein einfaches Multigrid
aufstellen. Wir beginnen mit der Grobgitterkorrektur.
End of explanation
plot_fourier_transformed(coarse_grid_correction(31,15,-1000))
plot_fourier_transformed(coarse_grid_correction(31,15,0))
plot_fourier_transformed(coarse_grid_correction(31,15,1000))
def coarse_grid_correction_periodic(n,nc, sigma):
A_fine = to_dense(system_matrix_hh1d_periodic(n,sigma))
A_coarse = to_dense(system_matrix_hh1d_periodic(nc,sigma))
A_coarse_inv = sp.linalg.inv(A_coarse)
lin_trans = LinearTransferPeriodic(n, nc)
prolong = to_dense(lin_trans.I_2htoh)
restrict = to_dense(lin_trans.I_hto2h)
return np.eye(n)- np.dot(prolong.dot(A_coarse_inv.dot(restrict)), A_fine)
Explanation: Buntebilderaufgabe: Nutze plot_fourier_transformed um für $n=31$, $n_c=15$ und verschiedene $\sigma\in[-1000,1000]$ um die Grobgitterkorrekturiterationsmatrizen und deren Fourier-transformierten zu plotten.
End of explanation
plot_fourier_transformed(coarse_grid_correction_periodic(32,16,-1000))
plot_fourier_transformed(coarse_grid_correction_periodic(32,16,-0.00))
plot_fourier_transformed(coarse_grid_correction_periodic(32,16,1000))
Explanation: Aufgabe:
Nutzen Sie coarse_grid_correction_periodic für die Grobgitterkorrektur für das periodische Problem und plotten Sie nochmal für verschiedene $\sigma$ die Matrizen und ihre Fourier-transformierten.
Frage:
Was genau passiert bei $\sigma = 0$ und in der Nähe davon? Für welche $n_f$ und $n_c$ ist die Grobgitterkorrektur für ein periodisches Problem sinnvoll? Was fällt sonst auf? Und was hat das Ganze mit der Vorlesung zu tun?
End of explanation
def two_grid_it_matrix(n,nc, sigma, nu1=3,nu2=3,typ='wjac'):
cg = coarse_grid_correction(n,nc,sigma)
if typ is 'wjac':
smoother = iteration_matrix_wjac(n,sigma, periodic=False)
if typ is 'gs':
smoother = iteration_matrix_gs(n,sigma, periodic=False)
pre_sm = matrix_power(smoother, nu1)
post_sm = matrix_power(smoother, nu2)
return pre_sm.dot(cg.dot(post_sm))
Explanation: Nun verwenden wir die Grobgitterkorrektur und die Iterationsmatrizen der Glätter um die Zweigitteriterationsmatrix zu berrechnen.
End of explanation
plot_fourier_transformed(two_grid_it_matrix(15,7,-1000,typ='wjac'))
plot_fourier_transformed(two_grid_it_matrix(15,7,0,typ='wjac'))
plot_fourier_transformed(two_grid_it_matrix(15,7,1000,typ='wjac'))
plot_fourier_transformed(two_grid_it_matrix(15,7,-100,typ='gs'))
plot_fourier_transformed(two_grid_it_matrix(15,7,0,typ='gs'))
plot_fourier_transformed(two_grid_it_matrix(15,7,100,typ='gs'))
sr_2grid_var_sigma = map(lambda sig : spec_rad(two_grid_it_matrix(15,7,sig)), sigma_range)
plt.semilogy(sigma_range, sr_2grid_var_sigma,'k-')
plt.title('$n_f = 15, n_c = 7$')
plt.xlabel('$\sigma$')
plt.ylabel("spectral radius")
nf_range = map(lambda k: 2**k-1,range(3,10))
nc_range = map(lambda k: 2**k-1,range(2,9))
sr_2grid_m1000 = map(lambda nf,nc : spec_rad(two_grid_it_matrix(nf,nc,-1000)), nf_range, nc_range)
sr_2grid_0 = map(lambda nf,nc : spec_rad(two_grid_it_matrix(nf,nc,0)), nf_range, nc_range)
sr_2grid_p1000 = map(lambda nf,nc : spec_rad(two_grid_it_matrix(nf,nc,1000)), nf_range, nc_range)
plt.semilogy(nf_range, sr_2grid_m1000,'k-',nf_range, sr_2grid_0,'k--',nf_range, sr_2grid_p1000,'k:')
plt.xlabel('$n_f$')
plt.ylabel("spectral radius")
plt.legend(("$\sigma = -1000$","$\sigma = 0$","$\sigma = 1000$"),'upper right',shadow = True)
Explanation: Buntebilderaufgabe:
Nutzen Sie plot_fourier_transformed um für $n=15$, $n_c=7$ und verschiedene $\sigma\in[-1000,1000]$ um die Zweigittermatrizen und deren Fourier-transformierten zu plotten.
End of explanation
def two_grid_it_matrix_periodic(n,nc, sigma, nu1=3,nu2=3,typ='wjac'):
cg = coarse_grid_correction_periodic(n,nc,sigma)
if typ is 'wjac':
smoother = iteration_matrix_wjac(n,sigma, periodic=True)
if typ is 'gs':
smoother = iteration_matrix_gs(n,sigma, periodic=True)
pre_sm = matrix_power(smoother, nu1)
post_sm = matrix_power(smoother, nu2)
return pre_sm.dot(cg.dot(post_sm))
Explanation: Frage
Was sieht man in den beiden vorhergehenden Plots?
Als nächstes behandeln wir den periodischen Fall.
End of explanation
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,-100,typ='wjac'))
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,0.01,typ='wjac'))
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,100,typ='wjac'))
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,-100,typ='gs'))
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,-0.01,typ='gs'))
plot_fourier_transformed(two_grid_it_matrix_periodic(16,8,100,typ='gs'))
Explanation: Buntebilderaufgabe:
Nutzen Sie plot_fourier_transformed um für $n=16$, $n_c=8$ und verschiedene $\sigma\in[-1000,1000]$ um die Zweigittermatrizen und deren Fourier-transformierten zu plotten.
End of explanation
sr_2grid_var_sigma_periodic = map(lambda sig : spec_rad(two_grid_it_matrix_periodic(16,8,sig)), sigma_range)
plt.plot(sigma_range,np.asarray(sr_2grid_var_sigma_periodic),'k-')
plt.title('Differenz periodisch und nicht periodisch')
plt.xlabel('$\sigma$')
plt.ylabel("spectral radius")
nf_range = map(lambda k: 2**k,range(3,10))
nc_range = map(lambda k: 2**k,range(2,9))
sr_2grid_m1000_p = map(lambda nf,nc : spec_rad(two_grid_it_matrix_periodic(nf,nc,-1000)), nf_range, nc_range)
sr_2grid_0_p = map(lambda nf,nc : spec_rad(two_grid_it_matrix_periodic(nf,nc,0.01)), nf_range, nc_range)
sr_2grid_p1000_p = map(lambda nf,nc : spec_rad(two_grid_it_matrix_periodic(nf,nc,1000)), nf_range, nc_range)
plt.semilogy(nf_range, sr_2grid_m1000_p,'k-',nf_range, sr_2grid_0_p,'k--',nf_range, sr_2grid_p1000_p,'k:')
plt.xlabel('$n_f$')
plt.ylabel("spectral radius")
plt.legend(("$\sigma = -1000$","$\sigma = 0$","$\sigma = 1000$"),'upper right',shadow = True)
Explanation: Frage
Was fällt auf? (Das sind die besten Fragen... zumindest für den Übungsleiter.)
Aufgabe:
Nutzen Sie die Funktion two_grid_it_matrix_periodic für den periodischen Fall und plotten Sie den Spektralradius über $\sigma$ und den Spektralradius über $n$ für 3 verschiedene $\sigma$.
End of explanation
plt.plot(sigma_range, np.asarray(sr_2grid_var_sigma)-np.asarray(sr_2grid_var_sigma_periodic),'k-')
plt.title('Differenz periodisch und nicht periodisch')
plt.xlabel('$\sigma$')
plt.ylabel("spectral radius")
plt.semilogy(nf_range, np.abs(np.asarray(sr_2grid_m1000_p) - np.asarray(sr_2grid_m1000)),'k-',
nf_range, np.abs(np.asarray(sr_2grid_0_p) - np.asarray(sr_2grid_0)),'k--',
nf_range, np.abs(np.asarray(sr_2grid_p1000_p) - np.asarray(sr_2grid_p1000)),'k:')
plt.xlabel('$n_f$')
plt.ylabel("spectral radius")
plt.legend(("$\sigma = -1000$","$\sigma = 0$","$\sigma = 1000$"),'upper right',shadow = True)
Explanation: Aufgabe: Plotten sie die Differenzen zwischen dem periodischem und nicht-periodischem Fall.
End of explanation
eig_p=[]
eig_m=[]
eig_z=[]
for nf,nc in zip([7,15,31],[3,7,15]):
A_p = two_grid_it_matrix(nf,nc,-100)
A_z = two_grid_it_matrix(nf,nc,0)
A_m = two_grid_it_matrix(nf,nc,100)
plot_3_eigvalueplots(A_p, A_z, A_m)
Explanation: Bonusbuntebilderaufgabe
Vergleicht analog zu den Eigenwertplots der Systemmatrizen, die Eigenwertplots der Zweigitteriterationsmatrizen.
End of explanation
def hs_norm(A):
n = A.shape[0]
return sp.linalg.norm(A,'fro')/np.sqrt(n)
Explanation: Asymptotische Äquivalenz zwischen periodisch und nicht-periodisch
Wir sehen, dass die Spektralradiien auf den ersten Blick gut übereinstimmen. Wir wollen nun empirisch ergründen ob die Matrizenklassen der periodischen und nicht periodischen Fällen möglicherweise zueinander asymptotisch äquivalent sind.
Zur Erinnerung:
Hilbert-Schmidt Norm
Wir definieren die Hilbert-Schmidt Norm einer Matrx $A \in K^{n \times n}$ als
$$ |A| = \left( \frac{1}{n}\sum_{i = 0}^{n-1}\sum_{i = 0}^{n-1} |a_{i,j}|^2 \right)^{1/2}.$$
Es gilt
1. $|A| = \left( \frac{1}{n}\mbox{Spur}(A^A) \right)^{1/2}$
1. $|A| = \left( \frac{1}{n}\sum_{k=0}^{n-1}\lambda_k\right)^{1/2}$, wobei $\lambda_k$ die Eigenwerte von $A^A$ sind
1. $|A| \leq \|A\|$
Asymptotisch äquivalente Folgen von Matrizen
Seien ${A_n}$ und ${B_n}$ Folgen von $n\times n$ Matrizen, welche
beschränkt bzgl. der starken Norm sind:
$$ \|A_n\|,\|B_n\| \leq M \le \infty, n=1,2,\ldots $$
und bzgl. der schwachen Norm konvergieren
$$\lim_{n \to \infty} |A_n -B_n| = 0.$$
Wir nennen diese Folgen asymptotisch äquivalent und notieren dies als $A_n \sim B_n$.
Für ${A_n}$ , ${B_n}$ und ${C_n}$,
welche jeweils die Eigenwerte ${\alpha_{n,i}}$,${\beta_{n,i}}$ und ${\zeta_{n,i}}$ haben gelten
folgende Zusammenhänge.
Wenn $A_n \sim B_n$, dann $\lim_{n \to \infty} |A_n| = \lim_{n \to \infty} |B_n| $
Wenn $A_n \sim B_n$ und $B_n \sim C_n$, dann $A_n \sim C_n$
Wenn $A_nB_n \sim C_n$ und $\|A_n^{-1}\|\leq K \le \infty$, dann gilt $B_n \sim A_n^{-1}C_n$
Wenn $A_n \sim B_n$, dann $\exists -\infty \le m,M\le \infty$, s.d. $m\leq \alpha_{n,i}, \beta_{n,i}\leq M \; \forall n\geq 1 \mbox{und}\; k\geq 0$
Wenn $A_n \sim B_n$, dann gilt $ \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} (\alpha_{n,k}^s - \beta_{n,k}^s) = 0$
Aufgabe:
Schreiben sie eine Funktion hs_norm, welche die Hilbert-Schmidt Norm berechnet in maximal 3 Zeilen.
End of explanation
n_range = np.arange(10,100)
hs_sysmat_m1000 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n,-1000))-to_dense(system_matrix_hh1d_periodic(n,-1000))),n_range)
hs_sysmat_0 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n,0.001))-to_dense(system_matrix_hh1d_periodic(n,0.001))),n_range)
hs_sysmat_p1000 = map(lambda n: hs_norm(to_dense(system_matrix_hh1d(n,1000))-to_dense(system_matrix_hh1d_periodic(n,1000))),n_range)
plt.plot(hs_sysmat_m1000)
plt.plot(hs_sysmat_0)
plt.plot(hs_sysmat_p1000)
Explanation: Aufgabe:
Überprüfen Sie empirisch ob die
Systemmatrizenklassen
Glättungsiterationsmatrizenklassen
Grobgitterkorrekturmatrizenklassen
Zweigitteriterationsmatrizenklassen
asymptotisch äquivalent sind für $\sigma = { -1000, 0.001, 1000 }$.
Systemmatrizen:
End of explanation
n_range = 2**np.arange(1,11)
hs_wjac_m1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_wjac(n,-1000))-to_dense(iteration_matrix_wjac(n,-1000,False))),n_range)
hs_wjac_0 = map(lambda n: hs_norm(to_dense(iteration_matrix_wjac(n,0))-to_dense(iteration_matrix_wjac(n,0,False))),n_range)
hs_wjac_p1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_wjac(n,1000))-to_dense(iteration_matrix_wjac(n,1000,False))),n_range)
plt.plot(n_range, hs_wjac_m1000)
plt.plot(n_range, hs_wjac_0)
plt.plot(n_range, hs_wjac_p1000)
Explanation: Glättung:
Jacobi
End of explanation
n_range = 2**np.arange(1,11)
hs_gs_m1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_gs(n,-1000))-to_dense(iteration_matrix_gs(n,-1000,False))),n_range)
hs_gs_0 = map(lambda n: hs_norm(to_dense(iteration_matrix_gs(n,0))-to_dense(iteration_matrix_gs(n,0,False))),n_range)
hs_gs_p1000 = map(lambda n: hs_norm(to_dense(iteration_matrix_gs(n,1000))-to_dense(iteration_matrix_gs(n,1000,False))),n_range)
plt.plot(n_range, hs_gs_m1000)
plt.plot(n_range, hs_gs_0)
plt.plot(n_range, hs_gs_p1000)
Explanation: Gauss-Seidel
End of explanation
def einmal_einpacken(A):
return np.r_[[np.zeros(A.shape[0]+1)],np.c_[np.zeros(A.shape[0]),A]]
Explanation: Grobgitterkorrektur
Hier trifft man mal wieder auf das Problem, dass die Freiheitsgrade im periodischen und nicht periodischen Fall unterschiedlich verteilt sind.
End of explanation
n_f_range = 2**np.arange(3,10)
n_c_range = 2**np.arange(2,9)
hs_cgc_m1000 = map(lambda nf,nc: hs_norm(einmal_einpacken(coarse_grid_correction(nf-1,nc-1,-1000))-coarse_grid_correction_periodic(nf,nc,-1000)),n_f_range ,n_c_range)
hs_cgc_0 = map(lambda nf,nc: hs_norm(einmal_einpacken(coarse_grid_correction(nf-1,nc-1,0))-coarse_grid_correction_periodic(nf,nc,0.001)),n_f_range ,n_c_range)
hs_cgc_p1000 = map(lambda nf,nc: hs_norm(einmal_einpacken(coarse_grid_correction(nf-1,nc-1,1000))-coarse_grid_correction_periodic(nf,nc,1000)),n_f_range ,n_c_range)
plt.semilogy(n_f_range, hs_cgc_m1000)
plt.semilogy(n_f_range, hs_cgc_0)
plt.semilogy(n_f_range, hs_cgc_p1000)
# plt.semilogy(n_f_range, 1/np.sqrt(n_f_range))
Explanation: Frage
Welcher Trick wird hier verwendet um mit den unterschiedlichen Dimensionen der Matrizen umzugehen?
End of explanation
n_f_range = 2**np.arange(3,12)
n_c_range = 2**np.arange(2,11)
hs_2grid_m1000 = map(lambda nf,nc: hs_norm(
einmal_einpacken(two_grid_it_matrix(nf-1,nc-1,-1000))-two_grid_it_matrix_periodic(nf,nc,-1000))
,n_f_range ,n_c_range)
hs_2grid_0 = map(lambda nf,nc: hs_norm(
einmal_einpacken(two_grid_it_matrix(nf-1,nc-1,0.001))-two_grid_it_matrix_periodic(nf,nc,0.001))
,n_f_range ,n_c_range)
hs_2grid_p1000 = map(lambda nf,nc: hs_norm(
einmal_einpacken(two_grid_it_matrix(nf-1,nc-1,1000))-two_grid_it_matrix_periodic(nf,nc,1000))
,n_f_range ,n_c_range)
plt.semilogy(n_f_range, hs_2grid_m1000)
plt.semilogy(n_f_range, hs_2grid_0)
plt.semilogy(n_f_range, hs_2grid_p1000)
plt.semilogy(n_f_range, 1/np.sqrt(n_f_range)*30)
Explanation: Zweigitter
End of explanation |
2,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Afinação e Notas Musicais
Objetivo
Após esta unidade, o aluno será capaz de aplicar modelos matemáticos para relacionar o fenômeno perceptual da altura, o fenômeno físico da frequência fundamental e o fenômeno cultural das notas musicais.
Pré-requisitos
Para acompanhar adequadamente esta unidade, o aluno deve estar tranquilo com
Step1: Veja que há um fenômeno interessante que acontece. A nota que deveria ter frequência de 880.0 Hz (um intervalo de oitava em relação à referência inicial) na verdade é calculada como 892 Hz. Isso representa um erro calculável, na forma
Step2: Esse erro, de 1.36%, é bem conhecido e é chamado de Coma Pitagórico. Ele representa uma espécie de erro perceptual acumulado do sistema de afinação. Mas, veja
Step3: Quão desafinado é um sistema de afinação?
No experimento a seguir, vamos calcular o quão desafinado é cada sistema de afinação em relação aos intervalos da escala diatônica. É importante lembrar que esses intervalos foram definidos ao longo de um processo histórico, e não através do resultado de um cálculo. Utilizaremos os seguintes | Python Code:
referencia_inicial = 440.0 # Hz
frequencias = [] # Esta lista recebera todas as frequencias de uma escala
f = referencia_inicial
while len(frequencias) < 12:
if f > (referencia_inicial * 2):
f /= 2.
frequencias.append(f)
f *= (3/2.)
frequencias.sort()
print frequencias
print f
Explanation: Afinação e Notas Musicais
Objetivo
Após esta unidade, o aluno será capaz de aplicar modelos matemáticos para relacionar o fenômeno perceptual da altura, o fenômeno físico da frequência fundamental e o fenômeno cultural das notas musicais.
Pré-requisitos
Para acompanhar adequadamente esta unidade, o aluno deve estar tranquilo com:
1. Podemos organizar notas musicais em oitavas,
1. Os nomes das notas musicais se repetem em cada oitava,
1. Algumas notas são referidas usando um modificador sustenido ou bemol
1. Quando duas notas soam juntas, o som resultante pode ser dissonante ou consonante.
Como afinar um cravo
O conceito de afinação está bastante ligado ao conceito de intervalos. Um intervalo é a diferença perceptual de alturas entre dois tons que soam simultaneamente. Tons medidos e absolutos (por exemplo: uma senóide de frequência 440 Hz) só foram possíveis depois da invenção de instrumentos modernos de geração e de instrumentos precisos para medição. Antes destes avanços tecnológicos, só era possível afinar instrumentos utilizando a percepção de intervalos.
Na Grécia antiga, já haviam instrumentos de sopro e de corda (veja a flauta de Baco ou a lira de Apolo, por exemplo). Isso significa que havia alguma forma de afiná-los (em especial, o instrumento de corda, que desafina rapidamente com o calor). Portanto, os gregos já dominavam o conceito de que uma corda emite um som mais agudo quando é esticada mais fortemente, e um som mais grave quando é esticada com menos força.
Afinação Pitagórica
Pitágoras é dito ser um dos pioneiros em sistematizar uma forma de afinar instrumentos. Ele observou que duas cordas de igual material, esticadas sob a mesma força, mas de comprimentos diferentes, emitem sons de alturas diferentes. Sabemos, hoje, que cada altura diferente está ligada a uma frequência fundamental diferente.
A união dos sons simultâneos das duas cordas produz um intervalo. Pitágoras observou que o intervalo produzido era muito mais agradável ao ouvido (ou: consonante) quando a razão entre os comprimentos das cordas era de 1 para 2. Neste caso, sabemos que a frequência fundamental de uma das vibrações é a metade da outra, e este intervalo é chamado de oitava. O segundo intervalo mais agradável ao ouvido ocorria quando os comprimentos das cordas tinham razão 2 para 3. Neste caso, surge um intervalo chamado de quinta.
Pitágoras, então, definiu o seguinte método para encontrar as alturas de uma escala:
1. Inicie com um tom de referência
1. Afine a próxima nota usando um intervalo de quinta na direção dos agudos
1. Se meu intervalo em relação à referência é maior que uma oitava, então caminhe uma oitava em direção dos graves
1. Se já afinei todas as cordas de uma oitava, então pare.
1. Use a nova nota obtida como referência e continue do passo 2.
End of explanation
print 100*(f - (referencia_inicial * 2)) / (referencia_inicial*2)
Explanation: Veja que há um fenômeno interessante que acontece. A nota que deveria ter frequência de 880.0 Hz (um intervalo de oitava em relação à referência inicial) na verdade é calculada como 892 Hz. Isso representa um erro calculável, na forma:
End of explanation
frequencias_t = [] # Esta lista recebera todas as frequencias de uma escala
ft = referencia_inicial
while len(frequencias_t) < 12:
frequencias_t.append(ft)
ft *= 2**(1/12.)
frequencias_t.sort()
print frequencias_t
print ft
Explanation: Esse erro, de 1.36%, é bem conhecido e é chamado de Coma Pitagórico. Ele representa uma espécie de erro perceptual acumulado do sistema de afinação. Mas, veja: há um erro perceptual acumulado que, causa uma sensação de dissonância mesmo que a afinação toda utilize apenas intervalos perfeitamente consonantes. Trata-se de um paradoxo com o qual os músicos tiveram que lidar através da história.
Na verdade, isso é um fenômeno matematicamente inevitável. A série de frequências geradas pela afinação pitagórica tem a forma:
$$f \times (\frac{3}{2}) ^ N,$$
excluídas as oitavas de ajuste.
Um intervalo de oitava é gerado utilizando uma multiplicação da frequência inicial por uma potência de dois e, como sabemos, não existe nenhum caso em que uma potência de dois é gerada por uma potência de três.
Nomes das notas musicais
Por razões históricas e puramente culturais, as civilizações que descenderam da Grécia antiga utilizaram doze notas como as partes de uma escala. Poderiam ser mais (como em algumas culturas asiáticas) ou menos (como em algumas culturas africanas). As notas receberam seus nomes (dó, ré, mi, fá, sol, lá, si) tomando por base uma poesia em latim chamada Ut Queant Laxis, que é um hino a João Batista. Também, é comum utilizar a notação em letras (C, D, E, F, G, A, B) e os sinais de bemol (Cb, Db, Eb, etc) e sustenido (C#, D#, E#, etc) para denotar acidentes.
Ciclo de Quintas
O sistema de afinação Pitagórico determina, implicitamente, um caminho por entre as notas musicais de uma escala que foi historicamente chamado de ciclo de quintas. Trata-se de um ciclo onde se colocam todas as notas musicais (excluindo-se a oitava), de forma que um passo em qualquer direção percorre um intervalo de quinta:
C, G, D, A, E, B, F#, C#, G#, D#, A#, F, C
Afinação de igual temperamento
A relação entre a frequência de vibração de uma corda e a sensação de altura relacionada a ela foi estudada incialmente (ao menos nestes termos) por Vincenzo Galilei, no século XVI. Esse conhecimento permitiu usar uma linguagem mais próxima à contemporânea para se referir à frequência de cordas vibrantes. Mesmo assim, ainda não haviam instrumentos para detectar frequências com precisão, e, portanto, os sistemas de afinação ainda dependiam de relações perceptuais intervalares.
Uma afinação relativa, como a pitagórica, varia de acordo com o tom. Isso significa que, se um cravo for afinado à partir de um Dó, ele será afinado de maneira diferente que se partir de um Ré. Assim, instrumentos de teclado devem ser afinados novamente quanto uma música é tocada em um outro tom. Ao longo da história, surgiram algumas propostas mostrando caminhos alternativos para a afinação, e re-distribuindo o Coma de forma que ele fique acumulado em notas pouco usadas. Porém, especialmente com a necessidade de tocar o cravo em um repertório vasto sem ter grandes pausas, o processo histórico-cultural consolidou a afinação de igual temperamento.
A afinação de igual temperamento já era conhecida desde Vincenzo Galilei, mas ganhou grande força depois do surgimento das peças da série Cravo Bem Temperado de Bach. A afinação de igual temperamento distribui o Coma igualmente por todas as notas da escala, de forma que nenhuma soa especialmente desafinada. O custo disso é que todas as notas soam levemente desafinadas.
Na afinação de igual temperamento, a razão entre as frequências de duas notas consecutivas é igual a $\sqrt[12]{2}$, de forma que ao fim de 12 notas a frequência obtida será $f \times (\sqrt[12]{2})^{12} = f \times 2$.
End of explanation
intervalos_diatonica = [2, 3, 4, 5, 6, 7]
intervalos_cromatica = [2, 4, 5, 7, 9, 11]
razoes = [9/8., 5/4., 4/3., 3/2., 5/3., 15/8.]
for i in xrange(len(intervalos_diatonica)):
frequencia_ideal = referencia_inicial * razoes[i]
frequencia_pitagorica = frequencias[intervalos_cromatica[i]]
frequencia_temperada = frequencias_t[intervalos_cromatica[i]]
erro_pitagorica = 100*(frequencia_pitagorica - (frequencia_ideal)) / (frequencia_ideal)
erro_temperada = 100*(frequencia_temperada - (frequencia_ideal)) / (frequencia_ideal)
print "Intervalo:", intervalos_diatonica[i]
print "Erro pitagorica:", erro_pitagorica
print "Erro temperada:", erro_temperada
Explanation: Quão desafinado é um sistema de afinação?
No experimento a seguir, vamos calcular o quão desafinado é cada sistema de afinação em relação aos intervalos da escala diatônica. É importante lembrar que esses intervalos foram definidos ao longo de um processo histórico, e não através do resultado de um cálculo. Utilizaremos os seguintes:
| Intervalo | Razão de Frequências |
|:-----------:|:------------:|
| II | 9/8 |
| III | 5/4 |
| IV | 4/3 |
| V | 3/2 |
| VI | 5/3 |
| VII | 15/8 |
End of explanation |
2,883 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimum number of elements which are not part of Increasing or decreasing subsequence in array
Python3 program to return minimum number of elements which are not part of increasing or decreasing subsequences . ; Return minimum number of elements which is not part of any of the sequence . ; If already calculated , return value . ; If whole array is traversed . ; calculating by considering element as part of decreasing sequence . ; calculating by considering element as part of increasing sequence . ; If cannot be calculated for decreasing sequence . ; After considering once by decreasing sequence , now try for increasing sequence . ; If element cannot be part of any of the sequence . ; After considering element as part of increasing and decreasing sequence trying as not part of any of the sequence . ; Wrapper Function ; Adding two number at the end of array , so that increasing and decreasing sequence can be made . MAX - 2 index is assigned INT_MAX for decreasing sequence because / next number of sequence must be less than it . Similarly , for Increasing sequence INT_MIN is assigned to MAX - 1 index . ; Driver code
| Python Code:
MAX = 102
def countMin(arr , dp , n , dec , inc , i ) :
if dp[dec ][inc ][i ] != - 1 :
return dp[dec ][inc ][i ]
if i == n :
return 0
if arr[i ] < arr[dec ] :
dp[dec ][inc ][i ] = countMin(arr , dp , n , i , inc , i + 1 )
if arr[i ] > arr[inc ] :
if dp[dec ][inc ][i ] == - 1 :
dp[dec ][inc ][i ] = countMin(arr , dp , n , dec , i , i + 1 )
else :
dp[dec ][inc ][i ] = min(countMin(arr , dp , n , dec , i , i + 1 ) , dp[dec ][inc ][i ] )
if dp[dec ][inc ][i ] == - 1 :
dp[dec ][inc ][i ] = 1 + countMin(arr , dp , n , dec , inc , i + 1 )
else :
dp[dec ][inc ][i ] = min(1 + countMin(arr , dp , n , dec , inc , i + 1 ) , dp[dec ][inc ][i ] )
return dp[dec ][inc ][i ]
def wrapper(arr , n ) :
arr[MAX - 2 ] = 1000000000
arr[MAX - 1 ] = - 1000000000
dp =[[[- 1 for i in range(MAX ) ] for i in range(MAX ) ] for i in range(MAX ) ]
return countMin(arr , dp , n , MAX - 2 , MAX - 1 , 0 )
if __name__== ' __main __' :
n = 12
arr =[7 , 8 , 1 , 2 , 4 , 6 , 3 , 5 , 2 , 1 , 8 , 7 ]
for i in range(MAX ) :
arr . append(0 )
print(wrapper(arr , n ) )
|
2,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step4: Recurrent Neural Networks
For an introduction to RNNs, take a look at this great article.
Basic RNNs
Step5: Manual RNN
Step6: Using rnn()
The static_rnn() function creates an unrolled RNN network by chaining cells.
Step7: Using dynamic_rnn()
The dynamic_rnn() function uses a while_loop() operation to run over the cell the appropriate number of times, and you can set swap_memory = True if you want it to swap the GPU’s memory to the CPU’s memory during backpropagation to avoid OOM errors. Conveniently, it also accepts a single tensor for all inputs at every time step (shape [None, n_steps, n_inputs]) and it outputs a single tensor for all outputs at every time step (shape [None, n_steps, n_neurons]); there is no need to stack, unstack, or transpose.
Step8: Packing sequences
Step9: Training a sequence classifier
We will treat each image as a sequence of 28 rows of 28 pixels each (since each MNIST image is 28 × 28 pixels). We will use cells of 150 recurrent neurons, plus a fully connected layer containing 10 neurons (one per class) connected to the output of the last time step, followed by a softmax layer.
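A minimal sketch of the graph described above, reusing the tensorflow import from the code cells; variable names are illustrative and the MNIST feeding loop is omitted:
n_steps, n_inputs, n_neurons, n_outputs = 28, 28, 150, 10
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = tf.layers.dense(states, n_outputs)  # fully connected layer on the final state
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
training_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))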
Step10: Training the same sequence classifier with Keras
Step11: Multi-layer RNN
It is quite common to stack multiple layers of cells. This gives you a deep RNN.
To implement a deep RNN in TensorFlow, you can create several cells and stack them into a MultiRNNCell.
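For example (a sketch, reusing X and n_neurons from the cells above; the layer count is arbitrary):
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons) for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)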
Step12: Multi-layer RNN with Keras
When stacking RNNs with Keras remember to set return_sequences=True on hidden layers.
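A possible sketch with the standalone keras package (layer sizes are illustrative, matching the MNIST-as-sequence setup above):
from keras.models import Sequential
from keras.layers import SimpleRNN, Dense
model = Sequential()
model.add(SimpleRNN(150, return_sequences=True, input_shape=(28, 28)))  # hidden layer returns the full sequence
model.add(SimpleRNN(150))                                               # last RNN layer returns only its final output
model.add(Dense(10, activation='softmax'))
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])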
Step13: Time series
Now let’s take a look at how to handle time series, such as stock prices, air temperature, brain wave patterns, and so on. In this section we will train an RNN to predict the next value in a generated time series. Each training instance is a randomly selected sequence of 20 consecutive values from the time series, and the target sequence is the same as the input sequence, except it is shifted by one time step into the future.
Step14: Using an OutputProjectionWrapper
Step15: Without using an OutputProjectionWrapper
Step16: With Keras
Step17: Dropout
If you build a very deep RNN, it may end up overfitting the training set. To prevent that, a common technique is to apply dropout. You can simply add a dropout layer before or after the RNN as usual, but if you also want to apply dropout between the RNN layers, you need to use a DropoutWrapper.
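A sketch of wrapping each layer's cell with a DropoutWrapper, using a placeholder_with_default so dropout can be switched off at test time (n_neurons, n_layers and X as above):
keep_prob = tf.placeholder_with_default(1.0, shape=())
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons) for layer in range(n_layers)]
cells_drop = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob) for cell in cells]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells_drop)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)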
Step18: Dropout with Keras
Step19: LSTM
The Long Short-Term Memory (LSTM) cell was proposed in (Hochreiter-Schmidhuber,1997), and it was gradually improved over the years by several researchers. If you consider the LSTM cell as a black box, it can be used very much like a basic cell, except it will perform much better; training will converge faster and it will detect long-term dependencies in the data.
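For example, swapping in an LSTM cell is essentially a drop-in change (sketch, reusing X and n_neurons from above):
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(lstm_cell, X, dtype=tf.float32)
# states is now an LSTMStateTuple (c, h); states.h is the short-term state of the last time step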
Step20: LSTM with Keras
Step21: Distributing layers across devices
If you try to create each cell in a different device() block, it will not work.
Step22: This fails because a BasicRNNCell is a cell factory, not a cell per se; no cells get created when you create the factory, and thus no variables do either. The device block is simply ignored. The cells actually get created later. When you call dynamic_rnn(), it calls the MultiRNNCell, which calls each individual BasicRNNCell, which create the actual cells (including their variables). Unfortunately, none of these classes provide any way to control the devices on which the variables get created. If you try to put the dynamic_rnn() call within a device block, the whole RNN gets pinned to a single device.
The trick is to create your own cell wrapper
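A sketch of such a wrapper, which simply delegates to the wrapped cell inside a device block:
class DeviceCellWrapper(tf.contrib.rnn.RNNCell):
    def __init__(self, device, cell):
        self._cell = cell
        self._device = device
    @property
    def state_size(self):
        return self._cell.state_size
    @property
    def output_size(self):
        return self._cell.output_size
    def __call__(self, inputs, state, scope=None):
        with tf.device(self._device):
            return self._cell(inputs, state, scope)
# usage sketch: one cell pinned to each device
# cells = [DeviceCellWrapper(dev, tf.contrib.rnn.BasicRNNCell(num_units=n_neurons))
#          for dev in ["/gpu:0", "/gpu:1"]]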
Step23: # Bidirectional LSTM on the IMDB sentiment classification task on Keras
Step24: LSTM on the IMDB sentiment classification task on Keras
Step25: LSTM+FC on the IMDB sentiment classification task on Keras
Step26: Recurrent convolutional network on the IMDB sentiment
Step27: Convolutional network on the IMDB sentiment
Step30: IMDB datasets with bi-gram embeddings
Step33: IMDB datasets with bi-gram embeddings and Convolution1D | Python Code:
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Tensorflow
import tensorflow as tf
#
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
Strip large constant values from graph_def.
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
tensor.tensor_content = "b<stripped %d bytes>"%size
return strip_def
def show_graph(graph_def, max_const_size=32):
Visualize TensorFlow graph.
if hasattr(graph_def, 'as_graph_def'):
graph_def = graph_def.as_graph_def()
strip_def = strip_consts(graph_def, max_const_size=max_const_size)
code =
<script>
function load() {{
document.getElementById("{id}").pbtxt = {data};
}}
</script>
<link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
<div style="height:600px">
<tf-graph-basic id="{id}"></tf-graph-basic>
</div>
.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
iframe =
<iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    .format(code.replace('"', '&quot;'))
display(HTML(iframe))
Explanation: Recurrent Neural Networks
For an introduction to RNN take a look at this great article.
Basic RNNs
End of explanation
tf.reset_default_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons], dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons, n_neurons], dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
print(Y0_val)
print(Y1_val)
Explanation: Manual RNN
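For reference, the two outputs computed in the cell above follow the standard recurrence: Y0 = tanh(X0·Wx + b) and Y1 = tanh(X1·Wx + Y0·Wy + b). Each time step combines the current input (through Wx) with the previous output (through Wy) before the tanh activation.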
End of explanation
tf.reset_default_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, [X0, X1], dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
Y0_val
Y1_val
#show_graph(tf.get_default_graph())
Explanation: Using rnn()
The static_rnn() function creates an unrolled RNN network by chaining cells.
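As a sketch only (it would replace the X0/X1 placeholders above rather than be added to the same graph), the unrolled network can also be fed from a single 3-D tensor by unstacking it along the time axis; the placeholder X below is introduced purely for this illustration:
X = tf.placeholder(tf.float32, [None, 2, n_inputs])    # [batch, n_steps, n_inputs]
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))   # list of n_steps tensors of shape [None, n_inputs]
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, X_seqs, dtype=tf.float32)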
End of explanation
tf.reset_default_graph()
n_steps = 2
n_inputs = 3
n_neurons = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
print("outputs =", outputs.eval(feed_dict={X: X_batch}))
#show_graph(tf.get_default_graph())
Explanation: Using dynamic_rnn()
The dynamic_rnn() function uses a while_loop() operation to run over the cell the appropriate number of times, and you can set swap_memory = True if you want it to swap the GPU’s memory to the CPU’s memory during backpropagation to avoid OOM errors. Conveniently, it also accepts a single tensor for all inputs at every time step (shape [None, n_steps, n_inputs]) and it outputs a single tensor for all outputs at every time step (shape [None, n_steps, n_neurons]); there is no need to stack, unstack, or transpose.
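A one-line sketch of the swap_memory option mentioned above (it would replace the dynamic_rnn() call in the cell, not be run in addition to it):
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32, swap_memory=True)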
End of explanation
tf.reset_default_graph()
n_steps = 2
n_inputs = 3
n_neurons = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
seq_length = tf.placeholder(tf.int32, [None]) ### <----------------------------------------
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, sequence_length=seq_length, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2]) ### <------------------------
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
print(states_val)
Explanation: Packing sequences
End of explanation
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
with tf.variable_scope("rnn", initializer=tf.contrib.layers.variance_scaling_initializer()):
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = fully_connected(states, n_outputs, activation_fn=None)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
Explanation: Training a sequence classifier
We will treat each image as a sequence of 28 rows of 28 pixels each (since each MNIST image is 28 × 28 pixels). We will use cells of 150 recurrent neurons, plus a fully connected layer containing 10 neurons (one per class) connected to the output of the last time step, followed by a softmax layer.
End of explanation
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
batch_size = 150
num_classes = 10
epochs = 100
hidden_units = 150
learning_rate = 0.001
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28)
x_test = x_test.reshape(x_test.shape[0], 28, 28)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('Evaluate IRNN...')
a = Input(shape=x_train.shape[1:])
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu')(a)
b = Dense(num_classes)(b)
b = Activation('softmax')(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
scores = model.evaluate(x_test, y_test, verbose=0)
print('IRNN test score:', scores[0])
print('IRNN test accuracy:', scores[1])
Explanation: Training the same sequence classifier with Keras
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 28
n_inputs = 28
n_neurons1 = 150
n_neurons2 = 100
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
hidden1 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons1, activation=tf.nn.relu)
hidden2 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons2, activation=tf.nn.relu)
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([hidden1, hidden2])
outputs, states_tuple = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states = tf.concat(axis=1, values=states_tuple)
logits = fully_connected(states, n_outputs, activation_fn=None)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
Explanation: Multi-layer RNN
It is quite common to stack multiple layers of cells. This gives you a deep RNN.
To implement a deep RNN in TensorFlow, you can create several cells and stack them into a MultiRNNCell.
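A more compact variant (sketch, equivalent to the two explicit layers above and matching the list-comprehension style used later in this notebook):
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n, activation=tf.nn.relu) for n in (n_neurons1, n_neurons2)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)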
End of explanation
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
keras.backend.clear_session()
batch_size = 150
num_classes = 10
epochs = 50 # instead of 100 (too much time)
hidden_units_1 = 150
hidden_units_2 = 100
learning_rate = 0.001
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28)
x_test = x_test.reshape(x_test.shape[0], 28, 28)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('Evaluate IRNN...')
a = Input(shape=x_train.shape[1:])
b = SimpleRNN(hidden_units_1,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(a)
b = SimpleRNN(hidden_units_2,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu')(b)
b = Dense(num_classes)(b)
b = Activation('softmax')(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
scores = model.evaluate(x_test, y_test, verbose=0)
print('IRNN test score:', scores[0])
print('IRNN test accuracy:', scores[1])
Explanation: Multi-layer RNN with Keras
When stacking RNNs with Keras remember to set return_sequences=True on hidden layers.
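A minimal Sequential-API sketch of the same stacking rule (illustration only; it reuses the sizes defined in the cell above):
stacked = Sequential()
stacked.add(SimpleRNN(hidden_units_1, return_sequences=True, input_shape=(28, 28)))  # hidden layer: return the full sequence
stacked.add(SimpleRNN(hidden_units_2))  # last recurrent layer: return_sequences defaults to False
stacked.add(Dense(num_classes, activation='softmax'))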
End of explanation
t_min, t_max = 0, 30
n_steps = 20
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps,resolution = 0.1):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
resolution = 0.1
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch[0], y_batch[0]]
Explanation: Time series
Now let’s take a look at how to handle time series, such as stock prices, air temperature, brain wave patterns, and so on. In this section we will train an RNN to predict the next value in a generated time series. Each training instance is a randomly selected sequence of 20 consecutive values from the time series, and the target sequence is the same as the input sequence, except it is shifted by one time step into the future.
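A quick sanity check of the one-step shift, using the next_batch() helper defined above (the target at step t should equal the input at step t+1):
Xb, yb = next_batch(1, n_steps)
print(np.allclose(Xb[0, 1:, 0], yb[0, :-1, 0]))  # expected: True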
End of explanation
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
n_outputs = 1
learning_rate = 0.001
loss = tf.reduce_sum(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_iterations = 1000
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
print(y_pred)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Using an OutputProjectionWrapper
End of explanation
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
n_outputs = 1
learning_rate = 0.001
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = fully_connected(stacked_rnn_outputs, n_outputs, activation_fn=None)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
n_iterations = 1000
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
print(y_pred)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Without using an OutputProjectionWrapper
End of explanation
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
def ts_next_batch(batch_size, n_steps,resolution = 0.1):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
keras.backend.clear_session()
batch_size = 50
hidden_units = 100
learning_rate = 0.001
n_inputs = 1
n_outputs = 1
n_steps = 20
print('Evaluate IRNN...')
a = Input(shape=(n_steps,n_inputs))
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(a)
b = keras.layers.core.Reshape((-1, hidden_units))(b)
b = Dense(1,activation=None)(b)
b = keras.layers.core.Reshape((n_steps, n_outputs))(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_squared_error'])
X_batch, y_batch = ts_next_batch(batch_size*1000, n_steps)
x_test, y_test = ts_next_batch(batch_size, n_steps)
model.fit(X_batch, y_batch,
batch_size=batch_size,
epochs=1,
verbose=1,
validation_data=(x_test, y_test))
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = model.predict(X_new,verbose=0)
print(y_pred)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: With Keras
End of explanation
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
keep_prob = 0.5
learning_rate = 0.001
is_training = True
def deep_rnn_with_dropout(X, y, is_training):
if is_training:
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicRNNCell(num_units=n_neurons), input_keep_prob=keep_prob) for _ in range(n_layers)],)
else:
multi_layer_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicRNNCell(num_units=n_neurons) for _ in range(n_layers)],)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = fully_connected(stacked_rnn_outputs, n_outputs, activation_fn=None)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
return outputs, loss, training_op
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
outputs, loss, training_op = deep_rnn_with_dropout(X, y, is_training)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 2000
batch_size = 50
with tf.Session() as sess:
if is_training:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
save_path = saver.save(sess, "/tmp/my_model.ckpt")
else:
saver.restore(sess, "/tmp/my_model.ckpt")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
is_training = False
with tf.Session() as sess:
if is_training:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
save_path = saver.save(sess, "/tmp/my_model.ckpt")
else:
saver.restore(sess, "/tmp/my_model.ckpt")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Dropout
If you build a very deep RNN, it may end up overfitting the training set. To prevent that, a common technique is to apply dropout. You can simply add a dropout layer before or after the RNN as usual, but if you also want to apply dropout between the RNN layers, you need to use a DropoutWrapper.
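A sketch of a DropoutWrapper that also drops the cell outputs, assuming the same n_neurons and keep_prob as above (output_keep_prob is a standard argument of tf.contrib.rnn.DropoutWrapper):
dropout_cell = tf.contrib.rnn.DropoutWrapper(
tf.contrib.rnn.BasicRNNCell(num_units=n_neurons),
input_keep_prob=keep_prob, output_keep_prob=keep_prob)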
End of explanation
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import SimpleRNN
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
def ts_next_batch(batch_size, n_steps,resolution = 0.1):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
keras.backend.clear_session()
batch_size = 50
hidden_units = 100
learning_rate = 0.001
n_inputs = 1
n_outputs = 1
n_steps = 20
n_layers = 3
keep_prob = 0.5
print('Evaluate IRNN...')
a = Input(shape=(n_steps,n_inputs))
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(a)
b = Dropout(keep_prob)(b)
for i in range(n_layers-1):
b = SimpleRNN(hidden_units,
kernel_initializer=initializers.RandomNormal(stddev=0.001),
recurrent_initializer=initializers.Identity(),
activation='relu' , return_sequences=True)(b)
b = Dropout(keep_prob)(b)
b = keras.layers.core.Reshape((-1, hidden_units))(b)
b = Dense(1,activation=None)(b)
b = keras.layers.core.Reshape((n_steps, n_outputs))(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='mean_squared_error',
optimizer=optimizer,
metrics=['mean_squared_error'])
X_batch, y_batch = ts_next_batch(batch_size*2000, n_steps)
x_test, y_test = ts_next_batch(batch_size*2, n_steps)
model.fit(X_batch, y_batch,
batch_size=batch_size,
epochs=1,
verbose=1,
validation_data=(x_test, y_test))
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = model.predict(X_new,verbose=0)
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Dropout with Keras
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, 28, 28))  # use the literal 28x28 image dimensions; n_steps/n_inputs are redefined below
y_test = mnist.test.labels
tf.reset_default_graph()
from tensorflow.contrib.layers import fully_connected
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
multi_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons) for _ in range(n_layers)])
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = fully_connected(top_layer_h_state, n_outputs, activation_fn=None, scope="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((batch_size, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print("Epoch", epoch, "Train accuracy =", acc_train, "Test accuracy =", acc_test)
Explanation: LSTM
The Long Short-Term Memory (LSTM) cell was proposed by Hochreiter and Schmidhuber (1997), and it was gradually improved over the years by several researchers. If you consider the LSTM cell as a black box, it can be used very much like a basic cell, except it will perform much better; training will converge faster and it will detect long-term dependencies in the data.
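As a drop-in sketch (it would need its own variable scope if added to the graph above), a single LSTM layer is created just like a basic cell; its state is a (c, h) LSTMStateTuple, and the h part is what the softmax layer above consumes:
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
lstm_outputs, lstm_states = tf.nn.dynamic_rnn(lstm_cell, X, dtype=tf.float32)
h_state = lstm_states.h  # equivalent to states[-1][1] in the multi-layer version above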
End of explanation
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation
from keras.layers import LSTM
from keras import initializers
from keras.optimizers import RMSprop
from keras.models import Model
from keras.layers import Input, Dense
keras.backend.clear_session()
batch_size = 150
num_classes = 10
epochs = 10
n_neurons = 150
n_layers = 3
learning_rate = 0.001
# the data, shuffled and split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(x_train.shape[0], 28, 28)
x_test = x_test.reshape(x_test.shape[0], 28, 28)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
print('Evaluate LSTM...')
a = Input(shape=x_train.shape[1:])
b = LSTM(n_neurons,return_sequences=True)(a)
for i in range(n_layers-2):
b = LSTM(n_neurons,return_sequences=True)(b)
b = LSTM(n_neurons,return_sequences=False)(b)
b = Dense(num_classes)(b)
b = Activation('softmax')(b)
optimizer = keras.optimizers.Adamax(lr=learning_rate)
model = Model(inputs=[a], outputs=[b])
model.compile(loss='categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_test, y_test))
scores = model.evaluate(x_test, y_test, verbose=0)
print('LSTM test score:', scores[0])
print('LSTM test accuracy:', scores[1])
Explanation: LSTM with Keras
End of explanation
with tf.device("/gpu:0"): # BAD! This is ignored.
layer1 = tf.contrib.rnn.BasicRNNCell( num_units = n_neurons)
with tf.device("/gpu:1"): # BAD! Ignored again.
layer2 = tf.contrib.rnn.BasicRNNCell( num_units = n_neurons)
Explanation: Distributing layers across devices
If you try to create each cell in a different device() block, it will not work.
End of explanation
import tensorflow as tf
class DeviceCellWrapper(tf.contrib.rnn.RNNCell):
def __init__(self, device, cell):
self._cell = cell
self._device = device
@property
def state_size(self):
return self._cell.state_size
@property
def output_size(self):
return self._cell.output_size
def __call__(self, inputs, state, scope=None):
with tf.device(self._device):
return self._cell(inputs, state, scope)
tf.reset_default_graph()
n_inputs = 5
n_neurons = 100
devices = ["/cpu:0"]*5
n_steps = 20
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
lstm_cells = [DeviceCellWrapper(device, tf.contrib.rnn.BasicRNNCell(num_units=n_neurons))
for device in devices]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(sess.run(outputs, feed_dict={X: rnd.rand(2, n_steps, n_inputs)}))
Explanation: This fails because a BasicRNNCell is a cell factory, not a cell per se; no cells get created when you create the factory, and thus no variables do either. The device block is simply ignored. The cells actually get created later. When you call dynamic_rnn(), it calls the MultiRNNCell, which calls each individual BasicRNNCell, which creates the actual cells (including their variables). Unfortunately, none of these classes provide any way to control the devices on which the variables get created. If you try to put the dynamic_rnn() call within a device block, the whole RNN gets pinned to a single device.
The trick is to create your own cell wrapper
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
del model
keras.backend.clear_session()
max_features = 20000
# cut texts after this number of words
# (among top max_features most common words)
maxlen = 100
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(Bidirectional(LSTM(64)))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=4,
validation_data=[x_test, y_test])
Explanation: Bidirectional LSTM on the IMDB sentiment classification task on Keras
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
del model
keras.backend.clear_session()
max_features = 20000
# cut texts after this number of words
# (among top max_features most common words)
maxlen = 100
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(64))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=4,
validation_data=[x_test, y_test])
Explanation: LSTM on the IMDB sentiment classification task on Keras
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Embedding, LSTM, Bidirectional
from keras.datasets import imdb
del model
keras.backend.clear_session()
max_features = 20000
# cut texts after this number of words
# (among top max_features most common words)
maxlen = 100
batch_size = 32
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print("Pad sequences (samples x time)")
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
y_train = np.array(y_train)
y_test = np.array(y_test)
model = Sequential()
model.add(Embedding(max_features, 128, input_length=maxlen))
model.add(LSTM(64))
model.add(Dropout(0.5))
model.add(Dense(100))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# try using different optimizers and different optimizer configs
model.compile('adam', 'binary_crossentropy', metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=4,
validation_data=[x_test, y_test])
Explanation: LSTM+FC on the IMDB sentiment classification task on Keras
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import LSTM
from keras.layers import Conv1D, MaxPooling1D
from keras.datasets import imdb
del model
keras.backend.clear_session()
# Embedding
max_features = 20000
maxlen = 100
embedding_size = 128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 30
epochs = 4
'''
Note:
batch_size is highly sensitive.
Only 2 epochs are needed as the dataset is very small.
'''
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
model.add(Embedding(max_features, embedding_size, input_length=maxlen))
model.add(Dropout(0.25))
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
model.add(MaxPooling1D(pool_size=pool_size))
model.add(LSTM(lstm_output_size))
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
print('Train...')
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
score, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print('Test score:', score)
print('Test accuracy:', acc)
Explanation: Recurrent convolutional network on the IMDB sentiment classification task
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.datasets import imdb
keras.backend.clear_session()
del model
# set parameters:
max_features = 5000
maxlen = 400
batch_size = 32
embedding_dims = 50
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 4
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
model.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
Explanation: Convolutional network on the IMDB sentiment classification task
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Embedding
from keras.layers import GlobalAveragePooling1D
from keras.datasets import imdb
keras.backend.clear_session()
del model
def create_ngram_set(input_list, ngram_value=2):
Extract a set of n-grams from a list of integers.
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=2)
{(4, 9), (4, 1), (1, 4), (9, 4)}
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=3)
[(1, 4, 9), (4, 9, 4), (9, 4, 1), (4, 1, 4)]
return set(zip(*[input_list[i:] for i in range(ngram_value)]))
def add_ngram(sequences, token_indice, ngram_range=2):
Augment the input list of list (sequences) by appending n-grams values.
Example: adding bi-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017}
>>> add_ngram(sequences, token_indice, ngram_range=2)
[[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]]
Example: adding tri-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018}
>>> add_ngram(sequences, token_indice, ngram_range=3)
[[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]]
new_sequences = []
for input_list in sequences:
new_list = input_list[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_sequences.append(new_list)
return new_sequences
# Set parameters:
# ngram_range = 2 will add bi-grams features
ngram_range = 2
max_features = 20000
maxlen = 400
batch_size = 32
embedding_dims = 50
epochs = 5
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
if ngram_range > 1:
print('Adding {}-gram features'.format(ngram_range))
# Create set of unique n-gram from the training set.
ngram_set = set()
for input_list in x_train:
for i in range(2, ngram_range + 1):
set_of_ngram = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(set_of_ngram)
# Dictionary mapping n-gram token to a unique integer.
# Integer values are greater than max_features in order
# to avoid collision with existing features.
start_index = max_features + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice}
# max_features is the highest integer that could be found in the dataset.
max_features = np.max(list(indice_token.keys())) + 1
# Augmenting x_train and x_test with n-grams features
x_train = add_ngram(x_train, token_indice, ngram_range)
x_test = add_ngram(x_test, token_indice, ngram_range)
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
# we add a GlobalAveragePooling1D, which will average the embeddings
# of all words in the document
model.add(GlobalAveragePooling1D())
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
Explanation: IMDB dataset with bi-gram embeddings
End of explanation
from keras.preprocessing import sequence
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalMaxPooling1D
from keras.datasets import imdb
keras.backend.clear_session()
del model
def create_ngram_set(input_list, ngram_value=2):
Extract a set of n-grams from a list of integers.
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=2)
{(4, 9), (4, 1), (1, 4), (9, 4)}
>>> create_ngram_set([1, 4, 9, 4, 1, 4], ngram_value=3)
[(1, 4, 9), (4, 9, 4), (9, 4, 1), (4, 1, 4)]
return set(zip(*[input_list[i:] for i in range(ngram_value)]))
def add_ngram(sequences, token_indice, ngram_range=2):
Augment the input list of list (sequences) by appending n-grams values.
Example: adding bi-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017}
>>> add_ngram(sequences, token_indice, ngram_range=2)
[[1, 3, 4, 5, 1337, 2017], [1, 3, 7, 9, 2, 1337, 42]]
Example: adding tri-gram
>>> sequences = [[1, 3, 4, 5], [1, 3, 7, 9, 2]]
>>> token_indice = {(1, 3): 1337, (9, 2): 42, (4, 5): 2017, (7, 9, 2): 2018}
>>> add_ngram(sequences, token_indice, ngram_range=3)
[[1, 3, 4, 5, 1337], [1, 3, 7, 9, 2, 1337, 2018]]
new_sequences = []
for input_list in sequences:
new_list = input_list[:]
for i in range(len(new_list) - ngram_range + 1):
for ngram_value in range(2, ngram_range + 1):
ngram = tuple(new_list[i:i + ngram_value])
if ngram in token_indice:
new_list.append(token_indice[ngram])
new_sequences.append(new_list)
return new_sequences
# Set parameters:
# ngram_range = 2 will add bi-grams features
ngram_range = 2
max_features = 20000
maxlen = 400
batch_size = 32
embedding_dims = 50
# convolution parameters (reusing the values from the earlier Conv1D example)
filters = 250
kernel_size = 3
hidden_dims = 250
epochs = 5
print('Loading data...')
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
print(len(x_train), 'train sequences')
print(len(x_test), 'test sequences')
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
if ngram_range > 1:
print('Adding {}-gram features'.format(ngram_range))
# Create set of unique n-gram from the training set.
ngram_set = set()
for input_list in x_train:
for i in range(2, ngram_range + 1):
set_of_ngram = create_ngram_set(input_list, ngram_value=i)
ngram_set.update(set_of_ngram)
# Dictionary mapping n-gram token to a unique integer.
# Integer values are greater than max_features in order
# to avoid collision with existing features.
start_index = max_features + 1
token_indice = {v: k + start_index for k, v in enumerate(ngram_set)}
indice_token = {token_indice[k]: k for k in token_indice}
# max_features is the highest integer that could be found in the dataset.
max_features = np.max(list(indice_token.keys())) + 1
# Augmenting x_train and x_test with n-grams features
x_train = add_ngram(x_train, token_indice, ngram_range)
x_test = add_ngram(x_test, token_indice, ngram_range)
print('Average train sequence length: {}'.format(np.mean(list(map(len, x_train)), dtype=int)))
print('Average test sequence length: {}'.format(np.mean(list(map(len, x_test)), dtype=int)))
print('Pad sequences (samples x time)')
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)
print('x_train shape:', x_train.shape)
print('x_test shape:', x_test.shape)
print('Build model...')
model = Sequential()
# we start off with an efficient embedding layer which maps
# our vocab indices into embedding_dims dimensions
model.add(Embedding(max_features,
embedding_dims,
input_length=maxlen))
model.add(Dropout(0.2))
# we add a Convolution1D, which will learn filters
# word group filters of size filter_length:
model.add(Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1))
# we use max pooling:
model.add(GlobalMaxPooling1D())
# We add a vanilla hidden layer:
model.add(Dense(hidden_dims))
model.add(Dropout(0.2))
model.add(Activation('relu'))
# We project onto a single unit output layer, and squash it with a sigmoid:
model.add(Dense(1))
model.add(Activation('sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
Explanation: IMDB dataset with bi-gram embeddings and Convolution1D
End of explanation |
2,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
record schedules for 2 weeks, then augment count with weekly flight numbers.
seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.
Step1: sch checks out with source
Step2: mdf checks out with source | Python Code:
for i in locations:
print i
if i not in sch:sch[i]={}
#march 11-24 = 2 weeks
for d in range (11,25):
if d not in sch[i]:
try:
url=airportialinks[i]
full=url+'arrivals/201703'+str(d)
m=requests.get(full).content
sch[i][full]=pd.read_html(m)[0]
#print full
except: pass #print 'no tables',i,d
for i in range(11,25):
testurl=u'https://www.airportia.com/jordan/queen-alia-international-airport/arrivals/201703'+str(i)
print 'nr. of flights on March',i,':',len(sch['AMM'][testurl])
testurl=u'https://www.airportia.com/jordan/queen-alia-international-airport/arrivals/20170318'
k=sch['AMM'][testurl]
k[k['From']=='Frankfurt FRA']
Explanation: record schedules for 2 weeks, then augment count with weekly flight numbers.
seasonal and seasonal charter will count as once per week for 3 months, so 12/52 per week. TGM separate, since its history is in the past.
End of explanation
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['To']=i
df['Date']=d
mdf=pd.concat([mdf,df])
mdf['City']=[i[:i.rfind(' ')] for i in mdf['From']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['From']]
k=mdf[mdf['Date']==testurl]
k[k['From']=='Frankfurt FRA']
Explanation: sch checks out with source
End of explanation
file("mdf_jo_arrv.json",'w').write(json.dumps(mdf.reset_index().to_json()))
len(mdf)
airlines=set(mdf['Airline'])
cities=set(mdf['City'])
file("cities_jo_arrv.json",'w').write(json.dumps(list(cities)))
file("airlines_jo_arrv.json",'w').write(json.dumps(list(airlines)))
citycoords={}
for i in cities:
if i not in citycoords:
if i==u'Birmingham': z='Birmingham, UK'
elif i==u'Valencia': z='Valencia, Spain'
elif i==u'Naples': z='Naples, Italy'
elif i==u'St. Petersburg': z='St. Petersburg, Russia'
elif i==u'Bristol': z='Bristol, UK'
elif i==u'Beida': z='Bayda, Libya'
else: z=i
citycoords[i]=Geocoder(apik).geocode(z)
print i
citysave={}
for i in citycoords:
citysave[i]={"coords":citycoords[i][0].coordinates,
"country":citycoords[i][0].country}
file("citysave_jo_arrv.json",'w').write(json.dumps(citysave))
Explanation: mdf checks out with source
End of explanation |
2,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook provides an example on how to use a custom class within Flexcode. <br>
In order to be compatible, a regression method needs to have a fit and predict method implemented - i.e.
model.fit() and model.predict() need to be the functions used for training and predicting respectively.
We provide here an example with artifical data. <br>
We compare the FlexZBoost (Flexcode with builtin XGBoost) with the custom class of FLexcode when passing
XGBoost Regressor. The two should give basically identical results.
Step1: Data Creation
Step2: FlexZBoost
Step3: Custom Model
Our custom model in this case is going to be XGBRegressor. <br>
The only difference with the above is that we are going to use the CustomModel class and we are going to pass
XGBRegressor as custom_model.
After that, everything is exactly as above. <br>
Parameters can be passed also in the same way as above.
Step4: The two conditional density estimates should be the same across the board. <br>
We check the maximum difference in absolute value between the two. | Python Code:
import flexcode
import numpy as np
import xgboost as xgb
from flexcode.regression_models import XGBoost, CustomModel
Explanation: This notebook provides an example on how to use a custom class within Flexcode. <br>
In order to be compatible, a regression method needs to have a fit and predict method implemented - i.e.
model.fit() and model.predict() need to be the functions used for training and predicting respectively.
We provide here an example with artifical data. <br>
We compare the FlexZBoost (Flexcode with builtin XGBoost) with the custom class of FLexcode when passing
XGBoost Regressor. The two should give basically identical results.
End of explanation
def generate_data(n_draws):
x = np.random.normal(0, 1, n_draws)
z = np.random.normal(x, 1, n_draws)
return x, z
x_train, z_train = generate_data(5000)
x_validation, z_validation = generate_data(5000)
x_test, z_test = generate_data(5000)
Explanation: Data Creation
End of explanation
# Parameterize model
model = flexcode.FlexCodeModel(XGBoost, max_basis=31, basis_system="cosine",
regression_params={'max_depth': 3, 'learning_rate': 0.5, 'objective': 'reg:linear'})
# Fit and tune model
model.fit(x_train, z_train)
cdes_predict_xgb, z_grid = model.predict(x_test, n_grid=200)
model.__dict__
import pickle
pickle.dump(file=open('example.pkl', 'wb'), obj=model,
protocol=pickle.HIGHEST_PROTOCOL)
model = pickle.load(open('example.pkl', 'rb'))
model.__dict__
cdes_predict_xgb, z_grid = model.predict(x_test, n_grid=200)
Explanation: FlexZBoost
End of explanation
# Parameterize model
my_model = xgb.XGBRegressor
model_c = flexcode.FlexCodeModel(CustomModel, max_basis=31, basis_system="cosine",
regression_params={'max_depth': 3, 'learning_rate': 0.5, 'objective': 'reg:linear'},
custom_model=my_model)
# Fit and tune model
model_c.fit(x_train, z_train)
cdes_predict_custom, z_grid = model_c.predict(x_test, n_grid=200)
Explanation: Custom Model
Our custom model in this case is going to be XGBRegressor. <br>
The only difference with the above is that we are going to use the CustomModel class and we are going to pass
XGBRegressor as custom_model.
After that, everything is exactly as above. <br>
Parameters can be passed also in the same way as above.
End of explanation
np.max(np.abs(cdes_predict_custom - cdes_predict_xgb))
Explanation: The two conditional density estimates should be the same across the board. <br>
We check the maximum difference in absolute value between the two.
End of explanation |
2,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Long Short-Term Memory Network Example
Licensed under the Apache License, Version 2.0.
This example implements a Bayesian version of LSTM (Hochreiter, Schmidhuber, 1997) using tf.keras and edward2.
It's based on the discussion in this issue.
Step1: Data generation
We generate random, valid and invalid strings from the embedded reber grammar as inputs. The objective is a binary classification between valid and invalid strings.
Step2: Hyperparameters
Step3: Training a baseline LSTM
This is our baseline model. It's a simple RNN with a single layer of LSTM cells.
As it would be expected the model can quickly be fitted to differentiate the valid from the invalid strings.
Step4: Training a Bayesian LSTM
In comparison we build a similar RNN we we replace the standard LSTM cells with the LSTMCellFlipout class from edward2.
Step5: From the training curves and the test results it's clear that the model is not converging and we are not able to differentiate the valid from the invalid strings.
Step6: Hyperparameter tuning
In this section we run the experiments over a grid of, for BNN important, parameters.
Step7: Analysing the impact of hyperparameters
The following plot shows the test accuracy over the hyperparameters. It's clear that scaling the loss of the Bayesian layers in the model is necessary to reach convergence.
To plot those results NaN values in the clipvalues column are set to -1. In the corresponding runs gradients were not clipped.
What is the correct scaling factor for the regularizers?
BNN use the Kullback-Leibler Divergence (KL-divergence) as a regularization term to minimize the difference between the prior distribution $p$ and the approximated posterior distribution $q$ over the parameters of the network. This divergence term is written as $KL(q||p)$.
The KL-divergence should be included once for an entire dataset. Due to the implementation in tensorflow the regularization term is added to the value of the loss function after every batch though, leading to a larger KL-divergence term compared to the loss. To bring both terms on the same scale, we apply a scaling factor < 1 to the KL-divergence. The loss for each batch is divided by the batch size and therefore $KL(q||p)$ (which is in relation to the complete dataset) is divided by the number of training samples (in this case x_train.shape[0]).
Step8: Choosing the correct batch size as well as possibly a value to clip gradients by during the training is more complicated.
From those experiments it looks like a smaller batch size will be benefitial, even though this is also dependent on the dataset. Running this experiment with less samples shows that a larger batch size can greatly increase accuracy. A possible explaination for this behaviour is that in cases where less data is available, larger batch sizes compensate for the additional variance that is caused by the Bayesian properties.
Using this dataset, clipping the gradients doesn't seem to have a huge affect on the outcome although it is often an important technique with Bayesian LSTMs.
Step9: Training the best Bayesian Model | Python Code:
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow as tf
import edward2 as ed
import matplotlib.pyplot as plt
from tqdm import tqdm
from sklearn.model_selection import train_test_split, ParameterGrid
from tensorflow.keras.preprocessing import sequence
import embedded_reber_grammar as erg
def plot_hist(hist):
plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(hist.history['val_accuracy'], label='val_accuracy')
plt.plot(hist.history['accuracy'], label='train_accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(hist.history['val_loss'], label='val_loss')
plt.plot(hist.history['loss'], label='train_loss')
plt.legend()
plt.show()
Explanation: Bayesian Long Short-Term Memory Network Example
Licensed under the Apache License, Version 2.0.
This example implements a Bayesian version of LSTM (Hochreiter, Schmidhuber, 1997) using tf.keras and edward2.
It's based on the discussion in this issue.
End of explanation
x, y = [], []
n = 3000
for i in range(n):
x.append(np.asarray(erg.encode_string(erg.generate_valid_string(erg.embedded_rg))))
y.append(1)
for i in range(n):
x.append(np.asarray(erg.encode_string(erg.generate_invalid_string(erg.embedded_rg))))
y.append(0)
x = sequence.pad_sequences(x)
x_train, x_test, y_train, y_test = train_test_split(np.asarray(x), np.asarray(y))
print(f"Number of training samples: {x_train.shape[0]}")
print(f"Number of test samples: {x_test.shape[0]} \n")
sequence_length = x_train.shape[1]
num_chars = x_train.shape[2]
print(f"Length of sequences: {sequence_length}")
print(f"Number of characters: {num_chars}")
Explanation: Data generation
We generate random, valid and invalid strings from the embedded reber grammar as inputs. The objective is a binary classification between valid and invalid strings.
End of explanation
batch_size = 64
epochs = 40
Explanation: Hyperparameters
End of explanation
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(sequence_length, num_chars)))
model.add(tf.keras.layers.RNN(
tf.keras.layers.LSTMCell(128)
))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(x_train, y_train, validation_split=0.2, epochs=epochs, batch_size=batch_size, verbose=0)
test_results = model.evaluate(x_test, y_test)
print(f"Test loss: {test_results[0]}")
print(f"Test accuracy: {test_results[1]}")
plot_hist(hist)
Explanation: Training a baseline LSTM
This is our baseline model. It's a simple RNN with a single layer of LSTM cells.
As it would be expected the model can quickly be fitted to differentiate the valid from the invalid strings.
End of explanation
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(sequence_length, num_chars)))
model.add(tf.keras.layers.RNN(
ed.layers.LSTMCellFlipout(128)
))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
hist = model.fit(x_train, y_train, validation_split=0.2, epochs=epochs, batch_size=256, verbose=0)
Explanation: Training a Bayesian LSTM
In comparison, we build a similar RNN where we replace the standard LSTM cells with the LSTMCellFlipout class from edward2.
End of explanation
test_results = model.evaluate(x_test, y_test)
print(f"Test loss: {test_results[0]}")
print(f"Test accuracy: {test_results[1]}")
plot_hist(hist)
Explanation: From the training curves and the test results it's clear that the model is not converging and we are not able to differentiate the valid from the invalid strings.
End of explanation
params = {
'loss_scaling': [1., 1./x_train.shape[0]],
'batch_size': [64, 128, 256],
'clipvalue': [None, 0.1, 0.5],
}
param_grid = ParameterGrid(params)
results = pd.DataFrame(columns=list(params.keys())+['test_loss', 'test_accuracy'])
def training_run(param_set):
sf = param_set['loss_scaling']
bs = int(param_set['batch_size'])
cv = param_set['clipvalue']
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape=(sequence_length, num_chars)))
model.add(tf.keras.layers.RNN(
ed.layers.LSTMCellFlipout(
128,
kernel_regularizer=ed.regularizers.NormalKLDivergence(scale_factor=sf),
recurrent_regularizer=ed.regularizers.NormalKLDivergence(scale_factor=sf)
),
))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.Dense(1, activation='sigmoid'))
optimizer = tf.keras.optimizers.Adam(clipvalue=cv)
model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
hist = model.fit(x_train, y_train, validation_split=0.2, epochs=epochs, batch_size=bs, verbose=0)
return model, hist
for param_set in tqdm(param_grid):
model, hist = training_run(param_set)
test_results = np.mean(np.asarray([model.evaluate(x_test, y_test, verbose=0) for _ in range(10)]), axis=0)
new_line = param_set
new_line['test_loss'] = test_results[0]
new_line['test_accuracy'] = test_results[1]
results = pd.concat([results, pd.DataFrame(new_line, index=[0])], ignore_index=True, axis=0)
Explanation: Hyperparameter tuning
In this section we run the experiments over a grid of parameters that are particularly important for BNNs.
End of explanation
results_ = results.drop(columns=['test_loss']).fillna(-1)
sns.pairplot(results_, y_vars=['test_accuracy'], x_vars=['loss_scaling', 'batch_size', 'clipvalue'])
Explanation: Analysing the impact of hyperparameters
The following plot shows the test accuracy over the hyperparameters. It's clear that scaling the loss of the Bayesian layers in the model is necessary to reach convergence.
To plot those results NaN values in the clipvalues column are set to -1. In the corresponding runs gradients were not clipped.
What is the correct scaling factor for the regularizers?
BNN use the Kullback-Leibler Divergence (KL-divergence) as a regularization term to minimize the difference between the prior distribution $p$ and the approximated posterior distribution $q$ over the parameters of the network. This divergence term is written as $KL(q||p)$.
The KL-divergence should be included once for the entire dataset. Because of how the regularizers are implemented in TensorFlow, however, the term is added to the loss after every batch, which makes the KL-divergence large compared to the data-fit term. To bring both terms onto the same scale, we apply a scaling factor < 1 to the KL-divergence. The loss for each batch is divided by the batch size, and therefore $KL(q||p)$ (which relates to the complete dataset) is divided by the number of training samples (in this case x_train.shape[0]).
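Written out (a hedged restatement, assuming Keras's default mean reduction over each batch of size $B$ drawn from $N$ training samples), the per-batch objective that gets minimized is roughly
$$\mathcal{L}_{\text{batch}} = \frac{1}{B}\sum_{i \in \text{batch}} \mathrm{NLL}(y_i, \hat{y}_i) + \frac{1}{N}\,KL(q\,\|\,p),$$
which is why the hyperparameter grid in this notebook passes scale_factor=1./x_train.shape[0] to ed.regularizers.NormalKLDivergence.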
End of explanation
results_ = results_[results_['loss_scaling'] != 1.]
results_lowclip = results_[results_['clipvalue'] == 0.1].drop(columns=['loss_scaling'])
results_highclip = results_[results_['clipvalue'] == 0.5].drop(columns=['loss_scaling'])
plt.scatter(results_lowclip['batch_size'], results_lowclip['test_accuracy'], c='b', label="clipvalue=0.1")
plt.scatter(results_highclip['batch_size'], results_highclip['test_accuracy'], c='r', label="clipvalue=0.5")
plt.xlabel("batch size")
plt.ylabel("accuracy")
plt.legend()
plt.show()
results_ = results_[results_['loss_scaling'] != 1.]
results_64 = results_[results_['batch_size'] == 64].drop(columns=['loss_scaling'])
results_128 = results_[results_['batch_size'] == 128].drop(columns=['loss_scaling'])
results_256 = results_[results_['batch_size'] == 256].drop(columns=['loss_scaling'])
plt.scatter(results_64['clipvalue'], results_64['test_accuracy'], c='b', label="batch_size=64")
plt.scatter(results_128['clipvalue'], results_128['test_accuracy'], c='r', label="batch_size=128")
plt.scatter(results_256['clipvalue'], results_256['test_accuracy'], c='g', label="batch_size=256")
plt.xlabel("clipvalue")
plt.ylabel("accuracy")
plt.legend()
plt.show()
Explanation: Choosing the correct batch size as well as possibly a value to clip gradients by during the training is more complicated.
From those experiments it looks like a smaller batch size will be beneficial, even though this also depends on the dataset. Running this experiment with fewer samples shows that a larger batch size can greatly increase accuracy. A possible explanation for this behaviour is that in cases where less data is available, larger batch sizes compensate for the additional variance that is caused by the Bayesian properties.
Using this dataset, clipping the gradients doesn't seem to have a huge effect on the outcome, although it is often an important technique with Bayesian LSTMs.
End of explanation
best_params = results_.iloc[np.argmax(results_['test_accuracy'])].to_dict()
if best_params['clipvalue'] < 0:
best_params['clipvalue'] = None
model, hist = training_run(best_params)
plot_hist(hist)
Explanation: Training the best Bayesian Model
End of explanation |
2,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--BOOK_INFORMATION-->
<a href="https
Step1: Because of all the noise we added, the two half moons might not be apparent at first glance.
That's a perfect scenario for our current intentions, which is to show that decision trees are
tempted to overlook the general arrangement of data points (that is, the fact that they are
organized in half circles) and instead focus on the noise in the data.
To illustrate this point, we first need to split the data into training and test sets. We choose a
comfortable 75-25 split (by not specifying train_size), as we have done a number of times
before
Step2: Now let's have some fun. What we want to do is to study how the decision boundary of a
decision tree changes as we make it deeper and deeper.
For this, we will bring back the plot_decision_boundary function from Chapter 6,
Detecting Pedestrians with Support Vector Machines among others
Step3: Then we can code up a for loop, where at each iteration, we fit a tree of a different depth
Step4: As we continue to build deeper and deeper trees, we notice something strange
Step5: The tree object provides a number of options, the most important of which are the
following
Step6: Then we are ready to train the classifier on the data from the preceding code
Step7: The test labels can be predicted with the predict method
Step8: Using scikit-learn's accuracy_score, we can evaluate the model on the test set
Step9: After training, we can pass the predicted labels to the plot_decision_boundary function
Step10: Implementing a random forest with scikit-learn
Alternatively, we can implement random forests using scikit-learn
Step11: Here, we have a number of options to customize the ensemble
Step12: This gives roughly the same result as in OpenCV. We can use our helper function to plot the
decision boundary
Step13: Implementing extremely randomized trees
Random forests are already pretty arbitrary. But what if we wanted to take the randomness
to its extreme?
In extremely randomized trees (see ExtraTreesClassifier and ExtraTreesRegressor
classes), the randomness is taken even further than in random forests. Remember how
decision trees usually choose a threshold for every feature so that the purity of the node
split is maximized. Extremely randomized trees, on the other hand, choose these thresholds
at random. The best one of these randomly-generated thresholds is then used as the
splitting rule.
We can build an extremely randomized tree as follows
Step14: To illustrate the difference between a single decision tree, a random forest, and extremely
randomized trees, let's consider a simple dataset, such as the Iris dataset
Step15: We can then fit and score the tree object the same way we did before
Step16: For comparison, using a random forest would have resulted in the same performance
Step17: In fact, the same is true for a single tree
Step18: So what's the difference between them?
To answer this question, we have to look at the
decision boundaries. Fortunately, we have already imported our
plot_decision_boundary helper function in the preceding section, so all we need to do is
pass the different classifier objects to it.
We will build a list of classifiers, where each entry in the list is a tuple that contains an
index, a name for the classifier, and the classifier object
Step19: Then it's easy to pass the list of classifiers to our helper function such that the decision
landscape of every classifier is drawn in its own subplot | Python Code:
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.25, random_state=100)
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], s=100, c=y)
plt.xlabel('feature 1')
plt.ylabel('feature 2');
Explanation: <!--BOOK_INFORMATION-->
<a href="https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv" target="_blank"><img align="left" src="data/cover.jpg" style="width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;"></a>
This notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.
The code is released under the MIT license,
and is available on GitHub.
Note that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.
If you find this content useful, please consider supporting the work by
buying the book!
<!--NAVIGATION-->
< Understanding Ensemble Methods | Contents | Using Random Forests for Face Recognition >
Combining Decision Trees Into a Random Forest
A popular variation of bagged decision trees is the so-called random forest. These are
essentially a collection of decision trees, where each tree is slightly different from the others.
In contrast to bagged decision trees, each tree in a random forest is trained on a slightly
different subset of data features.
Although a single tree of unlimited depth might do a relatively good job of predicting the
data, it is also prone to overfitting. The idea behind random forests is to build a large
number of trees, each of them trained on a random subset of data samples and features.
Because of the randomness of the procedure, each tree in the forest will overfit the data in a
slightly different way. The effect of overfitting can then be reduced by averaging the
predictions of the individual trees.
Understanding the shortcomings of decision trees
The effect of overfitting the dataset, which a decision tree often falls victim of is best
demonstrated through a simple example.
For this, we will return to the make_moons function from scikit-learn's datasets module,
which we previously used in Chapter 8, Discovering Hidden Structures with Unsupervised
Learning to organize data into two interleaving half circles. Here, we choose to generate 100
data samples belonging to two half circles, in combination with some Gaussian noise with
standard deviation 0.25:
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=100
)
Explanation: Because of all the noise we added, the two half moons might not be apparent at first glance.
That's a perfect scenario for our current intentions, which is to show that decision trees are
tempted to overlook the general arrangement of data points (that is, the fact that they are
organized in half circles) and instead focus on the noise in the data.
To illustrate this point, we first need to split the data into training and test sets. We choose a
comfortable 75-25 split (by not specifying train_size), as we have done a number of times
before:
End of explanation
import numpy as np
def plot_decision_boundary(classifier, X_test, y_test):
# create a mesh to plot in
h = 0.02 # step size in mesh
x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
X_hypo = np.c_[xx.ravel().astype(np.float32),
yy.ravel().astype(np.float32)]
ret = classifier.predict(X_hypo)
if isinstance(ret, tuple):
zz = ret[1]
else:
zz = ret
zz = zz.reshape(xx.shape)
plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)
Explanation: Now let's have some fun. What we want to do is to study how the decision boundary of a
decision tree changes as we make it deeper and deeper.
For this, we will bring back the plot_decision_boundary function from Chapter 6,
Detecting Pedestrians with Support Vector Machines among others:
End of explanation
from sklearn.tree import DecisionTreeClassifier
plt.figure(figsize=(16, 8))
for depth in range(1, 9):
plt.subplot(2, 4, depth)
tree = DecisionTreeClassifier(max_depth=depth)
tree.fit(X, y)
plot_decision_boundary(tree, X_test, y_test)
plt.axis('off')
plt.title('depth = %d' % depth)
Explanation: Then we can code up a for loop, where at each iteration, we fit a tree of a different depth:
End of explanation
import cv2
rtree = cv2.ml.RTrees_create()
Explanation: As we continue to build deeper and deeper trees, we notice something strange: the deeper
the tree, the more likely it is to get strangely shaped decision regions, such as the tall and
skinny patches in the rightmost panel of the lower row. It's clear that these patches are more
a result of the noise in the data rather than some characteristic of the underlying data
distribution. This is an indication that most of the trees are overfitting the data. After all, we
know for a fact that the data is organized into two half circles! As such, the trees with
depth=3 or depth=5 are probably closest to the real data distribution.
There are at least two different ways to make a decision tree less powerful:
- Train the tree only on a subset of the data
- Train the tree only on a subset of the features
Random forests do just that. In addition, they repeat the experiment many times by
building an ensemble of trees, each of which is trained on a randomly chosen subset of data
samples and/or features.
Implementing our first random forest
In OpenCV, random forests can be built using the RTrees_create function from the ml
module:
End of explanation
n_trees = 10
eps = 0.01
criteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,
n_trees, eps)
rtree.setTermCriteria(criteria)
Explanation: The tree object provides a number of options, the most important of which are the
following:
- setMaxDepth: This sets the maximum possible depth of each tree in the ensemble. The actual obtained depth may be smaller if other termination criteria are met first.
- setMinSampleCount: This sets the minimum number of samples that a node can contain for it to get split.
- setMaxCategories: This sets the maximum number of categories allowed. Setting the number of categories to a smaller value than the actual number of classes in the data performs subset estimation.
- setTermCriteria: This sets the termination criteria of the algorithm. This is also where you set the number of trees in the forest.
We can specify the number of trees in the forest by passing an integer n_trees to the
setTermCriteria method. Here, we also want to tell the algorithm to quit once the score
does not increase by at least eps from one iteration to the next:
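For completeness, a hedged sketch of how the other setters named above could be applied before training (the specific values are illustrative assumptions, not tuned recommendations):
rtree.setMaxDepth(10)        # cap the depth of each tree in the ensemble
rtree.setMinSampleCount(2)   # smallest node that may still be split
rtree.setMaxCategories(2)    # this toy problem has two classes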
End of explanation
rtree.train(X_train.astype(np.float32), cv2.ml.ROW_SAMPLE, y_train);
Explanation: Then we are ready to train the classifier on the data from the preceding code:
End of explanation
_, y_hat = rtree.predict(X_test.astype(np.float32))
Explanation: The test labels can be predicted with the predict method:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_hat)
Explanation: Using scikit-learn's accuracy_score, we can evaluate the model on the test set:
End of explanation
plt.figure(figsize=(10, 6))
plot_decision_boundary(rtree, X_test, y_test)
Explanation: After training, we can pass the predicted labels to the plot_decision_boundary function:
End of explanation
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=10, random_state=200)
Explanation: Implementing a random forest with scikit-learn
Alternatively, we can implement random forests using scikit-learn:
End of explanation
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
Explanation: Here, we have a number of options to customize the ensemble:
- n_estimators: This specifies the number of trees in the forest.
- criterion: This specifies the node splitting criterion. Setting criterion='gini' implements the Gini impurity, whereas setting criterion='entropy' implements information gain.
- max_features: This specifies the number (or fraction) of features to consider at each node split.
- max_depth: This specifies the maximum depth of each tree.
- min_samples_split: This specifies the minimum number of samples required to split a node (a short illustrative sketch of these options follows below).
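As a hedged illustration of how these knobs might be combined (the values below are assumptions chosen for demonstration, not recommendations from the book):
forest_custom = RandomForestClassifier(
    n_estimators=100,       # more trees generally smooth the ensemble
    criterion='entropy',    # information gain instead of Gini impurity
    max_features='sqrt',    # number of features considered at each split
    max_depth=10,
    min_samples_split=4,
    random_state=200
)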
We can then fit the random forest to the data and score it like any other estimator:
End of explanation
plt.figure(figsize=(10, 6))
plot_decision_boundary(forest, X_test, y_test)
Explanation: This gives roughly the same result as in OpenCV. We can use our helper function to plot the
decision boundary:
End of explanation
from sklearn.ensemble import ExtraTreesClassifier
extra_tree = ExtraTreesClassifier(n_estimators=10, random_state=100)
Explanation: Implementing extremely randomized trees
Random forests are already pretty arbitrary. But what if we wanted to take the randomness
to its extreme?
In extremely randomized trees (see ExtraTreesClassifier and ExtraTreesRegressor
classes), the randomness is taken even further than in random forests. Remember how
decision trees usually choose a threshold for every feature so that the purity of the node
split is maximized. Extremely randomized trees, on the other hand, choose these thresholds
at random. The best one of these randomly-generated thresholds is then used as the
splitting rule.
We can build an extremely randomized tree as follows:
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data[:, [0, 2]]
y = iris.target
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=100
)
Explanation: To illustrate the difference between a single decision tree, a random forest, and extremely
randomized trees, let's consider a simple dataset, such as the Iris dataset:
End of explanation
extra_tree.fit(X_train, y_train)
extra_tree.score(X_test, y_test)
Explanation: We can then fit and score the tree object the same way we did before:
End of explanation
forest = RandomForestClassifier(n_estimators=10, random_state=100)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
Explanation: For comparison, using a random forest would have resulted in the same performance:
End of explanation
tree = DecisionTreeClassifier()
tree.fit(X_train, y_train)
tree.score(X_test, y_test)
Explanation: In fact, the same is true for a single tree:
End of explanation
classifiers = [
(1, 'decision tree', tree),
(2, 'random forest', forest),
(3, 'extremely randomized trees', extra_tree)
]
Explanation: So what's the difference between them?
To answer this question, we have to look at the
decision boundaries. Fortunately, we have already imported our
plot_decision_boundary helper function in the preceding section, so all we need to do is
pass the different classifier objects to it.
We will build a list of classifiers, where each entry in the list is a tuple that contains an
index, a name for the classifier, and the classifier object:
End of explanation
plt.figure(figsize=(17, 5))
for sp, name, model in classifiers:
plt.subplot(1, 3, sp)
plot_decision_boundary(model, X_test, y_test)
plt.title(name)
plt.axis('off')
Explanation: Then it's easy to pass the list of classifiers to our helper function such that the decision
landscape of every classifier is drawn in its own subplot:
End of explanation |
2,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial - Assemble the data on the wikitext dataset
Using Datasets, Pipeline, TfmdLists and Transform in text
In this tutorial, we explore the mid-level API for data collection in the text application. We will use the bases introduced in the pets tutorial so you should be familiar with Transform, Pipeline, TfmdLists and Datasets already.
Data
Step1: The dataset comes with the articles in two csv files, so we read it and concatenate them in one dataframe.
Step2: We could tokenize it based on spaces to compare (as is usually done) but here we'll use the standard fastai tokenizer.
Step3: Model | Python Code:
path = untar_data(URLs.WIKITEXT_TINY)
Explanation: Tutorial - Assemble the data on the wikitext dataset
Using Datasets, Pipeline, TfmdLists and Transform in text
In this tutorial, we explore the mid-level API for data collection in the text application. We will use the bases introduced in the pets tutorial so you should be familiar with Transform, Pipeline, TfmdLists and Datasets already.
Data
End of explanation
df_train = pd.read_csv(path/'train.csv', header=None)
df_valid = pd.read_csv(path/'test.csv', header=None)
df_all = pd.concat([df_train, df_valid])
df_all.head()
Explanation: The dataset comes with the articles in two csv files, so we read it and concatenate them in one dataframe.
End of explanation
splits = [list(range_of(df_train)), list(range(len(df_train), len(df_all)))]
tfms = [attrgetter("text"), Tokenizer.from_df(0), Numericalize()]
dsets = Datasets(df_all, [tfms], splits=splits, dl_type=LMDataLoader)
bs,sl = 104,72
dls = dsets.dataloaders(bs=bs, seq_len=sl)
dls.show_batch(max_n=3)
Explanation: We could tokenize it based on spaces to compare (as is usually done) but here we'll use the standard fastai tokenizer.
End of explanation
config = awd_lstm_lm_config.copy()
config.update({'input_p': 0.6, 'output_p': 0.4, 'weight_p': 0.5, 'embed_p': 0.1, 'hidden_p': 0.2})
model = get_language_model(AWD_LSTM, len(dls.vocab), config=config)
opt_func = partial(Adam, wd=0.1, eps=1e-7)
cbs = [MixedPrecision(), GradientClip(0.1)] + rnn_cbs(alpha=2, beta=1)
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), opt_func=opt_func, cbs=cbs, metrics=[accuracy, Perplexity()])
learn.fit_one_cycle(1, 5e-3, moms=(0.8,0.7,0.8), div=10)
#learn.fit_one_cycle(90, 5e-3, moms=(0.8,0.7,0.8), div=10)
Explanation: Model
End of explanation |
2,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 2
Step1: Table of Contents
1.- Useful Magics
2.- Basic NumPy Operations
3.- Internals of NumPy
4.- Efficient Programming in NumPy
5.- Vectorized Algorithms in NumPy
6.- Bonus
<div id='magics' />
Useful magic built-in functions for time-profiling
time
Step2: timeit
Step3: prun
Step4: <div id='basic_numpy' />
Basic NumPy operations
The reasons of why you should use NumPy instead of any other iterable object_ in Python are
Step5: Basic Mathematical operations
Most of the operations performed in NumPy are handled element-wise, i.e, computing C = A + B will translates into $C[i,j] = A[i,j] + B[i,j]$. (The exception is broadcasting, and will be explained soon).
Below is a list with the most common used mathematical operations. For a comprehensive list se here
Step6: Boolean operations
Comparisons in NumPy work exaclty the same way as mathematical operations, i.e, element wise!. Let's see some examples
Step7: <div id='internals' />
Internals of NumPy
The numpy.ndarray structure
The ndarray is the NumPy object that let us create $N$-dimensional arrays. It is essentially defined by
Step8: How an ndarray is stored in memory?
When there is more than one dimension, there are two ways of storing the elements in the memory block
Step9: An interesting result
Step10: Why Python is so slow?
Python is Dynamically Typed rather than Statically Typed. What this means is that at the time the program executes, the interpreter doesn't know the type of the variables that are defined. <img src='data/cint_vs_pyint.png' style="width
Step12: As you can see, some interesting things have happened
Step13: Array computations can involve in-place operations (first example below
Step14: Be sure to choose the type of operation you actually need. Implicit-copy operations are slower!
Step15: Efficient memory access
We have basically three alternatives to access arrays without loops
Step16: Array slices are implemented as memory views, i.e, refer to the original data buffer of an array, but with different offsets, shapes and strides.
Array views should be used whenever possible, but one needs to be careful about the fact that views refer to the original data buffer.
Fancy indexing is several orders of magnitude slower as it involves copying a large array.
Another useful indexing technique is the mask of booleans. Lets suppose we want to get all the elements on array with value less than 0.5
Step17: Broadcasting
There is no need to always reshape arrays before operate on them. This useful feature implemented in NumPy arrays is called Broadcasting rules. In the visualization below, the extra memory indicated by the dotted boxes is never allocated, but it can be convenient to think about the operations as if it is.
<img src='data/broadcasting.png' style="width
Step18: <div id='vectorized' />
Vectorized Algorithms with NumPy
Vectorization
Step19: <div id='bonus' />
Bonus | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
Explanation: <h1 align="center">Scientific Programming in Python</h1>
<h2 align="center">Topic 2: NumPy and Efficient Numerical Programming</h2>
Notebook created by Martín Villanueva - [email protected] - DI UTFSM - April 2017.
End of explanation
%time {1+2 for i in range(10000)}
Explanation: Table of Contents
1.- Useful Magics
2.- Basic NumPy Operations
3.- Internals of NumPy
4.- Efficient Programming in NumPy
5.- Vectorized Algorithms in NumPy
6.- Bonus
<div id='magics' />
Useful magic built-in functions for time-profiling
time: See how long it takes a code to run.
End of explanation
%timeit [1+2 for i in range(10000)]
%timeit -n 100 [1+2 for i in range(10000)]
%%timeit
for i in range(10000):
1+2
Explanation: timeit:
See how long a code takes to run averaged over multiple runs.
It will limit the number of runs depending on how long the script takes to execute.
Provide an accurate time calculation by reducing the impact of startup or shutdown costs on the time calculation by executing the code repeatedly.
End of explanation
from time import sleep
def foo(): sleep(1)
def bar(): sleep(2)
def baz(): foo(),bar()
%prun baz()
Explanation: prun: See how long it took each function in a script to run.
End of explanation
# Arrays of zeros: np.zeros(shape)
print("Zeros:")
print( np.zeros((3,3)) )
# Arrays of ones: np.ones(shape)
print("\nOnes:")
print( np.ones((3,3)) )
# Empty array: np.empty(shape)
print("\nEmpty:")
print( np.empty((3,3)) )
# Range of values: np.range(start, stop, step)
print("\nRange:")
print( np.arange(0., 10., 1.) )
# Regular grid: np.linspace(start, end, n_values)
print("\nRegular grid:")
print( np.linspace(0., 1., 9) )
# Random secuences: np.random
print("\nRandom sequences:")
print( np.random.uniform(10, size=6) )
# Array constructor: np.array( python_iterable )
print("\nArray constructor")
print( np.array([2, 3, 5, 10, -1]) )
print( 10*np.random.random((5,5)) )
Explanation: <div id='basic_numpy' />
Basic NumPy operations
The reasons of why you should use NumPy instead of any other iterable object_ in Python are:
* NumPy provides an ndarray structure for storing numerical data __in a contiguous way.
* Also implements fast mathematical operations on ndarrays, that exploit this contiguity.
* Brevity of the syntax for array operations. A language like C or Java would require us to write a loop for a matrix operation as simple as C=A+B.
Creating Arrays
There are several NumPy functions for creating common types of arrays. Below is a list of the most common used:
End of explanation
# first we create two random arrays:
A = np.random.random((5,5))
B = np.random.random((5,5))
# sum
print("Sum:")
print( A+B )
# subtraction
print("\nSubtraction")
print( A-B )
# product
print("\nProduct")
print( A*B )
# matricial product
print("\nMatricial Product")
print( np.dot(A,B) )
# power
print("\n Power")
print( A**2 )
# Some common mathematical functions
print("\n np.exp()")
print( np.exp(A) )
print("\n np.sin()")
print( np.sin(A) )
print("\n np.cos()")
print( np.cos(A))
print("\n np.tan()")
print( np.tan(A) )
Explanation: Basic Mathematical operations
Most of the operations performed in NumPy are handled element-wise, i.e, computing C = A + B will translates into $C[i,j] = A[i,j] + B[i,j]$. (The exception is broadcasting, and will be explained soon).
Below is a list with the most common used mathematical operations. For a comprehensive list se here: NumPy mathematical functions.
End of explanation
# Creating two 2d-arrays
A = np.array( [[1, 2, 3], [2, 3, 5], [1, 9, 6]] )
B = np.array( [[1, 2, 3], [3, 5, 5], [0, 8, 5]] )
print("A > B:")
print( A > B )
print("\nA =< B:")
print( A <= B )
print("\n A==B:")
print( A==B )
print("\n A!=B:")
print( A!=B )
# Creating two 2d boolean arrays
C = A==B
D = A>=B
print("\n A and B:")
print( C & D)
print( np.logical_and(C,D) )
print("\n A or B:")
print( C | D)
print( np.logical_or(C,D) )
print("\n not A:")
print( ~C )
print( np.logical_not(C))
Explanation: Boolean operations
Comparisons in NumPy work exaclty the same way as mathematical operations, i.e, element wise!. Let's see some examples:
End of explanation
# Lets create a random array
A = np.random.random((5,5))
print("Dims: ")
print(A.ndim)
print("\nShape: ")
print(A.shape)
print("\nStrides: ")
print(A.strides)
print("\nData type: ")
print(A.dtype)
Explanation: <div id='internals' />
Internals of NumPy
The numpy.ndarray structure
The ndarray is the NumPy object that let us create $N$-dimensional arrays. It is essentially defined by:
1. A number of dimensions
2. a shape
3. strides
4. data type or dtpe
5. The data buffer.
<img src='data/ndarray.png' style="width: 500px;">
End of explanation
# Lets create a random array in C-order
A = np.random.random((5,2))
print("C strides:")
print(A.strides)
# Lets create a random array in F-order
B = np.asfortranarray(A)
print("\nF strides:")
print(B.strides)
Explanation: How an ndarray is stored in memory?
When there is more than one dimension, there are two ways of storing the elements in the memory block:
1. Elements can be stored in row-major order (also known as C-order) or,
2. In column-major order (also known as Fortran-order).
<img src='data/ndarray_storage.png' style="width: 800px;">
What are the strides?
NumPy uses the notion of strides to convert between a multidimensional index and the memory location of the underlying (1D) sequence of elements.
For example, the mapping between array[i,j] and the corresponding byte address is:
* offset = array.strides[0] * i1 + array.strides[1] * i2
* address = base + offset
where base is the address of the first byte (array[0,0]).
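A minimal sketch to sanity-check this mapping (for a C-contiguous array, the byte offset divided by the item size gives the position in the flattened buffer):
A = np.arange(12, dtype=np.int64).reshape(3, 4)    # C-order by default
i, j = 1, 2
offset = A.strides[0] * i + A.strides[1] * j       # byte offset from the base address
print(A.ravel()[offset // A.itemsize] == A[i, j])  # True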
End of explanation
X = np.random.random((5000,5000))
%timeit X[0,:].sum()
%timeit X[:,0].sum()
Explanation: An interesting result: summing a row is much faster than summing a column here because, in the default C (row-major) order, the elements of a row sit next to each other in memory and are pulled into the CPU cache together.
End of explanation
#Python Lists implementation
def norm_square_list(vector):
norm = 0
for v in vector:
norm += v*v
return norm
#Naive NumPy implementation
def norm_square_array(vector):
norm = 0
for v in vector:
norm += v*v
return norm
#Vectorized NumPy implementation
def norm_square_numpy(vector):
return np.sum(vector * vector)
#Clever NumPy implementation
def norm_square_dot(vector):
return np.dot(vector, vector)
#Vector to use - dimension 10^6
vector = range(1000000)
npvector = np.array(vector)
#Timing the list implementation
%timeit norm_square_list(vector)
#Timing the naive array implementation
%timeit norm_square_array(npvector)
#Timing the NumPy-vectorized implementation
%timeit norm_square_numpy(npvector)
#Timing the clever NumPy-vectorized implementation
%timeit norm_square_dot(npvector)
Explanation: Why Python is so slow?
Python is Dynamically Typed rather than Statically Typed. What this means is that at the time the program executes, the interpreter doesn't know the type of the variables that are defined. <img src='data/cint_vs_pyint.png' style="width: 300px;">
Python is interpreted rather than compiled. A smart compiler can look ahead and optimize for repeated or unneeded operations, which can result in speed-ups.
Python's object model can lead to inefficient memory access. A NumPy array in its simplest form is a Python object build around a C array. That is, it has a pointer to a contiguous data buffer of values. A Python list, on the other hand, has a pointer to a contiguous buffer of pointers, each of which points to a Python object which in turn has references to its data (in this case, integers). <img src='data/array_vs_list.png' style="width: 500px;">
Why Numpy is so fast?
Computations follow the Single Instruction Multiple Data (SIMD) paradigm. So that NumPy can take advantage of vectorized instructions on modern CPUs, like Intel's SSE and AVX, AMD's XOP.
<img src='data/devectorized.png' style="width: 350px;", caption='asdf'>
<img src='data/vectorized.png' style="width: 350px;">
A NumPy array is described by metadata (number of dimensions, shape, data type, strides, and so on) and the data (which is stored in a homogeneous and contiguous blocks of memory).
Array computations can be written very efficiently in a low-level language like C (and a large part of NumPy is actually written in C). Aditionally many internal methods and functions are linked to highly optimized linear algebra libraries like BLAS (Basic Linear Algebra Subprograms) and LAPACK (Linear Algebra PACKage).
Spatial locality in memory access patterns results in significant performance gains, notably thanks to the CPU cache. Indeed, the cache loads bytes in chunks from RAM to the CPU registers.
Lets see the problem...
Consider the problem of calculating the squared norm of a vector ($\displaystyle || \mathbf{v} ||^2 = \mathbf{v} \cdot \mathbf{v}$) with the following 4 implementations:
End of explanation
def id(x):
This function returns the memory
block address of an array.
return x.__array_interface__['data'][0]
Explanation: As you can see, some interesting things have happened:
The naive NumPy implementation, which iterates over data is actually slower than simply using a list. This is because the array stores a very low-level representation of the numbers it stores , and this must be converted into Python-compatible objects before being returned to user, causing this extra overhead each time you index an array.
norm_square_numpy is slower than the clever NumPy implementation because two reasons:
There is time spend in allocating memory for storing the temporary result (vector x vector) and
This creates two implied loops, one to do the multiplication and one to do the sum.
The clever implementation uses np.dot() NumPy function which has no need to store intermediate results, and iterates just one time (but at C speed).
<div id='efficient' />
Eficient programming with NumPy
In-place and implicit copy operations
Prefer in-place over implicit-copy operations whenever possible. This will save memory (less work to garbage collector) and performs faster.
End of explanation
a = np.zeros(10); aid = id(a)
# in-place operation
a *= 2; id(a) == aid
# implicit-copy operation
a = a * 2; id(a) == aid
Explanation: Array computations can involve in-place operations (first example below: the array is modified) or implicit-copy operations (second example: a new array is created).
End of explanation
%%timeit
a = np.ones(100000000)
a *= 2
%%timeit
a = np.ones(100000000)
b = a * 2
Explanation: Be sure to choose the type of operation you actually need. Implicit-copy operations are slower!
End of explanation
m, n = 1000000, 100
a = np.random.random_sample((m, n))
index = np.arange(0, m, 10)
#fancy indexing - indexing with lists
%timeit a[index,:]
#memory slice - memory views
%timeit a[::10]
Explanation: Efficient memory access
We have basically three alternatives to access arrays without loops:
array slicing
boolean masks
fancy indexing.
Note. If you find yourself looping over indices to select items on which operation is performed, it can probably be done more efficiently with one of these techniques!
End of explanation
def naive_indexing(vect):
ret = list()
for val in vect:
if val < 0.5: ret.append(val)
return np.array(ret)
#data to occupy and mask of booleans
vect = np.random.random_sample(1000000)
mask = vect < 0.5
mask
#naive indexing
%timeit naive_indexing(vect)
#mask indexing
%timeit vect[mask]
#'improved' mask indexing
%timeit np.compress(mask, vect)
Explanation: Array slices are implemented as memory views, i.e, refer to the original data buffer of an array, but with different offsets, shapes and strides.
Array views should be used whenever possible, but one needs to be careful about the fact that views refer to the original data buffer.
Fancy indexing is several orders of magnitude slower as it involves copying a large array.
Another useful indexing technique is the mask of booleans. Lets suppose we want to get all the elements on array with value less than 0.5
End of explanation
# array([0,1,2]) + 5
np.arange(3) + 5
# array([[1, 1 ,1], [1, 1, 1], [1, 1, 1]]) + array([0, 1, 2])
np.ones((3,3)) + np.arange(3)
# array([[0], [1], [2]]) + array([0, 1 ,2])
np.arange(3).reshape((3,1)) + np.arange(3)
Explanation: Broadcasting
There is no need to always reshape arrays before operate on them. This useful feature implemented in NumPy arrays is called Broadcasting rules. In the visualization below, the extra memory indicated by the dotted boxes is never allocated, but it can be convenient to think about the operations as if it is.
<img src='data/broadcasting.png' style="width: 600px;">
How it works: The two arrays to be operated must match in at least one dimension. Then the array with less dimensions will logically extended to match the dimensions of the other
End of explanation
def naive_orth(Q,v):
m,n = Q.shape
for j in range(n):
v -= np.dot(Q[:,j],v)*Q[:,j]
return v
def vectorized_orth(Q,v):
proy = np.dot(Q.transpose(),v)
# v -= (proy*Q).sum(axis=1)
v -= np.dot(Q,proy)
return v
# Let's generate a random unitary matrix
# Q unitary matrix, dimensions 100000 x1000
m,n = 10000,100
A = 10 * np.random.random((m,n))
Q,R = np.linalg.qr(A, mode='reduced')
del R
# v will be the starting vector for orthogonalization
v = np.random.random(m)
v1 = v.copy()
v2 = v.copy()
%timeit naive_orth(Q,v1)
%timeit vectorized_orth(Q,v2)
Explanation: <div id='vectorized' />
Vectorized Algorithms with NumPy
Vectorization: The process of converting a scalar implementation, which processes a single pair of operands at a time, to a vector implementation, which processes one operation on multiple pairs of operands at once.
Example: Gram-Schmidt Orthogonalization
The problem: Given a matrix $Q_{m\times n}$ with ($m>n$), find a vector $v$ orthogonal to the column space of $Q$.
<img src='data/orthogonalization.jpg' style="width: 400px;">
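As a quick sanity check (a minimal sketch, assuming Q and v from the timing cells are still in scope), the projections of the orthogonalized vector onto the columns of Q should be negligible:
v_orth = vectorized_orth(Q, v.copy())
print(np.linalg.norm(np.dot(Q.T, v_orth)))   # expected to be near machine precision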
End of explanation
A = np.random.random((100,100))
b = np.random.random(100)
c = np.random.random(100)
### Matrix power ###
np.linalg.matrix_power(A,3)
### Cholesky decomposition ###
#np.linalg.cholesky(A) #A must be positive definite
### QR decomposition ###
np.linalg.qr(A, mode='reduced')
### SVD decomposition ###
np.linalg.svd(A, full_matrices=False)
### Eigenvectors ###
np.linalg.eig(A)
### Eigevalues ###
np.linalg.eigvals(A)
### Matrix or vector norm ###
np.linalg.norm(A, ord='fro')
### Condition number ###
np.linalg.cond(A, p=np.inf)
### Determinant ###
np.linalg.det(A)
### Linear solver Ax=b ###
np.linalg.solve(A,b)
### Least Squares Ax=b (over-determined) ###
np.linalg.lstsq(A,b)
### Inverse ###
np.linalg.inv(A)
### Pseudo-Inverse ###
np.linalg.pinv(A)
### and many more...
del A,b,c
Explanation: <div id='bonus' />
Bonus: Useful libraries based in NumPy
numpy.linalg (Numpy's submodule)
End of explanation |
2,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Purpose
Step2: Input
Step3: Workflow
Tokenization to break text into units e.g. words, phrases, or symbols
Stop word removal to get rid of common words
e.g. this, a, is
Step4: About stemmers and lemmatisation
Stemming to reduce a word to its roots
e.g. having => hav
Lemmatisation to determine a word's lemma/canonical form
e.g. having => have
English Stemmers and Lemmatizers
For stemming English words with NLTK, you can choose between the PorterStemmer or the LancasterStemmer. The Porter Stemming Algorithm is the oldest stemming algorithm supported in NLTK, originally published in 1979. The Lancaster Stemming Algorithm is much newer, published in 1990, and can be more aggressive than the Porter stemming algorithm.
The WordNet Lemmatizer uses the WordNet Database to lookup lemmas. Lemmas differ from stems in that a lemma is a canonical form of the word, while a stem may not be a real word.
Resources
Step5: Count & POS tag of each stemmed/non-stop word
meaning of POS tags
Step6: Proportion of POS tags | Python Code:
import pandas as pd
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
from collections import Counter
Explanation: Purpose: To experiment with Python's Natural Language Toolkit.
NLTK is a leading platform for building Python programs to work with human language data
End of explanation
bloboftext =
This little piggy went to market,
This little piggy stayed home,
This little piggy had roast beef,
This little piggy had none,
And this little piggy went wee wee wee all the way home.
Explanation: Input
End of explanation
## Tokenization
bagofwords = nltk.word_tokenize(bloboftext.lower())
print len(bagofwords)
## Stop word removal
stop = stopwords.words('english')
bagofwords = [i for i in bagofwords if i not in stop]
print len(bagofwords)
Explanation: Workflow
Tokenization to break text into units e.g. words, phrases, or symbols
Stop word removal to get rid of common words
e.g. this, a, is
End of explanation
snowball_stemmer = SnowballStemmer("english")
## What words was stemmed?
_original = set(bagofwords)
_stemmed = set([snowball_stemmer.stem(i) for i in bagofwords])
print 'BEFORE:\t%s' % ', '.join(map(lambda x:'"%s"'%x, _original-_stemmed))
print ' AFTER:\t%s' % ', '.join(map(lambda x:'"%s"'%x, _stemmed-_original))
del _original, _stemmed
## Proceed with stemming
bagofwords = [snowball_stemmer.stem(i) for i in bagofwords]
Explanation: About stemmers and lemmatisation
Stemming to reduce a word to its roots
e.g. having => hav
Lemmatisation to determine a word's lemma/canonical form
e.g. having => have
English Stemmers and Lemmatizers
For stemming English words with NLTK, you can choose between the PorterStemmer or the LancasterStemmer. The Porter Stemming Algorithm is the oldest stemming algorithm supported in NLTK, originally published in 1979. The Lancaster Stemming Algorithm is much newer, published in 1990, and can be more aggressive than the Porter stemming algorithm.
The WordNet Lemmatizer uses the WordNet Database to lookup lemmas. Lemmas differ from stems in that a lemma is a canonical form of the word, while a stem may not be a real word.
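For completeness, a minimal lemmatisation sketch (assuming the WordNet corpus has already been fetched, e.g. with nltk.download('wordnet')):
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
wnl.lemmatize('having', pos='v')   # -> 'have' (a real lemma), versus the stem 'hav'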
Resources:
PorterStemmer or the SnowballStemmer (Snowball == Porter2)
Stemming and Lemmatization
What are the major differences and benefits of Porter and Lancaster Stemming algorithms?
End of explanation
for token, count in Counter(bagofwords).most_common():
print '%d\t%s\t%s' % (count, nltk.pos_tag([token])[0][1], token)
Explanation: Count & POS tag of each stemmed/non-stop word
meaning of POS tags: Penn Part of Speech Tags
NN Noun, singular or mass
VBD Verb, past tense
End of explanation
record = {}
for token, count in Counter(bagofwords).most_common():
postag = nltk.pos_tag([token])[0][1]
if record.has_key(postag):
record[postag] += count
else:
record[postag] = count
recordpd = pd.DataFrame.from_dict([record]).T
recordpd.columns = ['count']
N = sum(recordpd['count'])
recordpd['percent'] = recordpd['count']/N*100
recordpd
Explanation: Proportion of POS tags
End of explanation |
2,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
class Node(object):
def __init__(self, data):
# TODO: Implement me
pass
def insert(root, data):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement a binary search tree with an insert method.
Constraints
Test Cases
Algorithm
Code
Unit Test
Constraints
Can we assume we are working with valid integers?
Yes
Can we assume all left descendents <= n < all right descendents?
Yes
For simplicity, can we use just a Node class without a wrapper Tree class?
Yes
Do we have to keep track of the parent nodes?
This is optional
Test Cases
Insert
Insert will be tested through the following traversal:
In-Order Traversal (Provided)
5, 2, 8, 1, 3 -> 1, 2, 3, 5, 8
1, 2, 3, 4, 5 -> 1, 2, 3, 4, 5
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
%run dfs.py
%run ../utils/results.py
# %load test_bst.py
from nose.tools import assert_equal
class TestTree(object):
def __init__(self):
self.results = Results()
def test_tree(self):
node = Node(5)
insert(node, 2)
insert(node, 8)
insert(node, 1)
insert(node, 3)
in_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), '[1, 2, 3, 5, 8]')
self.results.clear_results()
node = Node(1)
insert(node, 2)
insert(node, 3)
insert(node, 4)
insert(node, 5)
in_order_traversal(node, self.results.add_result)
assert_equal(str(self.results), '[1, 2, 3, 4, 5]')
print('Success: test_tree')
def main():
test = TestTree()
test.test_tree()
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
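For readers who want to check their work afterwards, one possible reference sketch of the insert method is shown here (an illustration only, kept separate from the challenge stub above so the exercise itself stays intact; it is not the official solution notebook):
class Node(object):
    def __init__(self, data):
        self.data = data
        self.left = None
        self.right = None

def insert(root, data):
    # Keep all left descendents <= node < all right descendents.
    if data <= root.data:
        if root.left is None:
            root.left = Node(data)
        else:
            insert(root.left, data)
    else:
        if root.right is None:
            root.right = Node(data)
        else:
            insert(root.right, data)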
2,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apriori 算法
- (1) 把各项目放到只包含自己的项集中,生成最初的频繁项集。只使用达到最小支持度的项目。
- (2) 查找现有频繁项集的超集,发现新的频繁项集,并用其生成新的备选项集。
- (3) 测试新生成的备选项集的频繁程度,如果不够频繁,则舍弃。如果没有新的频繁项集,就跳到最后一步。
- (4) 存储新发现的频繁项集,跳到步骤(2)。
- (5) 返回发现的所有频繁项集。
Step1: 1 我们把发现的频繁项集保存到以项集长度为键的字典中,便于根据长度查找,这样就可以找到最新发现的频繁项集。初始化一个字典。
Step2: 2 用一个函数来实现步骤(2)和(3),它接收新发现的频繁项集,创建超集,检测频繁程度
Step3: 抽取关联规则
如果用户喜欢前提中的所有电影,那么他们也会喜欢结论中的电影。 | Python Code:
frequent_itemsets = {}
min_support = 50
Explanation: The Apriori algorithm
- (1) Put each item into an itemset containing only itself, generating the initial frequent itemsets. Only items that reach the minimum support are used.
- (2) Look for supersets of the existing frequent itemsets to discover new frequent itemsets, and use them to generate new candidate itemsets.
- (3) Test how frequent the newly generated candidate itemsets are; discard any that are not frequent enough. If there are no new frequent itemsets, jump to the last step.
- (4) Store the newly discovered frequent itemsets and jump back to step (2).
- (5) Return all of the frequent itemsets that were discovered.
End of explanation
frequent_itemsets[1] = dict((frozenset((movie_id,)),row['Favorable'])
for movie_id,row in num_favorable_by_movie.iterrows()
if row['Favorable'] > min_support)
frequent_itemsets[1]
Explanation: 1 We store the discovered frequent itemsets in a dictionary keyed by itemset length, which makes it easy to look them up by length and to find the most recently discovered frequent itemsets. Initialize the dictionary.
End of explanation
from collections import defaultdict
def find_frequent_itemsets(favorable_reviews_by_users,k_1_itemsets,min_support):
counts = defaultdict(int)
# user id and the set of movies that user reviewed favourably
for user, reviews in favorable_reviews_by_users.items():
# Loop over the previously found itemsets and check whether each is a subset of the current user's reviews. If it is, the user has favourably reviewed every movie in that itemset.
for itemset in k_1_itemsets:
if not itemset.issubset(reviews):
continue
# Loop over the movies the user reviewed that are not in the itemset, use each one to build a superset, and update that superset's count.
for other_reviewed_movie in reviews - itemset:
# itemset | one movie the user reviewed that is not in the itemset
# the "people who like these also like that" principle
current_superset = itemset | frozenset((other_reviewed_movie,))
counts[current_superset] += 1 # one more user likes this whole itemset
# Finally, keep only the itemsets that reach the minimum support and return them as the new frequent itemsets.
return dict([(itemset, frequency) for itemset, frequency in counts.items() if frequency >= min_support])
import sys
for k in range(2, 20):
cur_frequent_itemsets = find_frequent_itemsets(favorable_reviews_by_users,frequent_itemsets[k-1],min_support)
if len(cur_frequent_itemsets) == 0:
print("Did not find any frequent itemsets of length {}".format(k))
sys.stdout.flush()
break
else:
print("I found {} frequent itemsets of length {}".format(len(cur_frequent_itemsets), k))
sys.stdout.flush()
frequent_itemsets[k] = cur_frequent_itemsets
#del frequent_itemsets[1]
frequent_itemsets.keys()
frequent_itemsets[10]
#frequent_itemsets[8]
Explanation: 2 Use a function to implement steps (2) and (3): it takes the newly discovered frequent itemsets, builds their supersets, and checks how frequent they are.
End of explanation
candidate_rules = []
for itemsets_length, itemset_counts in frequent_itemsets.items():
for itemset in itemset_counts.keys():
for conclusion in itemset:
premise = itemset - set((conclusion,))
candidate_rules.append((premise, conclusion))
print(candidate_rules[:5])
correct_counts = defaultdict(int)
incorrect_counts = defaultdict(int)
# Loop over every user and their favourable reviews, and for each user loop over every candidate rule
for user, reviews in favorable_reviews_by_users.items():
for candidate_rule in candidate_rules:
premise, conclusion = candidate_rule
if not premise.issubset(reviews): # premise not satisfied by this user
continue
if conclusion in reviews:
correct_counts[candidate_rule] += 1
else:
incorrect_counts[candidate_rule] += 1
rule_confidence = {candidate_rule: correct_counts[candidate_rule]/ float(correct_counts[candidate_rule] +incorrect_counts[candidate_rule]) for candidate_rule in candidate_rules}
# Sort the confidence dictionary and print the five rules with the highest confidence
from operator import itemgetter
sorted_confidence = sorted(rule_confidence.items(),key=itemgetter(1), reverse=True)
print(sorted_confidence[0])
for index in range(5):
print(index)
print("Rule #{0}".format(index + 1))
(premise, conclusion) = sorted_confidence[index][0]
print("Rule: If a person recommends {0} they will alsorecommend {1}".format(premise, conclusion))
print(" - Confidence:{0:.3f}".format(rule_confidence[(premise, conclusion)]))
print("")
# Load the movie metadata
movie_filename = os.getcwd()+"/data//ml-100k/u.item"
movie_data = pd.read_csv(movie_filename,delimiter="|",header=None,encoding="mac-roman")
movie_data.columns = ["MovieID", "Title", "Release Date",
"Video Release", "IMDB", "<UNK>", "Action", "Adventure",
"Animation", "Children's", "Comedy", "Crime", "Documentary",
"Drama", "Fantasy", "Film-Noir",
"Horror", "Musical", "Mystery", "Romance", "Sci-Fi", "Thriller",
"War", "Western"]
movie_data.ix[0:2]
def get_movie_name(movie_id):
title_obj = movie_data[movie_data['MovieID']==movie_id]['Title']
return title_obj.values[0]
for index in range(5):
print("Rule #{0}".format(index + 1))
(premise, conclusion) = sorted_confidence[index][0]
premise_name = ",".join(get_movie_name(idx) for idx in premise)
conclusion_name = get_movie_name(conclusion)
print("Rule: If a person recommends {0} they will alsorecommend {1}".format(premise_name, premise_name))
print(" - Confidence:{0:.3f}".format(rule_confidence[(premise, conclusion)]))
print("")
Explanation: Extracting association rules
If a user likes all of the movies in the premise, they will also like the movie in the conclusion.
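Concretely, the value stored in the rule_confidence dictionary above is
confidence(premise => conclusion) = correct_counts / (correct_counts + incorrect_counts),
i.e. the fraction of users who favourably reviewed every movie in the premise and also favourably reviewed the conclusion movie.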
End of explanation |
2,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature selection
Step1: Our first step is to count up all of the words in each of the documents. This conditional frequency distribution should look familiar by now. | Python Code:
documents = nltk.corpus.PlaintextCorpusReader('../data/EmbryoProjectTexts/files', 'https.+')
metadata = zotero.read('../data/EmbryoProjectTexts', index_by='link', follow_links=False)
Explanation: Feature selection: keywords
A major problem-area in text mining is determining the thematic or topical content of texts. One of the most basic problems in this area is to identify the terms in a text -- "keywords" -- that most accurately represents its distinctive thematic characteristics.
In this notebook, we will use Dunning's log-likelihood statistic to identify keywords for individual documents in a collection of texts. It is fairly typical that methods used for statistical analysis are also used for information extraction and classification.
We'll use the Embryo Project corpus from earlier notebooks. Recall that the plain text documents are stored separately from their metadata -- this is the format that you would expect from a Zotero RDF export.
End of explanation
wordcounts_per_document = nltk.ConditionalFreqDist([
(fileid, normalize_token(token))
for fileid in documents.fileids()
for token in documents.words(fileids=[fileid])
if filter_token(token)
])
from scipy import sparse
# We pick a single "focal" document that we want to characterize.
focal_fileid = documents.fileids()[3]
# Since this procedure will involve numerical matrices, we
# need to map documents and words onto row and column indices.
# These "dictionaries" will help us to keep track of those
# mappings.
document_index = {} # Maps int -> fileid (str).
vocabulary = {} # Maps int -> word (str).
lookup = {} # Reverse map for vocabulary (word (str) -> int).
# Containers for sparse data.
I = [] # Document vector.
J = [] # Word vector.
data = [] # Word count vector.
labels = [] # Vector of labels; either the URI of interest, or "Other".
# Here we transform the ConditionalFrequencyDist into three vectors (I, J, data)
# that sparsely describe the document-word count matrix.
for i, (fileid, counts) in enumerate(wordcounts_per_document.iteritems()):
document_index[i] = fileid
for token, count in counts.iteritems():
# Removing low-frequency terms is optional, but speeds things up
# quite a bit for this demonstration.
if count < 3:
continue
# get() lets us re-use an existing column index for a token, or fall back to a fresh index.
j = lookup.get(token, len(vocabulary))
vocabulary[j] = token
lookup[token] = j
I.append(i)
J.append(j)
data.append(count)
labels.append(fileid if fileid == focal_fileid else 'Other')
print('\r', i, end='')
sparse_matrix = sparse.coo_matrix((data, (I, J)))
sparse_matrix.shape
from sklearn.feature_selection import chi2
from sklearn.feature_extraction.text import CountVectorizer
keyness, _ = chi2(sparse_matrix, labels)
ranking = np.argsort(keyness)[::-1]
_, words = zip(*sorted(vocabulary.items(), key=lambda i: i[0]))
words = np.array(words)
keywords = words[ranking]
list(zip(keywords[:20], keyness[ranking][:20]))
def extract_keywords(fileid, n=20):
print('\r', fileid, end='')
document_index = {} # Maps int -> fileid (str).
vocabulary = {} # Maps int -> word (str).
lookup = {} # Reverse map for vocabulary (word (str) -> int).
I = []
J = []
data = []
labels = []
for i, (key, counts) in enumerate(wordcounts_per_document.items()):
document_index[i] = key
for token, count in counts.items():
if count < 3:
continue
j = lookup.get(token, len(vocabulary))
vocabulary[j] = token
lookup[token] = j
I.append(i)
J.append(j)
data.append(count)
labels.append(key if key == fileid else 'Other')
sparse_matrix = sparse.coo_matrix((data, (I, J)))
keyness, _ = chi2(sparse_matrix, labels)
ranking = np.argsort(keyness)[::-1]
_, words = zip(*sorted(vocabulary.items(), key=lambda i: i[0]))
words = np.array(words)
keywords = words[ranking]
return keywords[:n]
keywords = [extract_keywords(fileid) for fileid in documents.fileids()]
Explanation: Our first step is to count up all of the words in each of the documents. This conditional frequency distribution should look familiar by now.
End of explanation |
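For readers who have not seen it before, here is a tiny, self-contained illustration of what a ConditionalFreqDist holds (toy document/token pairs, not the real corpus):
import nltk

toy_pairs = [('doc1', 'cell'), ('doc1', 'cell'), ('doc1', 'embryo'), ('doc2', 'embryo')]
toy_cfd = nltk.ConditionalFreqDist(toy_pairs)
print(toy_cfd['doc1']['cell'])        # 2 -- counts are kept per (document, token) pair
print(sorted(toy_cfd.conditions()))   # ['doc1', 'doc2']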
2,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Image Processing
Image processing is a very useful tool for scientists in the lab, and for everyday uses as well. For example, an astronomer may use image processing to help find and recognize stars, or a self-driving car may use it to stay in the correct lane.
This lecture will teach you some of the fundamental tools of image processing so that you can use it in your own studies/applications.
Fundamentals
What is an image, really?
Figure 1
Step1: We can check if we got this right by having the computer make the image. To do this, we will use matplotlib's "plt.imshow" function, which takes in matrices and turns them into images.
Step2: Now, we have gotten a figure that looks like the correct board, but the colors are wrong. To fix this, we need to tell the computer that we want to see the image in grayscale. We do this by inserting a "cmap" into the plt.imshow function. "cmap" stands for "color map" and in this case, we will set it to grayscale
Step3: If the output of the code above does not match the checkerboard in the exercise, try again or ask for help. Notice how the order of the numbers is important to the final image.
Exercise 2
Step4: Recall that the order of the numbers doesn't matter for scatter plots. To see this, go back to the previous code block, and switch the order of the exercise2 list. Even though you change the order, the image produced doesn't change.
Image Arithmetic (Introduction)
Now that we have covered the fundamentals, we can start diving into image manipulations, starting with slices and arithmetic.
There are many tools we can use to learn about image processing. Today, we will be using Scikit-Image, a free to use and easy to learn python library, and we'll start by looking at an image of an insect.
Step5: Remember that, since we are working with images, the information is stored inside of a matrix. Lets take a look at that matrix.
Step6: The first thing we notice is that this image is no longer just 1's and 0's. The numbers go from 0 to 255, and recalling that 0 = absence of light, we take 0=black and 255=white. All of the numbers between 0 and 255 are various shades of gray.
Next, we can see that there are ellipses inside of the matrix. This is python's way of telling us that there are so many positions and values that writing them all out would take a huge amount of space on the screen, and a list that big is hard to read.
In the checker board example we had an image that had 3 rows and 3 columns. We can check the shape of any matrix by adding ".shape" after its name. Let's see the shape of the insect image
Step7: It looks like this matrix has 375 rows and 500 columns, so we expect it to be wider than it is tall. We can take a look at the value of a specific pixel by calling its location like this
Step8: Cropping
Cropping an image is very simple. Basically, we have a really big matrix, but want to get rid of the rows and columns that we are not interested in. We can do this by slicing the matrix using brackets [ ] as shown below
Step9: In this case we cropped to the left antenna.
Exercise 3
Step10: Image Arithmetic (continued)
Since we are still working with a matrix, we are free to do normal math operations on it. For example, we can divide all of the values in the image by 2. This should darken the image, so let's try it
Step11: To brighten the image, we can instead multiply the matrix by 1.5
Step12: We can also add and subtract images from one another. For example, we can subtract an image from itself
Step13: We can also directly modify the values of the matrix elements ourselves. For example, if we want to set a portion of the image to solid black, we can do so by setting all of the pixels in that region to zero, like this
Step14: Exercise 4
Step15: Challenge Problem
Step16: Advanced Image Processing
Blob Detection
What are blobs and why do we care?
In essence, "blob detection" is the process of searching for and identifying bright spots on a dark background, or dark spots on a light background. The "blobs" can correspond to any number of physical phenomena. In astronomy, for example, this can be relevant if you have an image of a part of the sky and want to identify something like galaxies. You might be wondering though, why not just look at an image yourself and pick out blobs "by eye"? In our example of identifying galaxies in an astronomical image, a systematic way of doing this may be beneficial for several reasons
Step17: This line reads the fits file --> data table and header.
Step18: Exercise 5
Step19: The following line reads the data associated with the header into a numpy array. A fits file can have more than one header, and since Python indices start at 0, we tell Python to read the data associated with the first (and only) header associated with this fits file.
Step20: Exercise 6
Step21: That's it, you've now read your astronomical image into Python and can work with the data and analyze it! Let's visualize the data now to get an idea of what it is we are actually dealing with.
Step22: We can now run some blob finding algorithms on this data set and see what we find and if it makes sense! For this, we will use the scikit-image package, which is an image processing package in Python. This package has three algorithms for detecting blobs (all discussed here
Step23: Exercise 7
Step24: Data from the blob finder
The first column of blobs_log (blobs_log[0]) is the y-coordinate, the second column (blobs_log[1]) is the x-coordinate, and the third column (blobs_log[2]) is the "size" of the Gaussian that was used to detect the blob.
Step25: Plotting the blobs
Now we'll plot the blobs onto our image! For this, we will create circles with plt.Circle using the coordinate and radius information we have from blobs_log. Then we will use add_patch to add that circle to our image. Since the plotting software needs information about the axes of the image in order to know where each circle goes, we use plt.gca() which means "Get Current Axis" and store the axis information into "ax". Then when we add the circle patches to the image, they will appear in the correct places. Finally, since there is more than one blob, we will create the circular patches and add them to the image one-by-one using a while loop.
Step26: Exercise 8
Step27: Write down your findings below, such as what parameters give you no blobs, how does changing the parameters affect the results, and anything else that you find interesting!
Step28: Difference of Gaussian (DoG)
This algorithm is an approximation of the Laplacian of Gaussian, where rather than actually computing the "Laplacian" of the Gaussian that we mentioned above, DoG approximates this computation by taking the difference of two successively smoothed (blurred) images. Blobs are then detected as bright-on-dark spots in these difference images. This algorithm has the same disadvantage for larger blobs as the Laplacian of Gaussian algorithm.
To run this algorithm, we use the function "blob_dog" (https
Step29: Determinant of Hessian
The final option that scikit-image provides is the Determinant of Hessian algorithm. This algorithm performs a different kind of mathematical operation on the smoothed images, it calculates the Hessian matrix of the image (but we won't go into that), and searches for maxima in this matrix. It it the fastest approach, but it is also less accurate and has trouble finding very small blobs.
To try it out, we will run the function "blob_doh" (https
Step30: Exercise 11
Step31: Particle Tracking
Now that we have learned several image processing techniques, we are ready to put them together in an application. In physics, there is an experiment called the Monopole Ion Trap that uses electric fields to trap particles, as shown below.
The basic principle is that the particle is attracted to and repelled by the rod at the top of the image. In the image above, we have a very stable, well-behaved trap in which the ion just goes back and forth between two locations. However, under certain circumstances, the particle may start to behave erratically, as shown below.
We would like to track the particle so that we can study its position and velocity.
Lets start by importing the movie of the stable particle into python. We have saved each movie, image by image, inside of folders on the computers. So we will import the stable movie as follows
Step32: Now is a good time to notice that we don't actually need the entire image to do this computation. The only thing we really need is the region the particle travels within, as shown below
This is commonly called the Region of Interest (ROI) of an image and we will crop every single image as we import it. Lets do that by importing a test image and checking how we want to crop it
Step33: Now that we know where we want to crop the images, we can do so automatically for all of the images in the folder
Step34: Looks good. Okay, now that we have all of the images imported, we want to run a blob finding algorithm that finds the position of the blob in each of the images. However, there are "speckles" in the background, so make sure that your blob finding parameters are set correctly.
Exercise 12
Step35: We should now have a list of particle locations and sizes. Note that the size information is not important to this application, so we can get rid of it. In general, we find it easier to work with big complicated lists using numpy, so first we will convert the list into a numpy array, and then clean it up.
Step36: Data Analysis
Great, using a ROI made this computation faster, and it made sure that the particle was easy to find. Unfortunately it also introduced a small offset error in the vertical position of the particle. This can be seen by taking a look at the picture below
Step37: The final step is to turn these pixel locations into measurements of distance between the particle and the center of the rod. To do this, we will use the usual distance formula
Step38: Great, so now we have the distances of the particles from the center of the rod for each movie. The experimenters also know two important pieces of information.
the camera was taking 2360 pictures per second (FPS), so the time between each image is 1/2360 seconds.
the distance between each pixel is 5.9 microns = .0059 millimeters (mm)
we can use this information to make a plot of the particle's distance as a function of time with proper units.
Step39: Congratulations!
You just did particle tracking. Now we'll quickly demonstrate how to find the velocity of the particle.
Recall that the definition of velocity is simply distance/time. Since we know that the time between pictures is 1/2360 seconds, all we have to do is calculate the distance the particle moved between each frame.
Step40: Sometimes it is useful/interesting to see the velocity data in a "phase diagram", which is just a plot of the position vs the velocity
Step41: As you can see, the stable particle makes a circle in these "phase diagrams". As you can try (below), the unstable particle will produce something that looks like scribbles instead. Phase diagrams are often used to quickly check the stability of a particle, without having to watch the full movie.
Challenge problem | Python Code:
# set exercise1 equal to your matrix
exercise1 = #your matrix goes here
Explanation: Introduction to Image Processing
Image processing is a very useful tool for scientists in the lab, and for everyday uses as well. For example, an astronomer may use image processing to help find and recognize stars, or a self-driving car may use it to stay in the correct lane.
This lecture will teach you some of the fundamental tools of image processing so that you can use it in your own studies/applications.
Fundamentals
What is an image, really?
Figure 1: Felix the cat
As we can see from Felix the cat, images are just a coordinate system in which each block (x,y) has a value associated to it. These values are later interpreted as a color.
For example, in Figure 1, we would be describing two colors (black and white) and we could describe them using 1's and 0's. In general, 0 is taken to mean the absence of color, which means that 0 = black and 1 = white. Putting these ideas together, we can see that each point in an image requires three pieces of information:
x - position
y - position
color (1 or 0)
So if we were instructing a computer to produce even a small image, we would need to give it a large list of numbers.
Figure 2: Checker board pattern
For example, suppose we wanted the computer to produce the checkerboard pattern that we see in Figure 2. If we make a list of the information the computer needs, we can format it like (x-coordinate, y-coordinate, color) and the whole list would be
(1,1,1), (1,2,0), (1,3,1), (2,1,0), (2,2,1), (2,3,0), (3,1,1), (3,2,0), (3,3,1)
Figure 3: Images
To make the image making process easier, people decided to ditch the traditional coordinate system and use matrices instead! However, because both systems work and because sometimes one method can be more convenient than the other, both still exist, but they have different names.
When you use a coordinate system to instruct the computer, that's a scatter plot. When you use a matrix, that's called an image.
To instruct the computer to make a scatter plot of the 9 square checkerboard, we had to give it 3x9=27 numbers. To instruct it to make an image of the checkerboard, we only have to give it 9 numbers, but they have to be in the correct order. Specifically, the order looks like this:
image = [ [1, 0, 1],
[0, 1, 0],
[1, 0, 1] ]
Each location that we assign a value to is considered a pixel. Thus, we have created an image that is 3 pixels wide and 3 pixels tall, and it has a total of 9 pixels.
When we watch youtube videos at 1080p, we are actually looking at pictures that are 1080 pixels tall and 1920 pixels wide. These images have a total of 1080x1920 = 2,073,600 pixels. If we round to the nearest million, there are approximately 2 million pixels in each image. This would be considered a 2 Mega Pixel (MP) image. This is the same number phone manufacturers use when talking about how many megapixels their newest device has.
Exercise 1: Making images
Can you make the image matrix for the checker board that is shown above?
End of explanation
import matplotlib.pyplot as plt
import numpy as np
plt.imshow(exercise1)
Explanation: We can check if we got this right by having the computer make the image. To do this, we will use matplotlib's "plt.imshow" function, which takes in matrices and turns them into images.
End of explanation
from matplotlib import cm
plt.imshow(exercise1, cmap=cm.gray)
Explanation: Now, we have gotten a figure that looks like the correct board, but the colors are wrong. To fix this, we need to tell the computer that we want to see the image in grayscale. We do this by inserting a "cmap" into the plt.imshow function. "cmap" stands for "color map" and in this case, we will set it to grayscale
End of explanation
exercise2 = [[1,1,0], [1,2,1], [1,3,0], [2,1,1], [2,2,0], [2,3,0], [3,1,1], [3,2,0], [3,3,1]]
exercise2 = np.array(exercise2)
x_coordinates = exercise2[::,0]
y_coordinates = exercise2[::,1]
values = exercise2[::,2]
plt.scatter(x_coordinates, y_coordinates, c=values, marker='s',s = 500, cmap=cm.gray)
Explanation: If the output of the code above does not match the checkerboard in the exercise, try again or ask for help. Notice how the order of the numbers is important to the final image.
Exercise 2: Making scatter plots
Now we will use a scatter plot to create the checkerboard. This time we will use matplotlib's "plt.scatter" function to create the figure.
In the code below, we have given you a list of coordinates and values for the blocks at those coordinates. Correct the list so that it reproduces the checkerboard from Exercise 1.
End of explanation
# first we import the input/output module from skimage called "io". io lets us
# read files on the computer or online
from skimage import io
insect = io.imread('https://matplotlib.org/3.1.1/_images/stinkbug.png')
#we can take a look at the image we just imported once again using image show.
plt.imshow(insect, cmap=cm.gray, vmin=0, vmax=256)
plt.colorbar()
Explanation: Recall that the order of the numbers doesn't matter for scatter plots. To see this, go back to the previous code block, and switch the order of the exercise2 list. Even though you change the order, the image produced doesn't change.
Image Arithmetic (Introduction)
Now that we have covered the fundamentals, we can start diving into image manipulations, starting with slices and arithmetic.
There are many tools we can use to learn about image processing. Today, we will be using Scikit-Image, a free to use and easy to learn python library, and we'll start by looking at an image of an insect.
End of explanation
# show the array
print(insect)
Explanation: Remember that, since we are working with images, the information is stored inside of a matrix. Lets take a look at that matrix.
End of explanation
# show the shape of the array
print(insect.shape)
Explanation: The first thing we notice is that this image is no longer just 1's and 0's. The numbers go from 0 to 255, and recalling that 0 = absence of light, we take 0=black and 255=white. All of the numbers between 0 and 255 are various shades of gray.
Next, we can see that there are ellipses inside of the matrix. This is python's way of telling us that there are so many positions and values that writing them all out would take a huge amount of space on the screen, and a list that big is hard to read.
In the checker board example we had an image that had 3 rows and 3 columns. We can check the shape of any matrix by adding ".shape" after its name. Let's see the shape of the insect image
End of explanation
# show the value of the pixel in the 100th row and 200th column
print(insect[100,200])
Explanation: It looks like this matrix has 375 rows and 500 columns, so we expect it to be wider than it is tall. We can take a look at the value of a specific pixel by calling its location like this:
End of explanation
# display only the section of the image between rows 50 - 150 and columns 150 - 275
plt.imshow(insect[50:150,150:275], cmap=cm.gray, vmin=0, vmax=256)
Explanation: Cropping
Cropping an image is very simple. Basically, we have a really big matrix, but want to get rid of the rows and columns that we are not interested in. We can do this by slicing the matrix using brackets [ ] as shown below:
End of explanation
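As a generic NumPy reminder (independent of the insect image), 2-D slicing always takes rows first and columns second, with the stop index excluded:
# Generic slicing reminder on a small synthetic array: [row_start:row_stop, col_start:col_stop]
import numpy as np

demo = np.arange(16).reshape(4, 4)
print(demo)
print(demo[1:3, 0:2])   # rows 1-2, columns 0-1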
plt.imshow(insect[insert_your_slice_here], cmap=cm.gray, vmin=0, vmax=256)
Explanation: In this case we cropped to the left antenna.
Exercise 3: Cropping
By altering the slice locations of the image matrix, can you crop the photo to focus in on the insect's face?
End of explanation
#divide the image by two and display it
plt.imshow(insect/2, cmap=cm.gray, vmax=255, vmin=0)
plt.colorbar()
Explanation: Image Arithmetic (continued)
Since we are still working with a matrix, we are free to do normal math operations on it. For example, we can divide all of the values in the image by 2. This should darken the image, so let's try it:
End of explanation
#multiply the image by 1.5 and display it
plt.imshow(insect*1.5, cmap=cm.gray, vmax=255, vmin=0)
plt.colorbar()
Explanation: To brighten the image, we can instead multiply the matrix by 1.5:
End of explanation
# subtract the image from itself and display
plt.imshow(insect-insect, cmap=cm.gray, vmax=256, vmin=0)
# check that the array is full of zeros
print(insect-insect)
Explanation: We can also add and subtract images from one another. For example, we can subtract an image from itself:
End of explanation
# set the region x=[50,100] and y=[50,100] to zero and display
img = np.copy(insect)
img[50:100,50:100] = 0
plt.imshow(img, cmap=cm.gray, vmax=256, vmin=0)
Explanation: We can also directly modify the values of the matrix elements ourselves. For example, if we want to set a portion of the image to solid black, we can do so by setting all of the pixels in that region to zero, like this:
End of explanation
img = np.copy(insect)
img['insert_slice'] = 'insert value'
plt.imshow(img, cmap=cm.gray, vmax=256, vmin=0)
Explanation: Exercise 4: Censorship
By altering the code below, censor the insect's face with a white block. Remember that white = 255
End of explanation
# challenge problem: make a fading image from top to bottom (hint: multiply the image by a gradient you can create)
Explanation: Challenge Problem : Gradients
End of explanation
from skimage.feature import blob_dog, blob_log, blob_doh
import matplotlib.pyplot as plt
from astropy.io import fits
import numpy as np
Explanation: Advanced Image Processing
Blob Detection
What are blobs and why do we care?
In essence, "blob detection" is the process of searching for and identifying bright spots on a dark background, or dark spots on a light background. The "blobs" can correspond to any number of physical phenomena. In astronomy, for example, this can be relevant if you have an image of a part of the sky and want to identify something like galaxies. You might be wondering though, why not just look at an image yourself and pick out blobs "by eye"? In our example of identifying galaxies in an astronomical image, a systematic way of doing this may be beneficial for several reasons:
- you may have faint objects in your image that are difficult to distinguish from the background
- you may have a crowded field where bright points are close enough together that they are difficult to disentangle
- you may have a large dataset where it would just take you a long time to go through everything by hand
- you may want to define where a blob "ends" in a systematic way so that your blob definition is consistent across all objects you identify
These are just some of the reasons why a systematic approach to identifying blobs in images could be beneficial. Checking the output of your algorithm by eye, however, is wise to make sure that it is not outputting nonsense.
Let's get started with first reading in our first astronomical image; a nearby group of galaxies! Astronomical images are stored as "fits" files, this is essentially a matrix that contains the information of how bright the sky is at each pixel in your image, and a "header", which contains information about how pixels translate into coordinates on the sky, what instrument was used, and the units of the image for example. We will read the fits file into a numpy array using the package called astropy (see http://docs.astropy.org/en/stable/index.html for documentation). This package streamlines the process of working with astronomical images in Python.
First we will import a few packages that we are going to use throughout this section: skimage will allow us to run the blob detection algorithms, matplotlib will allow us to plot our data, astropy will allow us to read "fits" files, and numpy will let us do cool things with arrays.
End of explanation
hdu = fits.open('./RSCG1.fits')
Explanation: This line reads the fits file --> data table and header.
End of explanation
#Insert your code here:
Explanation: Exercise 5: hdu
What is the type of hdu? What happens when you print hdu?
End of explanation
image = hdu[0].data
# This line closes the fits file because we don't need it anymore; we've already read the data into an array.
hdu.close()
Explanation: The following line reads the data associated with the header into a numpy array. A fits file can have more than one header, and since Python indices start at 0, we tell Python to read the data associated with the first (and only) header associated with this fits file.
End of explanation
#Insert your code here:
print(type(image))
print(image)
print(image.shape)
print(np.max(image))
print(np.min(image))
Explanation: Exercise 6: Astronomical Images
What is the type of the image? What happens when you print image? What are the dimensions of image? What are the minimum and maximum values?
End of explanation
# imshow will map the 2D array (our data), and origin='lower' tells imshow to plot (0,0) in the bottom left corner.
plt.imshow(image,origin='lower')
plt.colorbar()
plt.show()
# EXERCISE: What happens if you remove origin='lower'? How does the mapping of the image change?
Explanation: That's it, you've now read your astronomical image into Python and can work with the data and analyze it! Let's visualize the data now to get an idea of what it is we are actually dealing with.
End of explanation
# This line calls the blob detection (Laplacian of Gaussian function) on our galaxies image with the parameters that
# we provide
blobs_log = blob_log(image, min_sigma = 2, max_sigma=9, num_sigma=8, threshold=.05)
Explanation: We can now run some blob finding algorithms on this data set and see what we find and if it makes sense! For this, we will use the scikit-image package, which is an image processing package in Python. This package has three algorithms for detecting blobs (all discussed here: https://en.wikipedia.org/wiki/Blob_detection):
- Laplacian of Gaussian
- Difference of Gaussian
- Determinant of Hessian
This is great, but what does this actually mean? How do they work? What's the difference between them?
Laplacian of Gaussian
This algorithm starts by smoothing (blurring) the entire image with a two-dimensional Gaussian function:
$$\begin{align} g(x,y,\sigma) = \frac{1}{2\pi\sigma^2} \exp\left(-\frac{x^2+y^2}{2\sigma^2}\right) \end{align}$$
where $(x,y)$ correspond to pixel coordinates on your image, and $\sigma$ represents the "size" or "width" of the Gaussian function. The algorithm then performs a mathematical computation called taking the "Laplacian", however we will not go into the details of that. What this effectively gives you is strong responses for blobs of size $\sqrt{2}\sigma$. This means that the algorithm is most sensitive to detecting blobs that have a similar size to that of the Gaussian function that you smooth your image with in the first place. In order to detect blobs of varying size, you need to use an approach that allows you to smooth your image with Gaussians of varying sizes to try to match the scale of potential blobs. This approach is the most accurate but also the slowest approach, especially for the largest blobs, because it needs to smooth the image with a larger Gaussian.
The scikit-image function "blob_log" (https://scikit-image.org/docs/dev/api/skimage.feature.html#skimage.feature.blob_log) has this capability. To run it, we simply need to call the function and provide it with the correct input parameters. In the following piece of code, this is what we are going to provide the function with:
- image to run the blob detection on
- min_sigma: the minimum size for the Gaussian that will be used to smooth the image; this corresponds to the smallest blobs we expect in the image
- max_sigma: the maximum size for the Gaussian that will be used to smooth the image; this corresponds to the largest blobs we expect in the image
- num_sigma: the number of sizes to consider between the minimum and maximum
- threshold: relates to how faint of a blob you can detect; a smaller threshold will detect fainter blobs
The call to this function will output a list of x and y coordinates corresponding to the center of each blob, as well as the "size" of the Gaussian that it used to find that blob. Let's try it out!
End of explanation
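To build some intuition for the sigma parameter, here is an optional, self-contained sanity check on synthetic data (our own toy spot, not part of the lab data): blob_log should recover a scale close to the width of the Gaussian spot we draw.
# Synthetic sanity check: one Gaussian spot of width ~4 pixels on a 100x100 background.
import numpy as np
from skimage.feature import blob_log

yy, xx = np.mgrid[0:100, 0:100]
true_sigma = 4.0
toy = np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2 * true_sigma ** 2))
toy_blobs = blob_log(toy, min_sigma=1, max_sigma=10, num_sigma=10, threshold=0.1)
print(toy_blobs)  # expect roughly one row near (50, 50) with a sigma close to true_sigma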
input("What kind of output is blobs_log?")
input("What are its dimension?")
input("What does the length of blobs_log physically correspond to?")
Explanation: Exercise 7: Blobs
What kind of output does blobs_log provide? What are its dimensions? What does the length of blobs_log physically correspond to?
End of explanation
# In this line we use the third column to calculate the radius of the blob itself.
blobs_log[:, 2] = blobs_log[:, 2] * np.sqrt(2)
Explanation: Data from the blob finder
The first column of blobs_log (blobs_log[0]) is the y-coordinate, the second column (blobs_log[1]) is the x-coordinate, and the third column (blobs_log[2]) is the "size" of the Gaussian that was used to detect the blob.
End of explanation
ax = plt.gca()
ax.imshow(image,origin='lower')
cnt = 0
while cnt < len(blobs_log):
c = plt.Circle((blobs_log[cnt][1], blobs_log[cnt][0]), blobs_log[cnt][2], color='white', linewidth=2, fill=False)
ax.add_patch(c)
cnt = cnt + 1
plt.show()
Explanation: Plotting the blobs
Now we'll plot the blobs onto our image! For this, we will create circles with plt.Circle using the coordinate and radius information we have from blobs_log. Then we will use add_patch to add that circle to our image. Since the plotting software needs information about the axes of the image in order to know where each circle goes, we use plt.gca() which means "Get Current Axis" and store the axis information into "ax". Then when we add the circle patches to the image, they will appear in the correct places. Finally, since there is more than one blob, we will create the circular patches and add them to the image one-by-one using a while loop.
End of explanation
blobs_log = blob_log(image, min_sigma = 2, max_sigma=9, num_sigma=8, threshold=.05)
### Play with the paramters above this line ###
blobs_log[:, 2] = blobs_log[:, 2] * np.sqrt(2)
ax = plt.gca()
ax.imshow(image,origin='lower')
cnt = 0
while cnt < len(blobs_log):
c = plt.Circle((blobs_log[cnt][1], blobs_log[cnt][0]), blobs_log[cnt][2], color='white', linewidth=2, fill=False)
ax.add_patch(c)
cnt = cnt + 1
plt.show()
Explanation: Exercise 8: Blob Parameters
Play around with the input parameters in the following block of code, then run it to see how changing them will affect the number (more blobs, fewer blobs, no blobs) and type of blobs detected (bigger, smaller, brighter, fainter).
End of explanation
# RECORD YOUR FINDINGS HERE
Explanation: Write down your findings below, such as what parameters give you no blobs, how does changing the parameters affect the results, and anything else that you find interesting!
End of explanation
blobs_dog = blob_dog('fill here')
blobs_dog[:, 2] = blobs_dog[:, 2] * np.sqrt(2)
ax = plt.gca()
ax.imshow(image,origin='lower')
cnt = 0
while cnt < len(blobs_dog):
c = plt.Circle((blobs_dog[cnt][1], blobs_dog[cnt][0]), blobs_dog[cnt][2], color='white', linewidth=2, fill=False)
ax.add_patch(c)
cnt = cnt + 1
plt.show()
Explanation: Difference of Gaussian (DoG)
This algorithm is an approximation of the Laplacian of Gaussian, where rather than actually computing the "Laplacian" of the Gaussian that we mentioned above, DoG approximates this computation by taking the difference of two successively smoothed (blurred) images. Blobs are then detected as bright-on-dark spots in these difference images. This algorithm has the same disadvantage for larger blobs as the Laplacian of Gaussian algorithm.
To run this algorithm, we use the function "blob_dog" (https://scikit-image.org/docs/stable/api/skimage.feature.html#skimage.feature.blob_dog), which takes the same parameters as blob_log, except for num_sigma, because the algorithm needs to take the difference between successive smoothings so it figures out the number of times it needs to smooth the image. This function returns the same type of array as "blob_log" : (y-coord, x-coord, size).
Exercise 9: DoG
Fill in the input parameters for the call to the blob_dog function and run the code. How is the result from this function different from the previous one?
End of explanation
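To make the "difference of two successively blurred images" idea concrete, here is an optional sketch on the same kind of synthetic spot as before, using scipy's Gaussian filter; this only illustrates the concept and is not a substitute for the blob_dog exercise above.
# Concept illustration only: a DoG image is one blurred copy minus a slightly more blurred copy.
import numpy as np
from scipy.ndimage import gaussian_filter

yy, xx = np.mgrid[0:100, 0:100]
spot = np.exp(-((xx - 50) ** 2 + (yy - 50) ** 2) / (2 * 4.0 ** 2))
dog = gaussian_filter(spot, sigma=3) - gaussian_filter(spot, sigma=3 * 1.6)
print(np.unravel_index(np.argmax(dog), dog.shape))  # the brightest DoG pixel sits at the blob centre (50, 50)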
blobs_doh = blob_doh('fill here')
ax = plt.gca()
ax.imshow(image,origin='lower')
cnt = 0
while cnt < len(blobs_doh):
c = plt.Circle((blobs_doh[cnt][1], blobs_doh[cnt][0]), blobs_doh[cnt][2], color='white', linewidth=2, fill=False)
ax.add_patch(c)
cnt = cnt + 1
plt.show()
Explanation: Determinant of Hessian
The final option that scikit-image provides is the Determinant of Hessian algorithm. This algorithm performs a different kind of mathematical operation on the smoothed images: it calculates the Hessian matrix of the image (but we won't go into that) and searches for maxima in this matrix. It is the fastest approach, but it is also less accurate and has trouble finding very small blobs.
To try it out, we will run the function "blob_doh" (https://scikit-image.org/docs/stable/api/skimage.feature.html#skimage.feature.blob_doh), which takes as input the same parameters as the "blob_log" function. Let's try it out!
Exercise 10: DoH
Fill in the input parameters for the call to the blob_doh function and run the code. How is the result from this function different from the previous one? How were the input parameters different from the previous algorithms?
End of explanation
# RECORD YOUR THOUGHTS HERE
Explanation: Exercise 11: Blobs 2
How do the three algorithms compare to each other? Do their results agree with what you would intuitively call a blob? Do you trust one algorithm more than another?
End of explanation
# we import the operating system library OS to help us import many files at once
import os
folder_location = "./P1"
#the following line will navigate python to the correct folder. chdir stands for change directory.
os.chdir(folder_location)
#the following line returns a list of file names in the folder
files = os.listdir()
Explanation: Particle Tracking
Now that we have learned several image processing techniques, we are ready to put them together in an application. In physics, there is an experiment called the Monopole Ion Trap that uses electric fields to trap particles, as shown below.
The basic principle is that the particle is attracted to and repelled by the rod at the top of the image. In the image above, we have a very stable, well-behaved trap in which the ion just goes back and forth between two locations. However, under certain circumstances, the particle may start to behave erratically, as shown below.
We would like to track the particle so that we can study its position and velocity.
Lets start by importing the movie of the stable particle into python. We have saved each movie, image by image, inside of folders on the computers. So we will import the stable movie as follows:
End of explanation
test_image = io.imread(files[0])
plt.imshow(test_image['insert slice location'], cmap = cm.gray)
Explanation: Now is a good time to notice that we don't actually need the entire image to do this computation. The only thing we really need is the region the particle travels within, as shown below
This is commonly called the Region of Interest (ROI) of an image and we will crop every single image as we import it. Lets do that by importing a test image and checking how we want to crop it:
Exercise 11: Cropping 2
Find the correct cropping location by changing the slice location
End of explanation
# the following line creates an empty list that we will populate with the images as we import and crop them.
#That is, ROI is a list of matrices, each one representing an image.
ROIs = []
# the following for-loop imports the images
for image in files:
ROIs.append(io.imread(image)[210::])
# to make sure we are doing things correctly, lets see what the 17th image in the list is
plt.imshow(ROIs[16], cmap=cm.gray)
Explanation: Now that we know where we want to crop the images, we can do so automatically for all of the images in the folder
End of explanation
#student solution goes here
Explanation: Looks good. Okay, now that we have all of the images imported, we want to run a blob finding algorithm that finds the position of the blob in each of the images. However, there are "speckles" in the background, so make sure that your blob finding parameters are set correctly.
Exercise 12: Particle Tracking
The goal is for you to use the examples we have provided above to write your own code to systematically find the particles in the ROI list. A general outline of the code is provided below, but if you need further help feel free to ask.
general outline:
1. choose one of the images in the ROI list to test your parameters on
2. apply one of the blob finding techniques to the image (take a look at earlier examples if needed)
3. make a for-loop that applies this technique to all of the images in ROI and collects all of the outputs needed
4. call your list of particles locations and sizes 'particles'
End of explanation
#the following line converts the list into a numpy array
particles = np.array(particles)
#the following line cleans up array to make it look nicer
particles = particles[::,0]
#this shows us what the array looks like.
print(particles)
Explanation: We should now have a list of particle locations and sizes. Note that the size information is not important to this application, so we can get rid of it. In general, we find it easier to work with big complicated lists using numpy, so first we will convert the list into a numpy array, and then clean it up.
End of explanation
for n, blob in enumerate(particles):
particles[n] = np.add(blob, [210,0,0])
print(particles)
Explanation: Data Analysis
Great, using a ROI made this computation faster, and it made sure that the particle was easy to find. Unfortunately it also introduced a small offset error in the vertical position of the particle. This can be seen by taking a look at the picture below:
To correct for this error we simply have to add an offset to the y position of the particle list.
End of explanation
distances = []
for particle in particles:
distance = np.sqrt((83 - particle[0])**2 + (137-particle[1])**2)
distances.append(distance)
distances = np.array(distances)
#print(distances)
Explanation: The final step is to turn these pixel locations into measurements of distance between the particle and the center of the rod. To do this, we will use the usual distance formula
distance = sqrt( (y_2 - y_1)^2 + (x_2 - x_1)^2 )
where (y_2, x_2) is the location of the center of the rod and (y_1, x_1) is the location of the particle. From previous measurements, the experimenters know that the center of the rod is approximately at
y_2 = 83
x_2 = 137
so we can change all of the particle location data to particle distance data as follows:
End of explanation
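As an optional aside, because particles is already a NumPy array, the same pixel distances can be computed without an explicit loop (this sketch assumes the particles and distances variables from the cells above, with columns ordered as y, x, size):
# Vectorized equivalent of the loop above, using the same rod center of (83, 137) in pixel coordinates.
rod_center = np.array([83, 137])
distances_vec = np.sqrt(((particles[:, :2] - rod_center) ** 2).sum(axis=1))
print(np.allclose(distances_vec, distances))  # should print True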
time = np.linspace(0,len(distances)/2360, len(distances) )
distances = distances*.0059
plt.plot(time,distances)
plt.xlabel('time(s)')
plt.ylabel('distance(mm)')
Explanation: Great, so now we have the distances of the particles from the center of the rod for each movie. The experimenters also know two important pieces of information.
the camera was taking 2360 pictures per second (FPS), so the time between each image is 1/2360 seconds.
the distance between each pixel is 5.9 microns = .0059 millimeters (mm)
we can use this information to make a plot of the particle's distance as a function of time with proper units.
End of explanation
velocities = []
for n in range(len(distances)):
if n < (len(distances)-1):
velocity = (distances[n+1] - distances[n])*2360
velocities.append(velocity)
#print(velocities)
Explanation: Congratulations!
You just did particle tracking. Now we'll quickly demonstrate how to find the velocity of the particle.
Recall that the definition of velocity is simply distance/time. Since we know that the time between pictures is 1/2360 seconds, all we have to do is calculate the distance the particle moved between each frame.
End of explanation
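Equivalently, the frame-to-frame differences can be taken in a single vectorized call (a small sketch assuming the distances and velocities variables from the cells above):
# Vectorized equivalent: frame-to-frame displacement times the frame rate (2360 frames per second).
velocities_vec = np.diff(distances) * 2360
print(np.allclose(velocities_vec, velocities))  # should print True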
plt.scatter(distances[:-1:],velocities, )
plt.xlabel('distance(mm)')
plt.ylabel('velocity(mm/s)')
Explanation: Sometimes it is useful/interesting to see the velocity data in a "phase diagram", which is just a plot of the position vs the velocity:
End of explanation
os.chdir('..')
folder_location = "./PC"
#the following line will navigate python to the correct folder. chdir stands for change directory.
os.chdir(folder_location)
#the following line returns a list of file names in the folder
files_unstable = os.listdir()
#print(files_unstable)
Explanation: As you can see, the stable particle makes a circle in these "phase diagrams". As you can try (below), the unstable particle will produce something that looks like scribbles instead. Phase diagrams are often used to quickly check the stability of a particle, without having to watch the full movie.
Challenge problem:
By repeating the steps above, can you reproduce these datasets for the case of the unstable particle? In the code below, we have imported all of the file names into "files_unstable" can you complete the code?
End of explanation |
2,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting the occupancies of Belgian trains
In this lab, we will go over some of the typical steps in a data science pipeline
Step1: 0. Create a kaggle account! https
Step2: Processing the data
Cleaning the variables, and adding station variables to our dataframe
Step3: 1.2
Step4: 2.2
Step5: 2.3
Step6: 2.4
Step7: 3. Predictive modeling
Step8: We train our model on a 'training set' and evaluate it on the testset. Functionality for making this split automatically can be found <a href="http
Step9: Since we have a lot of 'Null' (+-1/3th) values for our 'class' feature, and we don't want to throw that away, we can try to predict these labels based on the other features; we get +75% accuracy, so that seems sufficient. But we can't forget to do the same thing for the test set!
Step10: 4. 'Advanced' predictive modeling
Step11: 5. Data augmentation with external data sources
There is an unlimited number of factors that influence the occupancy of a train! Definitely more than the limited amount of data given in the feedback logs. Therefore, we will try to create new features for our dataset using external data sources. Examples of data sources include
Step12: Transform all null classes to one null class, maybe try to predict the class? Based on to and from and time
Step13: 6. Generating a Kaggle submission and comparing your methodology to others
6.1 | Python Code:
import os
os.getcwd()
%matplotlib inline
%pylab inline
import pandas as pd
import numpy as np
from collections import Counter, OrderedDict
import json
import matplotlib
import matplotlib.pyplot as plt
import re
from scipy.misc import imread
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
Explanation: Predicting the occupancies of Belgian trains
In this lab, we will go over some of the typical steps in a data science pipeline:
Data processing & cleaning
Exploratory Data Analysis
Feature extraction/engineering
Model selection & hyper-parameter tuning
Data linking
...
We will make use of the following technologies and libraries:
Python3.5
Python libraries: pandas, numpy, sklearn, matplotlib, ...
Kaggle
NO SPARK!!! (next lab will deal with machine learning with Spark MLlib)
End of explanation
from pandas.io.json import json_normalize
import pickle
training_json = pd.DataFrame()
with open('data/training_data.nldjson') as data_file:
for line in data_file:
training_json = training_json.append(json_normalize(json.loads(line)))
with open('data/test.nldjson') as data_file:
for line in data_file:
out_test_json = json_normalize(json.loads(line))
out_test = out_test_json
training = training_json
out_test[0:1]
Explanation: 0. Create a kaggle account! https://www.kaggle.com/
The competition can be found here: https://inclass.kaggle.com/c/train-occupancy-prediction-v2/leaderboard
Create an account and form a team (shuffle II), use your names and BDS_ as a prefix in your team name
Note: you can only make 5 submissions per day
There are also student groups from Kortrijk (Master of Science in Industrial Engineering) participating. They get no help at all (you get this notebook) but this is their final lab + they have no project. THEREFORE: Let's push them down the leaderboard!!! ;)
Your deadline: the end of the kaggle competition.
Evaluation: Your work will be evaluated for 50%, your result will also matter for another 50%. The top 5 student groups get bonus points for this part of the course!
1. Loading and processing the data
Trains can get really crowded sometimes, so wouldn't it be great to know in advance how busy your train will be, so you can take an earlier or later one? iRail created just that. Their application, SpitsGids, shows you the occupancy of every train in Belgium. Furthermore, you can indicate the occupancy yourself. Using the collected data, machine learning models can be trained to predict what the occupancy level of a train will be.
The dataset which we will use during this lab is composed of two files:
train.nldjson: contains labeled training data (JSON records, separated by newlines)
test.nldjson: unlabeled data for which we will create a submission for a Kaggle competition at the end of this lab (again: JSON records, separated by newlines). Each of the records is uniquely identifiable through an id
A json record has the following structure:
{
"querytype": "occupancy",
"querytime": "2016-09-29T16:24:43+02:00",
"post": {
"connection": "http://irail.be/connections/008811601/20160929/S85666",
"from": "http://irail.be/stations/NMBS/008811601",
"to": "http://irail.be/stations/NMBS/008811676",
"date": "20160929",
"vehicle": "http://irail.be/vehicle/S85666",
"occupancy": "http://api.irail.be/terms/medium"
},
"user_agent": "Railer/1610 CFNetwork/808.0.2 Darwin/16.0.0"
}
This is how the first five rows of a processed DataFrame could look:
1.1: Load in both files and store the data in a pandas DataFrame, different methodologies can be applied in order to parse the JSON records (pd.io.json.json_normalize, json library, ...)
Loading the data
Loading the json files and dumping them via pickle
End of explanation
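A side note on the "different methodologies" mentioned in 1.1: appending to a DataFrame inside the loop is slow for large files. A common alternative (sketch only, same file name as above) is to parse every line first and flatten the list of records in one call:
# Alternative parsing sketch: parse each line with the json module, then normalize once.
with open('data/training_data.nldjson') as data_file:
    records = [json.loads(line) for line in data_file if line.strip()]
training_alt = json_normalize(records)  # one call instead of one append per record
training_alt.head()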
training['querytime'] = pd.to_datetime(training['querytime'])
out_test['querytime'] = pd.to_datetime(out_test['querytime'])
training = training.dropna()
training['post.occupancy'] = training['post.occupancy'].apply(lambda x: x.split("http://api.irail.be/terms/",1)[1])
training['post.vehicle'] = training['post.vehicle'].apply(lambda x: x.split("http://irail.be/vehicle/",1)[1])
out_test['post.vehicle'] = out_test['post.vehicle'].apply(lambda x: x.split("http://irail.be/vehicle/",1)[1])
#create class column, eg IC058 -> IC
training['post.class'] = training['post.vehicle'].apply(lambda x: " ".join(re.findall("[a-zA-Z]+", x)))
out_test['post.class'] = out_test['post.vehicle'].apply(lambda x: " ".join(re.findall("[a-zA-Z]+", x)))
#reset the index because you have duplicate indexes now because you appended DFs in a for loop
training = training.reset_index()
stations_df = pd.read_csv('data/stations.csv')
stations_df['from'] = stations_df.index
stations_df['destination'] = stations_df['from']
stations_df[0:4]
#post.from en post.to are in the some format of URI
stations_df["zoekterm"]=stations_df["name"]+" trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Zaventem"), "zoekterm"] = "Zaventem trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Charleroi"), "zoekterm"] = "Charleroi trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Brussel"), "zoekterm"] = "Brussel trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Gent"), "zoekterm"] = "Gent trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Liège"), "zoekterm"] = "Luik trein"
stations_df.loc[stations_df['zoekterm'].str.startswith("Antwerpen"), "zoekterm"] = "Antwerpen trein"
druktes_df = pd.read_csv('data/station_druktes.csv')
druktes_df[0:4]
training = pd.merge(training,stations_df[["URI","from"]], left_on = 'post.from', right_on = 'URI')
training = pd.merge(training,stations_df[["URI","destination"]], left_on = 'post.to', right_on = 'URI')
training = training.drop(['URI_y','URI_x'],1)
out_test = pd.merge(out_test,stations_df[["URI","from"]], left_on = 'post.from', right_on = 'URI')
out_test = pd.merge(out_test,stations_df[["URI","destination"]], left_on = 'post.to', right_on = 'URI')
out_test = out_test.drop(['URI_y','URI_x'],1)
Explanation: Processing the data
Cleaning the variables, and adding station variables to our dataframe
End of explanation
fig, ax = plt.subplots(1,1, figsize=(5,5))
training['post.class'].value_counts().plot(kind='pie', ax=ax, autopct='%1.1f%%')
#we have a lot of null/undefined, especially in our test set, we can't simply throw them away
Explanation: 1.2: Clean the data! Make sure the station- and vehicle-identifiers are in the right format. A station identifier consists of 9 characters (prefix = '00') and a vehicle identifier consists of the concatenation of the vehicle type (IC/L/S/P/...) and the line identifier. Try to fix as many of the records as possible, drop only the unfixable ones. How many records did you drop?
2. Exploratory Data Analysis (EDA)
Let's create some visualisations of our data in order to gain some insights. Which features are useful, which ones aren't?
We will create 3 visualisations:
* Pie chart of the class distribution
* Stacked Bar Chart depicting the distribution for one aggregated variable (such as the weekday or the vehicle type)
* Scatter plot depicting the 'crowdiness' of the stations in Belgium
For each of the visualisations, code to generate the plot has already been handed to you. You only need to prepare the data (i.e. create a new dataframe or select certain columns) such that it complies with the input specifications. If you want to create your own plotting code or extend the given code, you are free to do so!
2.1: *Create a pie_chart with the distribution of the different classes. Have a look at our webscraping lab for plotting pie charts. TIP: the value_counts() does most of the work for you!
End of explanation
training['weekday'] = training['querytime'].apply(lambda l: l.weekday())
out_test['weekday'] = out_test['querytime'].apply(lambda l: l.weekday())
print("timerange from training data:",training['querytime'].min(),training['querytime'].max())
print(training['querytime'].describe())
print(out_test['querytime'].describe())
date_training = training.set_index('querytime')
date_test = out_test.set_index('querytime')
grouper = pd.TimeGrouper("1d")
date_training = date_training.groupby(grouper).size()
date_test = date_test.groupby(grouper).size()
# plot
fig, ax = plt.subplots(1,1, figsize=(10,7))
ax.plot(date_training)
ax.plot(date_test)
fig, ax = plt.subplots(1,1, figsize=(6,6))
training['weekday'].value_counts().plot(kind='pie', ax=ax, autopct='%1.1f%%')
training['post.occupancy'].value_counts()
Explanation: 2.2: *Analyze the timestamps in the training and testset. First convert the timestamps to a pandas datetime object using pd.to_datetime. http://pandas.pydata.org/pandas-docs/stable/timeseries.html
Having the column in this data format simplifies a lot of work, since it allows you to convert and extract time features more easily. For example:
- df['weekday'] = df['time'].apply(lambda l: l.weekday())
would map every date to a day of the week in [0,6].
A. What are the ranges of the training and testset? Is your challenge one of interpolating or extrapolating into the future?
TIP: The describe() function can already be helpful!
B. Plot the number of records in both training and testset per day. Have a look here on how to work with the timegrouper functionality (a short resample-based sketch also follows this section): http://stackoverflow.com/questions/15297053/how-can-i-divide-single-values-of-a-dataframe-by-monthly-averages
C. OPTIONAL: Having insight into the time dependence can get you a long way: make additional visualizations to help you understand how time affects train occupancy.
End of explanation
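For exercise B, newer pandas versions prefer resample (or pd.Grouper) over pd.TimeGrouper; a minimal sketch, assuming querytime is already a datetime column as above:
# Equivalent per-day record counts with resample (alternative to pd.TimeGrouper).
daily_counts = training.set_index('querytime').resample('1D').size()
daily_counts.plot()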
training[0:1]
occup = pd.crosstab(training['post.class'], training['post.occupancy'])
weekday = pd.crosstab(training['post.class'], training['weekday'])
occup = occup.drop(['BUS', 'EUR', 'EXT', 'ICE', 'ICT', 'P', 'TGV', 'THA', 'TRN', 'ic', 'null', ''])
occup = occup.apply(lambda r: r/r.sum(), axis=1)
occup[0:4]
weekday = weekday.drop(['BUS', 'EUR', 'EXT', 'ICE', 'ICT', 'P', 'TGV', 'THA', 'TRN', 'ic', 'null', ''])
weekday = weekday.apply(lambda r: r/r.sum(), axis=1)
df_occup = pd.DataFrame(occup)
df_occup.plot.bar(stacked=True);
df_weekday = pd.DataFrame(weekday)
df_weekday.plot.bar(stacked=True);
Explanation: 2.3: *Create a stacked_bar_chart with the distribution of the three classes over an aggregated variable (group the data by weekday, vehicle_type, ...). More info on creating stacked bar charts can be found here: http://pandas.pydata.org/pandas-docs/stable/visualization.html#bar-plots
The dataframe you need will require your grouping variable as the index, and one column per occupancy category, for example:
| Index | Occupancy_Low | Occupancy_Medium | Occupancy_High | Sum_Occupancy |
|-------|----------------|------------------|----------------|---------------|
| IC | 15 | 30 | 10 | 55 |
| S | 20 | 10 | 30 | 60 |
| L | 12 | 9 | 14 | 35 |
If you want the values to be relative (%), add a sum column and use it to divide the occupancy columns
End of explanation
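A compact alternative to the apply call used above for making the rows relative (a sketch on a fresh crosstab; the result is the same as dividing each row by its own sum):
# Row-wise normalisation of a crosstab in one call.
occup_counts = pd.crosstab(training['post.class'], training['post.occupancy'])
occup_rel = occup_counts.div(occup_counts.sum(axis=1), axis=0)
occup_rel.head()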
stops = stations_df[['URI','longitude','latitude']]
dest_count = training['post.to'].value_counts()
dest_count_df = pd.DataFrame({'id':dest_count.index, 'count':dest_count.values})
dest_loc = pd.merge(dest_count_df, stops, left_on = 'id', right_on = 'URI')
dest_loc = dest_loc[['id', 'count', 'latitude','longitude']]
fig, ax = plt.subplots(figsize=(12,10))
ax.scatter(dest_loc.longitude, dest_loc.latitude, s=dest_loc['count'] )
Explanation: 2.4: * To have an idea about the hotspots in the railway network make a scatter plot that depicts the number of visitors per station. Aggregate on the destination station and use the GTFS dataset at iRail to find the geolocation of the stations (stops.txt): https://gtfs.irail.be/nmbs
End of explanation
def get_seconds_since_midnight(x):
midnight = x.replace(hour=0, minute=0, second=0, microsecond=0)
return (x - midnight).seconds
def get_line_number(x):
pattern = re.compile("^[A-Z]+([0-9]+)$")
if pattern.match(x):
return int(pattern.match(x).group(1))
else:
return x
training['seconds_since_midnight'] = training['querytime'].apply(get_seconds_since_midnight)
training['month'] = training['querytime'].apply(lambda x: x.month)
training['occupancy'] = training['post.occupancy'].map({'low': 0, 'medium': 1, 'high': 2})
out_test['seconds_since_midnight'] = out_test['querytime'].apply(get_seconds_since_midnight)
out_test['month'] = out_test['querytime'].apply(lambda x: x.month)
fig, ax = plt.subplots(figsize=(5, 5))
corr_frame = training[['seconds_since_midnight', 'month', 'occupancy']].corr()
cax = ax.matshow(abs(corr_frame))
fig.colorbar(cax)
tickpos = np.array(range(0,len(corr_frame.columns)))
plt.xticks(tickpos,corr_frame.columns, rotation='vertical')
plt.yticks(tickpos,corr_frame.columns, rotation='horizontal')
plt.grid(None)
pd.tools.plotting.scatter_matrix(training[['seconds_since_midnight', 'month', 'occupancy']],
alpha=0.2, diagonal='kde', figsize=(10,10))
plt.grid(None)
Explanation: 3. Predictive modeling: creating a baseline
Now that we have processed, cleaned and explored our data it is time to create a predictive model that predicts the occupancies of future Belgian trains. We will start with applying Logistic Regression on features extracted from our initial dataset. Some code has already been given to get you started.
Feature extraction
Some possible features include (bold ones are already implemented for you):
The day of the week
The number of seconds since midnight of the querytime
The train vehicle type (IC/P/L/...)
The line number
The line category
Information about the from- and to-station (their identifier, their coordinates, the number of visitors, ...)
The month
A binary variable indicating whether a morning (6-10AM) or evening (3-7PM) rush hour is ongoing (see the sketch after this list)
...
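A minimal sketch of that rush-hour flag, reusing the seconds_since_midnight column computed in the cell above (the 6-10AM and 3-7PM windows come straight from the list item; tune them as needed):
python
def is_rush_hour(seconds):
    hour = seconds // 3600
    return int(6 <= hour < 10 or 15 <= hour < 19)

training['rush_hour'] = training['seconds_since_midnight'].apply(is_rush_hour)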
In order to reveal relations between these features you can try and plot them with:
<a href="https://datascience.stackexchange.com/questions/10459/calculation-and-visualization-of-correlation-matrix-with-pandas"> Correlation plot </a>
<a href="http://pandas.pydata.org/pandas-docs/stable/visualization.html#visualization-scatter-matrix"> Scatter matrix </a>
These relations can be important since some models do not perform very well when features are highly correlated
Feature normalization
Most models require the features to have a similar range, preferably [0, 1]. A min-max scaler is usually sufficient: x -> (x - xmin) / (xmax - xmin)
Scikit will be used quite extensively from now on, have a look here for preprocessing functionality: http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing
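A minimal min-max scaling sketch with scikit-learn; X_train and X_test stand for whatever numeric feature frames you end up with, and the scaler is fit on the training data only:
python
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)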
Dealing with categorical variables
Most machine learning techniques, with tree-based methods as the main exception, assume that variables are ordinal (you can define an order on them). For some variables, such as the day of the week or the train vehicle type, this is not true. Therefore, a pre-processing step is required that transforms these categorical variables (a minimal one-hot sketch follows this list). A few examples of such transformations are:
One-hot-encoding (supported by pandas: get_dummies )
Binary encoding: map each variable to a number, binary encode these numbers and use each bit as a feature (advantage of this technique is that it introduces a lot less new variables in contrast to one-hot-encoding)
Hash encoding
...
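A minimal one-hot sketch with pandas, using the vehicle type as the example categorical column (the same get_dummies pattern shows up again later in this notebook for weekday and class):
python
class_dummies = pd.get_dummies(training['post.class'], prefix='class')
training = pd.concat([training, class_dummies], axis=1)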
3.1: Extract more features than the two given ones. Make sure you extract at least one categorical variable, and transform it! What gain over the current accuracy (0.417339475755) do you achieve with the new features in comparison to the given code?
End of explanation
skf = StratifiedKFold(n_splits=5, random_state=1337)
X = training[['seconds_since_midnight', 'month']]
y = training['occupancy']
cms = []
accs = []
for train_index, test_index in skf.split(X, y):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y[train_index], y[test_index]
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train)
predictions = log_reg.predict(X_test)
cm = confusion_matrix(y_test, predictions)
cms.append(cm)
accs.append(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
#accs.append(sum([float(cm[i][i]) for i in range(len(cm))])/np.sum(cm))
print('Confusion matrix:\n', np.mean(cms, axis=0))
print('Avg accuracy', np.mean(accs), '+-', np.std(accs))
print('Predict all lows', float(len(y[y == 0]))/float(len(y)))
Explanation: We train our model on a 'training set' and evaluate it on the test set. Functionality for making this split automatically can be found <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.model_selection"> here </a>
Our first model is a linear logistic regression model, more information on the API <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model"> here </a>
The confusion matrix is part of the <a href="http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics"> metrics functionality </a>
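A minimal sketch of that automatic split (the cell above uses a stratified K-fold loop instead); X and y are the same frames defined there:
python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=1337, stratify=y)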
End of explanation
training_class = training_holidays[training_holidays.class_enc != 0]
training_class = training_class[training_class.class_enc != 14]
test_class = training_holidays[(training_holidays.class_enc == 0)|(training_holidays.class_enc == 14)]
training_class["class_pred"]=training_class["class_enc"]
training_holidays_enc = pd.concat([training_class,test_class])
X_train = training_class[['seconds_since_midnight','weekday', 'month','id','id_2']]
X_test = test_class[['seconds_since_midnight','weekday', 'month','id','id_2']]
y_train = training_class['class_enc']
train.occupancy.value_counts()/train.shape[0]
test.occupancy.value_counts()/test.shape[0]
out_test_holidays_druktes.occupancy.value_counts()/out_test_holidays_druktes.shape[0]
from sklearn.linear_model import SGDClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import f1_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import RandomizedSearchCV, GridSearchCV
from sklearn.model_selection import train_test_split
train, test = train_test_split(training_holidays_druktes, test_size=0.2, random_state=42)
X_train = train[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8', 'temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']]
X_test = test[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8', 'temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']]
y_train = train['occupancy']
y_test = test['occupancy']
# drop month from the feature set if we want to predict unseen months
from xgboost import XGBClassifier
xgb = XGBClassifier(n_estimators=5000, max_depth=3, min_child_weight=6, learning_rate=0.01,
colsample_bytree=0.5, subsample=0.6, gamma=0., nthread=-1,
max_delta_step=1, objective='multi:softmax')
xgb.fit(X_train, y_train, sample_weight=[1]*len(y_train))
print(xgb.score(X_train,y_train))
print(xgb.score(X_test, y_test))
ac = AdaBoostClassifier()
ada_param_grid = {'n_estimators': [10, 30, 100, 300, 1000],
'learning_rate': [0.1, 0.3, 1.0, 3.0]}
ac_grid = GridSearchCV(ac,ada_param_grid,cv=3,
scoring='accuracy')
ac_grid.fit(X_train, y_train)
ac = ac_grid.best_estimator_
#ac.fit(X_train, y_train)
#print(ac_grid.score(X_train,y_train))
#print(ac_grid.score(X_test, y_test))
rf = RandomForestClassifier()
param_dist = {"n_estimators": [20],
"max_depth": [7, None],
"max_features": range(4, 6),
"min_samples_split": range(2, 7),
"min_samples_leaf": range(1, 7),
"bootstrap": [True, False],
"criterion": ["gini", "entropy"]}
rand = GridSearchCV(rf,param_dist,cv=3,
scoring='accuracy')
rand.fit(X_train, y_train)
rf = rand.best_estimator_
print(rand.best_estimator_)
# rf.fit(X_train, y_train)
# print(rf.score(X_train,y_train))
# print(rf.score(X_test, y_test))
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(random_state=0)
dtc.fit(X_train, y_train)
print(dtc.score(X_train,y_train))
print(dtc.score(X_test, y_test))
rf2 = rand.best_estimator_
rf3 = rand.best_estimator_
rf4 = rand.best_estimator_
# voting_clf = VotingClassifier(
# estimators=[('ac', ac), ('rf', rf), ('dtc', dtc),('rf2', rf2), ('rf3', rf3), ('rf4', rf4), ('xgb', xgb)],
# voting='hard'
# )
voting_clf = VotingClassifier(
estimators=[('ac', ac), ('rf', rf), ('xgb', xgb)],
voting='hard'
)
from sklearn.metrics import accuracy_score
for clf in (ac, rf, xgb, voting_clf):
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(clf.__class__.__name__, accuracy_score(y_test, y_pred))
ac.fit(X_train, y_train)
voting_clf.fit(X_train, y_train)
y_pred = voting_clf.predict(X_test)
print(voting_clf.__class__.__name__, accuracy_score(y_test, y_pred))
pd.DataFrame([X_train.columns, rf.feature_importances_])
y_predict_test = voting_clf.predict(out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8','temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']])
y_predict_test = rf.predict(out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','school','name_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','from_lat','from_lng','des_lat','des_lng','real_trend','class__0','class__1','class__2','class__3','class__4','class__5','class__6','class__7','class__8','temperature', 'humidity', 'windspeed', 'weather_type', 'visibility']])
out_test_holidays_druktes["occupancy"] = y_predict_test
out_test_holidays_druktes.occupancy.value_counts()/out_test_holidays_druktes.shape[0]
train.occupancy.value_counts()/train.shape[0]
out_test_holidays_druktes[['seconds_since_midnight','drukte_from','drukte_to','name_enc','class_enc','day__0','day__1','day__2','day__3','day__4','day__5','day__6','trend','occupancy']][0:100]
out_test_holidays_druktes[["id","occupancy"]].to_csv('predictions.csv',index=False)
Explanation: Since we have a lot of 'Null' values (roughly a third) for our 'class' feature, and we don't want to throw that data away, we can try to predict these labels based on the other features; we get over 75% accuracy, so that seems sufficient. But we can't forget to do the same thing for the test set!
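A minimal sketch of that imputation step, reusing the X_train/X_test/y_train frames from the class_enc split near the top of the cell above; the RandomForestClassifier here is only an example choice, not necessarily the model behind the 75% figure:
python
from sklearn.ensemble import RandomForestClassifier

class_clf = RandomForestClassifier(n_estimators=100, random_state=42)
class_clf.fit(X_train, y_train)
test_class = test_class.copy()
test_class['class_pred'] = class_clf.predict(X_test)
training_holidays_enc = pd.concat([training_class, test_class])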
End of explanation
skf = StratifiedKFold(n_splits=5, random_state=1337)
X = training[['seconds_since_midnight', 'month']]
y = training['occupancy']
cms = []
accs = []
parameters = {#'penalty': ['l1', 'l2'], # No penalty tuning, cause 'l1' is only supported by liblinear
# It can be interesting to manually take a look at 'l1' with 'liblinear', since LASSO
# provides sparse solutions (boils down to the fact that LASSO does some feature selection for you)
'solver': ['newton-cg', 'lbfgs', 'liblinear', 'sag'],
'tol': [1e-4, 1e-6, 1e-8],
'C': [1e-2, 1e-1, 1.0, 1e1],
'max_iter': [1e2, 1e3]
}
for train_index, test_index in skf.split(X, y):
X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :]
y_train, y_test = y[train_index], y[test_index]
tuned_log_reg = GridSearchCV(LogisticRegression(penalty='l2'), parameters, cv=3,
scoring='accuracy')
tuned_log_reg.fit(X_train, y_train)
print(tuned_log_reg.best_params_)
predictions = tuned_log_reg.predict(X_test)
cm = confusion_matrix(y_test, predictions)
cms.append(cm)
accs.append(accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
print('Confusion matrix:\n', np.mean(cms, axis=0))
print('Avg accuracy', np.mean(accs), '+-', np.std(accs))
print('Predict all lows', float(len(y[y == 0]))/float(len(y)))
Explanation: 4. 'Advanced' predictive modeling: model selection & hyper-parameter tuning
Model evaluation and hyper-parameter tuning
In order to evaluate your model, K-fold cross-validation (https://en.wikipedia.org/wiki/Cross-validation_(statistics) ) is often applied. Here, the data is divided into K chunks: K-1 chunks are used for training while 1 chunk is used for testing. Different metrics exist, such as accuracy, AUC, F1 score, and more. For this lab, we will use accuracy.
Some machine learning techniques, supported by sklearn:
SVMs
Decision Trees
Decision Tree Ensemble: AdaBoost, Random Forest, Gradient Boosting
Multi-Level Perceptrons/Neural Networks
Naive Bayes
K-Nearest Neighbor
...
To tune the different hyper-parameters of a machine learning model, again different techniques exist:
* Grid search: exhaustively try all possible parameter combinations (Code to tune the different parameters of our LogReg model has been given)
* Random search: try a number of random combinations; it has been shown that this often performs comparably to grid search (a minimal sketch follows)
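A minimal random-search sketch with scikit-learn's RandomizedSearchCV; X_train and y_train stand for whichever training split you are tuning on, and the parameter ranges are just illustrative:
python
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

rf_search = RandomizedSearchCV(RandomForestClassifier(),
                               param_distributions={'n_estimators': randint(20, 200),
                                                    'max_depth': randint(3, 10)},
                               n_iter=20, cv=3, scoring='accuracy', random_state=1337)
rf_search.fit(X_train, y_train)
print(rf_search.best_params_, rf_search.best_score_)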
4.1: *Choose one or more machine learning techniques, different from Logistic Regression and apply them to our data, with tuned hyper-parameters! You will see that switching techniques in sklearn is really simple! Which model performs best on this data? *
End of explanation
holiday_pops = pd.read_json('data/holidays.json')
holidays = pd.read_json( (holiday_pops['holidays']).to_json(), orient='index')
holidays['date'] = pd.to_datetime(holidays['date'])
holidays.head(1)
training["date"] = training["querytime"].values.astype('datetime64[D]')
out_test["date"] = out_test["querytime"].values.astype('datetime64[D]')
training_holidays = pd.merge(training,holidays, how="left", on='date')
training_holidays.school = training_holidays.school.fillna(0)
training_holidays.name = training_holidays.name.fillna("geen")
training_holidays[0:1]
out_test_holidays = pd.merge(out_test,holidays, how="left", on='date')
out_test_holidays.school = out_test_holidays.school.fillna(0)
out_test_holidays.name = out_test_holidays.name.fillna("geen")
out_test_holidays[0:1]
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
#encode the names from the holidays (Summer,Christmas...)
training_holidays["name_enc"] = encoder.fit_transform(training_holidays["name"])
out_test_holidays["name_enc"] = encoder.fit_transform(out_test_holidays["name"])
#encode the classes (IC,TGV,L...)
training_holidays["class_enc"] = encoder.fit_transform(training_holidays["post.class"])
out_test_holidays["class_enc"] = encoder.fit_transform(out_test_holidays["post.class"])
training_holidays=training_holidays.rename(columns = {'too':'destination'})
out_test_holidays=out_test_holidays.rename(columns = {'too':'destination'})
stations_df = pd.merge(stations_df,druktes_df.drop(['Unnamed: 0','station'],1), left_on = 'name', right_on = 'station_link')
Explanation: 5. Data augmentation with external data sources
There is an almost unlimited number of factors that influence the occupancy of a train! Definitely more than the limited amount of data given in the feedback logs. Therefore, we will try to create new features for our dataset using external data sources. Examples of data sources include:
Weather APIs
A holiday calendar
Event calendars
Connection and delay information of the SpitsGidsAPI
Data from the NMBS/SNCB
Twitter and other social media
many, many more
In order to save time, a few 'prepared' files have already been given to you. Of course, you are free to scrape/generate your own data as well:
Hourly weather data for all stations in Belgium, from August till April weather_data.zip
A file which contains the vehicle identifiers and the stations where this vehicle stops line_info.csv
Based on this line_info, you can construct a graph of the rail net in Belgium and apply some fancy graph features (pagerank, edge betweenness, ...), see iGraph experiments.ipynb (a rough sketch follows this list)
A file containing the coordinates of a station, and the number of visitors during week/weekend for 2015 station_druktes.csv
A file with some of the holidays (this can definitely be extended) holidays.json
For event data, there is the Eventful API
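A rough sketch of that graph idea using networkx instead of iGraph; the vehicle_id and station column names are assumptions about the layout of line_info.csv, so adapt them to the actual file:
python
import networkx as nx

line_info = pd.read_csv('data/line_info.csv')  # assumed columns: vehicle_id, station (stops in order)
G = nx.Graph()
for _, stops in line_info.groupby('vehicle_id')['station']:
    stops = list(stops)
    G.add_edges_from(zip(stops[:-1], stops[1:]))  # consecutive stops on a line become edges
pagerank_per_station = nx.pagerank(G)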
5.1: Pick one (or more) external data source(s) and link your current data frame to that data source (requires some creativity in most cases). Extract features from your new, linked data source and re-train your model. How much gain did you achieve?
If we look at "training.id.value_counts()" we see that these are mostly student destinations; maybe that is because it is mainly students who use this app? So we have to think about when they take the train, and what can influence that. Maybe incorporate the number of students per station?
End of explanation
def transform_druktes(row):
start = row['from']
destination = row['destination']
day = row['weekday']
row['from_lat']=stations_df[stations_df["from"] == start]["latitude"].values[0]
    row['from_lng']=stations_df[stations_df["from"] == start]["longitude"].values[0]
    row['des_lat']=stations_df[stations_df["destination"] == destination]["latitude"].values[0]
row['des_lng']=stations_df[stations_df["destination"] == destination]["longitude"].values[0]
row['zoekterm']=stations_df[stations_df["destination"] == destination]["zoekterm"].values[0]
if day == 5:
row['drukte_from']=stations_df[stations_df["from"] == start]["zaterdag"].values[0]
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zaterdag"].values[0]
elif day == 6:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]
elif day == 4:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*1.11
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.0*1.11
elif day == 3:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*1.21
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.0*1.21
elif day == 2:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*0.736
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.0*0.736
elif day == 1:
row['drukte_from']=stations_df[stations_df["from"] == start]["zondag"].values[0]/5.0*0.92
row['drukte_to']=stations_df[stations_df["destination"] == destination]["zondag"].values[0]/5.*0.92
else:
row['drukte_from']=stations_df[stations_df["from"] == start]["week"].values[0]/5.0*1.016
row['drukte_to']=stations_df[stations_df["destination"] == destination]["week"].values[0]/5.0*1.016
return row
training_holidays_druktes = training_holidays_druktes.apply(transform_druktes, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(transform_druktes, axis=1)
training_holidays_druktes = pd.concat([training_holidays_druktes,
pd.get_dummies(training_holidays_druktes['weekday'], prefix="day_"),
],1)
out_test_holidays_druktes = pd.concat([out_test_holidays_druktes,
pd.get_dummies(out_test_holidays_druktes['weekday'], prefix="day_"),
],1)
trends_df = pd.DataFrame()
real_trend_df = pd.DataFrame()
from pytrends.request import TrendReq
import pandas as pd
# enter your own credentials
google_username = "[email protected]"
google_password = "*******"
#path = ""
# Login to Google. Only need to run this once, the rest of requests will use the same session.
pytrend = TrendReq(google_username, google_password, custom_useragent='My Pytrends Script')
for i in range(0,645):
if i % 4 != 0:
continue
try:
pytrend.build_payload(kw_list=[stations_df[stations_df.destination == i].zoekterm.values[0], stations_df[stations_df.destination == i+1].zoekterm.values[0], stations_df[stations_df.destination == i+2].zoekterm.values[0], stations_df[stations_df.destination == i+3].zoekterm.values[0], "Brussel trein"],geo="BE",timeframe='2016-07-27 2017-04-05')
real_trend_df = pd.concat([real_trend_df,pytrend.interest_over_time()], axis=1)
except:
continue
no_dup_trends = trends_df.T.groupby(level=0).first().T
training_holidays_druktes = pd.merge(training_holidays_druktes,stations_df[["destination","zoekterm"]], left_on = 'destination', right_on = 'destination')
out_test_holidays_druktes = pd.merge(out_test_holidays_druktes,stations_df[["destination","zoekterm"]], left_on = 'destination', right_on = 'destination')
int(real_trend_df.loc["2016-07-28"]["Brussel trein"])
training_holidays_druktes_copy = training_holidays_druktes
out_test_holidays_druktes_copy = out_test_holidays_druktes
training_holidays_druktes = training_holidays_druktes_copy
out_test_holidays_druktes = out_test_holidays_druktes_copy
def get_trends(row):
zoek = str(row.zoekterm)
datum = str(row["date"])[0:10]
try:
row["real_trend"] = int(real_trend_df.loc[datum][zoek])
except:
row["real_trend"] = 0
return row
training_holidays_druktes = training_holidays_druktes.apply(get_trends, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(get_trends, axis=1)
training_holidays_druktes = training_holidays_druktes.drop(['post.date','post.from','post.vehicle','querytype','user_agent','post.to','name','post.class'],1)
out_test_holidays_druktes = out_test_holidays_druktes.drop(['post.date','post.from','post.vehicle','querytype','user_agent','post.to','name','post.class'],1)
training_holidays_druktes["hour"] = training_holidays_druktes["querytime"].values.astype('datetime64[h]').astype('str')
out_test_holidays_druktes["hour"] = out_test_holidays_druktes["querytime"].values.astype('datetime64[h]').astype('str')
training_holidays_druktes["hour_lag"] = (training_holidays_druktes["querytime"].values.astype('datetime64[h]')-2).astype('str')
out_test_holidays_druktes["hour_lag"] = (out_test_holidays_druktes["querytime"].values.astype('datetime64[h]')-2).astype('str')
training_holidays_druktes["timeframe"] = training_holidays_druktes["hour_lag"]+" "+training_holidays_druktes["hour"]
out_test_holidays_druktes["timeframe"] = out_test_holidays_druktes["hour_lag"]+" "+out_test_holidays_druktes["hour"]
# enter your own credentials
google_username = "[email protected]"
google_password = "*******"
#path = ""
# Login to Google. Only need to run this once, the rest of requests will use the same session.
pytrend = TrendReq(google_username, google_password, custom_useragent='My Pytrends Script')
def get_hour_trends(row):
zoek = str(row.zoekterm_x)
tijd = str(row["timeframe"])
try:
pytrend.build_payload(kw_list=[zoek],timeframe=tijd)
row["hour_trend"] = int(pytrend.interest_over_time()[zoek].sum())
except:
row["hour_trend"] = 0
return row
training_holidays_druktes = training_holidays_druktes.apply(get_hour_trends, axis=1)
out_test_holidays_druktes = out_test_holidays_druktes.apply(get_hour_trends, axis=1)
training_holidays_druktes = pd.concat([training_holidays_druktes,
pd.get_dummies(training_holidays_druktes['class_enc'], prefix="class_"),
],1)
out_test_holidays_druktes = pd.concat([out_test_holidays_druktes,
pd.get_dummies(out_test_holidays_druktes['class_enc'], prefix="class_"),
],1)
# file names for all csv files containing weather information per month
weather_csv = ['weather_data_apr_1', 'weather_data_apr_2', 'weather_data_aug_1', 'weather_data_aug_2', 'weather_data_dec_1', 'weather_data_dec_2', 'weather_data_feb_1', 'weather_data_feb_2', 'weather_data_jan_1', 'weather_data_jan_2', 'weather_data_july_1', 'weather_data_july_2', 'weather_data_mar_1', 'weather_data_mar_2', 'weather_data_nov_1', 'weather_data_nov_2', 'weather_data_oct_1', 'weather_data_oct_2', 'weather_data_sep_1', 'weather_data_sep_2']
for i in range(len(weather_csv)):
weather_csv[i] = 'data/weather_data/' + weather_csv[i] + '.csv'
# create column of station index
stations_df['station_index'] = stations_df.index
# put all weather data in an array
weather_months = []
for csv in weather_csv:
weather_month = pd.read_csv(csv)
# convert date_time to a datetime object
weather_month['date_time'] = pd.to_datetime(weather_month['date_time'])
weather_month = weather_month.drop(['Unnamed: 0','lat','lng'], 1)
weather_months.append(weather_month)
# concatenate all weather data
weather = pd.concat(weather_months)
# merge weather month with station to convert station name to index (that can be found in holiday_druktes)
weather = pd.merge(weather, stations_df[["name", "station_index"]], left_on = 'station_name', right_on = 'name')
weather = weather.drop(['station_name', 'name'], 1)
# truncate querytime to the hour in new column
training_holidays_druktes['querytime_hour'] = training_holidays_druktes['querytime'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour))
# join training with weather data
training_holidays_druktes_weather = pd.merge(training_holidays_druktes, weather, how='left', left_on = ['destination', 'querytime_hour'], right_on = ['station_index', 'date_time'])
training_holidays_druktes_weather = training_holidays_druktes_weather.drop(['querytime_hour', 'date_time', 'station_index'], 1)
training_holidays_druktes_weather = training_holidays_druktes_weather.drop_duplicates()
# fill null rows of weather data with their mean
training_holidays_druktes_weather['temperature'].fillna(training_holidays_druktes_weather['temperature'].mean(), inplace=True)
training_holidays_druktes_weather['humidity'].fillna(training_holidays_druktes_weather['humidity'].mean(), inplace=True)
training_holidays_druktes_weather['windspeed'].fillna(training_holidays_druktes_weather['windspeed'].mean(), inplace=True)
training_holidays_druktes_weather['visibility'].fillna(training_holidays_druktes_weather['visibility'].mean(), inplace=True)
training_holidays_druktes_weather['weather_type'].fillna(training_holidays_druktes_weather['weather_type'].mean(), inplace=True)
#cast weather type to int
training_holidays_druktes_weather['weather_type'] = training_holidays_druktes_weather['weather_type'].astype(int)
# Add weather data to test data
# truncate querytime to the hour in new column
out_test_holidays_druktes['querytime_hour'] = out_test_holidays_druktes['querytime'].apply(lambda dt: datetime.datetime(dt.year, dt.month, dt.day, dt.hour))
# join test with weather data
out_test_holidays_druktes_weather = pd.merge(out_test_holidays_druktes, weather, how='left', left_on = ['destination', 'querytime_hour'], right_on = ['station_index', 'date_time'])
out_test_holidays_druktes_weather = out_test_holidays_druktes_weather.drop(['querytime_hour', 'date_time', 'station_index'], 1)
out_test_holidays_druktes_weather = out_test_holidays_druktes_weather.drop_duplicates()
# fill null rows of weather data with their mean
out_test_holidays_druktes_weather['temperature'].fillna(out_test_holidays_druktes_weather['temperature'].mean(), inplace=True)
out_test_holidays_druktes_weather['humidity'].fillna(out_test_holidays_druktes_weather['humidity'].mean(), inplace=True)
out_test_holidays_druktes_weather['windspeed'].fillna(out_test_holidays_druktes_weather['windspeed'].mean(), inplace=True)
out_test_holidays_druktes_weather['visibility'].fillna(out_test_holidays_druktes_weather['visibility'].mean(), inplace=True)
out_test_holidays_druktes_weather['weather_type'].fillna(out_test_holidays_druktes_weather['weather_type'].mean(), inplace=True)
#cast weather type to int
out_test_holidays_druktes_weather['weather_type'] = out_test_holidays_druktes_weather['weather_type'].astype(int)
# set out_test_holidays_druktes and training_holidays_druktes equal to weather counterpart such that we don't need to change all variable names above
out_test_holidays_druktes = out_test_holidays_druktes_weather
training_holidays_druktes = training_holidays_druktes_weather
Explanation: Transform all null classes into one null class, and maybe try to predict the class based on the origin and destination stations and the query time.
End of explanation
pickle.dump(training_holidays_druktes,open("temp_data/training_holidays_druktes.pkl","wb"))
pickle.dump(out_test_holidays_druktes,open("temp_data/out_test_holidays_druktes.pkl","wb"))
training_holidays_druktes = pd.read_pickle("temp_data/training_holidays_druktes.pkl")
out_test_holidays_druktes = pd.read_pickle("temp_data/out_test_holidays_druktes.pkl")
training_holidays_druktes[0:5]
Explanation: 6. Generating a Kaggle submission and comparing your methodology to others
6.1: Train your best performing model on train.nldjson and generate predictions for test.nldjson. Create a file called submission.csv with format listed below and submit it on the Kaggle competition! What's your position on the leaderboard?
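The exact submission format is not reproduced here, so as a hedged sketch, assuming the same id/occupancy pair that was written to predictions.csv earlier:
python
submission = out_test_holidays_druktes[['id', 'occupancy']]
submission.to_csv('submission.csv', index=False)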
End of explanation |
2,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Epochs data
Step1: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
matplotlib figure instance.
Events used for constructing the epochs here are the triggers for the subject
being presented with a smiley face at the center of the visual field. More on
the paradigm at BABDHIFJ.
All plotting functions start with plot. Let's start with the most
obvious.
Step2: The numbers at the top refer to the event id of the epoch. The number at the
bottom is the running numbering for the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 1 seems to
be contaminated by a blink (scroll to the bottom to view the EOG channel).
This epoch can be marked for rejection by clicking on top of the browser
window. The epoch should turn red when you click it. This means that it will
be dropped as the browser window is closed.
It is possible to plot event markers on epoched data by passing events
keyword to the epochs plotter. The events are plotted as vertical lines and
they follow the same coloring scheme as
Step3: To plot individual channels as an image, where you see all the epochs at one
glance, you can use function
Step4: You also have functions for plotting channelwise information arranged into a
shape of the channel array. The image plotting uses automatic scaling by
default, but noisy channels and different channel types can cause the scaling
to be a bit off. Here we define the limits by hand. | Python Code:
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'))
raw.set_eeg_reference() # set EEG average reference
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=1.)
Explanation: Visualize Epochs data
End of explanation
epochs.plot(block=True)
Explanation: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
matplotlib figure instance.
Events used for constructing the epochs here are the triggers for the subject
being presented with a smiley face at the center of the visual field. More on
the paradigm at BABDHIFJ.
All plotting functions start with plot. Let's start with the most
obvious. :func:mne.Epochs.plot offers an interactive browser that allows
rejection by hand when called in combination with a keyword block=True.
This blocks the execution of the script until the browser window is closed.
End of explanation
events = mne.pick_events(events, include=[5, 32])
mne.viz.plot_events(events)
epochs['smiley'].plot(events=events)
Explanation: The numbers at the top refer to the event id of the epoch. The number at the
bottom is the running numbering for the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 1 seems to
be contaminated by a blink (scroll to the bottom to view the EOG channel).
This epoch can be marked for rejection by clicking on top of the browser
window. The epoch should turn red when you click it. This means that it will
be dropped as the browser window is closed.
It is possible to plot event markers on epoched data by passing events
keyword to the epochs plotter. The events are plotted as vertical lines and
they follow the same coloring scheme as :func:mne.viz.plot_events. The
events plotter gives you all the events with a rough idea of the timing.
Since the colors are the same, the event plotter can also function as a
legend for the epochs plotter events. It is also possible to pass your own
colors via event_colors keyword. Here we can plot the reaction times
between seeing the smiley face and the button press (event 32).
When events are passed, the epoch numbering at the bottom is switched off by
default to avoid overlaps. You can turn it back on via settings dialog by
pressing o key. You should check out help at the lower left corner of the
window for more information about the interactive features.
End of explanation
epochs.plot_image(278, cmap='interactive')
Explanation: To plot individual channels as an image, where you see all the epochs at one
glance, you can use function :func:mne.Epochs.plot_image. It shows the
amplitude of the signal over all the epochs plus an average (evoked response)
of the activation. We explicitly set interactive colorbar on (it is also on
by default for plotting functions with a colorbar except the topo plots). In
interactive mode you can scale and change the colormap with mouse scroll and
up/down arrow keys. You can also drag the colorbar with left/right mouse
button. Hitting space bar resets the scale.
End of explanation
epochs.plot_topo_image(vmin=-200, vmax=200, title='ERF images')
Explanation: You also have functions for plotting channelwise information arranged into a
shape of the channel array. The image plotting uses automatic scaling by
default, but noisy channels and different channel types can cause the scaling
to be a bit off. Here we define the limits by hand.
End of explanation |
2,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Easy "Hard" Way
Step1: 1. A Quick Introduction to Cython
Cython is a compiler and a programming language used to generate C extension modules for Python.
The Cython language is a Python/C creole which is essentially Python with some additional keywords for specifying static data types. It looks something like this
Step2: Let's see how long it takes to compute the 100th Fibonacci number.
Step3: Now let's implement the same thing with Cython. Since Cython is essentially "Python with types," it is often fairly easy to make the move and see improvements in speed. It does come at the cost, however, of a separate compilation step.
There are several ways to go about the compilation process, and in many cases, Cython's tooling makes it fairly simple. For example, Jupyter notebooks can make use of a %%cython magic command that will do all of the compilation in the background for us. To make use of it, we need to load the cython extension.
Step4: Now we can write a Cython function.
Note
Step5: To see a bit more about writing Cython and its potential performance benefits, see this Cython examples notebook.
Even better, check out Kurt Smith's Cython tutorial which is happening at the same time as this tutorial.
2. Generating C Code with SymPy's codegen()
Our main goal in using Cython is to wrap SymPy-generated C code into a Python extension module so that we can call the fast compiled numerical routines from Python.
SymPy's codegen function takes code printing a step further
Step6: Now we'll use codegen (under sympy.utilities.codegen) to output C source and header files which can compute the right hand side (RHS) of the ODEs numerically, given the current values of our state variables. Here we'll import it and show the documentation
Step7: We just have one expression we're interested in computing, and that is the matrix expression representing the derivatives of our state variables with respect to time
Step8: Note that we've just expanded the outputs into individual variables so we can access the generated code easily. codegen gives us back the .c filename and its source code in a tuple, and the .h filename and its source in another tuple. Let's print the source code.
Step9: There are several things here worth noting
Step10: Now we need to replace the use of y0, y1, etc. in our rhs_of_odes matrix with the elements of our new state vector (e.g. y[0], y[1], etc.). We saw how to do this already in the previous notebook. Start by forming a mapping from y0 -> y[0, 0], y1 -> y[1, 0], etc.
Step11: Now replace the symbols in rhs_of_odes according to the mapping. We'll call it rhs_of_odes_ind and use that from now on.
Step12: Exercise
Step13: So by re-writing our expression in terms of a MatrixSymbol rather than individual symbols, the function signature of the generated code is cleaned up greatly.
However, we still have the issue of the auto-generated output variable name. To fix this, we can form a matrix equation rather than an expression. The name given to the symbol on the left hand side of the equation will then be used for our output variable name.
We'll start by defining a new MatrixSymbol that will represent the left hand side of our equation -- the derivatives of each state variable.
Step14: Exercise
Step15: Now we see that the c_odes function signature is nice and clean. We pass it a pointer to an array representing the current values of all of our state variables and a pointer to an array that we want to fill with the derivatives of each of those state variables.
If you're not familiar with C and pointers, you just need to know that it is idiomatic in C to preallocate a block of memory representing an array, then pass the location of that memory (and usually the number of elements it can hold), rather than passing the array itself to/from a function. For our purposes, this is as complicated as pointers will get.
Just so we can compile this code and use it, we'll re-use the codegen call above with to_files=True so the .c and .h files are actually written to the filesystem, rather than having their contents returned in a string.
Step16: 3. Wrapping the Generated Code with Cython
Now we want to wrap the function that was generated c_odes with a Cython function so we can generate an extension module and call that function from Python. Wrapping a set of C functions involves writing a Cython script that specifies the Python interface to the C functions. This script must do two things
Step17: Now we can write our wrapper code.
To write the wrapper, we first write the function signature as specified by the C library. Then, we create a wrapper function that makes use of the C implementation and returns the result. This wrapper function becomes the interface to the compiled code, and it does not need to be identical to the C function signature. In fact, we'll make our wrapper function compliant with the odeint interface (i.e. takes a 1-dimensional array of state variable values and the current time).
Step18: Exercise
Step19: Now we can use odeint to integrate the equations and plot the results to check that it worked. First we need to import odeint.
Step20: A couple convenience functions are provided in the scipy2017codegen package which give some reasonable initial conditions for the system and plot the state trajectories, respectively. Start by grabbing some initial values for our state variables and time values.
Step21: Finally we can integrate the equations using our Cython-wrapped C function and plot the results.
Step22: 4. Generating and Compiling a C Extension Module Automatically
As yet another layer of abstraction on top of codegen, SymPy provides an autowrap function that can automatically generate a Cython wrapper for the generated C code. This greatly simplifies the process of going from a SymPy expression to numerical computation, but as we'll see, we lose a bit of flexibility compared to manually creating the Cython wrapper.
Let's start by importing the autowrap function and checking out its documentation.
Step23: So autowrap takes in a SymPy expression and gives us back a binary callable which evaluates the expression numerically. Let's use the Equality formed earlier to generate a function we can call to evaluate the right hand side of our system of ODEs.
Step24: Exercise
Step25: One advantage to wrapping the generated C code manually is that we get fine control over how the function is used from Python. That is, in our hand-written Cython wrapper we were able to specify that from the Python side, the input to our wrapper function and its return value are both 1-dimensional ndarray objects. We were also able to add in the extra argument t for the current time, making the wrapper function fully compatible with odeint.
However, autowrap just sees that we have a matrix equation where each side is a 2-dimensional array with shape (14, 1). The function returned then expects the input array to be 2-dimensional and it returns a 2-dimensional array.
This won't work with odeint, so we can write a simple wrapper that massages the input and output and adds an extra argument for t.
Step26: Now a 1-dimensional input works.
Step27: Exercise
Step28: Finally, we can use our two wrapped functions in odeint and compare to our manually-written Cython wrapper result.
Step29: 5. Using a Custom Printer and an External Library with autowrap
As of SymPy 1.1, autowrap accepts a custom CodeGen object, which is responsible for generating the code. The CodeGen object in turn accepts a custom CodePrinter object, meaning we can use these two points of flexibility to make use of customized code printing in an autowrapped function. The following example is somewhat contrived, but the concept in general is powerful.
In our set of ODEs, there are quite a few instances of $y_i^2$, where $y_i$ is one of the 14 state variables. As an example, here's the equation for $\frac{dy_3(t)}{dt}$
Step30: There is a library called fastapprox that provides computational routines things like powers, exponentials, logarithms, and a few others. These routines provide limited precision with respect to something like math.h's equivalent functions, but they offer potentially faster computation.
The fastapprox library provides a function called fastpow, with the signature fastpow(float x, float p). It follows the interface of pow from math.h. In the previous notebook, we saw how to turn instances of $x^3$ into x*x*x, which is potentially quicker than pow(x, 3). Here, let's just use fastpow instead.
Exercise
Step31: Now we can create a C99CodeGen object that will make use of this printer. This object will be passed in to autowrap with the code_gen keyword argument, and autowrap will use it in the code generation process.
Step32: However, for our generated code to use the fastpow function, it needs to have a #include "fastpow.h" preprocessor statement at the top. The code gen object supports this by allowing us to append preprocessor statements to its preprocessor_statements attribute.
Step33: One final issue remains, and that is telling autowrap where to find the fastapprox library headers. These header files have just been downloaded from GitHub and placed in the scipy2017codegen package, so it should be installed with the conda environment. We can find it by looking for the package directory.
Step34: Finally we're ready to call autowrap. We'll just use ode_eq, the Equality we created before, pass in the custom CodeGen object, and tell autowrap where the fastapprox headers are located.
Step35: If we navigate to the tmp directory created, we can view the wrapped_code_#.c to see our custom printing in action.
As before, we need a wrapper function for use with odeint, but aside from that, everything should be in place.
Step36: Exercise | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import sympy as sym
sym.init_printing()
Explanation: The Easy "Hard" Way: Cythonizing
In this notebook, we'll build on the previous work where we used SymPy's code printers to generate code for evaluating expressions numerically. As a layer of abstraction on top of C code printers, which generate snippets of code we can copy into a C program, we can generate a fully compilable C library. On top of this, we will see how to use Cython to compile such a library into a Python extension module so its computational routines can be called directly from Python.
Learning Objectives
After this lesson, you will be able to:
write a simple Cython function and run it in a Jupyter notebook using the %%cython magic command
use the SymPy codegen function to output compilable C code
wrap codegen-generated code with Cython, compile it into an extension module, and call it from Python
use SymPy's autowrap function to do all of this behind the scenes
pass a custom code printer to autowrap to make use of an external C library
End of explanation
def python_fib(n):
a = 0.0
b = 1.0
for i in range(n):
tmp = a
a = a + b
b = tmp
return a
[python_fib(i) for i in range(10)]
Explanation: 1. A Quick Introduction to Cython
Cython is a compiler and a programming language used to generate C extension modules for Python.
The Cython language is a Python/C creole which is essentially Python with some additional keywords for specifying static data types. It looks something like this:
cython
def cython_sum(int n):
cdef float s = 0.0
cdef int i
for i in range(n):
s += i
return s
The Cython compiler transforms this code into a "flavor" of C specific to Python extension modules. This C code is then compiled into a binary file that can be imported and used just like a regular Python module -- the difference being that the functions you use from that module can potentially be much faster and more efficient than an equivalent pure Python implementation.
Aside from writing Cython code for computations, Cython is commonly used for writing wrappers around existing C code so that the functions therein can be made available in an extension module as described above. We will use this technique to make the SymPy-generated C code accessible to Python for use in SciPy's odeint.
Example
As a quick demonstration of what Cython can offer, we'll walk through a simple example of generating numbers in the Fibonacci sequence. If you're not familiar with it already, the sequence is initialized with $F_0 = 0$ and $F_1 = 1$, then the remaining terms are defined recursively by:
$$
F_i = F_{i-1} + F_{i-2}
$$
Our objective is to write a function that computes the $n$-th Fibonacci number. Let's start by writing a simple iterative solution in pure Python.
End of explanation
%timeit python_fib(100)
Explanation: Let's see how long it takes to compute the 100th Fibonacci number.
End of explanation
%load_ext cython
Explanation: Now let's implement the same thing with Cython. Since Cython is essentially "Python with types," it is often fairly easy to make the move and see improvements in speed. It does come at the cost, however, of a separate compilation step.
There are several ways to go about the compilation process, and in many cases, Cython's tooling makes it fairly simple. For example, Jupyter notebooks can make use of a %%cython magic command that will do all of the compilation in the background for us. To make use of it, we need to load the cython extension.
End of explanation
%%cython
def cython_fib(int n):
cdef double a = 0.0
cdef double b = 1.0
cdef double tmp
for i in range(n):
tmp = a
a = a + b
b = tmp
return a
%timeit cython_fib(100)
Explanation: Now we can write a Cython function.
Note: the --annotate (or -a) flag of the %%cython magic command will produce an interactive annotated printout of the Cython code, allowing us to see the C code that is generated.
End of explanation
from scipy2017codegen.chem import load_large_ode
rhs_of_odes, states = load_large_ode()
rhs_of_odes[0]
Explanation: To see a bit more about writing Cython and its potential performance benefits, see this Cython examples notebook.
Even better, check out Kurt Smith's Cython tutorial which is happening at the same time as this tutorial.
2. Generating C Code with SymPy's codegen()
Our main goal in using Cython is to wrap SymPy-generated C code into a Python extension module so that we can call the fast compiled numerical routines from Python.
SymPy's codegen function takes code printing a step further: it wraps a snippet of code that numerically evaluates an expression with a function, and puts that function into the context of a file that is fully ready-to-compile code.
Here we'll revisit the water radiolysis system, with the aim of numerically computing the right hand side of the system of ODEs and integrating using SciPy's odeint.
Recall that this system looks like:
$$
\begin{align}
\frac{dy_0(t)}{dt} &= f_0\left(y_0,\,y_1,\,\dots,\,y_{13},\,t\right) \
&\vdots \
\frac{dy_{13}(t)}{dt} &= f_{13}\left(y_0,\,y_1,\,\dots,\,y_{13},\,t\right)
\end{align}
$$
where we are representing our state variables $y_0,\,y_1,\dots,y_{13}$ as a vector $\mathbf{y}(t)$ that we called states in our code, and the collection of functions on the right hand side $\mathbf{f}(\mathbf{y}(t))$ we called rhs_of_odes.
Start by importing the system of ODEs and the matrix of state variables.
End of explanation
from sympy.utilities.codegen import codegen
#codegen?
Explanation: Now we'll use codegen (under sympy.utilities.codegen) to output C source and header files which can compute the right hand side (RHS) of the ODEs numerically, given the current values of our state variables. Here we'll import it and show the documentation:
End of explanation
[(cf, cs), (hf, hs)] = codegen(('c_odes', rhs_of_odes), language='c')
Explanation: We just have one expression we're interested in computing, and that is the matrix expression representing the derivatives of our state variables with respect to time: rhs_of_odes. What we want codegen to do is create a C function that takes in the current values of the state variables and gives us back each of the derivatives.
End of explanation
print(cs)
Explanation: Note that we've just expanded the outputs into individual variables so we can access the generated code easily. codegen gives us back the .c filename and its source code in a tuple, and the .h filename and its source in another tuple. Let's print the source code.
End of explanation
y = sym.MatrixSymbol('y', *states.shape)
Explanation: There are several things here worth noting:
the state variables are passed in individually
the state variables in the function signature are out of order
the output array is passed in as a pointer like in our Fibonacci sequence example, but it has an auto-generated name
Let's address the first issue first. Similarly to what we did in the C printing exercises, let's use a MatrixSymbol to represent our state vector instead of a matrix of individual state variable symbols (i.e. y[0] instead of y0). First, create the MatrixSymbol object that is the same shape as our states matrix.
End of explanation
state_array_map = dict(zip(states, y))
state_array_map
Explanation: Now we need to replace the use of y0, y1, etc. in our rhs_of_odes matrix with the elements of our new state vector (e.g. y[0], y[1], etc.). We saw how to do this already in the previous notebook. Start by forming a mapping from y0 -> y[0, 0], y1 -> y[1, 0], etc.
End of explanation
rhs_of_odes_ind = rhs_of_odes.xreplace(state_array_map)
rhs_of_odes_ind[0]
Explanation: Now replace the symbols in rhs_of_odes according to the mapping. We'll call it rhs_of_odes_ind and use that from now on.
End of explanation
[(cf, cs), (hf, hs)] = codegen(('c_odes', rhs_of_odes_ind), language='c')
print(cs)
Explanation: Exercise: use codegen again, but this time with rhs_of_odes_ind which makes use of a state vector rather than a container of symbols. Check out the resulting code. What is different about the function signature?
python
[(cf, cs), (hf, hs)] = codegen(???)
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation
dY = sym.MatrixSymbol('dY', *y.shape)
Explanation: So by re-writing our expression in terms of a MatrixSymbol rather than individual symbols, the function signature of the generated code is cleaned up greatly.
However, we still have the issue of the auto-generated output variable name. To fix this, we can form a matrix equation rather than an expression. The name given to the symbol on the left hand side of the equation will then be used for our output variable name.
We'll start by defining a new MatrixSymbol that will represent the left hand side of our equation -- the derivatives of each state variable.
End of explanation
ode_eq = sym.Eq(dY, rhs_of_odes_ind)
[(cf, cs), (hf, hs)] = codegen(('c_odes', ode_eq), language='c')
print(hs)
Explanation: Exercise: form an equation using sym.Eq to equate the two sides of our system of differential equations, then use this as the expression in codegen. Print out just the header source to see the function signature. What is the output argument called now?
python
ode_eq = sym.Eq(???)
[(cf, cs), (hf, hs)] = codegen(???)
print(???)
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation
codegen(('c_odes', ode_eq), language='c', to_files=True)
Explanation: Now we see that the c_odes function signature is nice and clean. We pass it a pointer to an array representing the current values of all of our state variables and a pointer to an array that we want to fill with the derivatives of each of those state variables.
If you're not familiar with C and pointers, you just need to know that it is idiomatic in C to preallocate a block of memory representing an array, then pass the location of that memory (and usually the number of elements it can hold), rather than passing the array itself to/from a function. For our purposes, this is as complicated as pointers will get.
Just so we can compile this code and use it, we'll re-use the codegen call above with to_files=True so the .c and .h files are actually written to the filesystem, rather than having their contents returned in a string.
End of explanation
%%writefile cy_odes.pyxbld
import numpy
# modname: module name specified by the `%%cython_pyximport` magic
# pyxfilename: just `modname + ".pyx"`
def make_ext(modname, pyxfilename):
from setuptools.extension import Extension
return Extension(modname,
sources=[pyxfilename, 'c_odes.c'],
include_dirs=['.', numpy.get_include()])
Explanation: 3. Wrapping the Generated Code with Cython
Now we want to wrap the generated function c_odes with a Cython function so we can generate an extension module and call that function from Python. Wrapping a set of C functions involves writing a Cython script that specifies the Python interface to the C functions. This script must do two things:
specify the function signatures as found in the C source
implement the Python interface to the C functions by wrapping them
The build system of Cython is able to take the Cython wrapper source code as well as the C library source code and compile/link things together into a Python extension module. We will write our wrapper code in a cell making use of the magic command %%cython_pyximport, which does a few things for us:
writes the contents of the cell to a Cython source file (modname.pyx)
looks for a modname.pyxbld file for instructions on how to build things
builds everything into an extension module
imports the extension module, making the functions declared there available in the notebook
So, it works similarly to the %%cython magic command we saw at the very beginning, but things are a bit more complicated now because we have this external library c_odes that needs to be compiled as well.
Note: The pyxbld file contains code similar to what would be found in the setup.py file of a package making use of Cython code for wrapping C libraries.
In either case, all that's needed is to tell setuptools/Cython:
the name of the extension module we want to make
the location of the Cython and C source files to be built
the location of headers needed during compilation -- both our C library's headers as well as NumPy's headers
We will call our extension module cy_odes, so here we'll generate a cy_odes.pyxbld file to specify how to build the module.
End of explanation
%%cython_pyximport cy_odes
import numpy as np
cimport numpy as cnp # cimport gives us access to NumPy's C API
# here we just replicate the function signature from the header
cdef extern from "c_odes.h":
void c_odes(double *y, double *dY)
# here is the "wrapper" signature that conforms to the odeint interface
def cy_odes(cnp.ndarray[cnp.double_t, ndim=1] y, double t):
# preallocate our output array
cdef cnp.ndarray[cnp.double_t, ndim=1] dY = np.empty(y.size, dtype=np.double)
# now call the C function
c_odes(<double *> y.data, <double *> dY.data)
# return the result
return dY
Explanation: Now we can write our wrapper code.
To write the wrapper, we first write the function signature as specified by the C library. Then, we create a wrapper function that makes use of the C implementation and returns the result. This wrapper function becomes the interface to the compiled code, and it does not need to be identical to the C function signature. In fact, we'll make our wrapper function compliant with the odeint interface (i.e. takes a 1-dimensional array of state variable values and the current time).
End of explanation
random_vals = np.random.randn(14)
cy_odes(random_vals, 0) # note: any time value will do
Explanation: Exercise: use np.random.randn to generate random state variable values and evaluate the right-hand-side of our ODEs with those values.
python
random_vals = np.random.randn(???)
???
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation
from scipy.integrate import odeint
Explanation: Now we can use odeint to integrate the equations and plot the results to check that it worked. First we need to import odeint.
End of explanation
from scipy2017codegen.chem import watrad_init, watrad_plot
y_init, t_vals = watrad_init()
Explanation: A couple convenience functions are provided in the scipy2017codegen package which give some reasonable initial conditions for the system and plot the state trajectories, respectively. Start by grabbing some initial values for our state variables and time values.
End of explanation
y_vals = odeint(cy_odes, y_init, t_vals)
watrad_plot(t_vals, y_vals)
Explanation: Finally we can integrate the equations using our Cython-wrapped C function and plot the results.
End of explanation
from sympy.utilities.autowrap import autowrap
#autowrap?
Explanation: 4. Generating and Compiling a C Extension Module Automatically
As yet another layer of abstraction on top of codegen, SymPy provides an autowrap function that can automatically generate a Cython wrapper for the generated C code. This greatly simplifies the process of going from a SymPy expression to numerical computation, but as we'll see, we lose a bit of flexibility compared to manually creating the Cython wrapper.
Let's start by importing the autowrap function and checking out its documentation.
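Before applying it to our system of ODEs, here is a toy illustration of the basic interface (a throwaway sketch, not part of the water radiolysis model; by default the compiled function takes its arguments in sorted-symbol order):
```python
import sympy as sym  # already imported earlier in the notebook

a, b = sym.symbols('a b')
toy_func = autowrap(a**2 + b, backend='cython')  # generates, compiles, and wraps C code
toy_func(3.0, 1.0)  # -> 10.0
```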
End of explanation
auto_odes = autowrap(ode_eq, backend='cython', tempdir='./autowraptmp')
Explanation: So autowrap takes in a SymPy expression and gives us back a binary callable which evaluates the expression numerically. Let's use the Equality formed earlier to generate a function we can call to evaluate the right hand side of our system of ODEs.
End of explanation
random_vals = np.random.randn(14, 1) # need a 2-dimensional vector
auto_odes(random_vals)
Explanation: Exercise: use the main Jupyter notebook tab to head to the temporary directory autowrap just created. Take a look at some of the files it contains. Can you map everything we did manually to the files generated?
Solution
|
|
|
|
|
|
|
|
|
|
v
autowrap generates quite a few files, but we'll explicitly list a few here:
wrapped_code_#.c: the same thing codegen generated
wrapper_module_#.pyx: the Cython wrapper code
wrapper_module_#.c: the cythonized code
setup.py: specification of the Extension for how to build the extension module
Exercise: just like we did before, generate some random values for the state variables and use auto_odes to compute the derivatives. Did it work like before?
Hint: take a look at wrapper_module_#.pyx to see the types of the arrays being passed in / created.
```python
random_vals = np.random.randn(???)
auto_odes(???)
```
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation
def auto_odes_wrapper(y, t):
    dY = auto_odes(y[:, np.newaxis])
    return dY.squeeze()
Explanation: One advantage to wrapping the generated C code manually is that we get fine control over how the function is used from Python. That is, in our hand-written Cython wrapper we were able to specify that from the Python side, the input to our wrapper function and its return value are both 1-dimensional ndarray objects. We were also able to add in the extra argument t for the current time, making the wrapper function fully compatible with odeint.
However, autowrap just sees that we have a matrix equation where each side is a 2-dimensional array with shape (14, 1). The function returned then expects the input array to be 2-dimensional and it returns a 2-dimensional array.
This won't work with odeint, so we can write a simple wrapper that massages the input and output and adds an extra argument for t.
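To see the shape handling at a glance (a throwaway illustration, not part of the model):
```python
v = np.zeros(14)
v[:, np.newaxis].shape            # (14, 1) -- the 2-D column the autowrapped function expects
v[:, np.newaxis].squeeze().shape  # (14,)   -- the 1-D shape odeint works with
```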
End of explanation
random_vals = np.random.randn(14)
auto_odes_wrapper(random_vals, 0)
Explanation: Now a 1-dimensional input works.
End of explanation
jac = rhs_of_odes_ind.jacobian(y)
auto_jac = autowrap(jac, backend='cython', tempdir='./autowraptmp')
def auto_jac_wrapper(y, t):
    return auto_jac(y[:, np.newaxis])
auto_jac_wrapper(random_vals, 2).shape
Explanation: Exercise: As we have seen previously, we can analytically evaluate the Jacobian of our system of ODEs, which can be helpful in numerical integration. Compute the Jacobian of rhs_of_odes_ind with respect to y, then use autowrap to generate a function that evaluates the Jacobian numerically. Finally, write a Python wrapper called auto_jac_wrapper to make it compatible with odeint.
```python
# compute jacobian of rhs_of_odes_ind with respect to y
???

# generate a function that computes the jacobian
auto_jac = autowrap(???)

def auto_jac_wrapper(y, t):
    return ???
```
Test your wrapper by passing in the random_vals array from above. The shape of the result should be (14, 14).
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation
y_vals = odeint(auto_odes_wrapper, y_init, t_vals, Dfun=auto_jac_wrapper)
watrad_plot(t_vals, y_vals)
Explanation: Finally, we can use our two wrapped functions in odeint and compare to our manually-written Cython wrapper result.
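One way to make that comparison concrete (a quick sketch reusing the hand-wrapped cy_odes from earlier):
```python
y_vals_manual = odeint(cy_odes, y_init, t_vals)
np.max(np.abs(y_vals - y_vals_manual))  # both evaluate the same RHS, so this should be small
```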
End of explanation
rhs_of_odes[3]
Explanation: 5. Using a Custom Printer and an External Library with autowrap
As of SymPy 1.1, autowrap accepts a custom CodeGen object, which is responsible for generating the code. The CodeGen object in turn accepts a custom CodePrinter object, meaning we can use these two points of flexibility to make use of customized code printing in an autowrapped function. The following example is somewhat contrived, but the concept in general is powerful.
In our set of ODEs, there are quite a few instances of $y_i^2$, where $y_i$ is one of the 14 state variables. As an example, here's the equation for $\frac{dy_3(t)}{dt}$:
End of explanation
from sympy.printing.ccode import C99CodePrinter
class CustomPrinter(C99CodePrinter):
    def _print_Pow(self, expr):
        return "fastpow({}, {})".format(self._print(expr.base),
                                        self._print(expr.exp))
printer = CustomPrinter()
x = sym.symbols('x')
printer.doprint(x**3)
Explanation: There is a library called fastapprox that provides computational routines things like powers, exponentials, logarithms, and a few others. These routines provide limited precision with respect to something like math.h's equivalent functions, but they offer potentially faster computation.
The fastapprox library provides a function called fastpow, with the signature fastpow(float x, float p). It it follows the interface of pow from math.h. In the previous notebook, we saw how to turn instances of $x^3$ into x*x*x, which is potentially quicker than pow(x, 3). Here, let's just use fastpow instead.
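For contrast, the stock C99 printer renders powers with pow; a quick throwaway illustration (yy is just a scratch symbol):
```python
from sympy.printing.ccode import C99CodePrinter  # same import used in this section

yy = sym.symbols('yy')
C99CodePrinter().doprint(yy**2)  # -> 'pow(yy, 2)'
```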
Exercise: implement a CustomPrinter class that inherits from C99CodePrinter and overrides the _print_Pow function to make use of fastpow. Test it by instantiating the custom printer and printing a SymPy expression $x^3$.
Hint: it may be helpful to run C99CodePrinter._print_Pow?? to see how it works
```python
from sympy.printing.ccode import C99CodePrinter

class CustomPrinter(C99CodePrinter):
    def _print_Pow(self, expr):
        ???

printer = CustomPrinter()

x = sym.symbols('x')
# print x**3 using the custom printer
???
```
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation
from sympy.utilities.codegen import C99CodeGen
gen = C99CodeGen(printer=printer)
Explanation: Now we can create a C99CodeGen object that will make use of this printer. This object will be passed in to autowrap with the code_gen keyword argument, and autowrap will use it in the code generation process.
End of explanation
gen.preprocessor_statements.append('#include "fastpow.h"')
Explanation: However, for our generated code to use the fastpow function, it needs to have a #include "fastpow.h" preprocessor statement at the top. The code gen object supports this by allowing us to append preprocessor statements to its preprocessor_statements attribute.
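To double-check what will be emitted at the top of the generated C source, you can inspect the list directly (a sketch; the default entries depend on the SymPy version):
```python
gen.preprocessor_statements  # e.g. ['#include <math.h>', '#include "fastpow.h"']
```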
End of explanation
import os
import scipy2017codegen
package_dir = os.path.dirname(scipy2017codegen.__file__)
fastapprox_dir = os.path.join(package_dir, 'fastapprox')
Explanation: One final issue remains: telling autowrap where to find the fastapprox library headers. These header files have been downloaded from GitHub and placed in the scipy2017codegen package, so they should already be available in the conda environment. We can find them by looking for the package directory.
End of explanation
auto_odes_fastpow = autowrap(ode_eq,
                             code_gen=gen,
                             backend='cython',
                             include_dirs=[fastapprox_dir],
                             tempdir='autowraptmp_custom')
Explanation: Finally we're ready to call autowrap. We'll just use ode_eq, the Equality we created before, pass in the custom CodeGen object, and tell autowrap where the fastapprox headers are located.
End of explanation
def auto_odes_fastpow_wrapper(y, t):
    dY = auto_odes_fastpow(y[:, np.newaxis])
    return dY.squeeze()
y_vals, info = odeint(auto_odes_fastpow_wrapper, y_init, t_vals, full_output=True)
watrad_plot(t_vals, y_vals)
Explanation: If we navigate to the tmp directory created, we can view the wrapped_code_#.c to see our custom printing in action.
As before, we need a wrapper function for use with odeint, but aside from that, everything should be in place.
End of explanation
random_vals = np.random.randn(14)
dY1 = auto_odes_wrapper(random_vals, 0)
dY2 = auto_odes_fastpow_wrapper(random_vals, 0)
dY1 - dY2
Explanation: Exercise: generate an array of random state variable values, then use this array in the auto_odes_wrapper and auto_odes_fastpow_wrapper functions. Compare their outputs.
Solution
|
|
|
|
|
|
|
|
|
|
v
End of explanation |
2,899 | Given the following text description, write Python code to implement the functionality described.
Description:
Largest Sum Contiguous Subarray having unique elements
Function to calculate required maximum subarray sum ; Initialize two pointers ; Stores the unique elements ; Insert the first element ; Current max sum ; Global maximum sum ; Update sum & increment j ; Add the current element ; Update sum and increment i and remove arr [ i ] from set ; Remove the element from start position ; Return the maximum sum ; Driver Code ; Given array arr [ ] ; Function call ; Print the maximum sum
| Python Code:
def maxSumSubarray(arr):
    # Sliding window of pairwise-distinct elements; assumes arr is non-empty.
    # Initialize two pointers
    i = 0
    j = 1
    # Stores the unique elements of the current window
    seen = set()
    # Insert the first element
    seen.add(arr[0])
    # Current window sum
    window_sum = arr[0]
    # Global maximum sum
    maxsum = window_sum
    while i < len(arr) - 1 and j < len(arr):
        if arr[j] not in seen:
            # Add the current element, update the sum and increment j
            window_sum = window_sum + arr[j]
            maxsum = max(window_sum, maxsum)
            seen.add(arr[j])
            j += 1
        else:
            # Remove the element at the start position, update the sum and increment i
            window_sum -= arr[i]
            seen.remove(arr[i])
            i += 1
    # Return the maximum sum
    return maxsum


if __name__ == '__main__':
    # Given array
    arr = [1, 2, 3, 1, 5]
    # Function call
    ans = maxSumSubarray(arr)
    # Print the maximum sum (11 for this input)
    print(ans)
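    # A couple of extra illustrative checks (expected values worked out by hand):
    print(maxSumSubarray([1, 2, 3, 4]))  # all elements unique -> 10
    print(maxSumSubarray([4, 4, 4]))     # duplicates limit the window to a single element -> 4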
|