| Unnamed: 0 (int64) | text_prompt (string) | code_prompt (string) |
|---|---|---|
5,600 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ALGEBRA & DEFINITIONS
Clifford algebra is $$Cl_{1,4}(\mathbb{R})$$
Flat space, no metric, just signature
All constants are equal to 1
Step1: Quaternions
http://en.wikipedia.org/wiki/Quaternion
Step2: Imaginary unit
Step3: Associative Hyperbolic Quaternions
Step4: DIRAC
http://en.wikipedia.org/wiki/Dirac_equation
Step5: PHYSICS
The following symbols are defined
Step6: Density element combined derivatives
See here http://www.bbk.ac.uk/tpru/BasilHiley/Bohm-Vienna.pdf
Step7: $$k_1$$ but density element seems null
Step8: $$k_2$$ but density element seems null
Step9: $$k_1 + k_2$$ density element is not null | Python Code:
from sympy import *
# Ga comes from the galgebra package (older releases shipped it inside sympy as sympy.galgebra);
# the display helpers come from IPython. Both imports are assumed here since the code below relies on them.
from galgebra.ga import Ga
from IPython.display import display, Math, Latex
# displayWithTitle() and CheckProperties(), used further down, are helper functions
# defined elsewhere in the original notebook and are not reproduced here.
variables = (t, x, y, z, w) = symbols('t x y z w', real=True)
print(variables)
metric = [1, -1, -1, -1, -1]
myBasis = 'gamma_t gamma_x gamma_y gamma_z gamma_w'
sp5d = Ga(myBasis, g=metric, coords=variables, norm=True)
(gamma_t, gamma_x, gamma_y, gamma_z, gamma_w) = sp5d.mv()
(grad, rgrad) = sp5d.grads()
Explanation: ALGEBRA & DEFINITIONS
Clifford algebra is $$Cl_{1,4}(\mathbb{R})$$
Flat space, no metric, just signature
All constants are equal to 1
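Concretely, with the signature above (metric $[1,-1,-1,-1,-1]$ in the code), the five generators satisfy
$$\gamma_t^2=1,\qquad \gamma_x^2=\gamma_y^2=\gamma_z^2=\gamma_w^2=-1,\qquad \gamma_\mu\gamma_\nu=-\gamma_\nu\gamma_\mu\;(\mu\neq\nu)$$
and these relations are all that the calculations below rely on.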
End of explanation
iquat=gamma_y*gamma_z
jquat=gamma_z*gamma_x
kquat=gamma_x*gamma_y
iquat.texLabel='\\mathit{\\boldsymbol{i}}'
jquat.texLabel='\\mathit{\\boldsymbol{j}}'
kquat.texLabel='\\mathit{\\boldsymbol{k}}'
display(Math('(1,'+iquat.texLabel+','+jquat.texLabel+','+kquat.texLabel+')'))
CheckProperties(iquat,jquat,kquat,iquat.texLabel,jquat.texLabel,kquat.texLabel)
Explanation: Quaternions
http://en.wikipedia.org/wiki/Quaternion
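As a quick worked check, the defining quaternion relations follow from the signature alone (this is the kind of identity the CheckProperties helper above presumably verifies):
$$\mathit{\boldsymbol{i}}^2=(\gamma_y\gamma_z)^2=-\gamma_y^2\gamma_z^2=-1,\qquad \mathit{\boldsymbol{i}}\,\mathit{\boldsymbol{j}}=\gamma_y\gamma_z\,\gamma_z\gamma_x=\gamma_z^2\,\gamma_y\gamma_x=\gamma_x\gamma_y=\mathit{\boldsymbol{k}}$$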
End of explanation
imag=gamma_w
imag.texLabel='i'
displayWithTitle(imag, title=imag.texLabel)
displayWithTitle((imag*imag), title=imag.texLabel+'^2')
Explanation: Imaginary unit
End of explanation
ihquat=gamma_t
jhquat=gamma_t*gamma_x*gamma_y*gamma_z*gamma_w
khquat=gamma_x*gamma_y*gamma_z*gamma_w
ihquat.texLabel='\\mathbf{i}'
jhquat.texLabel='\\mathbf{j}'
khquat.texLabel='\\mathbf{k}'
display(Math('(1,'+ihquat.texLabel+','+jhquat.texLabel+','+khquat.texLabel+')'))
CheckProperties(ihquat,jhquat,khquat,ihquat.texLabel,jhquat.texLabel,khquat.texLabel)
Explanation: Associative Hyperbolic Quaternions
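Unlike the quaternions above, all three of these elements square to $+1$ (hence "hyperbolic"), which again follows directly from the signature:
$$\mathbf{i}^2=\gamma_t^2=1,\qquad \mathbf{k}^2=(\gamma_x\gamma_y\gamma_z\gamma_w)^2=1,\qquad \mathbf{j}^2=(\gamma_t\gamma_x\gamma_y\gamma_z\gamma_w)^2=1$$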
End of explanation
displayWithTitle(grad,title='\overrightarrow{D} = \Sigma e_i \partial x_i')
displayWithTitle(rgrad,title='\overleftarrow{D} = \Sigma \partial x_i e_i')
Explanation: DIRAC
http://en.wikipedia.org/wiki/Dirac_equation
Regular
$$0=({\gamma}_0 \frac{\partial}{\partial t}+{\gamma}_1 \frac{\partial}{\partial x}+{\gamma}_2 \frac{\partial}{\partial y}+{\gamma}_3 \frac{\partial}{\partial z}+im) {\psi}$$
Adjoint
$$0={\overline{\psi}}(\frac{\partial}{\partial t}{\gamma}_0+\frac{\partial}{\partial x}{\gamma}_1+\frac{\partial}{\partial y}{\gamma}_2+\frac{\partial}{\partial z}{\gamma}_3+im)$$
Gradient definition
End of explanation
m, E, p_x, p_y, p_z = symbols('m E p_x p_y p_z', real=True)
rquat = [iquat, jquat, kquat]
pv =[p_x, p_y, p_z]
p = S(0)
for (dim, var) in zip(pv, rquat):
    p += var * dim
p.texLabel='\\mathbf{p}'
display(Latex('Momentum $'+p.texLabel+'$ is defined with $p_x, p_y, p_z \\in \\mathbb{R}$'))
display(Math(p.texLabel+'=p_x'+iquat.texLabel+'+p_y'+jquat.texLabel+'+p_z'+kquat.texLabel))
displayWithTitle(p, title=p.texLabel)
f=m*w-imag*(E*t+p_x*x+p_y*y+p_z*z)
displayWithTitle(f, title='f')
displayWithTitle(grad*f, title='\overrightarrow{D}f')
displayWithTitle(f*rgrad, title='f\overleftarrow{D}')
displayWithTitle(grad*f*grad*f, title='\overrightarrow{D}f\overrightarrow{D}f')
displayWithTitle(f*rgrad*f*rgrad, title='f\overleftarrow{D}f\overleftarrow{D}')
displayWithTitle((grad*f - f*rgrad)/2, title='1/2 (\overrightarrow{D}f-f\overleftarrow{D})')
displayWithTitle((grad*f + f*rgrad)/2, title='1/2 (\overrightarrow{D}f+f\overleftarrow{D})')
Explanation: PHYSICS
The following symbols are defined :
Energy $$E \in \mathbb{R}$$
Mass $$m \in \mathbb{R}$$
End of explanation
K={}
K[1]=(ihquat*imag*E-imag*m+khquat*p)*(1+jhquat)
K[1].texLabel='('+ihquat.texLabel+imag.texLabel+'E-'+imag.texLabel+'m+'+khquat.texLabel+p.texLabel+')(1+'+jhquat.texLabel+')'
texLabel='('+ihquat.texLabel+imag.texLabel+'E+'+imag.texLabel+'m+'+khquat.texLabel+p.texLabel+')'+'(1-'+jhquat.texLabel+')'+imag.texLabel
K[2]=(ihquat*imag*E+imag*m+khquat*p)*(1-jhquat)*imag
K[2].texLabel=texLabel
texLabel='-'+'('+ihquat.texLabel+imag.texLabel+'E+'+imag.texLabel+'m+'+khquat.texLabel+p.texLabel+')'+imag.texLabel
def showDerivatives(f, i, k):
    displayWithTitle(k, 'k_' + str(i) + '=' + k.texLabel)
    displayWithTitle(k*k, 'k_' + str(i) + '^2')
    rho=(f*k)*(k*f)
    displayWithTitle(rho, title='\\rho')
    left=grad*rho
    right=rho*rgrad
    display(Latex('Combine regular and adjoint Dirac equations with difference'))
    displayWithTitle((left-right)/2, '1/2 (\overrightarrow{D}\\rho-\\rho\overleftarrow{D})')
    display(Latex('Combine regular and adjoint Dirac equations with addition'))
    displayWithTitle((left+right)/2, '1/2 (\overrightarrow{D}\\rho+\\rho\overleftarrow{D})')
Explanation: Density element combined derivatives
See here http://www.bbk.ac.uk/tpru/BasilHiley/Bohm-Vienna.pdf
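In the notation of the showDerivatives code above, the quantities being inspected are
$$\rho=(f\,k)(k\,f),\qquad \tfrac{1}{2}\big(\overrightarrow{D}\rho-\rho\overleftarrow{D}\big),\qquad \tfrac{1}{2}\big(\overrightarrow{D}\rho+\rho\overleftarrow{D}\big)$$
i.e. the density element built from $f$ and $k$, and the difference and sum of its regular and adjoint derivatives.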
End of explanation
showDerivatives(f, 1, K[1])
Explanation: $$k_1$$ but density element seems null
End of explanation
showDerivatives(f, 2, K[2])
Explanation: $$k_2$$ but density element seems null
End of explanation
sum=K[1]+K[2]
sum.texLabel='k_1+k_2'
showDerivatives(f, 3, sum)
Explanation: $$k_1 + k_2$$ density element is not null
End of explanation |
5,601 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features
Step1: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
Step2: With our materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
Step4: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
Step5: We now must create a geometry with the pin cell universe and export it to XML.
Step6: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.
Step7: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
Step8: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
Step9: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
Step10: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
Step11: Now we have a complete set of inputs, so we can go ahead and run our simulation.
Step12: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
Step13: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
Step14: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
Step15: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
Step16: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
Step17: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
Step18: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
Step19: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we construct an equivalent OpenMOC geometry.
Step20: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
Step21: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
Step22: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
Step23: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
Step24: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias
Step25: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
Step26: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-dark')
import openmoc
import openmc
import openmc.mgxs as mgxs
import openmc.data
from openmc.openmoc_compatible import get_openmoc_geometry
%matplotlib inline
Explanation: This IPython Notebook illustrates the use of the openmc.mgxs module to calculate multi-group cross sections for a heterogeneous fuel pin cell geometry. In particular, this Notebook illustrates the following features:
Creation of multi-group cross sections on a heterogeneous geometry
Calculation of cross sections on a nuclide-by-nuclide basis
The use of tally precision triggers with multi-group cross sections
Built-in features for energy condensation in downstream data processing
The use of the openmc.data module to plot continuous-energy vs. multi-group cross sections
Validation of multi-group cross sections with OpenMOC
Note: This Notebook was created using OpenMOC to verify the multi-group cross-sections generated by OpenMC. You must install OpenMOC on your system in order to run this Notebook in its entirety. In addition, this Notebook illustrates the use of Pandas DataFrames to containerize multi-group cross section data.
Generate Input Files
End of explanation
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
Explanation: First we need to define materials that will be used in the problem. We'll create three distinct materials for water, clad and fuel.
End of explanation
# Instantiate a Materials collection
materials_file = openmc.Materials([fuel, water, zircaloy])
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, r=0.45720)
# Create box to surround the geometry
box = openmc.model.rectangular_prism(1.26, 1.26, boundary_type='reflective')
Explanation: Now let's move on to the geometry. Our problem will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces -- in this case two cylinders and six reflective planes.
End of explanation
# Create a Universe to encapsulate a fuel pin
pin_cell_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
pin_cell_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
pin_cell_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius & box
pin_cell_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now create cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create Geometry and set root Universe
openmc_geometry = openmc.Geometry(pin_cell_universe)
# Export to "geometry.xml"
openmc_geometry.export_to_xml()
Explanation: We now must create a geometry with the pin cell universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 10000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': True}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Activate tally precision triggers
settings_file.trigger_active = True
settings_file.trigger_max_batches = settings_file.batches * 4
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 10,000 particles.
End of explanation
# Instantiate a "coarse" 2-group EnergyGroups object
coarse_groups = mgxs.EnergyGroups([0., 0.625, 20.0e6])
# Instantiate a "fine" 8-group EnergyGroups object
fine_groups = mgxs.EnergyGroups([0., 0.058, 0.14, 0.28,
0.625, 4.0, 5.53e3, 821.0e3, 20.0e6])
Explanation: Now we are finally ready to make use of the openmc.mgxs module to generate multi-group cross sections! First, let's define "coarse" 2-group and "fine" 8-group structures using the built-in EnergyGroups class.
End of explanation
# Extract all Cells filled by Materials
openmc_cells = openmc_geometry.get_all_material_cells().values()
# Create dictionary to store multi-group cross sections for all cells
xs_library = {}
# Instantiate 8-group cross sections for each cell
for cell in openmc_cells:
    xs_library[cell.id] = {}
    xs_library[cell.id]['transport'] = mgxs.TransportXS(groups=fine_groups)
    xs_library[cell.id]['fission'] = mgxs.FissionXS(groups=fine_groups)
    xs_library[cell.id]['nu-fission'] = mgxs.FissionXS(groups=fine_groups, nu=True)
    xs_library[cell.id]['nu-scatter'] = mgxs.ScatterMatrixXS(groups=fine_groups, nu=True)
    xs_library[cell.id]['chi'] = mgxs.Chi(groups=fine_groups)
Explanation: Now we will instantiate a variety of MGXS objects needed to run an OpenMOC simulation to verify the accuracy of our cross sections. In particular, we define transport, fission, nu-fission, nu-scatter and chi cross sections for each of the three cells in the fuel pin with the 8-group structure as our energy groups.
End of explanation
# Create a tally trigger for +/- 0.01 on each tally used to compute the multi-group cross sections
tally_trigger = openmc.Trigger('std_dev', 1e-2)
# Add the tally trigger to each of the multi-group cross section tallies
for cell in openmc_cells:
    for mgxs_type in xs_library[cell.id]:
        xs_library[cell.id][mgxs_type].tally_trigger = tally_trigger
Explanation: Next, we showcase the use of OpenMC's tally precision trigger feature in conjunction with the openmc.mgxs module. In particular, we will assign a tally trigger of 1E-2 on the standard deviation for each of the tallies used to compute multi-group cross sections.
End of explanation
# Instantiate an empty Tallies object
tallies_file = openmc.Tallies()
# Iterate over all cells and cross section types
for cell in openmc_cells:
    for rxn_type in xs_library[cell.id]:
        # Set the cross sections domain to the cell
        xs_library[cell.id][rxn_type].domain = cell
        # Tally cross sections by nuclide
        xs_library[cell.id][rxn_type].by_nuclide = True
        # Add OpenMC tallies to the tallies file for XML generation
        for tally in xs_library[cell.id][rxn_type].tallies.values():
            tallies_file.append(tally, merge=True)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Now, we must loop over all cells to set the cross section domains to the various cells - fuel, clad and moderator - included in the geometry. In addition, we will set each cross section to tally cross sections on a per-nuclide basis through the use of the MGXS class' boolean by_nuclide instance attribute.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Now we have a complete set of inputs, so we can go ahead and run our simulation.
End of explanation
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.082.h5')
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.
End of explanation
# Iterate over all cells and cross section types
for cell in openmc_cells:
for rxn_type in xs_library[cell.id]:
xs_library[cell.id][rxn_type].load_from_statepoint(sp)
Explanation: The statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='micro', nuclides=['U235', 'U238'])
Explanation: That's it! Our multi-group cross sections are now ready for the big spotlight. This time we have cross sections in three distinct spatial zones - fuel, clad and moderator - on a per-nuclide basis.
Extracting and Storing MGXS Data
Let's first inspect one of our cross sections by printing it to the screen as a microscopic cross section in units of barns.
End of explanation
nufission = xs_library[fuel_cell.id]['nu-fission']
nufission.print_xs(xs_type='macro', nuclides='sum')
Explanation: Our multi-group cross sections are capable of summing across all nuclides to provide us with macroscopic cross sections as well.
End of explanation
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
df.head(10)
Explanation: Although a printed report is nice, it is not scalable or flexible. Let's extract the microscopic cross section data for the moderator as a Pandas DataFrame.
End of explanation
# Extract the 8-group transport cross section for the fuel
fine_xs = xs_library[fuel_cell.id]['transport']
# Condense to the 2-group structure
condensed_xs = fine_xs.get_condensed_xs(coarse_groups)
Explanation: Next, we illustrate how one can easily take multi-group cross sections and condense them down to a coarser energy group structure. The MGXS class includes a get_condensed_xs(...) method which takes an EnergyGroups parameter with a coarse(r) group structure and returns a new MGXS condensed to the coarse groups. We illustrate this process below using the 2-group structure created earlier.
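For intuition, condensation is just a flux-weighted average of the fine-group values inside each coarse group. A minimal sketch of that idea (with made-up fine_xs and fine_flux numbers; the real MGXS objects do this from tally data):
import numpy as np
fine_xs = np.array([1.2, 1.1, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4])    # hypothetical 8-group cross sections
fine_flux = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8])  # hypothetical 8-group flux weights
coarse_of_fine = np.array([0, 0, 0, 0, 1, 1, 1, 1])             # fine-to-coarse group mapping (4 + 4)
coarse_xs = np.array([fine_xs[coarse_of_fine == g].dot(fine_flux[coarse_of_fine == g]) /
                      fine_flux[coarse_of_fine == g].sum()
                      for g in range(coarse_of_fine.max() + 1)])
print(coarse_xs)  # one flux-weighted value per coarse group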
End of explanation
condensed_xs.print_xs()
df = condensed_xs.get_pandas_dataframe(xs_type='micro')
df
Explanation: Group condensation is as simple as that! We now have a new coarse 2-group TransportXS in addition to our original 8-group TransportXS. Let's inspect the 2-group TransportXS by printing it to the screen and extracting a Pandas DataFrame as we have already learned how to do.
End of explanation
# Create an OpenMOC Geometry from the OpenMC Geometry
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
Explanation: Verification with OpenMOC
Now, let's verify our cross sections using OpenMOC. First, we construct an equivalent OpenMOC geometry.
End of explanation
# Get all OpenMOC cells in the geometry
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
# Get a reference to the Material filling this Cell
openmoc_material = cell.getFillMaterial()
# Set the number of energy groups for the Material
openmoc_material.setNumEnergyGroups(fine_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Inject NumPy arrays of cross section data into the Material
# NOTE: Sum across nuclides to get macro cross sections needed by OpenMOC
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
Explanation: Next, we can inject the multi-group cross sections into the equivalent fuel pin cell OpenMOC geometry.
End of explanation
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
Explanation: We are now ready to run OpenMOC to verify our cross-sections from OpenMC.
End of explanation
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
Explanation: We report the eigenvalues computed by OpenMC and OpenMOC here together to summarize our results.
End of explanation
openmoc_geometry = get_openmoc_geometry(sp.summary.geometry)
openmoc_cells = openmoc_geometry.getRootUniverse().getAllCells()
# Inject multi-group cross sections into OpenMOC Materials
for cell_id, cell in openmoc_cells.items():
# Ignore the root cell
if cell.getName() == 'root cell':
continue
openmoc_material = cell.getFillMaterial()
openmoc_material.setNumEnergyGroups(coarse_groups.num_groups)
# Extract the appropriate cross section objects for this cell
transport = xs_library[cell_id]['transport']
nufission = xs_library[cell_id]['nu-fission']
nuscatter = xs_library[cell_id]['nu-scatter']
chi = xs_library[cell_id]['chi']
# Perform group condensation
transport = transport.get_condensed_xs(coarse_groups)
nufission = nufission.get_condensed_xs(coarse_groups)
nuscatter = nuscatter.get_condensed_xs(coarse_groups)
chi = chi.get_condensed_xs(coarse_groups)
# Inject NumPy arrays of cross section data into the Material
openmoc_material.setSigmaT(transport.get_xs(nuclides='sum').flatten())
openmoc_material.setNuSigmaF(nufission.get_xs(nuclides='sum').flatten())
openmoc_material.setSigmaS(nuscatter.get_xs(nuclides='sum').flatten())
openmoc_material.setChi(chi.get_xs(nuclides='sum').flatten())
# Generate tracks for OpenMOC
track_generator = openmoc.TrackGenerator(openmoc_geometry, num_azim=128, azim_spacing=0.1)
track_generator.generateTracks()
# Run OpenMOC
solver = openmoc.CPUSolver(track_generator)
solver.computeEigenvalue()
# Print report of keff and bias with OpenMC
openmoc_keff = solver.getKeff()
openmc_keff = sp.k_combined.n
bias = (openmoc_keff - openmc_keff) * 1e5
print('openmc keff = {0:1.6f}'.format(openmc_keff))
print('openmoc keff = {0:1.6f}'.format(openmoc_keff))
print('bias [pcm]: {0:1.1f}'.format(bias))
Explanation: As a sanity check, let's run a simulation with the coarse 2-group cross sections to ensure that they also produce a reasonable result.
End of explanation
# Create a figure of the U-235 continuous-energy fission cross section
fig = openmc.plot_xs('U235', ['fission'])
# Get the axis to use for plotting the MGXS
ax = fig.gca()
# Extract energy group bounds and MGXS values to plot
fission = xs_library[fuel_cell.id]['fission']
energy_groups = fission.energy_groups
x = energy_groups.group_edges
y = fission.get_xs(nuclides=['U235'], order_groups='decreasing', xs_type='micro')
y = np.squeeze(y)
# Fix low energy bound
x[0] = 1.e-5
# Extend the mgxs values array for matplotlib's step plot
y = np.insert(y, 0, y[0])
# Create a step plot for the MGXS
ax.plot(x, y, drawstyle='steps', color='r', linewidth=3)
ax.set_title('U-235 Fission Cross Section')
ax.legend(['Continuous', 'Multi-Group'])
ax.set_xlim((x.min(), x.max()))
Explanation: There is a non-trivial bias in both the 2-group and 8-group cases. In the case of a pin cell, one can show that these biases do not converge to <100 pcm with more particle histories. For heterogeneous geometries, additional measures must be taken to address the following three sources of bias:
Appropriate transport-corrected cross sections
Spatial discretization of OpenMOC's mesh
Constant-in-angle multi-group cross sections
Visualizing MGXS Data
It is often insightful to generate visual depictions of multi-group cross sections. There are many different types of plots which may be useful for multi-group cross section visualization, only a few of which will be shown here for enrichment and inspiration.
One particularly useful visualization is a comparison of the continuous-energy and multi-group cross sections for a particular nuclide and reaction type. We illustrate one option for generating such plots with the use of the openmc.plotter module to plot continuous-energy cross sections from the openly available cross section library distributed by NNDC.
The MGXS data can also be plotted using the openmc.plot_xs command, however we will do this manually here to show how the openmc.Mgxs.get_xs method can be used to obtain data.
End of explanation
# Construct a Pandas DataFrame for the microscopic nu-scattering matrix
nuscatter = xs_library[moderator_cell.id]['nu-scatter']
df = nuscatter.get_pandas_dataframe(xs_type='micro')
# Slice DataFrame in two for each nuclide's mean values
h1 = df[df['nuclide'] == 'H1']['mean']
o16 = df[df['nuclide'] == 'O16']['mean']
# Cast DataFrames as NumPy arrays
h1 = h1.values
o16 = o16.values
# Reshape arrays to 2D matrix for plotting
h1.shape = (fine_groups.num_groups, fine_groups.num_groups)
o16.shape = (fine_groups.num_groups, fine_groups.num_groups)
Explanation: Another useful type of illustration is scattering matrix sparsity structures. First, we extract Pandas DataFrames for the H-1 and O-16 scattering matrices.
End of explanation
# Create plot of the H-1 scattering matrix
fig = plt.subplot(121)
fig.imshow(h1, interpolation='nearest', cmap='jet')
plt.title('H-1 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Create plot of the O-16 scattering matrix
fig2 = plt.subplot(122)
fig2.imshow(o16, interpolation='nearest', cmap='jet')
plt.title('O-16 Scattering Matrix')
plt.xlabel('Group Out')
plt.ylabel('Group In')
# Show the plot on screen
plt.show()
Explanation: Matplotlib's imshow routine can be used to plot the matrices to illustrate their sparsity structures.
End of explanation |
5,602 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive functions and plots
Below we look at a simple way to make our Python functions interactive!
The ipywidgets package will help us with this!
Step1: By now we already know how to plot a mathematical function
Step2: Let's write a function that draws a signal with a given frequency!
Step3: Now comes the magic! With the interact() function we can make the function defined above interactive!
Step4: Let's take a closer look at how this interact() construct works! To do that, let's first define a very simple function!
Step5: interact is a function that expects a function as its first argument and the input parameters of that function as keyword arguments. What it returns is an interactive widget, which can take many forms, but its basic job is to drive the func function: it hands it an input value, runs it, and then waits for the user to change the state again.
If we give the keyword argument a tuple of integers in parentheses, we get a slider that steps through integers
Step6: If we give it a bool value, we get a checkbox
Step7: If we give it a general list, we get a dropdown menu
Step8: If the numbers in the parentheses are not all integers (at least one is a float), we get a float slider
Step9: If we want to specify exactly what kind of interactivity we want, we can do it as follows,
integer slider$\rightarrow$IntSlider()
float slider$\rightarrow$FloatSlider()
dropdown menu$\rightarrow$Dropdown()
checkbox$\rightarrow$Checkbox()
text box$\rightarrow$Text()
This is illustrated by a few examples below
Step10: If a function takes a long time to evaluate, it is worth using interact_manual instead of interact. It only runs the function when we press the button that appears.
Step11: More information about the widgets can be found here. Finally, let's look at an interact with several variables! | Python Code:
%pylab inline
from ipywidgets import * # the package responsible for the interactivity
Explanation: Interactive functions and plots
Below we look at a simple way to make our Python functions interactive!
The ipywidgets package will help us with this!
End of explanation
t=linspace(0,2*pi,100);
plot(t,sin(t))
Explanation: By now we already know how to plot a mathematical function:
End of explanation
def freki(omega):
    plot(t,sin(omega*t))
freki(2.0)
Explanation: Let's write a function that draws a signal with a given frequency!
End of explanation
interact(freki,omega=(0,10,0.1));
Explanation: Now comes the magic! With the interact() function we can make the function defined above interactive!
End of explanation
def func(x):
    print(x)
Explanation: Let's take a closer look at how this interact() construct works! To do that, let's first define a very simple function!
End of explanation
interact(func,x=(0,10));
Explanation: interact is a function that expects a function as its first argument and the input parameters of that function as keyword arguments. What it returns is an interactive widget, which can take many forms, but its basic job is to drive the func function: it hands it an input value, runs it, and then waits for the user to change the state again.
If we give the keyword argument a tuple of integers in parentheses, we get a slider that steps through integers:
End of explanation
interact(func,x=False);
Explanation: If we give it a bool value, we get a checkbox:
End of explanation
interact(func,x=['hétfő','kedd','szerda']);
Explanation: If we give it a general list, we get a dropdown menu:
End of explanation
interact(func,x=(0,10,0.1));
Explanation: If the numbers in the parentheses are not all integers (at least one is a float), we get a float slider:
End of explanation
interact(func,x=IntSlider(min=0,max=10,step=2,value=2,description='egesz szamos csuszka x='));
interact(func,x=FloatSlider(min=0,max=10,step=0.01,value=2,description='float szamos csuszka x='));
interact(func,x=Dropdown(options=['Hétfő','Kedd','Szerda'],description='legörülő x='));
interact(func,x=Checkbox());
interact(func,x=Text());
Explanation: If we want to specify exactly what kind of interactivity we want, we can do it as follows,
integer slider$\rightarrow$IntSlider()
float slider$\rightarrow$FloatSlider()
dropdown menu$\rightarrow$Dropdown()
checkbox$\rightarrow$Checkbox()
text box$\rightarrow$Text()
This is illustrated by a few examples below:
End of explanation
interact_manual(func,x=(0,10));
Explanation: If a function takes a long time to evaluate, it is worth using interact_manual instead of interact. It only runs the function when we press the button that appears.
End of explanation
t=linspace(0,2*pi,100);
def oszci(A,omega,phi,szin):
    plot(t,A*sin(omega*t+phi),color=szin)
    plot(pi,A*sin(omega*pi+phi),'o')
    xlim(0,2*pi)
    ylim(-3,3)
    xlabel('$t$',fontsize=20)
    ylabel(r'$A\,\sin(\omega t+\varphi)$',fontsize=20)
    grid(True)
interact(oszci,
A =FloatSlider(min=1,max=2,step=0.1,value=2,description='A'),
omega=FloatSlider(min=0,max=10,step=0.1,value=2,description=r'$\omega$'),
phi =FloatSlider(min=0,max=2*pi,step=0.1,value=0,description=r'$\varphi$'),
szin =Dropdown(options=['red','green','blue','darkcyan'],description='szín'));
Explanation: More information about the widgets can be found here. Finally, let's look at an interact with several variables!
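One more pattern that may be worth knowing (not used in this notebook, but supported by ipywidgets): interact can also be applied as a decorator, which keeps the widget specification next to the function definition. A small sketch with a hypothetical function name:
@interact(freq=FloatSlider(min=0, max=10, step=0.1, value=2, description='freq'))
def plot_signal(freq):
    # same idea as freki above, just wired up at definition time
    plot(t, sin(freq*t))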
End of explanation |
5,603 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: In this notebook we explore different approaches to classification. First let's make a function to generate some data for classification.
Step2: Let's make some data and plot them (with different markers for the two classes).
Step4: Now let's look at some different classification methods.
First let's create a function that can take a dataset and a classifier and show us the "decision surface" - that is, which category is predicted for each value of the variables.
Step5: Nearest neighbor classifier
In the nearest neighbor classifier, we classify new datapoints by looking at which points are nearest in the training data. In the simplest case we could look at a single neighbor; try setting n_neighbors to 1 in the following cell and look at the results. Then try increasing the value (e.g. try 10, 20, and 40). What do you see as the number of neighbors increases?
We call the nearest neighbor classifier a nonparametric method. This doesn't mean that it has no parameters; to the contrary, it means that the number of parameters is not fixed, but grows with the amount of data.
Step6: Now let's write a function to perform cross-validation and compute prediction accuracy.
Step7: Apply that function to the nearest neighbors problem. We can look at accuracy (how often did it get the label right) and also look at the confusion matrix which shows each type of outcome.
Step8: Exercise
Step9: Linear discriminant analysis
Linear discriminant analysis is an example of a parametric classification method. For each class it fits a Gaussian distribution, and then makes its classification decision on the basis of which Gaussian has the highest density at each point.
Step10: Support vector machines
A commonly used classifier for fMRI data is the support vector machine (SVM). The SVM uses a kernel to represent the distances between observations; this can be either linear or nonlinear. The SVM then optimizes the boundary so as to minimize the distance between observations in each class.
Step11: Next we want to figure out what the right value of the gamma parameter is for the nonlinear SVM. We can't do this by trying a bunch of gamma values and seeing which works best, because this overfit the data (i.e. cheating). Instead, what we need to do is use a nested crossvalidation in wich we use crossvalidation on the training data to find the best gamma parameter, and then apply that to the test data. We can do this using the GridSearchCV() function in sklearn. See http | Python Code:
import numpy
%matplotlib inline
import matplotlib.pyplot as plt
import scipy.stats
import matplotlib
from matplotlib.colors import ListedColormap
import sklearn.neighbors
import sklearn.cross_validation
import sklearn.metrics
import sklearn.lda
import sklearn.svm
import sklearn.linear_model
from sklearn.model_selection import GridSearchCV,KFold,StratifiedKFold,LeaveOneOut
from sklearn.metrics import classification_report
n=100
def make_class_data(mean=[50,110],multiplier=1.5,var=[[10,10],[10,10]],cor=-0.4,N=100):
generate a synthetic classification data set with two variables
cor=numpy.array([[1.,cor],[cor,1.]])
var1=numpy.array([[var[0][0],0],[0,var[0][1]]])
cov1=var1.dot(cor).dot(var1)
d1=numpy.random.multivariate_normal(mean,cov1,int(N/2))
var2=numpy.array([[var[1][0],0],[0,var[1][1]]])
cov2=var2.dot(cor).dot(var2)
d2=numpy.random.multivariate_normal(numpy.array(mean)*multiplier,cov2,int(N/2))
d=numpy.vstack((d1,d2))
cl=numpy.zeros(N)
cl[:(N/2)]=1
return cl,d
Explanation: In this notebook we explore different approaches to classification. First let's make a function to generate some data for classification.
End of explanation
cl,d=make_class_data(multiplier=[1.1,1.1],N=n)
print(numpy.mean(d[:50,:],0))
print(numpy.mean(d[50:,:],0))
plt.scatter(d[:,0],d[:,1],c=cl,cmap=matplotlib.cm.hot)
Explanation: Let's make some data and plot them (with different markers for the two classes).
End of explanation
def plot_cls_with_decision_surface(d,cl,clf,h = .25 ):
Plot the decision boundary. For that, we will assign a color to each
point in the mesh [x_min, m_max]x[y_min, y_max].
h= step size in the grid
fig=plt.figure()
x_min, x_max = d[:, 0].min() - 1, d[:, 0].max() + 1
y_min, y_max = d[:, 1].min() - 1, d[:, 1].max() + 1
xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, h),
numpy.arange(y_min, y_max, h))
Z = clf.predict(numpy.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(d[:, 0], d[:, 1], c=cl, cmap=cmap_bold)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
return fig
Explanation: Now let's look at some different classification methods.
First let's create a function that can take a dataset and a classifier and show us the "decision surface" - that is, which category is predicted for each value of the variables.
End of explanation
# adapted from http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html#example-neighbors-plot-classification-py
n_neighbors = 40
# step size in the mesh
# Create color maps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00'])
clf = sklearn.neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
clf.fit(d, cl)
plot_cls_with_decision_surface(d,cl,clf)
Explanation: Nearest neighbor classifier
In the nearest neighbor classifier, we classify new datapoints by looking at which points are nearest in the training data. In the simplest case we could look at a single neighbor; try setting n_neighbors to 1 in the following cell and look at the results. Then try increasing the value (e.g. try 10, 20, and 40). What do you see as the number of neighbors increases?
We call the nearest neighbor classifier a nonparametric method. This doesn't mean that it has no parameters; to the contrary, it means that the number of parameters is not fixed, but grows with the amount of data.
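To make the decision rule concrete, a bare-bones 1-nearest-neighbour prediction can be written directly with numpy (just a sketch of the idea, not how sklearn implements it):
def predict_1nn(train_X, train_y, new_point):
    # label of the single closest training point, by Euclidean distance
    dists = numpy.sqrt(((train_X - new_point)**2).sum(axis=1))
    return train_y[numpy.argmin(dists)]
predict_1nn(d, cl, d[0])  # the closest point to d[0] is itself, so this returns cl[0]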
End of explanation
def classify(d,cl,clf,cv):
pred=numpy.zeros(n)
for train,test in cv.split(d,cl):
clf.fit(d[train,:],cl[train])
pred[test]=clf.predict(d[test,:])
return sklearn.metrics.accuracy_score(cl,pred),sklearn.metrics.confusion_matrix(cl,pred)
Explanation: Now let's write a function to perform cross-validation and compute prediction accuracy.
End of explanation
n_neighbors=40
clf=sklearn.neighbors.KNeighborsClassifier(n_neighbors, weights='uniform')
# use stratified k-fold crossvalidation, which keeps the proportion of classes roughly
# equal across folds
cv=StratifiedKFold(8)
acc,confusion=classify(d,cl,clf,cv)
print('accuracy = %f'%acc)
print('confusion matrix:')
print(confusion)
Explanation: Apply that function to the nearest neighbors problem. We can look at accuracy (how often did it get the label right) and also look at the confusion matrix which shows each type of outcome.
End of explanation
nn_range = range(1,61)
accuracy_knn=numpy.zeros((100,len(nn_range)))
for x in range(100):
ds_cl,ds_x=make_class_data(multiplier=[1.1,1.1],N=n)
ds_cv=StratifiedKFold(8)
for i in nn_range:
clf=sklearn.neighbors.KNeighborsClassifier(i, weights='uniform')
accuracy_knn[x,i-1],_=classify(ds_x,ds_cl,clf,ds_cv)
plt.plot(nn_range,numpy.mean(accuracy_knn,0))
plt.xlabel('number of nearest neighbors')
plt.ylabel('accuracy')
Explanation: Exercise: Loop through different levels of n_neighbors (from 1 to 30) and compute the accuracy.
Now write a loop that does this using 100 different randomly generated datasets, and plot the mean across datasets. This will take a couple of minutes to run.
End of explanation
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
clf=sklearn.lda.LDA(store_covariance=True) #QuadraticDiscriminantAnalysis()
cv=LeaveOneOut()
acc,confusion=classify(d,cl,clf,cv)
print('accuracy = %f'%acc)
print('confusion matrix:')
print(confusion)
fig=plot_cls_with_decision_surface(d,cl,clf)
# plotting functions borrowed from http://scikit-learn.org/stable/auto_examples/classification/plot_lda_qda.html#sphx-glr-auto-examples-classification-plot-lda-qda-py
from matplotlib import colors
from scipy import linalg
import matplotlib as mpl
fig=plt.figure()
cmap = colors.LinearSegmentedColormap(
'red_blue_classes',
{'red': [(0, 1, 1), (1, 0.7, 0.7)],
'green': [(0, 0.7, 0.7), (1, 0.7, 0.7)],
'blue': [(0, 0.7, 0.7), (1, 1, 1)]})
plt.cm.register_cmap(cmap=cmap)
# class 0 and 1 : areas
nx, ny = 200, 200
x_min, x_max = numpy.min(d[:,0]),numpy.max(d[:,0])
y_min, y_max = numpy.min(d[:,1]),numpy.max(d[:,1])
xx, yy = numpy.meshgrid(numpy.linspace(x_min, x_max, nx),
numpy.linspace(y_min, y_max, ny))
Z = clf.predict_proba(numpy.c_[xx.ravel(), yy.ravel()])
Z = Z[:, 1].reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap='red_blue_classes',
norm=colors.Normalize(0., 1.))
plt.contour(xx, yy, Z, [0.5], linewidths=2., colors='k')
# means
plt.plot(clf.means_[0][0], clf.means_[0][1],
'o', color='black', markersize=10)
plt.plot(clf.means_[1][0], clf.means_[1][1],
'o', color='black', markersize=10)
def plot_ellipse(splot, mean, cov, color):
v, w = linalg.eigh(cov)
u = w[0] / linalg.norm(w[0])
angle = numpy.arctan(u[1] / u[0])
angle = 180 * angle / numpy.pi # convert to degrees
# filled Gaussian at 2 standard deviation
ell = mpl.patches.Ellipse(mean, 2 * v[0] ** 0.5, 2 * v[1] ** 0.5,
180 + angle, facecolor=color, edgecolor='yellow',
linewidth=2, zorder=2)
ell.set_clip_box(splot.bbox)
ell.set_alpha(0.5)
ax=splot.gca()
ax.add_artist(ell)
splot.canvas.draw()
def plot_lda_cov(lda, splot):
plot_ellipse(splot, lda.means_[0], lda.covariance_, 'red')
plot_ellipse(splot, lda.means_[1], lda.covariance_, 'blue')
plot_lda_cov(clf,fig)
Explanation: Linear discriminant analysis
Linear discriminant analysis is an example of a parametric classification method. For each class it fits a Gaussian distribution, and then makes its classification decision on the basis of which Gaussian has the highest density at each point.
End of explanation
cl,d=make_class_data(multiplier=[1.1,1.1],N=n)
use_linear=False
if use_linear:
clf=sklearn.svm.SVC(kernel='linear')
else:
clf=sklearn.svm.SVC(kernel='rbf',gamma=0.01)
acc,confusion=classify(d,cl,clf,cv)
print('accuracy = %f'%acc)
print('confusion matrix:')
print(confusion)
fig=plot_cls_with_decision_surface(d,cl,clf)
# put wide borders around the support vectors
for sv in clf.support_vectors_:
plt.scatter(sv[0],sv[1],s=80, facecolors='none', zorder=10)
Explanation: Support vector machines
A commonly used classifier for fMRI data is the support vector machine (SVM). The SVM uses a kernel to represent the distances between observations; this can be either linear or nonlinear. The SVM then chooses the decision boundary that maximizes the margin between the two classes.
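The nonlinear variant below uses the radial basis function (RBF) kernel, which turns the squared distance between two observations into a similarity score; the gamma parameter tuned in the next section controls how fast that similarity decays. A small illustration of the kernel itself (not part of the original notebook):
def rbf_kernel(x1, x2, gamma=0.01):
    # k(x1, x2) = exp(-gamma * ||x1 - x2||^2), the kernel used by SVC(kernel='rbf')
    return numpy.exp(-gamma * numpy.sum((numpy.asarray(x1) - numpy.asarray(x2))**2))
rbf_kernel(d[0], d[1], gamma=0.01), rbf_kernel(d[0], d[1], gamma=1.0)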
End of explanation
# do 4-fold cross validation
cv=KFold(4,shuffle=True)
# let's test both linear and rbf SVMs with a range of parameteter values
param_grid = [{'C': [1, 10, 100, 1000], 'kernel': ['linear']},
{'C': [1, 10, 100, 1000], 'gamma': [0.1,0.01,0.001, 0.0001], 'kernel': ['rbf']}]
pred=numpy.zeros(cl.shape)
for train,test in cv.split(d):
X_train=d[train,:]
X_test=d[test,:]
y_train=cl[train]
y_test=cl[test]
clf=GridSearchCV(sklearn.svm.SVC(C=1), param_grid, cv=5, scoring='accuracy')
clf.fit(X_train,y_train)
pred[test]=clf.predict(X_test)
_=plot_cls_with_decision_surface(X_train,y_train,clf.best_estimator_)
print('Best parameters:')
print('Mean CV training accuracy:',numpy.mean(clf.cv_results_['mean_train_score']))
print('Mean CV test accuracy:',numpy.mean(clf.cv_results_['mean_test_score']))
for k in clf.best_params_:
print(k,clf.best_params_[k])
print('Performance on out-of-sample test:')
print(classification_report(cl,pred))
Explanation: Next we want to figure out what the right value of the gamma parameter is for the nonlinear SVM. We can't do this by trying a bunch of gamma values and seeing which works best, because this would overfit the data (i.e. cheating). Instead, what we need to do is use a nested cross-validation in which we use cross-validation on the training data to find the best gamma parameter, and then apply that to the test data. We can do this using the GridSearchCV() function in sklearn. See http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_digits.html#sphx-glr-auto-examples-model-selection-grid-search-digits-py for an example.
End of explanation |
5,604 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing
Goals
Step1: Parsing is hard...
<h2>
<i>"System Administrators spent $24.3\%$ of
their work-life parsing files."</i>
<br><br>
Independent analysis by The GASP* Society ;) <br>
</h2>
<h3>
*(Grep Awk Sed Perl)
</h3>
... use a strategy!
<table>
<tr><td>
<ol><li>Collect parsing samples
<li>Play in ipython and collect %history
<li>Write tests, then the parser
<li>Eventually benchmark
</ol>
</td><td>
<img src="img/parsing-lifecycle.png" />
</td></tr>
</table>
Parsing postfix logs
Step4: Exercise I
- %edit 03_parsing_test.py
- complete the parse_line(line) function
- %paste your solution's code in iPython and run manually the test functions
Step5: Python Regexp
Step6: Achieve more complex splitting using regular expressions.
Step7: Benchmarking in iPython - I
Parsing big files needs benchmarks. iPython %timeit magic is a good starting point
We are going to measure the execution time of various tasks, using different strategies like regexp, join and split.
Step8: Example
Step9: Parsing | Python Code:
import re
import nose
# %timeit
Explanation: Parsing
Goals:
- Plan a parsing strategy
- Use basic regular expressions: match, search, sub
- Benchmarking a parser
- Running nosetests
- Write a simple parser
Modules:
End of explanation
from __future__ import print_function
# Before writing the parser, collect samples of
# the interesting lines. For now just
from course import mail_sent, mail_delivered
print("I'm goint to parse the following line", mail_sent, sep="\n\n")
# and %edit a simple
def test_sent():
hour, host, to = parse_line(mail_sent)
assert hour == '08:00:00'
assert to == '[email protected]'
# Play with mail_sent and start using basic strings in ipython
mail_sent.split()
# You can number fields with enumerate.
# Remember that ipython puts the last returned value in `_`
# which is useful in interactive mode!
fields, counting = _, enumerate(_)
print(*counting, sep="\n")
# Now we can pick fields singularly...
hour, host, dest = fields[2], fields[3], fields[6]
# ... or with
from operator import itemgetter
which_returns_a_function = itemgetter(2, 3, 6)
assert (hour, host, dest) == which_returns_a_function(fields)
Explanation: Parsing is hard...
<h2>
<i>"System Administrators spent $24.3\%$ of
their work-life parsing files."</i>
<br><br>
Independent analysis by The GASP* Society ;) <br>
</h2>
<h3>
*(Grep Awk Sed Perl)
</h3>
... use a strategy!
<table>
<tr><td>
<ol><li>Collect parsing samples
<li>Play in ipython and collect %history
<li>Write tests, then the parser
<li>Eventually benchmark
</ol>
</td><td>
<img src="img/parsing-lifecycle.png" />
</td></tr>
</table>
Parsing postfix logs
End of explanation
# %load ../scripts/03_parsing_test.py
Python for System Administrators
Roberto Polli <[email protected]>
This file shows how to parse a postfix maillog file.
maillog traces every incoming and outgoing email using
different line formats.
#
# Before writing the parser we collect the
# interesting lines to use as a sample
# For now we're just interested in the following cases
# 1- a mail is sent
# 2- a mail is delivered
test_str_1 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: to=<[email protected]>, relay=examplemx2.doe.it[222.33.44.555]:25, delay=0.8, delays=0.17/0.01/0.43/0.19, dsn=2.0.0, status=sent(250 ok: Message 2108406157 accepted)'
test_str_2 = 'Nov 31 08:00:00 test-fe1 postfix/smtp[16669]: 7CD8E730020: removed'
def test_sent():
hour, host, destination = parse_line(test_str_1)
assert hour == '08:00:00'
assert host == 'test-fe1'
assert destination == '[email protected]'
def test_delivered():
hour, host, destination = parse_line(test_str_2)
assert hour == '08:00:00'
assert host == 'test-fe1'
assert destination is None
def parse_line(line):
Complete the parse line function.
Without watching the solution: ICAgIGltcG9ydCByZQogICAgXywgXywgaG91ciwgaG9zdCwgXywgXywgZGVzdCA9IGxpbmUuc3BsaXQoKVs6N10KICAgIHRyeToKICAgICAgICBkZXN0ID0gcmUuc3BsaXQocidbPD5dJywgZGVzdClbMV0KICAgIGV4Y2VwdDoKICAgICAgICBkZXN0ID0gTm9uZQogICAgcmV0dXJuIChob3VyLCBob3N0LCBkZXN0KQoK
# Hint: "you can".split()
# Hint: "<you can slice>"[1:-1] or use re.split
raise NotImplementedError("Write me!")
#
# Run test
#
test_sent()
# Don't look at the solution ;)
%load course/parse_line.py
Explanation: Exercise I
- %edit 03_parsing_test.py
- complete the parse_line(line) function
- %paste your solution's code in iPython and run manually the test functions
End of explanation
# Python supports regular expressions via
import re
# We start showing a grep-reloaded function
def grep(expr, fpath):
one = re.compile(expr) # ...has two lookup methods...
assert ( one.match # which searches from ^ the beginning
and one.search ) # that searches $\pyver{anywhere}$
with open(fpath) as fp:
return [x for x in fp if one.search(x)]
# The function seems to work as expected ;)
assert not grep(r'^localhost', '/etc/hosts')
# And some more tests
ret = grep('127.0.0.1', '/etc/hosts')
assert ret, "ret should not be empty"
print(*ret)
Explanation: Python Regexp
End of explanation
from re import split # is a very nice function
import sys
from course import sh
# Let's gather some ping stats
if sys.platform.startswith('win'):
cmd = "ping -n3 www.google.it"
else:
cmd = "ping -c3 -w3 www.google.it"
# Split for both space and =
ping_output = [split("[ =]", x) for x in sh(cmd)]
print(*ping_output, sep="\n")
# Splitting with re.findall
from re import findall # can be misused too;
# eg for adding the ":" to a
mac = "00""24""e8""b4""33""20"
# ...using this
re_hex = "[0-9a-fA-F]{2}"
mac_address = ':'.join(findall(re_hex, mac))
print("The mac address is ", mac_address)
# Actually this does a bit of validation, requiring all chars to be in the 0-F range
Explanation: Achieve more complex splitting using regular expressions.
End of explanation
# Run the following cell many times.
# Do you always get the same results?
test_all_regexps = ("..", "[a-fA-F0-9]{2}")
for re_s in test_all_regexps:
%timeit ':'.join(findall(re_s, mac))
# We can even compare compiled vs inline regexp
import re
from time import sleep
for re_s in test_all_regexps:
re_c = re.compile(re_s)
%timeit ':'.join(re_c.findall(mac))
# Or find other methods:
# complex...
from re import sub as sed
%timeit sed(r'(..)', r'\1:', mac)
# ...or simple
%timeit ':'.join([mac[i:i+2] for i in range(0,12,2)])
#Outside iPython check the timeit module
# Exercise: which is the fastest method? Why?
Explanation: Benchmarking in iPython - I
Parsing big files needs benchmarks. iPython %timeit magic is a good starting point
We are going to measure the execution time of various tasks, using different strategies like regexp, join and split.
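Outside IPython the same comparison can be made with the standard timeit module (a rough sketch; absolute numbers will differ from the %timeit runs above):
import timeit
setup = "from re import findall; mac = '0024e8b43320'"
print(timeit.timeit("':'.join(findall('..', mac))", setup=setup, number=100000))
print(timeit.timeit("':'.join([mac[i:i+2] for i in range(0,12,2)])", setup=setup, number=100000))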
End of explanation
# Don't need to type this VSAN configuration script
# which uses linux FC information from /sys filesystem
from glob import glob
fc_id_path = "/sys/class/fc_host/host*/port_name"
for x in glob(fc_id_path):
# ...we boldly skip an explicit close()
pwwn = open(x).read() # 0x500143802427e66c
pwwn = pwwn[2:]
# ...and even use the slower but readable
pwwn = re.findall(r'..', pwwn)
print("member pwwn ", ':'.join(pwwn))
Explanation: Example: generating vsan configuration snippets
End of explanation
#
# Use this cell for Exercise II
#
test_delivered()
Explanation: Parsing: Exercise II
Now another test for the delivered messages
- %edit 03_parsing_test.py
- %paste to iPython and run test_delivered()
- fix parse_line to work with both tests and save
End of explanation |
5,605 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Layer Network
Here we will make a network that will recognize 8x8 images of numbers. This will involve creating a function that generates networks and a function that can train the network.
Step1: Our network will be comprised of a list of numpy arrays with each array containing the weights and bias for that layer of perceptrons.
Step2: Credit to Neural Networks and Deep Learning by Michael Nielsen for the image.
Step3: This is our code from the Making Perceptrons notebook that we use for our network.
Step4: Here we define functions to train the network based on a set of training data. The first step is to run our training data through our network to find how much error the network currently has. Since digits.target is a list of integers, we need a function to convert those integers into 10 dimensional vectors
Step5: Another important function we will need is a function that will compute the output error and multiply it with the derivative of our sigmoid function to find our output layer's deltas. These deltas will be crucial for backpropagating our error to our hidden layers.
Step6: Once we have the deltas of our output layer, we move on to getting the hidden layer's deltas. To compute this, we will take the Hadamard product of the dot product of the weight array and the deltas of the succeeding layer with the derivative of that hidden layer's output.
$$\delta_{l}=((w_{l+1})^{T}\delta_{l+1})\odot \sigma'(z_{l})$$
This formula backpropagates the error from each layer to the previous layer so that we can change each weight by how wrong it is.
Credit to Neural Networks and Deep Learning by Michael Nielsen for the formula.
Step7: Now that we can find the deltas for each layer in the network, we just need a function to edit our weights based off of a list of examples. For that, we use stochastic gradient descent.
Step8: To edit the weights of our network, we take the 2D array in each layer and subtract from it the 2D array that results from the average of the dot products of the deltas and the inputs of that layer for the samples in the training data. This average is multiplied by a learning rate, $\eta$, to give us control over how much the network will change.
$$w^{l}\rightarrow w^{l}−\frac{η}{m}\sum_{x} \delta_{x,l}(a_{x,l−1})^{T}$$
Credit to Neural Networks and Deep Learning by Michael Nielsen for the formula.
Step9: So, we have everything we need to train a network. All we are missing is a network to train. Let's make one and let's call him Donnel.
Step10: So as you can see, the network "Donnel" is simply a list of 2D numpy arrays with one array for each layer of the network. His hidden layer's shape is 40 x 65 with each row being a perceptron with 64 weights and 1 bias. Since Donnel's output layer has 10 neurons in it, we need to be able to convert Donnel's output to numbers and numbers (0-9) into a list of perceptron outputs.
Step11: Now, let's train the network with 80% of the digits data set. To do this, we will use stochastic gradient descent on batch-sized iterations of the total training data set. Essentially, we're going to change our weights 15 examples at a time until we complete 80% of the dataset. Let's run this through a couple of cycles as well to get our accuracy as high as possible.
This Cell Takes 20 Minutes to Run | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact
from sklearn.datasets import load_digits
from IPython.display import Image, display
digits = load_digits()
print(digits.data.shape)
def show_examples(i):
plt.matshow(digits.images[i].reshape((8,8)), cmap='Greys_r')
display(digits.target[i])
interact(show_examples, i=[1,1797-1])
Explanation: 1 Layer Network
Here we will make a network that will recognize 8x8 images of numbers. This will involve creating a function that generates networks and a function that can train the network.
End of explanation
Image(url="http://neuralnetworksanddeeplearning.com/images/tikz35.png")
Explanation: Our network will be composed of a list of numpy arrays, with each array containing the weights and biases for that layer of perceptrons.
End of explanation
def gen_network(size):
weights= [np.array([[np.random.randn() for _ in range(size[n-1]+1)]
for _ in range(size[n])]) for n in range(len(size))[1:]]
return weights
a = gen_network([2,2,1,3])
a
Explanation: Credit to Neural Networks and Deep Learning by Michael Nielsen for the image.
End of explanation
sigmoid = lambda x: 1/(1 +np.exp(-x))
def perceptron_sigmoid(weights, inputvect):
return sigmoid(np.dot(np.append(inputvect,[1]), weights))
def propforward(network, inputvect):
outputs = []
for layer in network:
neural_input = inputvect
output = [perceptron_sigmoid(weights, neural_input) for weights in layer]
outputs.append(output)
inputvect = output
outputs = np.array(outputs)
return [outputs[:-1], outputs[-1]]
Explanation: This is our code from the Making Perceptrons notebook that we use for our network.
End of explanation
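As a quick sanity check (a sketch, not part of the original notebook), a random 64-pixel input can be pushed through a freshly generated 64-40-10 network to confirm what propforward returns:

```python
# Sketch: one random input through a 64-40-10 network built with gen_network
test_net = gen_network([64, 40, 10])
hidden_out, final_out = propforward(test_net, np.random.rand(64))
print(len(hidden_out[0]), len(final_out))  # expect 40 hidden activations and 10 outputs
```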
def target_convert(n):
assert n <= 9 and n >= 0
n = round(n)
result = np.zeros((10,))
result[n]=1
return result
target_convert(4)
Explanation: Here we define functions to train the network based on a set of training data. The first step is to run our training data through our network to find how much error the network currently has. Since digits.target is a list of integers, we need a function to convert those integers into 10 dimensional vectors: the same format as the output of our network.
End of explanation
def find_deltas_sigmoid(outputs, targets):
return [output*(1-output)*(output-target) for output, target in zip(outputs, targets)]
Explanation: Another important function we will need is a function that will compute the output error and multiply it by the derivative of our sigmoid function to find our output layer's deltas. These deltas will be crucial for backpropagating our error to our hidden layers.
End of explanation
def backprob(network, inputvect, targets):
hidden_outputs, outputs = propforward(network, inputvect)
change_in_outputs = find_deltas_sigmoid(outputs, targets)
list_deltas = [[] for _ in range(len(network))]
list_deltas[-1] = change_in_outputs
for n in range(len(network))[-1:0:-1]:
delta = change_in_outputs
change_in_hidden_outputs= [hidden_output*(1-hidden_output)*
np.dot(delta, np.array([n[i] for n in network[n]]).transpose())
for i, hidden_output in enumerate(hidden_outputs[n-1])]
list_deltas[n-1] = change_in_hidden_outputs
change_in_outputs = change_in_hidden_outputs
return list_deltas
Explanation: Once we have the deltas of the output layer, we move on to getting the hidden layers' deltas. To compute this, we take the Hadamard product of the dot product of the weight array and the deltas of the succeeding layer with the derivative of that hidden layer's output.
$$\delta_{l}=((w_{l+1})^{T}\delta_{l+1})\odot \sigma'(z_{l})$$
This formula backpropagates the error from each layer to the previous layer so that we can change each weight by how wrong it is.
Credit to Neural Networks and Deep Learning by Michael Nielsen for the formula.
End of explanation
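A tiny numeric sketch of this rule (all values are made up), assuming one hidden layer of two neurons feeding a single output neuron:

```python
# Toy check of delta_l = (W^T delta_{l+1}) (Hadamard) sigma'(z_l); values are made up
w_next = np.array([[0.4, -0.6, 0.1]])      # output neuron: 2 weights + 1 bias
delta_next = np.array([0.05])              # delta of the output neuron
hidden_out = np.array([0.7, 0.3])          # sigmoid outputs of the hidden layer
hidden_delta = np.dot(delta_next, w_next[:, :-1]) * hidden_out * (1 - hidden_out)
print(hidden_delta)                        # one delta per hidden neuron
```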
def stoc_descent(network, input_list, target_list, learning_rate):
mega_delta = []
hidden_output = [propforward(network, inpt)[0] for inpt in input_list]
for inpt, target in zip(input_list, target_list):
mega_delta.append(backprob(network, inpt, target))
inputs=[]
inputs.append(input_list)
for n in range(len(network)):
inputs.append(hidden_output[n])
assert len(inputs) == len(network) + 1
deltas = []
for n in range(len(network)):
deltas.append([np.array(delta[n]) for delta in mega_delta])
assert len(deltas)==len(network)
for n in range(len(network)):
edit_weights(network[n], inputs[n], deltas[n], learning_rate)
Explanation: Now that we can find the deltas for each layer in the network, we just need a function to edit our weights based on a list of examples. For that, we use stochastic gradient descent.
End of explanation
def edit_weights(layer, input_list, deltas, learning_rate):
for a, inpt in enumerate(input_list):
layer-=learning_rate/len(input_list)*np.dot(deltas[a].reshape(len(deltas[a]),1),
np.append(inpt,[1]).reshape(1,len(inpt)+1))
Explanation: To edit the weights of the network, we take the 2D array in each layer and subtract from it the 2D array that results from averaging the dot products of the deltas and the inputs of that layer over the samples in the training data. This average is multiplied by a learning rate, $\eta$, to give us control over how much the network will change.
$$w^{l}\rightarrow w^{l}-\frac{\eta}{m}\sum_{x} \delta_{x,l}(a_{x,l-1})^{T}$$
Credit to Neural Networks and Deep Learning by Michael Nielsen for the formula.
End of explanation
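For concreteness, here is the same update written out once for a single toy sample (all numbers are made up):

```python
# Toy weight update: w <- w - (eta/m) * delta . a^T, with eta = 0.5 and m = 1
layer = np.zeros((2, 4))                    # 2 neurons, 3 weights + 1 bias each
delta = np.array([[0.1], [-0.2]])           # column of deltas for this layer
inpt = np.array([[1.0, 2.0, 3.0, 1.0]])     # input row with the bias term appended
layer -= 0.5 / 1 * np.dot(delta, inpt)
print(layer)
```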
inputs=64
hidden_neurons=40
outputs=10
donnel = gen_network([inputs,hidden_neurons,outputs])
# Here's what Donnel looks like.
donnel
Explanation: So, we have everything we need to train a network. All we are missing is a network to train. Let's make one and let's call him Donnel.
End of explanation
def output_reader(output):
assert len(output)==10
result=[]
for i, t in enumerate(output):
if t == max(output) and abs(t-1)<=0.5:
result.append(i)
if len(result)==1:
return result[0]
else:
return 0
output_reader([0,0,0,0,0,1,0,0,0,0])
Explanation: So as you can see, the network "Donnel" is simply a list of 2D numpy arrays with one array for each layer of the network. His hidden layer's shape is 40 x 65 with each row being a perceptron with 64 weights and 1 bias. Since Donnel's output layer has 10 neurons in it, we need to be able to convert Donnel's output to numbers and numbers (0-9) into a list of perceptron outputs.
End of explanation
%%timeit -r1 -n1
training_cycles = 20
numbers_per_cycle = 1438
batch_size = 15
learning_rate = 1
train_data_index = np.linspace(0,numbers_per_cycle, numbers_per_cycle + 1)
target_list = [target_convert(n) for n in digits.target[0:numbers_per_cycle]]
np.random.seed(1)
np.random.shuffle(train_data_index)
for _ in range(training_cycles):
for n in train_data_index:
if n+batch_size <= numbers_per_cycle:
training_data = digits.data[int(n):int(n+batch_size)]
target_data = target_list[int(n):int(n+batch_size)]
else:
training_data = digits.data[int(n-batch_size):numbers_per_cycle]
assert len(training_data)!=0
target_data = target_list[int(n-batch_size):numbers_per_cycle]
stoc_descent(donnel, training_data, target_data, learning_rate)
And let's check how accurate it is by testing it with the remaining 20% of the data set.
def check_net(rnge = 1438, check_number=202):
guesses = []
targets = []
number_correct = 0
rnge = range(rnge,rnge + 359)
for n in rnge:
guesses.append(output_reader(propforward(donnel, digits.data[n])[-1]))
targets.append(digits.target[n])
for guess, target in zip(guesses, targets):
if guess == target:
number_correct+=1
number_total = len(rnge)
print(number_correct/number_total*100)
print("%d/%d" %(number_correct, number_total))
print()
print(propforward(donnel, digits.data[check_number])[-1])
print()
print(output_reader(propforward(donnel, digits.data[check_number])[-1]))
show_examples(check_number)
interact(check_net, rnge=True, check_number = [1,1796])
Explanation: Now, let's train the network with 80% of the digits data set. To do this, we will use stochastic gradient descent on batch-sized iterations of the total training data set. Essentially, we're going to change our weights 15 examples at a time until we complete 80% of the dataset. Let's run this through a couple of cycles as well to get our accuracy as high as possible.
This Cell Takes 20 Minutes to Run
End of explanation |
5,606 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*30].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
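If get_dummies is unfamiliar, a one-line illustration (with made-up values) shows what it produces:

```python
# Sketch: get_dummies turns a categorical column into one 0/1 column per category
pd.get_dummies(pd.Series([1, 2, 3, 2]), prefix='season')
# -> columns season_1, season_2, season_3 with one-hot indicators per row
```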
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
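Going backwards later is just the inverse transform; a short sketch using the saved factors:

```python
# Sketch: undo the scaling for the 'cnt' target using the stored mean and std
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean
```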
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
# check data shape
print("input samples:", train_features.shape[0])
print("input features:", train_features.shape[1])
batch = np.random.choice(train_features.index, size=128)
count = 0
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
if count == 0:
print(record.shape)
print(target.shape)
inputs = np.array(record, ndmin=2).T
targets = np.array(target, ndmin=2).T
print(inputs.shape)
print(targets.shape)
count += 1
print("count:", count)
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = (lambda x: 1 / (1 + np.exp(-x)))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
# shape symbol: i - numInputs (56), h - numHidden, o - numOutputs
# inputs(i, 1) , not batched
# targets(1, 1)
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
# shape: inputs(i, 1).T dot weights(h, i).T => (1, h)
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
# shape: inputs(1, h) dot weights(o, h).T => (1, o)
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
# shape(1, o)
output_errors = targets - final_outputs # Output layer error is the difference between desired target and actual output.
# TODO: Backpropagated error
# shape: inputs(1, o) dot weights(o, h) => (1, h)
hidden_errors = np.dot(output_errors, self.weights_hidden_to_output) # errors propagated to the hidden layer
hidden_grad = hidden_errors * hidden_outputs * (1 - hidden_outputs) # hidden layer gradients
# TODO: Update the weights
# shape: (1, o).T dot (1, h) => (o, h)
self.weights_hidden_to_output += self.lr * np.dot(output_errors.T, hidden_outputs) # update hidden-to-output weights with gradient descent step
# shape: (1, h).T dot (1, i) => (h, i)
#self.weights_input_to_hidden += self.lr * np.dot(hidden_grad.T, inputs.T) # update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += self.lr * hidden_grad.T * inputs.T # update input-to-hidden weights with gradient descent step
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
#hidden_inputs = # signals into hidden layer
#hidden_outputs = # signals from hidden layer
hidden_inputs = np.dot(inputs.T, self.weights_input_to_hidden.T) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer
#final_inputs = # signals into final output layer
#final_outputs = # signals from final output layer
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output.T) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs.T
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
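Regarding the hint above: since $f(x)=x$ has slope 1, the output error term needs no extra derivative factor. A tiny sketch with made-up numbers:

```python
# With the identity activation f(x) = x, f'(x) = 1, so the error term is just (y - y_hat)
y, y_hat = np.array([[0.5]]), np.array([[0.3]])
output_error_term = (y - y_hat) * 1.0
print(output_error_term)  # [[0.2]]
```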
import sys
### Set the hyperparameters here ###
epochs = 2000 # 100
learning_rate = 0.0699 # 0.1
hidden_nodes = 28 # input feature has 56, half is 28 # 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
#if False:
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
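One way to pick these values is a small sweep. The sketch below (assumed settings, short runs, not part of the original project) trains a few candidate networks briefly and ranks them by validation loss:

```python
# Rough hyperparameter sweep sketch; 200 short iterations per candidate, just for ranking
results = {}
for lr in (0.05, 0.1, 0.5):
    for hidden in (8, 16, 28):
        net = NeuralNetwork(train_features.shape[1], hidden, 1, lr)
        for _ in range(200):
            batch = np.random.choice(train_features.index, size=128)
            for record, target in zip(train_features.ix[batch].values,
                                      train_targets.ix[batch]['cnt']):
                net.train(record, target)
        results[(lr, hidden)] = MSE(net.run(val_features), val_targets['cnt'].values)
print(sorted(results.items(), key=lambda kv: kv[1])[:3])  # best three (lr, hidden) pairs
```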
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
print(network.weights_input_to_hidden)
network.train(inputs, targets)
print(network.weights_input_to_hidden)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model predicts well during the first half of December, but is not so good during the second half of December. I checked the csv file, and it turns out that it has only 2 years of data. And we separated the data: the last 21 days are used for testing. Which means that for the second half of December, only one year of data is available for training, which is not sufficient.
Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation |
5,607 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Python and MySQL</h1>
<h2>First import the python module containing the API</h2>
Step1: <h2>Set up a connection and create a cursor object</h2>
Step3: <h2>Execute a query and get the results</h2> | Python Code:
import pymysql
Explanation: <h1>Python and MySQL</h1>
<h2>First import the python module containing the API</h2>
End of explanation
db = pymysql.connect("localhost","root","None" ,database="schooldb")
cursor = db.cursor()
Explanation: <h2>Set up a connection and create a cursor object</h2>
End of explanation
cursor.execute('show tables;')
cursor.fetchall()
query = """
SELECT course.name FROM student
INNER JOIN enrolls_in ON student.ssn = enrolls_in.ssn
INNER JOIN course ON course.number = enrolls_in.class
WHERE f_name = "JOHN";
"""
cursor.execute(query)
cursor.fetchall()
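A minimal variation of the same query using a bound parameter (pymysql uses %s placeholders); "JOHN" here is just a sample value:

```python
# Sketch: same join, but with the name passed as a query parameter
safe_query = """
SELECT course.name FROM student
INNER JOIN enrolls_in ON student.ssn = enrolls_in.ssn
INNER JOIN course ON course.number = enrolls_in.class
WHERE f_name = %s;
"""
cursor.execute(safe_query, ("JOHN",))
cursor.fetchall()
```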
# Insert data into the database
insert = 'INSERT INTO Student VALUES ("", "")'
cursor.execute(insert)
cursor.fetchall()
# Commit the changes to the database
db.commit()
Explanation: <h2>Execute a query and get the results</h2>
End of explanation |
5,608 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Thermal Expansion
1. Introduction
A given crystal should have a well defined average lattice constant at a given pressure and temperature. Here we use silicon as an example to show how to calculate lattice constants using <code>GPUMD</code>.
Importing Relevant Functions
The inputs/outputs for GPUMD are processed using the Atomic Simulation Environment (ASE) and the thermo package.
Step1: 2. Preparing the Inputs
We use a cubic system (of diamond structure) consisting of $10^3\times 8 = 8000$ silicon atoms and use the minimal Tersoff potential [Fan 2020].
Generate the xyz.in file
Step2: The first few lines of the xyz.in file are
Step3: Plot Thermal Expansion
The output file thermo.out contains many useful data. Here, we load the results and plot the data in the following figure. | Python Code:
from pylab import *
from ase.lattice.cubic import Diamond
from thermo.gpumd.io import ase_atoms_to_gpumd
import pandas as pd
Explanation: Thermal Expansion
1. Introduction
A given crystal should have a well defined average lattice constant at a given pressure and temperature. Here we use silicon as an example to show how to calculate lattice constants using <code>GPUMD</code>.
Importing Relevant Functions
The inputs/outputs for GPUMD are processed using the Atomic Simulation Environment (ASE) and the thermo package.
End of explanation
Si = Diamond('Si', size=(10,10,10))
Si
ase_atoms_to_gpumd(Si, M=4, cutoff=3)
Explanation: 2. Preparing the Inputs
We use a cubic system (of diamond structure) consisting of $10^3\times 8 = 8000$ silicon atoms and use the minimal Tersoff potential [Fan 2020].
Generate the xyz.in file:
End of explanation
aw = 2
fs = 16
font = {'size' : fs}
matplotlib.rc('font', **font)
matplotlib.rc('axes' , linewidth=aw)
def set_fig_properties(ax_list):
tl = 8
tw = 2
tlm = 4
for ax in ax_list:
ax.tick_params(which='major', length=tl, width=tw)
ax.tick_params(which='minor', length=tlm, width=tw)
ax.tick_params(which='both', axis='both', direction='in', right=True, top=True)
Explanation: The first few lines of the xyz.in file are:
8000 4 3 0 0 0
1 1 1 54.3 54.3 54.3
0 0 0 0 28
0 0 2.715 2.715 28
0 2.715 0 2.715 28
0 2.715 2.715 0 28
Explanations for the first line:
The first number states that the number of particles is 8000.
The second number in this line, 4, is good for silicon crystals described by the Tersoff potential because no atom can have more than 4 neighbor atoms in the temperature range studied. Making this number larger only results in more memory usage. If this number is not large enough, <code>GPUMD</code> will give an error message and exit.
The next number, 3, means the initial cutoff distance for the neighbor list construction is 3 A. Here, we only need to consider the first nearest neighbors. Any number larger than the first nearest neighbor distance and smaller than the second nearest neighbor distance is OK here. Note that we will also not update the neighbor list. There is no such need in this problem.
The remaining three zeros in the first line mean:
the box is orthogonal;
the initial velocities are not contained in this file;
there is no grouping method defined here.
Explanations for the second line:
The first three 1's mean that all three directions are periodic.
The remaining three numbers are the box lengths in the three directions. It can be seen that we have used an initial lattice constant of 5.43 A to build the model.
Starting from the third line, the numbers in the first column are all 0 here, which means that all the atoms are of type 0 (single atom-type system). The next three columns are the initial coordinates of the atoms. The last column gives the masses of the atoms. Here, we show isotopically pure Si-28 crystal, but this Jupyter notebook will generate an xyz.in file using the average of the various isotopes of Si. In some applications, one can consider mass disorder in a flexible way.
The <code>run.in</code> file:
The <code>run.in</code> input file is given below:<br>
```
potential potentials/tersoff/Si_Fan_2019.txt 0
velocity 100
ensemble npt_ber 100 100 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
time_step 1
dump_thermo 10
run 20000
ensemble npt_ber 200 200 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 300 300 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 400 400 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 500 500 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 600 600 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 700 700 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 800 800 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 900 900 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
ensemble npt_ber 1000 1000 100 0 0 0 53.4059 53.4059 53.4059 2000
neighbor off # we know it is safe to turn off neighbor list update
dump_thermo 10
run 20000
```
- The first line uses the potential keyword to define the potential to be used, which is specified in the file Si_Fan_2019.txt.
The second line uses the velocity keyword and sets the velocities to be initialized with a temperature of 100 K.
The following 4 lines define the first run. This run will be in the NPT ensemble, using the Berendsen method. The temperature is 100 K and the pressures are zero in all the directions. The coupling constants are 100 and 2000 time steps for the thermostat and the barostat (The elastic constant, or inverse compressibility parameter needed in the barostat is estimated to be 53.4059 GPa; this only needs to be correct up to the order of magnitude.), respectively. The time_step for integration is 1 fs. There are $2\times 10^4$ steps for this run and the thermodynamic quantities will be output every 10 steps.
After this run, there are 9 other runs with the same parameters but different target temperatures. Note that the time step only needs to be set once if one wants to use the same time step in the whole simulation. In contrast, one has to use the dump_thermo keyword for each run in order to get outputs for each run. That is, we can say that the time_step keyword is propagating and the dump_thermo keyword is non-propagating.
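Since the ten runs differ only in the target temperature, a short sketch like the one below could generate this run.in programmatically (not required by GPUMD, just a convenience):

```python
# Sketch: write the run.in shown above by looping over the target temperatures
lines = ["potential potentials/tersoff/Si_Fan_2019.txt 0", "velocity 100", ""]
for i, T in enumerate(range(100, 1001, 100)):
    lines += [f"ensemble npt_ber {T} {T} 100 0 0 0 53.4059 53.4059 53.4059 2000",
              "neighbor off"]
    if i == 0:
        lines.append("time_step 1")   # the time step is propagating, so set it once
    lines += ["dump_thermo 10", "run 20000", ""]
with open("run.in", "w") as f:
    f.write("\n".join(lines))
```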
3. Results and Discussion
It takes less than 1 min to run this example when a Tesla K40 card is used. The speed of the run is about $3\times 10^7$ atom x step / second. Using a Tesla P100, the speed is close to $10^8$ atom x step / second.
Figure Properties
End of explanation
data = pd.read_csv("thermo.out", delim_whitespace=True, header=None)
labels = ['T', 'K', 'U', 'Px', 'Py', 'Pz', 'Pyz', 'Pxz', 'Pxy']
# Orthogonal
if data.shape[1] == 12:
labels += ['Lx', 'Ly', 'Lz']
elif data.shape[1] == 15:
labels += ['ax', 'ay', 'az', 'bx', 'by', 'bz', 'cx', 'cy', 'cz']
thermo = dict()
for i in range(data.shape[1]):
thermo[labels[i]] = data[i].to_numpy(dtype='float')
thermo.keys()
t = 0.01*np.arange(1,thermo['T'].shape[0]+1) # [ps]
NC = 10 # Number of cells in each direction
NT = 10 # Number of temperature steps
temp = np.arange(100,1001,100)
M = thermo['T'].shape[0]//NT
a = (thermo['Lx']+thermo['Ly']+thermo['Lz'])/(3*NC)
Pave = (thermo['Px']+thermo['Py']+thermo['Pz'])/3.
a_ave = a.reshape(NT, M)[:,M//2+1:].mean(axis=1)
fit = np.poly1d(np.polyfit(temp, a_ave, deg=1))
figure(figsize=(12,10))
subplot(2,2,1)
set_fig_properties([gca()])
plot(t, thermo['T'])
xlim([0, 200])
gca().set_xticks(range(0,201,50))
ylim([0, 1100])
gca().set_yticks(range(0,1101,500))
ylabel('Temperature (K)')
xlabel('Time (ps)')
title('(a)')
subplot(2,2,2)
set_fig_properties([gca()])
plot(t, Pave)
xlim([0, 200])
gca().set_xticks(range(0,201,50))
ylim([-0.1, 0.4])
gca().set_yticks(np.arange(-1,5)/10)
ylabel('Pressure (GPa)')
xlabel('Time (ps)')
title('(b)')
subplot(2,2,3)
set_fig_properties([gca()])
plot(t, a,linewidth=3)
xlim([0, 200])
gca().set_xticks(range(0,201,50))
ylim([5.43, 5.48])
gca().set_yticks([5.44,5.46,5.48])
ylabel(r'a ($\AA$)')
xlabel('Time (ps)')
title('(c)')
subplot(2,2,4)
set_fig_properties([gca()])
Tpoly = [0, 1100]
plot(Tpoly, fit(Tpoly),color='C3')
scatter(temp, a_ave,s=200,zorder=100,facecolor='none',edgecolors='C0',linewidths=3)
xlim([0, 1100])
gca().set_xticks(range(0,1101,500))
ylim([5.43, 5.48])
gca().set_yticks([5.44,5.46,5.48])
ylabel(r'a ($\AA$)')
xlabel('Temperature (K)')
title('(d)')
tight_layout()
show()
Explanation: Plot Thermal Expansion
The output file thermo.out contains many useful data. Here, we load the results and plot the data in the following figure.
End of explanation |
5,609 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create an Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step45: Hyperparameter selections and results
|embed_input|embed_output|batch_size| rnn_size|keep_prob| lr|l|epochs|training_time| train_acc| val_acc| loss|
|---|---|---|---|---|---|---|---|---|---|---|---|
| 30| 30| 128| 50| 0.50|0.0010|2| 10| ? | 0.6684| 0.7248|0.3571|
| 15| 15| 256| 100| 0.50|0.0010|2| 10| 09
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_ids, target_ids = [], []
source_split = source_text.split("\n")
for s in source_split:
ids = [source_vocab_to_int[word] for word in s.split()]
source_ids.append(ids)
target_split = target_text.split("\n")
for s in target_split:
ids = [target_vocab_to_int[word] for word in s.split()]
ids.append(target_vocab_to_int['<EOS>'])
target_ids.append(ids)
return (source_ids, target_ids)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
# TODO: Implement Function
_input = tf.placeholder(tf.int32, [None, None], name="input")
_targets = tf.placeholder(tf.int32, [None, None], name="targets")
_lr = tf.placeholder(tf.float32, name="learning_rate")
_keep_prob = tf.placeholder(tf.float32, name="keep_prob")
_target_sequence_length = tf.placeholder(tf.int32, (None,), name="target_sequence_length")
_max_target_sequence_length = tf.reduce_max(_target_sequence_length, name="max_target_len")
_source_sequence_length = tf.placeholder(tf.int32, (None,), name="source_sequence_length")
return (
_input, _targets, _lr, _keep_prob, _target_sequence_length,
_max_target_sequence_length, _source_sequence_length)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
# worth mentioning that I had a hard time understanding strided_slice, but basically
# you are drawing a rectangle with two coordinates, [0,0] is the top left corner
# [batch_size, -1] is the bottom right corner, and [1,1] just means your stride
# so (in this case, keep everything)
ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])
# this basically creates a rank 2 tensor of '<GO'>, that is batch_size x 1, like a vertical vector
# it then pre-pends it to the ending tensor which is batch_size x (len(target[1]) - 1)
# and it does this along the column axis (axis = 1)
decoder_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoder_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
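A plain-numpy illustration of the same transformation (toy ids; 1 is assumed to play the role of <GO>):

```python
# Sketch: drop the last id of each target row and prepend the <GO> id
import numpy as np
toy_targets = np.array([[4, 7, 3, 2],    # 2 plays the role of <EOS>
                        [5, 6, 2, 0]])   # 0 plays the role of <PAD>
go_id = 1
toy_decoder_input = np.concatenate(
    [np.full((toy_targets.shape[0], 1), go_id), toy_targets[:, :-1]], axis=1)
print(toy_decoder_input)   # [[1 4 7 3]
                           #  [1 5 6 2]]
```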
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
# TODO: Implement Function
# Taking the input, you use the vocabulary size, and encoding_embedding size as two of
# the, I don't know, 3?? dimensions; this needs further study
enc_embedding = tf.contrib.layers.embed_sequence(
rnn_inputs, source_vocab_size, encoding_embedding_size)
# encoder
def make_cell(rnn_size):
# construct our cell and initialize
# made of LSTM cells
enc_cell = tf.contrib.rnn.DropoutWrapper(
tf.contrib.rnn.LSTMCell(
rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)),
input_keep_prob=keep_prob)
return enc_cell # this is just a single layer of our encoder
# now to make the full multi_rnn_cell of num_layer encoding cells
enc_cell = tf.contrib.rnn.MultiRNNCell(
[make_cell(rnn_size) for _ in range(num_layers)])
# print(enc_cell)
# lets embed the input
enc_output, enc_state = tf.nn.dynamic_rnn(
enc_cell, enc_embedding, sequence_length=source_sequence_length, dtype=tf.float32)
return (enc_output, enc_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
# TODO: Implement Function
# helper for the training process. used by BasicDecoder to read inputs.
# whatever this means :P
# print(max_summary_length.shape, max_summary_length.dtype)
training_helper = tf.contrib.seq2seq.TrainingHelper(
inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False)
# basic decoder
# ?? get the max target sequence length
#max_target_sequence_length = tf.reduce_max(target_sequence_length)
# wrapping a dropout for keep probability
# print(dec_cell)
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob)
# print(dec_cell)
# make the training decoder
training_decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, training_helper, encoder_state, output_layer)
# perform dynamic decoding using the decoder
x = tf.contrib.seq2seq.dynamic_decode(
training_decoder, impute_finished=True, maximum_iterations=max_summary_length)
basic_decoder_output = x[0]
return basic_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
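Conceptually, the TrainingHelper implements teacher forcing: at every step the decoder is fed the ground-truth previous token instead of its own prediction. A minimal framework-free sketch of that idea (predict_next is a stand-in for one decoder step, not part of the notebook):
def teacher_forced_decode(target_tokens, predict_next):
    predictions, prev = [], '<GO>'
    for truth in target_tokens:
        predictions.append(predict_next(prev))   # predict from the previous token
        prev = truth                              # but feed the ground truth forward
    return predictions
print(teacher_forced_decode(['il', 'a', 'vu'], lambda prev: prev.upper()))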
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
# TODO: Implement Function
# create a 1d tensor of <GO> codes as our start tokens
start_tokens = tf.tile(
tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size], name="start_tokens")
# Helper for the inference process
# GreedyEmbeddingHelper feeds the embedding of the previous step's prediction
# back in; it needs a vector of start tokens and a scalar end-of-sequence id
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(
dec_embeddings, start_tokens, end_of_sequence_id)
# adding dropout wrapper to the decode cell
dec_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, input_keep_prob=keep_prob)
# Basic decoder
inference_decoder = tf.contrib.seq2seq.BasicDecoder(
dec_cell, inference_helper, encoder_state, output_layer)
# Perform dynamic decoding using the decoder
x = tf.contrib.seq2seq.dynamic_decode(
inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)
basic_decoder_output = x[0]
return basic_decoder_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
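At inference time there is no ground truth, so the helper feeds the previous prediction back in. A rough greedy-decoding sketch, with a toy step function standing in for embed + cell + argmax (not the notebook's code):
def greedy_decode(step, start_id, eos_id, max_len):
    output, prev = [], start_id
    for _ in range(max_len):
        prev = step(prev)          # next id predicted from the previous one
        if prev == eos_id:
            break
        output.append(prev)
    return output
print(greedy_decode(lambda i: i + 1, start_id=0, eos_id=5, max_len=10))   # [1, 2, 3, 4]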
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
# 1. Decoder embedding
dec_embeddings = tf.Variable(tf.random_uniform(
[target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
# 2. Construct the decoder cell
def make_cell(rnn_size):
# construct our cell and initialize
# made of LSTM cells
dec_cell = tf.contrib.rnn.DropoutWrapper(
tf.contrib.rnn.LSTMCell(
rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2)),
input_keep_prob=keep_prob)
return dec_cell # this is just a single layer of our decoder
# Stack layers
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
# 3. Dense layer to translate the decoder's outputs at each time step into a choice from the
# target vocabulary
output_layer = Dense(
target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1))
# 4. Set up training decoder
with tf.variable_scope("decode"):
train = decoding_layer_train(
encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length,
output_layer, keep_prob)
# 5. Set up inference decoder
with tf.variable_scope("decode", reuse=True):
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
infer = decoding_layer_infer(
encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob)
return (train, infer)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
# TODO: Implement Function
# 1. Encode the input
_, enc_state = encoding_layer(
input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size,
enc_embedding_size)
# 2. Process target data
dec_input = process_decoder_input(
target_data, target_vocab_to_int, batch_size)
# 3. Decode the encoded input using the decoding layer
train, infer = decoding_layer(
dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size,
num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)
return train, infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 64
# RNN Size
rnn_size = 300
# Number of Layers
num_layers = 4
# Embedding Size
encoding_embedding_size = 64
decoding_embedding_size = 64
# Learning Rate
learning_rate = 0.0005
# Dropout Keep Probability
keep_probability = 0.75
display_step = 50
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
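The gradient-clipping step above bounds every gradient element to [-1, 1]; a small NumPy sketch of the same clip-by-value idea:
import numpy as np
grad = np.array([-3.2, 0.4, 1.7, -0.9])
print(np.clip(grad, -1.0, 1.0))   # element-wise, analogous to tf.clip_by_value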
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
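A quick worked example of the padding logic on a toy batch (the PAD id is arbitrary here):
PAD = 0
batch = [[5, 6], [7, 8, 9, 10], [11]]
max_len = max(len(s) for s in batch)
print([s + [PAD] * (max_len - len(s)) for s in batch])
# [[5, 6, 0, 0], [7, 8, 9, 10], [11, 0, 0, 0]]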
from tqdm import tqdm
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in tqdm(range(epochs)):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Hyperparameter selections and results
|embed_input|embed_output|batch_size|rnn_size|keep_prob|lr|layers|epochs|training_time|train_acc|val_acc|loss|
|---|---|---|---|---|---|---|---|---|---|---|---|
| 30| 30| 128| 50| 0.50|0.0010|2| 10| ? | 0.6684| 0.7248|0.3571|
| 15| 15| 256| 100| 0.50|0.0010|2| 10| 09:10| 0.7305| 0.7228|0.3687|
| 15| 15| 512| 100| 0.75|0.0010|2| 10| 04:56| 0.6604| 0.6671|0.4456|
| 15| 15| 512| 100| 0.75|0.0005|2| 10| 04:53| 0.6127| 0.6345|0.6592|
| 15| 15| 512| 100| 0.75|0.0005|2| 30| 14:50| 0.8104| 0.7900|0.2596|
| 15| 15| 256| 100| 0.75|0.0010|2| 20| 18:02| 0.8766| 0.8699|0.1053|
| 15| 15| 256| 150| 0.75|0.0010|2| 20| 18:27| 0.9640| 0.9350|0.0368|
| 64| 64| 64| 300| 0.75|0.0005|4| 10| 1:22:56|0.9829|0.9794|0.0150|
Test sentence
Input
Word Ids: [177, 128, 65, 66, 93, 46, 178]
English Words: ['he', 'saw', 'a', 'old', 'yellow', 'truck', '.']
Prediction
Word Ids: [119, 301, 7, 121, 31, 300, 69, 1]
French Words: il a vu un camion jaune . <EOS>
Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence = sentence.lower()
seq = [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.split()]
return seq
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
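For example, with a tiny made-up vocabulary the conversion behaves like this (unknown words fall back to the <UNK> id):
toy_vocab = {'he': 10, 'saw': 11, 'a': 12, 'truck': 13, '<UNK>': 2}
sentence = 'He saw a purple truck'
print([toy_vocab.get(w, toy_vocab['<UNK>']) for w in sentence.lower().split()])
# [10, 11, 12, 2, 13]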
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
5,610 | Given the following text description, write Python code to implement the functionality described.
Description:
Print all the pairs that contains the positive and negative values of an element
Function to print pairs of positive and negative values present in the array ; Store all the positive elements in the unordered_set ; Start traversing the array ; Check if the positive value of current element exists in the set or not ; Print that pair ; Driver code
| Python Code:
def printPairs(arr, n):
    pairs = set()
    pair_exists = False
    for i in range(0, n):
        if arr[i] > 0:
            pairs.add(arr[i])
    for i in range(0, n):
        if arr[i] < 0:
            if (-arr[i]) in pairs:
                print("{}, {}".format(arr[i], -arr[i]))
                pair_exists = True
    if pair_exists == False:
        print("No such pair exists")

if __name__ == "__main__":
    arr = [4, 8, 9, -4, 1, -1, -8, -9]
    n = len(arr)
printPairs(arr , n )
|
5,611 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analise e Tratamento Basico (Triagem) de dados
Analises por Hans. 2015
2012 (10sec)
2013 (10sec)
2014 (10sec ate 1Min seguinte)
2015 (1Min)
Step1: Ajustando o dominio temporal da serie de dados
Step2: Em busca dos GAPs
Step3: Analises Diarias
Step4: Quais dias o Acumulado de chuva foi superior a 20mm em 2011 ?
Step5: Quantos dias o Acumulado de chuva foi superior a 20mm em 2011 ?
Step6: Estatistica geral do DataFrame | Python Code:
import sys
import numpy as np
import pandas as pd
print(sys.version) # Python version - optional
print(np.__version__) # numpy module version - optional
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
import datetime
import time
#?pd.date_range
#rng = pd.date_range('1/1/2011', periods=90, freq='10mS')
#rng
# Loading data into the df_dados dataframe from a .csv file on a remote server.
#df_dados = pd.read_csv('http://fortran-zrhans.c9.io/csdapy/sr311-2014.csv', index_col=None)
# Local data
#df_dados = pd.read_csv('../dados/sr311-2011.csv', index_col=None,parse_dates=['Timestamp'])
df_dados = pd.read_csv('sr311-2011.csv', index_col=None,parse_dates=['Timestamp'])
print("Dados Importados OK")
# Checking the column names
df_dados.columns.tolist()
df_dados.head(3)
# Removing the first column and setting the Timestamp column as the index
del(df_dados['Unnamed: 0'])
df_dados.set_index('Timestamp', inplace=True)
# Selecting only a few columns of interest
df_dados = df_dados[['AirTC', 'RH', 'Rain_mm']]
#df_dados = df_dados.dropna()
df_dados.head()
#s_chuva = df_dados.Rain_mm
#s_chuva.cumsum()
plt.figure(figsize=(16,8))
plt.subplot(2, 1, 1)
plt.title("Dados Brutos")
df_dados.AirTC.plot()
df_dados.RH.plot()
plt.subplot(2, 1, 2)
df_dados.Rain_mm.plot()
#plt.savefig('figs/nome-da-figura.png')
Explanation: Basic Data Analysis and Cleaning (Triage)
Analyses by Hans, 2015
2012 (10 sec)
2013 (10 sec)
2014 (10 sec up to 1 min afterwards)
2015 (1 min)
End of explanation
#df_dados.index.min(), df_dados.index.max(),
## (Timestamp('2015-01-01 00:00:00'), Timestamp('2015-05-29 10:00:00'))
# Creating a new continuous time index based on the start and end of the original data series
#d = pd.DataFrame(index=pd.date_range(pd.datetime(2015,1,1), pd.datetime(2015,5,29), freq='Min'))
d = pd.DataFrame(index=pd.date_range(pd.datetime(2011,1,1), pd.datetime(2011,12,31,23,59,00), freq='Min'))
print("Indice 2011 criado OK")
d.head(2),d.tail(2)
# Left-joining the two DataFrames (whatever is missing for an index in d is filled with NaN)
ndf_dados = d.join(df_dados)
#ndf_dados.fillna(0) #Substitui valor NaN por 0
print("Junçao OK")
plt.figure(figsize=(16,15))
#Grafico Temperatura
plt.subplot(3, 1, 1)
plt.title('Dados Brutos Reindexados')
plt.ylabel('Graus')
plt.xlabel('')
ndf_dados.AirTC.plot(legend=True)
#Grafico Umidade
plt.subplot(3, 1, 2)
#plt.title('Dados Brutos Reindexados')
plt.xlabel('')
plt.ylabel('%')
ndf_dados.RH.plot(legend=True)
#Grafico Chuva
plt.subplot(3, 1, 3)
#plt.title('Dados Brutos Reindexados')
plt.xlabel('')
plt.ylabel('mm')
ndf_dados.Rain_mm.plot(legend=True)
#plt.savefig('figs/nome-da-figura.png')
Explanation: Adjusting the temporal domain of the data series
End of explanation
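The same gap-exposing effect can also be had with reindex; a hedged toy sketch (not the station data):
import pandas as pd
s = pd.Series([1.0, 2.0], index=pd.to_datetime(['2011-01-01 00:00', '2011-01-01 00:03']))
full_index = pd.date_range('2011-01-01 00:00', '2011-01-01 00:04', freq='Min')
print(s.reindex(full_index))   # the missing minutes appear as NaN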
#numpy.all(numpy.isnan(data_list))
# np.any(np.isnan(ndf_dados)) - if it returns True, at least one NaN value was found
# Shows where the data has NaN values
ndf_dados[np.isnan(ndf_dados.Rain_mm)]
np.count_nonzero(~np.isnan(ndf_dados))
def numberOfNonNans(data):
count = 0
for i in data:
if not np.isnan(i):
count += 1
return count
print(numberOfNonNans(ndf_dados.AirTC))
print(numberOfNonNans(ndf_dados.RH))
print(numberOfNonNans(ndf_dados.Rain_mm))
ndf_dados.head()
# Exporting to a new file
ndf_dados.to_csv('sao_roque_2011-AirTC-RH-Rain.csv',na_rep='NaN')
# TODO: This file no longer has gaps in the temporal domain (the gaps were filled with NaN), so the reindexing step can be skipped if it is reused in the future.
Explanation: Looking for the gaps
End of explanation
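For reference, pandas can do the same non-NaN counting without a manual loop; a small sketch on a toy frame (not the station data):
import pandas as pd
import numpy as np
toy = pd.DataFrame({'AirTC': [20.1, np.nan, 19.8], 'Rain_mm': [0.2, 0.0, np.nan]})
print(toy.notnull().sum())   # non-missing values per column
print(toy.isnull().sum())    # missing values per column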
df_dados_diarios = ndf_dados[['AirTC','RH']] .resample('D', how='mean')
chuva = ndf_dados.Rain_mm.resample('D', how='sum')
df_dados_diarios['Acum_Chuva'] = chuva
df_dados_diarios.head()
# Showing everything
plt.figure(figsize=(16,8))
plt.subplot(2,1,1)
plt.title('Daily Mean 2011')
plt.xlabel("")
plt.ylabel("mm, Graus, %")
df_dados_diarios.AirTC.plot(legend=True)
df_dados_diarios.RH.plot(legend=True)
df_dados_diarios.Acum_Chuva.plot(legend=True)
plt.subplot(2,1,2)
acumulado = df_dados_diarios.Acum_Chuva.cumsum()
plt.xlabel("Data")
plt.ylabel("mm")
acumulado.plot(legend=True)
#plt.savefig('figs/nome-da-figura.png')
# Histogram of the daily accumulated rainfall
plt.figure(figsize=(16,8))
plt.xlabel("Occurrence")
plt.ylabel("mm")
df_dados_diarios.Acum_Chuva.plot(kind='hist', orientation='horizontal', alpha=.75,legend=True)
#plt.savefig('figs/nome-da-figura.png')
Explanation: Daily Analyses
End of explanation
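Note that resample('D', how='mean') is the old pandas resample API; newer pandas versions use the method-chaining form sketched below on a toy series (an assumption about the reader's pandas version, not a change to the notebook):
import pandas as pd
idx = pd.date_range('2011-01-01', periods=4, freq='12H')
rain = pd.Series([1.0, 2.0, 0.0, 3.0], index=idx)
print(rain.resample('D').sum())    # daily accumulated rainfall
print(rain.resample('D').mean())   # daily mean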
# On which days was the accumulated rainfall above 20 mm in 2011?
df_dados_diarios.Acum_Chuva[df_dados_diarios.Acum_Chuva > 20.]
Explanation: On which days was the accumulated rainfall above 20 mm in 2011?
End of explanation
# On how many days was the accumulated rainfall above 20 mm in 2011?
df_dados_diarios.Acum_Chuva[df_dados_diarios.Acum_Chuva > 20.].count()
Explanation: On how many days was the accumulated rainfall above 20 mm in 2011?
End of explanation
ndf_dados.describe(), df_dados_diarios.describe()
Explanation: General statistics of the DataFrame
End of explanation |
5,612 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A NYC Taxi data cleaning and model building pipeline to forecast the trip time from A2B in NYC
Step1: Use the bash =)
Step2: So parsing does not work, do it manually
Step3: Some statistics about the payment.
Step4: So thats the statistic about payments. Remember, there are to tips recorded for cash payment
How many trips are affected by tolls?
Step5: So 95% of the drives do not deal with tolls. We will drop the column then.
We are not interested in the following features (they do not add any further information)
Step6: First, we want to generate the trip_time because this is our target.
Step7: Check for missing and false data
Step8: So there is not that much data missing. That's quite surprising, maybe it's wrong.
Step9: So we have many zeros in the data. How much percent?
Step10: <font color = 'blue' > Most of the zeros are missing data. So flag them as NaN (means also NA) to be consistent! </font color>
Step11: Quick preview about the trip_times
A quick look at the trip time before preprocessing
Step12: That many unique values do we have in trip_time.
Identify the the cases without geo data and remove them from our data to be processed.
Step13: So how many percent of data are left to be processed?
Step14: <font color = 'black'> So we only dropped 2% of the data because of missing geo tags. Someone could search the 'anomaly'-data for patterns, e.g. for fraud detection. We are also going to drop all the unrecognized trip_distances because we cannot (exactly) generate them (an approximation would be possible). </font color>
Step15: Drop all the columns with trip_time.isnull()
Step16: This is quite unreasonable. We have dropoff_datetime = pickup_datetime and the geo-coords of pickup and dropoff do not match! trip_time equals NaT here.
Step17: After filtering regarding the trip_time
Step18: We sometimes have some unreasonably small trip_times.
Step19: <font color = 'blue'> So all in all, we dropped less than 3% of the data. </font color>
Step20: We can deal with that. External investigation of the anomaly is recommended.
Start validating the non-anomaly data
Step21: Distribution of the avg_amount_per_minute
Step22: Compare to http
Step23: So we dropped around 6% of the data.
Step24: Only look at trips in a given bounding box
Step25: So we've omitted about 2% of the data because the trips do not start and end in the box
Inspect Manhattan only.
Step26: Again, let's take a look at the distribution of the target variable we want to estimate
Step27: Make a new dataframe with features and targets to train the model
Step28: Use minutes for prediction instead of seconds (ceil the time). Definitley more robust than seconds!
Step29: So we hace 148 different times to predict.
Step30: So 90% of the trip_times are between 3 and 30 minutes.
A few stats about the avg. pickups per hour
Step31: Split the data into a training dataset and a test dataset. Evaluate the performance of the decision tree on the test data
Step32: Start model building
Step33: Train and compare a few decision trees with different parameters
Step34: Some more results
| Sum of abs. deviation | max_depth | max_depth | max_depth | max_depth | max_depth |
|---------------------------|-----------|------|------|------|------|
| min_samples_split | 10 | 15 | 20 | 25 | 30 |
| 3 | 1543 | 1267 | 1127 | 1088 | 1139 |
| 10 | 1544 | 1266 | 1117 | 1062 | 1086 |
| 20 | 1544 | 1265 | 1108 | 1037 | 1034 |
| 50 | 1544 | 1263 | 1097 | 1011 | 994 |
| 250 | 1544 | 1266 | 1103 | 1019 | 1001 |
| 1000 | 1548 | 1284 | 1144 | 1085 | 1077 |
| 2500 | 1555 | 1307 | 1189 | 1150 | 1146 |
Min_samples_split = 3
(10, 15, 20, 25, 30)
[0.51550436937183575, 0.64824394212610637, 0.68105673170887715, 0.66935222696811203, 0.62953726391785103]
[1543779.4758261547, 1267630.6429649692, 1126951.2647852183, 1088342.055931434, 1139060.7870262777]
[14.802491903305054, 21.25719118118286, 27.497225046157837, 32.381808280944824, 35.0844943523407]
Min_samples_split = 10
(10, 15, 20, 25, 30)
[0.51546967657630205, 0.65055440252664309, 0.69398351369676525, 0.69678113708751077, 0.67518497976746361]
[1543829.4000325042, 1266104.6486240581, 1117165.9640872395, 1061893.3390857978, 1086045.4846943137]
[14.141993999481201, 20.831212759017944, 25.626588821411133, 29.81039047241211, 32.23483180999756]
Min_samples_split = 20
(10, 15, 20, 25, 30)
[0.51537943698967736, 0.65215078696481421, 0.70216115764491505, 0.71547757670696144, 0.70494598277965781]
[1543841.1100632891, 1264595.0251062319, 1108064.4596608584, 1036593.8033015681, 1039378.3133869285]
[14.048030376434326, 20.481205463409424, 25.652794361114502, 29.03341507911682, 31.56394076347351]
min_samples_split=50
(10, 15, 20, 25, 30)
[0.51540742268899331, 0.65383862050244068, 0.71125658610588971, 0.73440457163892259, 0.73435595461521908]
[1543721.3435906437, 1262877.4227863667, 1097080.889761846, 1010511.305738725, 994244.46643680066]
[14.682952404022217, 21.243955373764038, 25.80405569076538, 28.731933116912842, 32.00149917602539]
min_samples_split=250
(10, 15, 20, 25, 30)
[0.51532618474195502, 0.65304694576643452, 0.712453138233199, 0.73862283625684677, 0.74248829470934752]
[1544004.1103626473, 1266358.9437320188, 1102793.6462709717, 1018555.9754967012, 1000675.2014443219]
[14.215412378311157, 20.32301664352417, 25.39385199546814, 27.81620717048645, 28.74960231781006]
min_samples_split=1000
(10, 15, 20, 25, 30)
[0.51337097515902541, 0.64409382777503155, 0.6957163207523871, 0.71429589738370614, 0.7159815227278703]
[1547595.3414912082, 1284490.8114952976, 1143568.0997977962, 1084873.9820350427, 1077427.5321143884]
[14.676559448242188, 20.211236476898193, 23.846965551376343, 26.270352125167847, 26.993313789367676]
min_samples_split=2500
(10, 15, 20, 25, 30)
[0.50872112253965895, 0.63184888428446373, 0.67528344919996985, 0.68767132817144228, 0.68837707513473978]
[1554528.9746030923, 1306995.3609336747, 1188981.9585730932, 1149615.9326777055, 1146209.3017767756]
[14.31177806854248, 20.02240490913391, 23.825161457061768, 24.616609811782837, 25.06274127960205]
Train the most promising decision tree again
Step35: A tree with this depth is too big to dump. Graphviz works fine until around depth 12.
Step36: A few stats about the trained tree
Step37: Finding the leaves / predicted times
Step38: So we have 67260 leaves.
Step39: So 50% of the nodes are leaves. A little bit cross-checking
Step40: To get a feeling for the generalization of the tree
Step41: The above plot looks promising, but is not very useful. Nonetheless, you can represent this in a Lorenzcurve.
Step42: About 5% of the leaves represent about 40% of the samples
Step43: We found out that all samples have been considered.
Inspect an arbitrary leaf and extract the rule set
We are taking a look at the leaf that represents the most samples
Step44: Retrieve the decision path that leads to the leaf
Step45: Be aware, read this branch bottom up!
Processing is nicer if the path is in a data frame.
Step46: Via grouping, we can extract the relevant splits that are always the ones towards the end of the branch. Earlier splits become obsolete if the feature is splitted in the same manner again downwards the tree.
Step47: Groupby is very helpful here. Choose always the split with the first index. "min()" is used here for demonstration purposes only.
Step48: One might use an own get_group method. This will throw less exceptions if the key is not valid (e.g. there is no lower range on day_of_week). This can especially happen in trees with low depth.
Step49: Extract the pickup- and dropoff-area.
Step50: In order to draw the rectangle, we need the side lengths of the areas.
Step51: White is pickup- red is dropoff. But are this really the correct areas? Lets check it via filtering in the training set. Do we get the same amount of trips as the no. of samples in the leaf? (Time splits are hard coded in this case)
Step52: So we've found the trips that belong to this branch! The discrepancy, that might be caused by numeric instability when comparing the geo coordinates, is not that big.
A little bit of gmaps....
Step53: We can see, that the intersection of the pickup- and dropoff-area is quite big. This corresponds to the predicted trip_time that is about 6.5 minutes. In this short period of time, a car cannot drive that far. It looks also quite similar to the picture above, that is based on a 2d-histogram of the pickups. We can qualitatively see, that the dropoff-area is smaller than the pickup-area. | Python Code:
import os as os
import pandas as pd
import numpy as np
from scipy import stats, integrate
import matplotlib.pyplot as plt
import matplotlib as mpl
import seaborn as sns
from statsmodels.distributions.empirical_distribution import ECDF
import datetime as dt
plt.style.use('seaborn-whitegrid')
plt.rcParams['image.cmap'] = 'blue'
#sns.set_context('notebook',font_scale=2)
sns.set_style("whitegrid")
% matplotlib inline
labelsize = 22
mpl.rcParams.update({'font.size': labelsize})
mpl.rcParams.update({'figure.figsize': (20,10)})
mpl.rcParams.update({'axes.titlesize': 'large'})
mpl.rcParams.update({'axes.labelsize': 'large'})
mpl.rcParams.update({'xtick.labelsize': labelsize})
mpl.rcParams.update({'ytick.labelsize': labelsize})
# mpl.rcParams.keys()
!cd data && ls
Explanation: A NYC Taxi data cleaning and model building pipeline to forecast the trip time from A2B in NYC
End of explanation
data = pd.read_csv('data/Taxi_from_2013-05-06_to_2013-05-13.csv', index_col=0, parse_dates=True)
data.info()
Explanation: Use the bash =)
End of explanation
data['pickup_datetime'] =pd.to_datetime(data['pickup_datetime'], format = '%Y-%m-%d %H:%M:%S')
data['dropoff_datetime'] =pd.to_datetime(data['dropoff_datetime'], format = '%Y-%m-%d %H:%M:%S')
data.describe().transpose()
data.head()
payments = data.payment_type.value_counts()
Explanation: So parsing does not work, do it manually:
End of explanation
payments/len(data)
Explanation: Some statistics about the payment.
End of explanation
data.tolls_amount.value_counts()/len(data)
Explanation: So that's the statistic about payments. Remember, there are no tips recorded for cash payments.
How many trips are affected by tolls?
End of explanation
data = data.drop(['vendor_id', 'rate_code', 'store_and_fwd_flag','payment_type','mta_tax', 'tolls_amount',
'surcharge'], axis=1)
data.describe().transpose()
Explanation: So 95% of the trips do not deal with tolls. We will drop the column then.
We are not interested in the following features (they do not add any further information):
End of explanation
data['trip_time']=data.dropoff_datetime-data.pickup_datetime
data.head()
data.info()
Explanation: First, we want to generate the trip_time because this is our target.
End of explanation
data.isnull().sum()
Explanation: Check for missing and false data:
End of explanation
(data==0).sum()
Explanation: So there is not that much data missing. That's quite surprising, maybe it's wrong.
End of explanation
(data==0).sum()/len(data)
Explanation: So we have many zeros in the data. How much percent?
End of explanation
data = data.replace(np.float64(0), np.nan);
data.isnull().sum()
Explanation: <font color = 'blue' > Most of the zeros are missing data. So flag them as NaN (means also NA) to be consistent! </font color>
End of explanation
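The replace-zeros-with-NaN idea on a toy frame, as a quick sketch (made-up values, not the taxi data):
import pandas as pd
import numpy as np
toy = pd.DataFrame({'fare_amount': [5.5, 0.0, 7.0], 'passenger_count': [1, 0, 2]})
print(toy.replace(0, np.nan).isnull().sum())   # the zeros now count as missing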
trip_times_in_minutes = data['trip_time'] / np.timedelta64(1, 'm')
plt.hist(trip_times_in_minutes , bins=30, range=[0, 60],
weights=np.zeros_like(trip_times_in_minutes) + 1. / trip_times_in_minutes.size)
#plt.yscale('log')
print(trip_times_in_minutes.quantile(q=[0.025, 0.5, 0.75, 0.95, 0.975, 0.99]))
plt.xlabel('Trip Time in Minutes')
plt.ylabel('Relative Frequency')
plt.title('Distribution of Trip Time')
plt.savefig('figures/trip_time_distribution.eps', format='eps', dpi=1000)
len(data.trip_time.value_counts().values)
Explanation: Quick preview about the trip_times
A quick look at the trip time before preprocessing
End of explanation
anomaly = data.loc[(data['dropoff_longitude'].isnull()) | (data['dropoff_latitude'].isnull()) |
(data['pickup_longitude'].isnull()) | (data['pickup_latitude'].isnull())]
data = data.drop(anomaly.index)
anomaly['flag'] = 'geo_NA'
data.isnull().sum()
Explanation: That is how many unique values we have in trip_time.
Identify the cases without geo data and remove them from our data to be processed.
End of explanation
len(data)/(len(data)+len(anomaly))
anomaly.tail()
Explanation: So how many percent of data are left to be processed?
End of explanation
anomaly = anomaly.append(data.loc[(data['trip_distance'].isnull())])
anomaly.loc[data.loc[(data['trip_distance'].isnull())].index,'flag'] = 'trip_dist_NA'
anomaly.tail()
data = data.drop(anomaly.index, errors='ignore') # ignore uncontained labels
data.isnull().sum()
1-len(data)/(len(data)+len(anomaly))
Explanation: <font color = 'black'> So we only dropped 2% of the data because of missing geo tags. Someone could search the 'anomaly'-data for patterns, e.g. for fraud detection. We are also going to drop all the unrecognized trip_distances because we cannot (exactly) generate them (an approximation would be possible). </font color>
End of explanation
anomaly = anomaly.append(data.loc[(data['trip_time'].isnull())])
anomaly.loc[data.loc[(data['trip_time'].isnull())].index,'flag'] = 'trip_time_NA'
anomaly.tail()
data = data.drop(anomaly.index, errors='ignore') # ignore uncontained labels
Explanation: Drop all the columns with trip_time.isnull()
End of explanation
data.describe().transpose()
Explanation: This is quite unreasonable. We have dropoff_datetime = pickup_datetime and the geo-coords of pickup and dropoff do not match! trip_time equals NaT here.
End of explanation
plt.hist(data.trip_time.values / np.timedelta64(1, 'm'), bins=50, range=[0,100])
print(data.trip_time.describe())
np.percentile(data.trip_time, [1,5,10,15,25,50,75,85,95,99]) / np.timedelta64(1,'m')
Explanation: After filtering regarding the trip_time
End of explanation
anomaly.tail()
1-len(data)/(len(data)+len(anomaly))
Explanation: We sometimes have some unreasonably small trip_times.
End of explanation
data.isnull().sum()
Explanation: <font color = 'blue'> So all in all, we dropped less than 3% of the data. </font color>
End of explanation
data['avg_amount_per_minute'] = (data.fare_amount-2.5) / (data.trip_time / np.timedelta64(1,'m'))
data.avg_amount_per_minute.describe()
Explanation: We can deal with that. External investigation of the anomaly is recommended.
Start validating the non-anomaly data: Valid trip_time, valid distance?
Correct the avg amount for the initial charge.
End of explanation
h = data.avg_amount_per_minute
plt.figure(figsize=(20,10))
plt.hist(h, normed=False, stacked=True, bins=40, range=[0 , 100], )
#, histtype='stepfilled')
plt.yscale('log')
plt.ylabel('log(freq x)', fontsize=40)
plt.xlabel('x = avg_amount_per_minute', fontsize=40)
print('Min:' + str(min(h)) + '\nMax:' + str(max(h)))
plt.yticks(fontsize=40)
plt.xticks(fontsize=40)
plt.locator_params(axis = 'x', nbins = 20)
plt.show()
data.head()
data.avg_amount_per_minute.quantile([.0001,.01, .5, .75, .95, .975, .99, .995])
Explanation: Distribution of the avg_amount_per_minute
End of explanation
lb = 0.5
ub = 2.5
anomaly = anomaly.append(data.loc[(data['avg_amount_per_minute'] > ub) |
(data['avg_amount_per_minute'] < lb)])
anomaly.loc[data.loc[(data['avg_amount_per_minute'] > ub)].index,'flag'] = 'too fast'
anomaly.loc[data.loc[(data['avg_amount_per_minute'] < lb)].index,'flag'] = 'too slow'
data = data.drop(anomaly.index, errors='ignore') # ignore uncontained labels / indices
print(1-len(data)/(len(data)+len(anomaly)))
Explanation: Compare to http://www.nyc.gov/html/tlc/html/passenger/taxicab_rate.shtml . We have a strict lower bound of 0.5 \$ per minute (a taxi waiting in congestion). 2.5 \$ per minute matches roughly 1 mile / minute (no static fares included!), so the taxi would drive 60 mph. We take this as an upper bound.
End of explanation
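A quick arithmetic check of the upper bound, assuming a metered rate of roughly 2.5 $ per mile (an assumption for this sketch, not a figure from the notebook):
dollars_per_minute = 2.5
dollars_per_mile = 2.5            # assumed metered rate
miles_per_minute = dollars_per_minute / dollars_per_mile
print(miles_per_minute * 60, 'mph')   # 60.0 mph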
data.avg_amount_per_minute.describe()
anomaly.tail()
Explanation: So we dropped around 6% of the data.
End of explanation
jfk_geodata = (40.641547, -73.778118)
ridgefield_geodata = (40.856406, -74.020642)
data_in_box = data.loc[(data['dropoff_latitude'] > jfk_geodata[0]) &
(data['dropoff_longitude'] < jfk_geodata[1]) &
(data['dropoff_latitude'] < ridgefield_geodata[0]) &
(data['dropoff_longitude'] > ridgefield_geodata[1]) &
(data['pickup_latitude'] > jfk_geodata[0]) &
(data['pickup_longitude'] < jfk_geodata[1]) &
(data['pickup_latitude'] < ridgefield_geodata[0]) &
(data['pickup_longitude'] > ridgefield_geodata[1])
]
# taxidata = taxidata.drop(anomaly.index)
data_in_box.head()
print(jfk_geodata < ridgefield_geodata,
len(data_in_box)/len(data))
Explanation: Only look at trips in a given bounding box
End of explanation
x = data_in_box.pickup_longitude
y = data_in_box.pickup_latitude
plt.jet()
H, xedges, yedges = np.histogram2d(x, y, bins=300)#, normed=False, weights=None)
fig = plt.figure(figsize=(20, 10))
plt.hist2d(x, y, bins=300, range=[[min(x.values),-73.95],[40.675,40.8]])
plt.colorbar()
plt.title('Pickup density (first full week in May 2013)')
plt.ylabel('Latitude')
plt.xlabel('Longitude')
ax = fig.gca()
ax.grid(False)
# plt.savefig('figures/pickup_density_manhattan_13.png', format='png', dpi=150)
Explanation: So we've omitted about 2% of the data because the trips do not start and end in the box
Inspect Manhattan only.
End of explanation
h = data_in_box.trip_time.values / np.timedelta64(1, 'm')
plt.hist(h, normed=False, bins=150)
plt.yticks(fontsize=40)
plt.xticks(fontsize=40)
plt.show()
data_in_box.head()
Explanation: Again, let's take a look at the distribution of the target variable we want to estimate:
End of explanation
time_regression_df = pd.DataFrame([#data_in_box['pickup_datetime'].dt.day,
data_in_box['pickup_datetime'].dt.dayofweek,
data_in_box['pickup_datetime'].dt.hour,
data_in_box['pickup_latitude'],
data_in_box['pickup_longitude'],
data_in_box['dropoff_latitude'],
data_in_box['dropoff_longitude'],
np.ceil(data_in_box['trip_time']/np.timedelta64(1, 'm')),
]).T
time_regression_df.columns = ['pickup_datetime_dayofweek', 'pickup_datetime_hour',
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude',
'trip_time']
Explanation: Make a new dataframe with features and targets to train the model
End of explanation
time_regression_df.tail()
time_regression_df.head()
time_regression_df.ix[:,0:6].describe()
print(time_regression_df.trip_time.value_counts())
print(len(time_regression_df.trip_time.value_counts()))
Explanation: Use minutes for prediction instead of seconds (ceil the time). Definitley more robust than seconds!
End of explanation
time_regression_df.trip_time.quantile([0.05, 0.95])
Explanation: So we hace 148 different times to predict.
End of explanation
hour_stats = time_regression_df.groupby(time_regression_df.pickup_datetime_hour)
plt.bar(left = hour_stats.pickup_datetime_hour.count().keys(), height=hour_stats.pickup_datetime_hour.count().values/7,
tick_label=hour_stats.pickup_datetime_hour.count().keys(), align='center')
plt.title('Avg. pickups per hour')
plt.xlabel('datetime_hour')
plt.ylabel('frequency')
plt.savefig('avg_pickups_per_hour.png')
print('Avg. pickups per half-hour (summarized over 1 week)')
hour_stats.pickup_datetime_hour.count()/14
(hour_stats.count()/14).quantile([.5])
Explanation: So 90% of the trip_times are between 3 and 30 minutes.
A few stats about the avg. pickups per hour
End of explanation
time_regression_df.columns
from sklearn import cross_validation as cv
time_regression_df_train, time_regression_df_test = cv.train_test_split(time_regression_df, test_size=0.1, random_state=99)
y_train = time_regression_df_train['trip_time']
x_train = time_regression_df_train.ix[:, 0:6]
y_test = time_regression_df_test['trip_time']
x_test = time_regression_df_test.ix[:, 0:6]
time_regression_df_train.tail()
len(x_train)
xy_test = pd.concat([x_test, y_test], axis=1)
xy_test.head()
# xy_test.to_csv('taxi_tree_test_Xy_20130506-12.csv')
# x_test.to_csv('taxi_tree_test_X_20130506-12.csv')
# y_test.to_csv('taxi_tree_test_y_20130506-12.csv')
# xy_test_sample = Xy_test.sample(10000, random_state=99)
# xy_test_sample.to_csv('taxi_tree_test_Xy_sample.csv')
# xy_test_sample.head()
print(x_train.shape)
print(x_train.size)
print(x_test.shape)
print(time_regression_df.shape)
print(x_train.shape[0]+x_test.shape[0])
Explanation: Split the data into a training dataset and a test dataset. Evaluate the performance of the decision tree on the test data
End of explanation
import time
# Import the necessary modules and libraries
from sklearn.tree import DecisionTreeRegressor
import numpy as np
import matplotlib.pyplot as plt
Explanation: Start model building
End of explanation
#features = ['pickup_latitude', 'pickup_longitude', 'dropoff_latitude', 'dropoff_longitude','pickup_datetime']
#print("* features:", features, sep="\n")
max_depth_list = (10,15,20,25,30)
scores = [-1, -1, -1, -1, -1]
sum_abs_devs = [-1, -1, -1, -1, -1]
times = [-1, -1, -1, -1, -1]
for i in range(0,len(max_depth_list)):
start = time.time()
regtree = DecisionTreeRegressor(min_samples_split=1000, random_state=10, max_depth=max_depth_list[i])# formerly 15. 15 is reasonable,
# 30 brings best results # random states: 99
regtree.fit(x_train, y_train)
scores[i]= regtree.score(x_test, y_test)
y_pred = regtree.predict(x_test)
sum_abs_devs[i] = sum(abs(y_pred-y_test))
times[i] = time.time() - start
print(max_depth_list)
print(scores)
print(sum_abs_devs)
print(times)
Explanation: Train and compare a few decision trees with different parameters
End of explanation
start = time.time()
regtree = DecisionTreeRegressor(min_samples_split=50, random_state=10, max_depth=25, splitter='best' )
regtree.fit(x_train, y_train)
regtree.score(x_test, y_test)
y_pred = regtree.predict(x_test)
sum_abs_devs = sum(abs(y_pred-y_test))
elapsed = time.time() - start
print(elapsed)
Explanation: Some more results
| Sum of abs. deviation | max_depth | max_depth | max_depth | max_depth | max_depth |
|---------------------------|-----------|------|------|------|------|
| min_samples_split | 10 | 15 | 20 | 25 | 30 |
| 3 | 1543 | 1267 | 1127 | 1088 | 1139 |
| 10 | 1544 | 1266 | 1117 | 1062 | 1086 |
| 20 | 1544 | 1265 | 1108 | 1037 | 1034 |
| 50 | 1544 | 1263 | 1097 | 1011 | 994 |
| 250 | 1544 | 1266 | 1103 | 1019 | 1001 |
| 1000 | 1548 | 1284 | 1144 | 1085 | 1077 |
| 2500 | 1555 | 1307 | 1189 | 1150 | 1146 |
Min_samples_split = 3
(10, 15, 20, 25, 30)
[0.51550436937183575, 0.64824394212610637, 0.68105673170887715, 0.66935222696811203, 0.62953726391785103]
[1543779.4758261547, 1267630.6429649692, 1126951.2647852183, 1088342.055931434, 1139060.7870262777]
[14.802491903305054, 21.25719118118286, 27.497225046157837, 32.381808280944824, 35.0844943523407]
Min_samples_split = 10
(10, 15, 20, 25, 30)
[0.51546967657630205, 0.65055440252664309, 0.69398351369676525, 0.69678113708751077, 0.67518497976746361]
[1543829.4000325042, 1266104.6486240581, 1117165.9640872395, 1061893.3390857978, 1086045.4846943137]
[14.141993999481201, 20.831212759017944, 25.626588821411133, 29.81039047241211, 32.23483180999756]
Min_samples_split = 20
(10, 15, 20, 25, 30)
[0.51537943698967736, 0.65215078696481421, 0.70216115764491505, 0.71547757670696144, 0.70494598277965781]
[1543841.1100632891, 1264595.0251062319, 1108064.4596608584, 1036593.8033015681, 1039378.3133869285]
[14.048030376434326, 20.481205463409424, 25.652794361114502, 29.03341507911682, 31.56394076347351]
min_samples_split=50
(10, 15, 20, 25, 30)
[0.51540742268899331, 0.65383862050244068, 0.71125658610588971, 0.73440457163892259, 0.73435595461521908]
[1543721.3435906437, 1262877.4227863667, 1097080.889761846, 1010511.305738725, 994244.46643680066]
[14.682952404022217, 21.243955373764038, 25.80405569076538, 28.731933116912842, 32.00149917602539]
min_samples_split=250
(10, 15, 20, 25, 30)
[0.51532618474195502, 0.65304694576643452, 0.712453138233199, 0.73862283625684677, 0.74248829470934752]
[1544004.1103626473, 1266358.9437320188, 1102793.6462709717, 1018555.9754967012, 1000675.2014443219]
[14.215412378311157, 20.32301664352417, 25.39385199546814, 27.81620717048645, 28.74960231781006]
min_samples_split=1000
(10, 15, 20, 25, 30)
[0.51337097515902541, 0.64409382777503155, 0.6957163207523871, 0.71429589738370614, 0.7159815227278703]
[1547595.3414912082, 1284490.8114952976, 1143568.0997977962, 1084873.9820350427, 1077427.5321143884]
[14.676559448242188, 20.211236476898193, 23.846965551376343, 26.270352125167847, 26.993313789367676]
min_samples_split=2500
(10, 15, 20, 25, 30)
[0.50872112253965895, 0.63184888428446373, 0.67528344919996985, 0.68767132817144228, 0.68837707513473978]
[1554528.9746030923, 1306995.3609336747, 1188981.9585730932, 1149615.9326777055, 1146209.3017767756]
[14.31177806854248, 20.02240490913391, 23.825161457061768, 24.616609811782837, 25.06274127960205]
Train the most promising decision tree again
End of explanation
# from sklearn import tree
# tree.export_graphviz(regtree, out_file='figures/tree_d10.dot', feature_names=time_regression_df.ix[:,0:6].columns, class_names=time_regression_df.columns[6])
regtree.tree_.impurity
y_train.describe()
print('R²: ', regtree.score(x_test, y_test))
from sklearn.externals import joblib
joblib.dump(regtree, 'treelib/regtree_depth_25_mss_50_rs_10.pkl', protocol=2)
Explanation: A tree with this depth is too big to dump. Graphviz works fine until around depth 12.
End of explanation
print(regtree.feature_importances_ ,'\n',
regtree.class_weight,'\n',
regtree.min_samples_leaf,'\n',
regtree.tree_.n_node_samples,'\n'
)
y_pred = regtree.predict(x_test)
np.linalg.norm(np.ceil(y_pred)-y_test)
diff = (y_pred-y_test)
# plt.figure(figsize=(12,10)) # not needed. set values globally
plt.hist(diff.values, bins=100, range=[-50, 50])
print('Perzentile(%): ', [1,5,10,15,25,50,75,90,95,99], '\n', np.percentile(diff.values, [1,5,10,15,25,50,75,90,95,99]))
print('Absolute time deviation (in 1k): ', sum(abs(diff))/1000)
plt.title('Error Distribution on the 2013 Test Set')
plt.xlabel('Error in Minutes')
plt.ylabel('Frequency')
plt.savefig('figures/simple_tree_error_d25_msp_50.eps', format='eps', dpi=1000)
diff.describe()
Explanation: A few stats about the trained tree:
End of explanation
leaves = regtree.tree_.children_left*regtree.tree_.children_right
for idx, a in enumerate(leaves):
if a==1:
x=1# do nothing
else:
leaves[idx] = 0
print(leaves)
print(leaves[leaves==1].sum())
len(leaves[leaves==1])
Explanation: Finding the leaves / predicted times
End of explanation
len(leaves[leaves==1])/regtree.tree_.node_count
Explanation: So we have 67260 leaves.
End of explanation
print((leaves==1).sum()+(leaves==0).sum())
print(len(leaves))
node_samples = regtree.tree_.n_node_samples
node_samples
leaf_samples = np.multiply(leaves, node_samples)
stats = np.unique(leaf_samples, return_counts=True)
stats
Explanation: So 50% of the nodes are leaves. A little bit cross-checking:
End of explanation
plt.scatter(stats[0][1:], stats[1][1:])
plt.yscale('log')
plt.xscale('log')
Explanation: To get a feeling for the generalization of the tree: Do some leaves represent the vast amount of trips? This is what we would expect.
End of explanation
node_perc = np.cumsum(stats[1][1:]) # Cumulative sum of nodes
samples_perc = np.cumsum(np.multiply(stats[0][1:],stats[1][1:]))
node_perc = node_perc / node_perc[-1]
samples_perc = samples_perc / samples_perc[-1]
plt.plot(node_perc, samples_perc)
plt.plot((np.array(range(0,100,1))/100), (np.array(range(0,100,1))/100), color='black')
plt.ylim(0,1)
plt.xlim(0,1)
plt.title('Lorenz Curve Between Nodes And Samples')
plt.xlabel('Leaves %')
plt.ylabel('Samples %')
plt.fill_between(node_perc, samples_perc , color='blue', alpha='1')
plt.savefig('figures/lorenzcurve_d25_msp_50.eps', format='eps', dpi=1000)
plt.savefig('figures/lorenzcurve_d25_msp_50.png', format='png', dpi=300)
Explanation: The above plot looks promising, but is not very useful. Nonetheless, you can represent this in a Lorenzcurve.
End of explanation
len(leaf_samples)==regtree.tree_.node_count
Explanation: About 5% of the leaves represent about 40% of the samples
End of explanation
max_leaf = [np.argmax(leaf_samples), max(leaf_samples)]
print('So node no.', max_leaf[0] ,'is a leaf and has', max_leaf[1] ,'samples in it.')
print(max_leaf)
Explanation: We found out that all samples have been considered.
Inspect an arbitrary leaf and extract the rule set
We are taking a look at the leaf that represents the most samples
End of explanation
# Inspired by: http://stackoverflow.com/questions/20224526/
# how-to-extract-the-decision-rules-from-scikit-learn-decision-tree
def get_rule(tree, feature_names, leaf):
left = tree.tree_.children_left
right = tree.tree_.children_right
threshold = tree.tree_.threshold
features = [feature_names[i] for i in tree.tree_.feature]
value = tree.tree_.value
samples = tree.tree_.n_node_samples
global count
count = 0;
global result
result = {};
def recurse_up(left, right, threshold, features, node):
global count
global result
count = count+1;
#print(count)
if node != 0:
for i, j in enumerate(right):
if j == node:
print( 'Node:', node, 'is right of:',i, ' with ', features[i], '>', threshold[i])
result[count] = [features[i], False, threshold[i]]
return(recurse_up(left, right, threshold, features, i))
for i, j in enumerate(left):
if j == node:
print('Node:', node, 'is left of',i,' with ', features[i], '<= ', threshold[i])
result[count] = [features[i], True, threshold[i]]
return(recurse_up(left, right, threshold, features, i))
else :
return(result)
print('Leaf:',leaf, ', value: ', value[leaf][0][0], ', samples: ', samples[leaf])
recurse_up(left, right, threshold, features, leaf)
return(result)
branch_to_leaf=get_rule(regtree, time_regression_df.ix[:,0:6].columns,max_leaf[0])
branch_to_leaf
Explanation: Retrieve the decision path that leads to the leaf
End of explanation
splitsdf = pd.DataFrame(branch_to_leaf).transpose()
splitsdf.columns = ['features', 'leq', 'value']
splitsdf
Explanation: Be aware, read this branch bottom up!
Processing is nicer if the path is in a data frame.
End of explanation
splitstats = splitsdf.groupby(['features','leq'])
splitstats.groups
Explanation: Via grouping, we can extract the relevant splits that are always the ones towards the end of the branch. Earlier splits become obsolete if the feature is splitted in the same manner again downwards the tree.
End of explanation
splitstats.min()
Explanation: Groupby is very helpful here. Choose always the split with the first index. "min()" is used here for demonstration purposes only.
End of explanation
def get_group(g, key):
if key in g.groups: return g.get_group(key)
return pd.DataFrame(list(key).append(np.nan))
Explanation: One might use an own get_group method. This will throw less exceptions if the key is not valid (e.g. there is no lower range on day_of_week). This can especially happen in trees with low depth.
End of explanation
area_coords = dict()
area_coords['dropoff_upper_left'] = [splitstats.get_group(('dropoff_latitude', True)).iloc[0].value,
splitstats.get_group(('dropoff_longitude', False)).iloc[0].value]
area_coords['dropoff_lower_right'] = [splitstats.get_group(('dropoff_latitude',False)).iloc[0].value,
splitstats.get_group(('dropoff_longitude',True)).iloc[0].value]
area_coords['pickup_upper_left'] = [splitstats.get_group(('pickup_latitude',True)).iloc[0].value,
splitstats.get_group(('pickup_longitude',False)).iloc[0].value]
area_coords['pickup_lower_right'] = [splitstats.get_group(('pickup_latitude',False)).iloc[0].value,
splitstats.get_group(('pickup_longitude',True)).iloc[0].value]
area_coords
import operator
dropoff_rect_len = list(map(operator.sub,area_coords['dropoff_upper_left'],
area_coords['dropoff_lower_right']))
pickup_rect_len = list(map(operator.sub,area_coords['pickup_upper_left'],
area_coords['pickup_lower_right']))
dropoff_rect_len, pickup_rect_len
Explanation: Extract the pickup- and dropoff-area.
End of explanation
import matplotlib.patches as patches
x = data_in_box.pickup_longitude
y = data_in_box.pickup_latitude
fig = plt.figure(figsize=(20, 10))
# Reduce the plot to Manhattan
plt.hist2d(x, y, bins=300, range=[[min(x.values),-73.95],[40.675,40.8]])
plt.colorbar()
plt.title('Pickup density (first full week in May 2013)')
plt.ylabel('Latitude')
plt.xlabel('Longitude')
plt.hold(True)
ax = fig.gca()
ax.add_patch(patches.Rectangle((area_coords['dropoff_upper_left'][1], area_coords['dropoff_lower_right'][0]),
abs(dropoff_rect_len[1]), dropoff_rect_len[0], fill=False, edgecolor='red', linewidth=5))
ax.add_patch(patches.Rectangle((area_coords['pickup_upper_left'][1], area_coords['pickup_lower_right'][0]),
abs(pickup_rect_len[1]), pickup_rect_len[0], fill=False, edgecolor='white', linewidth=5))
ax.grid(False)
plt.hold(False)
Explanation: In order to draw the rectangle, we need the side lengths of the areas.
End of explanation
trips_of_leaf = x_train.loc[(x_train['dropoff_latitude'] > area_coords['dropoff_lower_right'][0]) &
(x_train['dropoff_longitude'] < area_coords['dropoff_lower_right'][1]) &
(x_train['dropoff_latitude'] < area_coords['dropoff_upper_left'][0]) &
(x_train['dropoff_longitude'] > area_coords['dropoff_upper_left'][1]) &
(x_train['pickup_latitude'] > area_coords['pickup_lower_right'][0]) &
(x_train['pickup_longitude'] < area_coords['pickup_lower_right'][1]) &
(x_train['pickup_latitude'] < area_coords['pickup_upper_left'][0]) &
(x_train['pickup_longitude'] > area_coords['pickup_upper_left'][1]) &
(x_train['pickup_datetime_dayofweek'] < 4.5) &
(x_train['pickup_datetime_hour'] < 18.5) &
(x_train['pickup_datetime_hour'] > 7.5)
]
trips_of_leaf.head()
print('Filtered trips: ', len(trips_of_leaf))
print('Trips in leaf: ', max_leaf[1])
len(trips_of_leaf) == max_leaf[1]
Explanation: White is the pickup area, red is the dropoff area. But are these really the correct areas? Let's check by filtering the training set: do we get the same number of trips as the number of samples in the leaf? (The time splits are hard-coded in this case.)
End of explanation
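Rather than hard-coding the time splits, the whole filter can be rebuilt from the extracted rules. A rough sketch (assuming splitsdf from above; every condition on the decision path is applied, so redundant ones are harmless):
# Build the boolean mask programmatically from the branch rules
mask = pd.Series(True, index=x_train.index)
for _, row in splitsdf.iterrows():
    if row['leq']:
        mask &= x_train[row['features']] <= row['value']
    else:
        mask &= x_train[row['features']] > row['value']
print('Trips selected from rules: ', mask.sum())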
import gmaps
import gmaps.datasets
gmaps.configure(api_key='AI****') # Fill in your API-Code here
trips_of_leaf_pickup_list = trips_of_leaf.iloc[:,[2,3]].values.tolist()
trips_of_leaf_dropoff_list = trips_of_leaf.iloc[:,[4,5]].values.tolist()
data = gmaps.datasets.load_dataset('taxi_rides')
pickups_gmap = gmaps.Map()
dropoffs_gmap = gmaps.Map()
pickups_gmap.add_layer(gmaps.Heatmap(data=trips_of_leaf_pickup_list[0:1000]))
dropoffs_gmap.add_layer(gmaps.Heatmap(data=trips_of_leaf_dropoff_list[0:1000]))
Explanation: So we've found the trips that belong to this branch! The discrepancy, which might be caused by numerical instability when comparing the geo coordinates, is not that big.
A little bit of gmaps....
End of explanation
pickups_gmap
dropoffs_gmap
Explanation: We can see that the intersection of the pickup and dropoff areas is quite big. This corresponds to the predicted trip_time of about 6.5 minutes: in such a short period of time, a car cannot drive very far. The picture also looks quite similar to the one above, which is based on a 2d-histogram of the pickups. We can qualitatively see that the dropoff area is smaller than the pickup area.
End of explanation |
5,613 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XGBoost
We use the XGBoost Python package to separate signal from background for rare radiative decays $b \rightarrow s (d) \gamma$. XGBoost is a scalable, distributed implementation of gradient tree boosting that builds the tree itself in parallel, leading to speedy cross validation (relative to other iterative algorithms). Refer to the original paper by Chen et. al
Step1: Training
Step2: Load feature vectors saved as Pandas dataframe, convert to data matrix structure used by XGBoost.
Step3: Specify the starting hyperparameters for the boosting algorithm. Ideally this would be optimized using cross-validation. Refer to https
Step4: Crossfeed accuracy
Step6: Model performance varies dramatically depending on our choice of hyperparameters. Tuning a large number of hyperparameters a grid search may be prohibitively expensive and we can instead randomly sample from distributions of hyperparamters and evaluate the model at these points.
Inference
Step7: Feature Importances
Plot the feature importances of the 20 features that contributed the most to overall tree impurity reduction. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import xgboost as xgb
import time, os
Explanation: XGBoost
We use the XGBoost Python package to separate signal from background for rare radiative decays $b \rightarrow s (d) \gamma$. XGBoost is a scalable, distributed implementation of gradient tree boosting that parallelizes the construction of each tree, leading to speedy cross-validation (relative to other iterative algorithms). Refer to the original paper by Chen et al.: https://arxiv.org/pdf/1603.02754v1.pdf as well as the GitHub repository: https://github.com/dmlc/xgboost
Author: Justin Tan - 5/04/17
End of explanation
# Set training mode, hadronic channel.
mode = 'gamma_only'
channel = 'kstar0'
Explanation: Training
End of explanation
df = pd.read_hdf('/home/ubuntu/radiative/df/kstar0/kstar0_gamma_sig_cont.h5', 'df')
df = pd.read_hdf('/home/ubuntu/radiative/df/rho0/std_norm_sig_cus.h5', 'df')
from sklearn.model_selection import train_test_split
# Split data into training, testing sets
df_X_train, df_X_test, df_y_train, df_y_test = train_test_split(df[df.columns[:-1]], df['labels'],
test_size = 0.05, random_state = 24601)
dTrain = xgb.DMatrix(data = df_X_train.values, label = df_y_train.values, feature_names = df.columns[:-1])
dTest = xgb.DMatrix(data = df_X_test.values, label = df_y_test.values, feature_names = df.columns[:-1])
# Save to XGBoost binary file for faster loading
dTrain.save_binary("dTrain" + mode + channel + ".buffer")
dTest.save_binary("dTest" + mode + channel + ".buffer")
Explanation: Load feature vectors saved as Pandas dataframe, convert to data matrix structure used by XGBoost.
End of explanation
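The saved buffers can later be loaded back directly, which is faster than rebuilding the DMatrix from the dataframe; for example:
# Reload the binary buffers instead of re-converting the dataframes
dTrain = xgb.DMatrix("dTrain" + mode + channel + ".buffer")
dTest = xgb.DMatrix("dTest" + mode + channel + ".buffer")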
# Boosting hyperparameters
params = {'eta': 0.2, 'seed':0, 'subsample': 0.9, 'colsample_bytree': 0.9, 'gamma': 0.05,
'objective': 'binary:logistic', 'max_depth':5, 'min_child_weight':1, 'silent':0}
# Specify multiple evaluation metrics for validation set
params['eval_metric'] = '[email protected]'
pList = list(params.items())+[('eval_metric', 'auc')]
# Number of boosted trees to construct
nTrees = 75
# Specify validation set to watch performance
evalList = [(dTrain,'train'), (dTest,'eval')]
evalDict = {}
print("Starting model training\n")
start_time = time.time()
# Train the model using the above parameters
bst = xgb.train(params = pList, dtrain = dTrain, evals = evalList, num_boost_round = nTrees,
evals_result = evalDict, early_stopping_rounds = 20)
# Save our model
model_name = mode + channel + str(nTrees) + '.model'
bst.save_model(model_name)
delta_t = time.time() - start_time
print("Training ended. Elapsed time: (%.3f s)" %(delta_t))
Explanation: Specify the starting hyperparameters for the boosting algorithm. Ideally this would be optimized using cross-validation. Refer to https://github.com/dmlc/xgboost/blob/master/doc/parameter.md for the full list.
Important parameters for regularization control model complexity and add randomness to make training robust against noise.
* eta: Reduces feature weights after each boosting iteration
* subsample: Adjusts proportion of instance that XGBoost collects to grow trees
* max_depth: Maximum depth of tree structure. Larger depth $\rightarrow$ greater complexity/overfitting
* gamma: Minimum loss reduction required to further partition a leaf node on the tree
End of explanation
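Because evals_result was passed to xgb.train, the per-round metrics are stored in evalDict. A quick sketch of plotting the learning curve (the dictionary is keyed by the evaluation-set names and metric names used above):
# Plot the AUC recorded at each boosting round
rounds = range(len(evalDict['eval']['auc']))
plt.plot(rounds, evalDict['train']['auc'], label='train')
plt.plot(rounds, evalDict['eval']['auc'], label='validation')
plt.xlabel('Boosting round')
plt.ylabel('AUC')
plt.legend()
plt.show()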
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
import scipy.stats as stats
# Set number of parameter settings to be sampled
n_iter = 25
# Set parameter distributions for random search CV using the AUC metric
cv_paramDist = {'learning_rate': stats.uniform(loc = 0.05, scale = 0.15), # 'n_estimators': stats.randint(150, 300),
'colsample_bytree': stats.uniform(0.8, 0.195),
'subsample': stats.uniform(loc = 0.8, scale = 0.195),
'max_depth': [3, 4, 5, 6],
'min_child_weight': [1, 2, 3]}
fixed_params = {'n_estimators': 350, 'seed': 24601, 'objective': 'binary:logistic'}
xgb_randCV = RandomizedSearchCV(xgb.XGBClassifier(**fixed_params), cv_paramDist, scoring = 'roc_auc', cv = 5,
n_iter = n_iter, verbose = 2, n_jobs = -1)
start = time.time()
xgb_randCV.fit(df_X_train.values, df_y_train.values)
print("RandomizedSearchCV complete. Time elapsed: %.2f seconds for %d candidates" % ((time.time() - start), n_iter))
# Best set of hyperparameters
xgb_randCV.best_params_
optParams = {'eta': 0.1, 'seed':0, 'subsample': 0.95, 'colsample_bytree': 0.9, 'gamma': 0.05,
'objective': 'binary:logistic', 'max_depth':6, 'min_child_weight':1, 'silent':0}
# Cross-validation on optimal parameters
xgb_cv = xgb.cv(params = optParams, dtrain = dTrain, nfold = 5, metrics = ['error', 'auc'], verbose_eval = 10,
stratified = True, as_pandas = True, early_stopping_rounds = 30, num_boost_round = 500)
Explanation: Crossfeed accuracy: 96%, AUC = 0.991
Continuum accuracy: ~ 99%, AUC ~ 1
Custom accuracy: ~
Optimizing Hyperparameters
The parameters that control the behaviour of the algorithm are not learned from the training data; they are called hyperparameters, and the best value for each depends on the dataset. We can optimize them by performing a grid search over the parameter space, using $n$-fold cross-validation: the original sample is randomly partitioned into $n$ equally sized subsamples; a single subsample is retained as the validation set and the remaining $n-1$ subsamples are used as training data. This is repeated until each subsample has been used exactly once as the validation data. XGBoost is compatible with scikit-learn's API, so we can reuse code from our AdaBoost notebook. See http://scikit-learn.org/stable/modules/grid_search.html
End of explanation
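For a small, discrete search space an exhaustive grid search works the same way; a minimal sketch with GridSearchCV (reusing fixed_params from above, grid values chosen only for illustration):
# Exhaustive alternative to the randomized search
grid = GridSearchCV(xgb.XGBClassifier(**fixed_params),
                    {'max_depth': [3, 5], 'learning_rate': [0.1, 0.2]},
                    scoring = 'roc_auc', cv = 5)
grid.fit(df_X_train.values, df_y_train.values)
print(grid.best_params_, grid.best_score_)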
def plot_ROC_curve(y_true, network_output, meta):
"""Plots the receiver-operating characteristic curve
Inputs: y: One-hot encoded binary labels
        network_output: NN output probabilities
Output: AUC: Area under the ROC Curve
"""
from sklearn.metrics import roc_curve, auc
# Compute ROC curve, integrate
fpr, tpr, thresholds = roc_curve(y_true, network_output)
roc_auc = auc(fpr, tpr)
plt.figure()
plt.axes([.1,.1,.8,.7])
plt.figtext(.5,.9, r'$\mathrm{Receiver \;operating \;characteristic}$', fontsize=15, ha='center')
plt.figtext(.5,.85, meta, fontsize=10,ha='center')
plt.plot(fpr, tpr, color='darkorange',
lw=2, label='ROC curve - custom (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], color='navy', lw=1.0, linestyle='--')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel(r'$\mathrm{False \;Positive \;Rate}$')
plt.ylabel(r'$\mathrm{True \;Positive \;Rate}$')
plt.legend(loc="lower right")
plt.savefig("graphs/" + "clf_ROCcurve.pdf",format='pdf', dpi=1000)
plt.show()
plt.gcf().clear()
# Load previously trained model
xgb_pred = bst.predict(dTest)
meta = 'XGBoost - max_depth: 5, subsample: 0.9, $\eta = 0.2$'
plot_ROC_curve(df_y_test.values, xgb_pred, meta)
Explanation: Model performance varies dramatically depending on our choice of hyperparameters. When tuning a large number of hyperparameters, a grid search may be prohibitively expensive, so we can instead randomly sample from distributions of hyperparameters and evaluate the model at these points.
Inference
End of explanation
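Since the 'binary:logistic' objective makes bst.predict return probabilities, a quick sketch of turning them into class labels and an accuracy figure (threshold of 0.5 chosen arbitrarily):
# Threshold the predicted probabilities to get hard labels
from sklearn.metrics import accuracy_score
pred_labels = (xgb_pred > 0.5).astype(int)
print('Test accuracy: ', accuracy_score(df_y_test.values, pred_labels))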
%matplotlib inline
importances = bst.get_fscore()
df_importance = pd.DataFrame({'Importance': list(importances.values()), 'Feature': list(importances.keys())})
df_importance.sort_values(by = 'Importance', inplace = True)
df_importance[-20:].plot(kind = 'barh', x = 'Feature', color = 'orange', figsize = (10,10),
title = 'Feature Importances')
Explanation: Feature Importances
Plot the feature importances of the 20 features that contributed the most to overall tree impurity reduction.
End of explanation |
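xgboost also ships a built-in importance plot that gives a similar picture; for comparison:
# Built-in importance plot, limited to the top 20 features
xgb.plot_importance(bst, max_num_features = 20)
plt.show()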
5,614 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
List vs numpy array
Germain Salvato Vallverdu
Step1: Using a function
Step2: Dot product
Or matrix products, matrix-vector products, or vector operations (sums, products) more generally.
Step3: Center of gravity
Step4: Distance matrix
A bit trickier: compute all the pairwise distances between atoms. By adding an axis to the numpy array, numpy carries out all the operations in one go.
Step5: Let's break down the axis insertion a little. The idea is to end up with something 5 x 5 (the matrix).
Step6: So, with numpy, the distance calculation looks like this:
Step7: With more atoms
Step8: Note that, in this case, the numpy implementation actually does twice as many calculations as the standard one: the numpy version computes both the i-j and the j-i distances, whereas the standard implementation avoids that.
Step9: It takes about twice as long, which is expected since there is twice as much work. Even so, numpy is roughly 40 times faster.
import numpy as np
import math as m
Explanation: List vs numpy array
Germain Salvato Vallverdu
End of explanation
def np_func(x):
return (3 * x ** 2 + 2 * x - 1) * np.exp(- x / 2.3) * np.sin(2 * x)
def m_func(x):
return (3 * x ** 2 + 2 * x - 1) * m.exp(- x / 2.3) * m.sin(2 * x)
x = np.linspace(0, 10, 1000)
xl = x.tolist()
%timeit y = np_func(x)
%%timeit
y = list()
for xi in xl:
yi = m_func(xi)
y.append(yi)
642 / 37.7
Explanation: Using a function
End of explanation
x = np.random.random(1000)
y = np.random.random(1000)
%timeit np.dot(x, y)
xl = x.tolist()
yl = y.tolist()
%%timeit
scal = 0
for xi, yi in zip(xl, yl):
scal += xi * yi
68.3 / 1.38
Explanation: Dot product
Or matrix products, matrix-vector products, or vector operations (sums, products) more generally.
End of explanation
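The same product can be written with the @ operator, which dispatches to the same underlying BLAS routine; a quick check:
# The @ operator gives the same result as np.dot
print(np.allclose(np.dot(x, y), x @ y))
%timeit x @ y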
coords = np.random.uniform(0, 30, size=(100, 3))
weight = np.random.random(100)
lcoords = coords.tolist()
lweight = weight.tolist()
%%timeit
w_coords = coords * weight[:, np.newaxis]
G = w_coords.sum(axis=0) / len(w_coords)
w_coords = coords * weight[:, np.newaxis]
G = w_coords.sum(axis=0) / len(w_coords)
print(G)
%%timeit
G = [0, 0, 0]
for w_i, coords_i in zip(lweight, lcoords):
w_coords_i = [w_i * xi for xi in coords_i]
for i in range(3):
G[i] += w_coords_i[i]
nat = len(lweight)
for i in range(3):
G[i] /= nat
107 / 10.4
G = [0, 0, 0]
for w_i, coords_i in zip(lweight, lcoords):
w_coords_i = [w_i * xi for xi in coords_i]
for i in range(3):
G[i] += w_coords_i[i]
nat = len(lweight)
for i in range(3):
G[i] /= nat
print(G)
Explanation: Center of gravity
End of explanation
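For a conventional weighted mean (normalised by the total weight rather than by the number of atoms, so it differs from the quantity above by a constant factor), numpy has a one-liner:
# Weighted average normalised by the sum of the weights
G_w = np.average(coords, axis=0, weights=weight)
print(G_w)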
coords = np.random.uniform(0, 30, (5, 3))
Explanation: Distance matrix
A bit trickier: compute all the pairwise distances between atoms. By adding an axis to the numpy array, numpy carries out all the operations in one go.
End of explanation
print(coords[:, np.newaxis, :].shape)
print(coords[:, np.newaxis, :])
print(coords[np.newaxis, :, :].shape)
print(coords[np.newaxis, :, :])
rij = coords[:, np.newaxis, :] - coords[np.newaxis, :, :]
print(rij.shape)
print(rij)
Explanation: Let's break down the axis insertion a little. The idea is to end up with something 5 x 5 (the matrix).
End of explanation
rij2 = (coords[:, np.newaxis, :] - coords[np.newaxis, :, :]) ** 2
d = np.sum(rij2, axis=2) ** 0.5
print(d)
Explanation: So, with numpy, the distance calculation looks like this:
End of explanation
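scipy provides the same pairwise computation ready-made, which makes a handy cross-check of the broadcasting version:
# Cross-check against scipy's pairwise distance routine
from scipy.spatial.distance import cdist
print(np.allclose(d, cdist(coords, coords)))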
coords = np.random.uniform(0, 30, (100, 3))
lcoords = coords.tolist()
%%timeit
rij2 = (coords[:, np.newaxis, :] - coords[np.newaxis, :, :]) ** 2
d = np.sum(rij2, axis=2) ** 0.5
%%timeit
nat = len(lcoords)
distances = [[0. for iat in range(nat)] for jat in range(nat)]
for iat in range(nat):
for jat in range(iat + 1, nat):
ix = lcoords[iat]
jx = lcoords[jat]
rij2 = [(ix[i] - jx[i]) **2 for i in range(3)]
d2 = sum(rij2)
distances[iat][jat] = m.sqrt(d2)
distances[jat][iat] = distances[iat][jat]
7940 / 378
Explanation: With more atoms
End of explanation
%%timeit
# computing all the distances (both i-j and j-i)
nat = len(lcoords)
distances = [[0. for iat in range(nat)] for jat in range(nat)]
for iat in range(nat):
for jat in range(nat):
ix = lcoords[iat]
jx = lcoords[jat]
rij2 = [(ix[i] - jx[i]) **2 for i in range(3)]
d2 = sum(rij2)
distances[iat][jat] = m.sqrt(d2)
Explanation: Note that, in this case, the numpy implementation actually does twice as many calculations as the standard one: the numpy version computes both the i-j and the j-i distances, whereas the standard implementation avoids that.
End of explanation
14.6e3 / 378
Explanation: It takes about twice as long, which is expected since there is twice as much work. Even so, numpy is roughly 40 times faster.
End of explanation |
5,615 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 6
Step1: Let's start with some basic imports
Step2: We use propane( $CH_3 CH_2 CH_3 $ ) as a running example throughout this tutorial. Many of the featurization methods use conformers or the molecules. A conformer can be generated using the ConformerGenerator class in deepchem.utils.conformers.
RDKitDescriptors
RDKitDescriptors featurizes a molecule by computing descriptors values for specified descriptors. Intrinsic to the featurizer is a set of allowed descriptors, which can be accessed using RDKitDescriptors.allowedDescriptors.
The featurizer uses the descriptors in rdkit.Chem.Descriptors.descList, checks if they are in the list of allowed descriptors and computes the descriptor value for the molecule.
Step3: Let's check the allowed list of descriptors. As you will see shortly, there's a wide range of chemical properties that RDKit computes for us.
Step4: BPSymmetryFunction
Behler-Parinello Symmetry function or BPSymmetryFunction featurizes a molecule by computing the atomic number and coordinates for each atom in the molecule. The features can be used as input for symmetry functions, like RadialSymmetry, DistanceMatrix and DistanceCutoff . More details on these symmetry functions can be found in this paper. These functions can be found in deepchem.feat.coulomb_matrices
The featurizer takes in max_atoms as an argument. As input, it takes in a conformer of the molecule and computes
Step5: Let's now take a look at the actual featurized matrix that comes out.
Step6: A simple check for the featurization would be to count the different atomic numbers present in the features.
Step7: For propane, we have $3$ C-atoms and $8$ H-atoms, and these numbers are in agreement with the results shown above. There's also the additional padding of 9 atoms, to equalize with max_atoms.
CoulombMatrix
CoulombMatrix featurizes a molecule by computing the Coulomb matrices for different conformers of the molecule and returning them as a list.
A Coulomb matrix tries to encode the energy structure of a molecule. The matrix is symmetric, with the off-diagonal elements capturing the Coulombic repulsion between pairs of atoms and the diagonal elements capturing atomic energies using the atomic numbers. More information on the functional forms used can be found here.
The featurizer takes in max_atoms as an argument and also has options for removing hydrogens from the molecule (remove_hydrogens), generating additional random coulomb matrices(randomize), and getting only the upper triangular matrix (upper_tri).
Step8: A simple check for the featurization is to see if the feature list has the same length as the number of conformers
Step9: CoulombMatrixEig
CoulombMatrix is invariant to molecular rotation and translation, since the interatomic distances and atomic numbers do not change. However, the matrix is not invariant to random permutations of the atoms' indices. To deal with this, the CoulombMatrixEig featurizer was introduced, which uses the eigenvalue spectrum of the Coulomb matrix and is invariant to random permutations of the atoms' indices.
CoulombMatrixEig inherits from CoulombMatrix and featurizes a molecule by first computing the coulomb matrices for different conformers of the molecule and then computing the eigenvalues for each coulomb matrix. These eigenvalues are then padded to account for variation in number of atoms across molecules.
The featurizer takes in max_atoms as an argument and also has options for removing hydrogens from the molecule (remove_hydrogens), generating additional random coulomb matrices(randomize). | Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
Explanation: Tutorial Part 6: Going Deeper On Molecular Featurizations
One of the most important steps of doing machine learning on molecular data is transforming the data into a form amenable to the application of learning algorithms. This process is broadly called "featurization" and involves turning a molecule into a vector or tensor of some sort. There are a number of different ways of doing such transformations, and the choice of featurization is often dependent on the problem at hand.
In this tutorial, we explore the different featurization methods available for molecules. These featurization methods include:
ConvMolFeaturizer,
WeaveFeaturizer,
CircularFingerprints
RDKitDescriptors
BPSymmetryFunction
CoulombMatrix
CoulombMatrixEig
AdjacencyFingerprints
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
from __future__ import print_function
from __future__ import division
from __future__ import unicode_literals
import numpy as np
from rdkit import Chem
from deepchem.feat import ConvMolFeaturizer, WeaveFeaturizer, CircularFingerprint
from deepchem.feat import AdjacencyFingerprint, RDKitDescriptors
from deepchem.feat import BPSymmetryFunctionInput, CoulombMatrix, CoulombMatrixEig
from deepchem.utils import conformers
Explanation: Let's start with some basic imports
End of explanation
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
Explanation: We use propane( $CH_3 CH_2 CH_3 $ ) as a running example throughout this tutorial. Many of the featurization methods use conformers or the molecules. A conformer can be generated using the ConformerGenerator class in deepchem.utils.conformers.
RDKitDescriptors
RDKitDescriptors featurizes a molecule by computing descriptors values for specified descriptors. Intrinsic to the featurizer is a set of allowed descriptors, which can be accessed using RDKitDescriptors.allowedDescriptors.
The featurizer uses the descriptors in rdkit.Chem.Descriptors.descList, checks if they are in the list of allowed descriptors and computes the descriptor value for the molecule.
End of explanation
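Under the hood these values come from RDKit itself; as a sketch, a couple of descriptors computed directly on the propane molecule:
# A few RDKit descriptors computed directly
from rdkit.Chem import Descriptors
print('Molecular weight: ', Descriptors.MolWt(example_mol))
print('LogP estimate: ', Descriptors.MolLogP(example_mol))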
for descriptor in RDKitDescriptors.allowedDescriptors:
print(descriptor)
rdkit_desc = RDKitDescriptors()
features = rdkit_desc._featurize(example_mol)
print('The number of descriptors present are: ', len(features))
Explanation: Let's check the allowed list of descriptors. As you will see shortly, there's a wide range of chemical properties that RDKit computes for us.
End of explanation
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
Explanation: BPSymmetryFunction
The Behler-Parrinello symmetry function featurizer, BPSymmetryFunction, featurizes a molecule by computing the atomic number and coordinates for each atom in the molecule. The features can be used as input for symmetry functions, like RadialSymmetry, DistanceMatrix and DistanceCutoff. More details on these symmetry functions can be found in this paper. These functions can be found in deepchem.feat.coulomb_matrices.
The featurizer takes in max_atoms as an argument. As input, it takes in a conformer of the molecule and computes:
coordinates of every atom in the molecule (in Bohr units)
the atomic numbers for all atoms.
These features are concatenated and padded with zeros to account for the varying number of atoms across molecules.
End of explanation
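Roughly the same array can be assembled by hand from the RDKit conformer, which makes the layout clear (a sketch only; the unit conversion to Bohr is omitted here):
# Hand-rolled (atomic number, x, y, z) rows, zero-padded to max_atoms
conf = example_mol.GetConformers()[0]
rows = [[atom.GetAtomicNum()] + list(pos)
        for atom, pos in zip(example_mol.GetAtoms(), conf.GetPositions())]
manual = np.zeros((20, 4))
manual[:len(rows), :] = rows
print(manual.shape)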
bp_sym = BPSymmetryFunctionInput(max_atoms=20)
features = bp_sym._featurize(mol=example_mol)
features
Explanation: Let's now take a look at the actual featurized matrix that comes out.
End of explanation
atomic_numbers = features[:, 0]
from collections import Counter
unique_numbers = Counter(atomic_numbers)
print(unique_numbers)
Explanation: A simple check for the featurization would be to count the different atomic numbers present in the features.
End of explanation
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))
coulomb_mat = CoulombMatrix(max_atoms=20, randomize=False, remove_hydrogens=False, upper_tri=False)
features = coulomb_mat._featurize(mol=example_mol)
Explanation: For propane, we have $3$ C-atoms and $8$ H-atoms, and these numbers are in agreement with the results shown above. There's also the additional padding of 9 atoms, to equalize with max_atoms.
CoulombMatrix
CoulombMatrix featurizes a molecule by computing the Coulomb matrices for different conformers of the molecule and returning them as a list.
A Coulomb matrix tries to encode the energy structure of a molecule. The matrix is symmetric, with the off-diagonal elements capturing the Coulombic repulsion between pairs of atoms and the diagonal elements capturing atomic energies using the atomic numbers. More information on the functional forms used can be found here.
The featurizer takes in max_atoms as an argument and also has options for removing hydrogens from the molecule (remove_hydrogens), generating additional random coulomb matrices(randomize), and getting only the upper triangular matrix (upper_tri).
End of explanation
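The functional form itself is short enough to sketch in numpy: diagonal entries $0.5 Z_i^{2.4}$ and off-diagonal entries $Z_i Z_j / |R_i - R_j|$. This is only an illustration of the standard convention, not DeepChem's exact code path:
# Illustrative Coulomb matrix for one conformer (no padding, no randomization)
conf = example_mol.GetConformers()[0]
Z = np.array([a.GetAtomicNum() for a in example_mol.GetAtoms()], dtype=float)
R = conf.GetPositions()
dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=2)
with np.errstate(divide='ignore'):
    M = np.outer(Z, Z) / dist
np.fill_diagonal(M, 0.5 * Z ** 2.4)
print(M.shape)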
print(len(example_mol.GetConformers()) == len(features))
Explanation: A simple check for the featurization is to see if the feature list has the same length as the number of conformers
End of explanation
example_smile = "CCC"
example_mol = Chem.MolFromSmiles(example_smile)
engine = conformers.ConformerGenerator(max_conformers=1)
example_mol = engine.generate_conformers(example_mol)
print("Number of available conformers for propane: ", len(example_mol.GetConformers()))
coulomb_mat_eig = CoulombMatrixEig(max_atoms=20, randomize=False, remove_hydrogens=False)
features = coulomb_mat_eig._featurize(mol=example_mol)
print(len(example_mol.GetConformers()) == len(features))
Explanation: CoulombMatrixEig
CoulombMatrix is invariant to molecular rotation and translation, since the interatomic distances and atomic numbers do not change. However, the matrix is not invariant to random permutations of the atoms' indices. To deal with this, the CoulombMatrixEig featurizer was introduced, which uses the eigenvalue spectrum of the Coulomb matrix and is invariant to random permutations of the atoms' indices.
CoulombMatrixEig inherits from CoulombMatrix and featurizes a molecule by first computing the coulomb matrices for different conformers of the molecule and then computing the eigenvalues for each coulomb matrix. These eigenvalues are then padded to account for variation in number of atoms across molecules.
The featurizer takes in max_atoms as an argument and also has options for removing hydrogens from the molecule (remove_hydrogens), generating additional random coulomb matrices(randomize).
End of explanation |
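The eigenvalue step is just as easy to sketch for a single matrix: take the spectrum of the symmetric Coulomb matrix, sort it in decreasing order, and zero-pad up to max_atoms (illustration only, not DeepChem's exact implementation):
# Eigenvalue spectrum of the propane Coulomb matrix, padded to a fixed length
Z = np.array([a.GetAtomicNum() for a in example_mol.GetAtoms()], dtype=float)
R = example_mol.GetConformers()[0].GetPositions()
dist = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=2)
with np.errstate(divide='ignore'):
    M = np.outer(Z, Z) / dist
np.fill_diagonal(M, 0.5 * Z ** 2.4)
eigs = np.sort(np.linalg.eigvalsh(M))[::-1]   # largest eigenvalue first
padded = np.zeros(20)
padded[:len(eigs)] = eigs
print(padded)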
5,616 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Bayesian model of book sales and literary prestige
Historians often have to work with missing evidence.
Book sales figures, for instance, are notoriously patchy. If we're interested in comparing a few famous authors, we can do pretty well. That legwork has been done. But if we want to construct a "sales estimate" for every single name in a library containing thousands of authors — good luck! You would have to dig around in publishers' archives for decades, and you still might not find evidence for half the names on your list. Not everything has been preserved.
And yet, in reality, we can of course make pretty decent guesses. We may not know exactly how many volumes they sold, but we can safely wager that Willa Cather was more widely read in her era than Wirt Sikes was in his.
Well, that guesswork is real reasoning, and a Bayesian theory of knowledge would allow us to explain it in a principled way. That's my goal in this notebook. I'm going to use an approach called "empirical Bayes" to estimate the relative market prominence of fiction writers between 1850 and 1950, treating the US and UK as parts of a single market. The strategy I adopt is based on leveraging two sources of evidence that loosely correlate with each other
Step1: Bestseller lists
Basically, my strategy here is simply to count the number of times an author was mentioned on a bestseller list between 1850 and 1949. A book that appears on two different lists will count twice. This strategy will reward authors for being prolific more than it rewards them for writing a single mega-blockbuster, but that's exactly what I want to do — I'm interested in authorial market share here, not in individual books.
This data relies on underlying reports from Mott and Publisher's Weekly, who emphasize sales in the United States, as well as Altick, Hackett, and Bloom, who pay more attention to Britain. Bloom considers the pulp market more than most previous lists had done. In short, the lists overlap a lot, but also diverge. When they're aggregated, many authors will be mentioned once or twice, while prolific authors who are the subjects of strong consensus may be mentioned ten or twelve times.
Overall, coverage of the period after 1895 is much better than it is as we go back into the nineteenth century, so we should expect authors in the earlier period both to be
1) undercounted, and
2) counted less reliably overall.
Also, authors near either end of the timeline may be undercounted, because their careers will extend beyond the end points.
The actual counting is done in a "child" notebook; here we just import the results it produced. The most important column is salesevidence, which aggregates the counts from US-focused and UK-focused sources.
Step2: That's a nice list, but it only covers 378 out of 1177 authors — a relatively small slice at the top of the market. Many of the differences that might interest us are located further down the spectrum of prestige. But those differences are invisible in bestseller lists. So we need to enrich them with a second source.
Works preserved in libraries and other bibliographic records
One crude way to assess an author's prominence during their lifetime is to ask how many of their books got bought and preserved.
Please note the word "crude." Book historians have spent a lot of energy demonstrating that the sample of volumes preserved in a university library is not the same thing as the broader field of literary circulation. And of course they're right.
But we're not looking for an infallible measure of sales here — just a proxy that will loosely correlate with popularity, allowing us to flesh out some of the small differences at the bottom of the market that are elided in a "bestseller list." We're not going to use this evidence to definitively settle the kind of long-running dispute book historians may have in mind. ("Was Agatha Christie really more popular than Wilkie Collins?") We're going to use it to make a rough guess that Wirt Sikes was less popular than either Christie or Collins, and probably less popular for that matter than lots of other semi-obscure names.
The code that does the actual counting is complex and messy, because we have to worry about pseudonyms, initials, and so on. So I've placed all that in a "child" notebook, and just imported the results here. Suffice it to say that we estimate a "midcareer" year for each author, and then count volumes of fiction in HathiTrust published thirty years before or after that date (but not outside 1835 or 1965). I also count references to publication in pulp magazines drawn from a dataset organized by Jordan Sellers; this doesn't completely compensate for the underrepresentation of the pulps in university libraries, but it's a gesture in that direction.
Step3: correcting for temporal biases
The number of volumes preserved can depend on a lot of historical accidents. It varies over time in a way that will need correction. As you can see below, there's a bias toward the middle of the timeline. By measuring the bias, we can adjust for it.
Step4: Now we can multiply each author's raw number of volumes by the adjustment factor for that year. Not elegant, not perfect, but it gets rid of the biggest distortions.
Step5: Fusing and comparing the two sources
Our first task is just to get these two sources into one consistent dataframe. In particular, we want to be able to work with the column salesevidence, from the bestseller lists, and the column num_vols, from the bibliographic records. Let's put them both in one dataframe.
Step6: Correlation
I mentioned that authors' representation on bestseller lists loosely correlates with their representation in libraries. The correlation is not tremendously strong, but it's clearly visible when both variables are log-scaled.
Step7: I'm going to use log-scaling for visualization in the rest of the notebook, because it makes the correlation we're interested in more visible. But I will continue to manipulate the unscaled variables.
Bayesian inferences
As you can see above, most of the weight of the graph is in that bottom row, where we have no direct evidence about the author's sales. That's because we're relying on "bestseller lists," which only report the very top of the market. But there is good reason to suspect that the loose correlation between volumes-preserved and sales would also extend down into the space above that bottom row, if we were using a source of evidence that didn't have such a huge gap between "zero appearances on a bestseller list" and "one appearance on a bestseller list."
More generally, there's good reason to suspect that a lot of our estimates of sales are too far from the blue regression line marked above. If you went out and gathered sales evidence from a different source, and asked me to predict where each author would fall in your sample, I would be well advised to slide my predictions toward that regression line, for reasons well explained in a classic article by Bradley Efron and Carl Morris.
On the other hand, I wouldn't want to slide every prediction an equal distance. If I were smart, I would be particularly flexible about the authors where I have little evidence. We have a lot of evidence in the upper right-hand corner; we're not going to wake up tomorrow and discover that Charles Dickens was actually obscure. But toward the bottom of the graph, and the left-hand side, discovering a single new piece of evidence could dramatically change my estimate about an author who I thought was totally unknown.
Let's translate this casual reasoning into Bayesian language. We can use the loose correlation of sales and volumes-preserved to articulate a rough prior probability about the ratio likely to hold between those two variables.
Then we can improve our estimates of sales by viewing the bestseller lists as new evidence that merely updates our prior beliefs about authors, based on the shelf space they occupy in libraries. (Bayes' theorem is a formal way of explaining how we update our beliefs in the light of new evidence.)
But how much weight, exactly, should we give our prior? My first approach here was to say "um, I dunno, maybe fifty percent?" Which might be fine. But it turns out there's a more principled way to do this, which allows the weight of our prior to vary depending on the amount of new evidence we have for each writer. It's an approach called "empirical Bayes," because it infers a prior probability from patterns in the evidence. (Some people would argue that all Bayesian reasoning is empirical, but this notebook is not the place to hash that out!)
As David Robinson has explained, batting ratios can be imagined as a binomial distribution. Each time a batter goes up to the plate, they get a hit (1) or a miss (0), and the batting ratio is hits, divided by total at-bats. But if a batter has had very few visits to the plate, we're well advised to treat that ratio skeptically, sliding it toward our prior about the grand mean of batting averages for all hitters. It turns out that a good way to improve your estimate of a binomial distribution is to add a fixed amount to both sides of the ratio
Step8: The Beta distribution above
The histogram shows the number of occurrences of different ratios between salesevidence and num_vols (actually using a noisy prior instead of salesevidence itself).
The red line shows a Beta distribution fit to the histogram. Alpha is 0.0597, Beta is 8.807.
Applying the prior, and updating our posterior estimate
We inherit parameters alpha0 and beta0 from the cell above, and apply them to update salesevidence.
The logic of the updating is
Step9: This is my favorite part of the notebook, because principled inferences turn out to be pretty.
You can see that the Bayesian prior has relatively little effect on the rankings of prominent authors in the upper right-hand corner (Charles Dickens and Ellen Wood are still going to be the two dots at the top). But it very substantially warps space at the bottom and on the left side of the graph, giving the lower image a bit more flair.
The other nice thing about this method is that it allows us to calculate uncertainty. I'm not going to do that in this notebook, but if we wanted to, we could calculate "credible intervals" for each author.
How this affects history
In this project, I'm less worried about individual authors than about the overall historical picture. How has that changed?
We can use our "midcareer" estimates for each author to plot them on a timeline.
Step10: You can still see that authors not mentioned in bestseller lists are grouped together at the bottom of the frame. But they are no longer separated from the rest of the data by a rigid gulf. We can make a slightly better guess about the differences between different people in that group.
The Bayesian prior also slightly tweaks our sense of the relative prominence of bestselling authors, by using the number of volumes preserved in libraries as an additional source of evidence. But, as you can see, it doesn't make big alterations at the top of the scale.
By the way, who's actually up there? Mostly familiar names. I'm not hung up on the exact sorting of the top ten or twenty names. If that was our goal, we could certainly do a better job by checking publishers' archives for the top ten or twenty people. Even existing studies by Altick, etc, could give us a better sorting at the top of the list. But remember, we've got more than a thousand authors in this corpus! The real goal of this project is to achieve a coarse macroscopic sorting of the "top," "middle," and "bottom" of that much larger list. So here are the top fifty, in no particular order.
Step11: percentile calculations
The distribution of authors across prominence is not really uniform. But for some purposes of visualization it's nice to have a uniform distribution, so let's create one. We can think of it as the author's "percentile" location within a moving 50-year window.
Step12: How sales interact with critical prestige
In a separate analytical project, I've contrasted two samples of fiction (one drawn from reviews in elite periodicals, the other selected at random) to develop a loose model of literary prestige, as seen by reviewers.
Underline loose. Evidence about prestige is even fuzzier and harder to come by than evidence about sales. So this model has to rely on a couple of random samples, combined with the language of the text itself. It is only 72.5% accurate at predicting which works of fiction come from the reviewed set -- so by all means, take its predictions with a grain of salt! However, even an imperfect model can reveal loose organizing principles, and that's what we're interested in here.
Loading evidence of prestige
The model we're loading was produced by modeling each of the four quarter-centuries separately
Step13: Let's explore the literary field!
Using this data, we can quantitatively recreate Pierre Bourdieu's map of social space.
To get a sense of what we're looking at, it may be helpful first of all just to see where some familiar names fall.
Step14: The upward drift of these points reveals a fairly strong correlation between prestige and sales. It is possible to find a few high-selling authors who are predicted to lack critical prestige -- notably, for instance, the historical novelist W. H. Ainsworth and the sensation novelist Ellen Wood, author of East Lynne. It's harder to find authors who have prestige but no sales
Step15: Here, again, an upward drift reveals a loose correlation between prestige and sales. But is the correlation perhaps a bit weaker? There seem to be more authors in the "upper midwest" portion of the map now -- people like Wyndham Lewis and Sherwood Anderson, who have critical prestige but not enormous sales.
There's also a distinct "genre fiction" and "pulp fiction" world emerging in the southeast corner of this map. E. Phillips Oppenheim, British author of adventure fiction, would be right next to Edgar Rice Burroughs, if we had room to print both names.
Moreover, if you just look at the large circles (the authors we're likely to remember), you can start to see how people in this period might get the idea that sales are actually negatively correlated with critical prestige. It almost looks like a diagonal line slanting down from Sherwood Anderson to Zane Grey. If you look at the bigger picture, the slant is actually going the other direction. The people over by Elma Travis would sadly remind us that, in fact, prestige still correlates positively with sales! But you can see how that might not be obvious at the top of the market. There's a faint backward slant on the right-hand side that wasn't visible among the Victorians.
Step16: In the second quarter of the twentieth century, the slope of the upper right quadrant becomes even more visible. The whole field is now, in effect, round. Scroll back up to the Victorians, and you'll see that wasn't true.
Also, the locations of my samples have moved around the map. There are a lot more blue, "random" books over on the right side now, among the bestsellers, than there were in the Victorian era. So the strength of the linguistic boundary between "reviewed" and "random" samples may be roughly the same, but its meaning has probably changed; it's becoming more properly a boundary between the prestigious and the popular, whereas in the 19c, it tended to be a boundary between the prestigious and the obscure.
The overall correlation between sales and prestige
But we don't have to rely on vague visual guesses to estimate the strength of the correlation between two variables. Let's measure the correlation, and ask how it varies over time.
Step17: So what have we achieved?
There is a steady decline in the correlation between prestige and sales. The correlation coefficient (r) steadily declines -- and for whatever it's worth, the p value is less than 0.05 in the first three plots, but not significant in the last. It's not a huge change, but that itself may be part of what we learn using this method.
I think this decline is roughly what we might expect to see
Step18: You don't even have to use my posterior estimate of sales. The raw counts will work, though the correlation is not as strong.
Step19: Let's save the data so other notebooks can use it. | Python Code:
# BASIC IMPORTS BEFORE WE BEGIN
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
import pandas as pd
import csv
import statsmodels.formula.api as smf
from scipy.stats import pearsonr
import numpy as np
import random
import scipy.stats as ss
from patsy import dmatrices
Explanation: A Bayesian model of book sales and literary prestige
Historians often have to work with missing evidence.
Book sales figures, for instance, are notoriously patchy. If we're interested in comparing a few famous authors, we can do pretty well. That legwork has been done. But if we want to construct a "sales estimate" for every single name in a library containing thousands of authors — good luck! You would have to dig around in publishers' archives for decades, and you still might not find evidence for half the names on your list. Not everything has been preserved.
And yet, in reality, we can of course make pretty decent guesses. We may not know exactly how many volumes they sold, but we can safely wager that Willa Cather was more widely read in her era than Wirt Sikes was in his.
Well, that guesswork is real reasoning, and a Bayesian theory of knowledge would allow us to explain it in a principled way. That's my goal in this notebook. I'm going to use an approach called "empirical Bayes" to estimate the relative market prominence of fiction writers between 1850 and 1950, treating the US and UK as parts of a single market. The strategy I adopt is based on leveraging two sources of evidence that loosely correlate with each other:
1) Bestseller lists, whether provided by Publisher's Weekly or assembled less formally by book historians.
2) The number of books by each author preserved in US university libraries, plus some evidence about publication in pulp magazines.
Neither of these sources is complete; neither is perfectly accurate. But we're going to combine them in a way that will allow each source to compensate for the errors and omissions in the other.
If we had unlimited time and energy, we could gather lots of additional sources of evidence: library circulation records, individual publishers' balance sheets, and so on. But time is limited, and I just want a rough estimate of authors' relative market prominence. So we'll keep this quick and dirty.
Acknowledgments
This notebook is a tiny lantern placed on top of a tower built by many other hands.
The idea for this approach was drawn from Julia Silge's post on empirical Bayes in song lyrics, which in turn drew on blog posts by David Robinson. Evidence about nineteenth-century bestsellers was extracted from Altick 1957, Mott 1947, Leavis 1935, and Hackett 1977, and drawn up in tabular form by Jordan Sellers. Jordan Sellers also created a sample of stories in pulp magazines between 1897 and 1939. Lists of twentieth-century bestsellers were transcribed by John Unsworth, and scraped into tabular form by Kyle R. Johnston (see his post on modeling the texts of bestsellers).
At the bottom of the notebook, when we start talking about critical prestige, I'll draw on information about reviews collected by Sabrina Lee and Jessica Mercado.
References
Altick, Richard D. The English Common Reader: A Social History of the Mass Reading Public 1800-1900. Chicago: University of Chicago Press, 1957.
Bloom, Clive. Bestsellers: Popular Fiction Since 1900. 2nd edition. Houndmills: Palgrave Macmillan, 2008.
Hackett, Alice Payne, and James Henry Burke. 80 Years of Best Sellers 1895-1975.
New York: R.R. Bowker, 1977.
Leavis, Q. D. Fiction and the Reading Public. 1935.
Mott, Frank Luther. Golden Multitudes: The Story of Bestsellers in the United States. New York: R. R. Bowker, 1947.
Unsworth, John. 20th Century American Bestsellers. (http://bestsellers.lib.virginia.edu)
End of explanation
list_occurrences = pd.read_csv('counted_bestsellers.csv', index_col = 'author')
# that combines Altick, Hackett, Leavis, Mott, Publishers Weekly
list_occurrences = list_occurrences.sort_values(by = 'salesevidence', ascending = False)
list_occurrences.head()
print("The whole data frame has " + str(list_occurrences.shape[0]) + " rows.")
nonzero = sum(list_occurrences.salesevidence > 0)
print("But only " + str(nonzero) + " authors have nonzero occurrences in the bestseller lists.")
Explanation: Bestseller lists
Basically, my strategy here is simply to count the number of times an author was mentioned on a bestseller list between 1850 and 1949. A book that appears on two different lists will count twice. This strategy will reward authors for being prolific more than it rewards them for writing a single mega-blockbuster, but that's exactly what I want to do — I'm interested in authorial market share here, not in individual books.
This data relies on underlying reports from Mott and Publisher's Weekly, who emphasize sales in the United States, as well as Altick, Hackett, and Bloom, who pay more attention to Britain. Bloom considers the pulp market more than most previous lists had done. In short, the lists overlap a lot, but also diverge. When they're aggregated, many authors will be mentioned once or twice, while prolific authors who are the subjects of strong consensus may be mentioned ten or twelve times.
Overall, coverage of the period after 1895 is much better than it is as we go back into the nineteenth century, so we should expect authors in the earlier period both to be
1) undercounted, and
2) counted less reliably overall.
Also, authors near either end of the timeline may be undercounted, because their careers will extend beyond the end points.
The actual counting is done in a "child" notebook; here we just import the results it produced. The most important column is salesevidence, which aggregates the counts from US-focused and UK-focused sources.
End of explanation
career_volumes = pd.read_csv('career_volumes.csv')
career_volumes.set_index('author', inplace = True)
career_volumes.head()
Explanation: That's a nice list, but it only covers 378 out of 1177 authors — a relatively small slice at the top of the market. Many of the differences that might interest us are located further down the spectrum of prestige. But those differences are invisible in bestseller lists. So we need to enrich them with a second source.
Works preserved in libraries and other bibliographic records
One crude way to assess an author's prominence during their lifetime is to ask how many of their books got bought and preserved.
Please note the word "crude." Book historians have spent a lot of energy demonstrating that the sample of volumes preserved in a university library is not the same thing as the broader field of literary circulation. And of course they're right.
But we're not looking for an infallible measure of sales here — just a proxy that will loosely correlate with popularity, allowing us to flesh out some of the small differences at the bottom of the market that are elided in a "bestseller list." We're not going to use this evidence to definitively settle the kind of long-running dispute book historians may have in mind. ("Was Agatha Christie really more popular than Wilkie Collins?") We're going to use it to make a rough guess that Wirt Sikes was less popular than either Christie or Collins, and probably less popular for that matter than lots of other semi-obscure names.
The code that does the actual counting is complex and messy, because we have to worry about pseudonyms, initials, and so on. So I've placed all that in a "child" notebook, and just imported the results here. Suffice it to say that we estimate a "midcareer" year for each author, and then count volumes of fiction in HathiTrust published thirty years before or after that date (but not outside 1835 or 1965). I also count references to publication in pulp magazines drawn from a dataset organized by Jordan Sellers; this doesn't completely compensate for the underrepresentation of the pulps in university libraries, but it's a gesture in that direction.
End of explanation
# Let's explore historical variation.
years = []
meanvols = []
movingwindow = []
for i in range (1841, 1960):
years.append(i)
meanvols.append(np.mean(career_volumes.raw_num_vols[career_volumes.midcareer == i]))
window = np.mean(career_volumes.raw_num_vols[(career_volumes.midcareer >= i - 20) & (career_volumes.midcareer < i + 21)])
movingwindow.append(window)
fig, ax = plt.subplots(figsize = (8, 6))
ax.scatter(years, meanvols)
ax.plot(years, movingwindow, color = 'red', linewidth = 2)
ax.set_xlim((1840,1960))
ax.set_ylabel('mean number of volumes')
plt.show()
# Let's calculate the necessary adjustment factor.
historical_adjuster = np.mean(movingwindow) / np.array(movingwindow)
dfdict = {'year': years, 'adjustmentfactor': historical_adjuster}
adjustframe = pd.DataFrame(dfdict)
adjustframe.set_index('year', inplace = True)
adjustframe.head()
Explanation: correcting for temporal biases
The number of volumes preserved can depend on a lot of historical accidents. It varies over time in a way that will need correction. As you can see below, there's a bias toward the middle of the timeline. By measuring the bias, we can adjust for it.
End of explanation
career_volumes['num_vols'] = 0
for author in career_volumes.index:
midcareer = career_volumes.loc[author, 'midcareer']
rawvols = career_volumes.loc[author, 'raw_num_vols']
adjustment = adjustframe.loc[midcareer, 'adjustmentfactor']
career_volumes.loc[author, 'num_vols'] = int(rawvols * adjustment * 100) / 100
career_volumes.head()
years = []
meanvols = []
movingwindow = []
for i in range (1841, 1960):
years.append(i)
meanvols.append(np.mean(career_volumes.num_vols[career_volumes.midcareer == i]))
window = np.mean(career_volumes.num_vols[(career_volumes.midcareer >= i - 20) & (career_volumes.midcareer < i + 21)])
movingwindow.append(window)
fig, ax = plt.subplots(figsize = (8, 6))
ax.scatter(years, meanvols)
ax.plot(years, movingwindow, color = 'red', linewidth = 2)
ax.set_xlim((1840,1960))
plt.show()
Explanation: Now we can multiply each author's raw number of volumes by the adjustment factor for that year. Not elegant, not perfect, but it gets rid of the biggest distortions.
End of explanation
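The same adjustment can be written without an explicit loop over authors; a vectorized sketch (same frames and column names as above) using Series.map:
# Vectorized equivalent of the loop above
adjusted = career_volumes['raw_num_vols'] * career_volumes['midcareer'].map(adjustframe['adjustmentfactor'])
career_volumes['num_vols'] = (adjusted * 100).astype(int) / 100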
print("List_occurrences shape: ", list_occurrences.shape)
print("Career_volumes shape: ", career_volumes.shape)
authordata = pd.concat([list_occurrences, career_volumes], axis = 1)
# There are some (120) authors found in bestseller lists that were
# not present in my reviewed or random samples. I exclude these,
# because a fair number of them are not English-language
# writers, or not fiction writers. But first I make a list.
with open('authors_not_in_my_metadata.csv', mode = 'w', encoding = 'utf-8') as f:
for i in authordata.index:
if pd.isnull(authordata.loc[i, 'num_vols']):
f.write(i + '\n')
authordata = authordata.dropna(subset = ['num_vols'])
authordata = authordata.fillna(0)
authordata = authordata.sort_values(by = 'salesevidence', ascending = False)
print('Authordata shape:', authordata.shape)
authordata.head()
Explanation: Fusing and comparing the two sources
Our first task is just to get these two sources into one consistent dataframe. In particular, we want to be able to work with the column salesevidence, from the bestseller lists, and the column num_vols, from the bibliographic records. Let's put them both in one dataframe.
End of explanation
# Let's start by log-scaling both variables
authordata['logvols'] = np.log(authordata.num_vols + 1)
authordata['logsales'] = np.log(authordata.salesevidence + 1)
def get_binned_grid(authordata, yvariable):
''' This creates a dataframe that can be used to produce
a scatterplot where circle size
is proportional to the number of authors located in a
particular "cell" of the grid.
'''
lv = []
ls = []
binsize = []
for i in np.arange(0, 7, 0.1):
for j in np.arange(0, 3.3, 0.12):
thisbin = authordata[(authordata.logvols >= i - 0.05) & (authordata.logvols < i + 0.05) & (authordata[yvariable] >= j - 0.06) & (authordata[yvariable] < j + 0.06)]
if len(thisbin) > 0:
lv.append(i)
ls.append(j)
binsize.append(len(thisbin))
dfdict = {'log-scaled career volumes': lv, 'log-scaled sales evidence': ls, 'binsize': binsize}
df = pd.DataFrame(dfdict)
return df
df = get_binned_grid(authordata, 'logsales')
lm = smf.ols(formula='logsales ~ logvols', data=authordata).fit()
xpred = np.linspace(0, 6.5, 50)
xpred = pd.DataFrame({'logvols': xpred})
ypred = lm.predict(xpred)
ax = df.plot.scatter(x = 'log-scaled career volumes', y = 'log-scaled sales evidence', s = df.binsize * 20, color = 'darkorchid',
alpha = 0.5, figsize = (12,8))
ax.set_title('Occurrences on a bestseller list\ncorrelate with # vols in libraries\n', fontsize = 18)
ax.set_ylabel('Log-scaled bestseller references')
ax.set_xlabel('Log-scaled bibliographic records in libraries (and pulps)')
plt.plot(xpred, ypred, linewidth = 2)
plt.savefig('images/logscaledcorrelation.png', bbox_inches = 'tight')
plt.show()
print('Pearson correlation & p value, unscaled: ', pearsonr(authordata.salesevidence, authordata.num_vols))
print('correlation & p value, logscaled: ', pearsonr(authordata.logsales, authordata.logvols))
Explanation: Correlation
I mentioned that authors' representation on bestseller lists loosely correlates with their representation in libraries. The correlation is not tremendously strong, but it's clearly visible when both variables are log-scaled.
End of explanation
# Let's create some normally-distributed noise
# To make this replicable, set a specific seed
np.random.seed(1702)
# The birthday of Rev. Thomas Bayes.
randomnoise = np.array([np.random.normal(loc = 0, scale = 0.5) for x in range(len(authordata))])
for i in range(len(randomnoise)):
if randomnoise[i] < 0 and authordata.salesevidence.iloc[i] < abs(randomnoise[i]):
randomnoise[i] = abs(randomnoise[i])
authordata['noisyprior'] = authordata.salesevidence + randomnoise
authordata['ratio'] = authordata.noisyprior / (authordata.num_vols + 1)
fig, ax = plt.subplots(figsize = (10,8))
authordata['ratio'].plot.hist(bins = 100, alpha = 0.5, normed = True)
# The method of moments gives us an initial approximation for alpha and beta.
mu = np.mean(authordata.ratio)
sigma2 = np.var(authordata.ratio)
alpha0 = ((1 - mu) / sigma2 - 1 / mu) * (mu**2)
beta0 = alpha0 * (1 / mu - 1)
print("Initial guess: ", alpha0, beta0)
alpha0, beta0 , c, d = ss.beta.fit(authordata.ratio, alpha0, beta0)
x = np.linspace(0, 1.2, 100)
betamachine = ss.beta(alpha0, beta0)
plt.plot(x, betamachine.pdf(x), 'r-', lw=3, alpha=0.8, label='beta pdf')
print("Improved guess: ", alpha0, beta0)
Explanation: I'm going to use log-scaling for visualization in the rest of the notebook, because it makes the correlation we're interested in more visible. But I will continue to manipulate the unscaled variables.
Bayesian inferences
As you can see above, most of the weight of the graph is in that bottom row, where we have no direct evidence about the author's sales. That's because we're relying on "bestseller lists," which only report the very top of the market. But there is good reason to suspect that the loose correlation between volumes-preserved and sales would also extend down into the space above that bottom row, if we were using a source of evidence that didn't have such a huge gap between "zero appearances on a bestseller list" and "one appearance on a bestseller list."
More generally, there's good reason to suspect that a lot of our estimates of sales are too far from the blue regression line marked above. If you went out and gathered sales evidence from a different source, and asked me to predict where each author would fall in your sample, I would be well advised to slide my predictions toward that regression line, for reasons well explained in a classic article by Bradley Efron and Carl Morris.
On the other hand, I wouldn't want to slide every prediction an equal distance. If I were smart, I would be particularly flexible about the authors where I have little evidence. We have a lot of evidence in the upper right-hand corner; we're not going to wake up tomorrow and discover that Charles Dickens was actually obscure. But toward the bottom of the graph, and the left-hand side, discovering a single new piece of evidence could dramatically change my estimate about an author who I thought was totally unknown.
Let's translate this casual reasoning into Bayesian language. We can use the loose correlation of sales and volumes-preserved to articulate a rough prior probability about the ratio likely to hold between those two variables.
Then we can improve our estimates of sales by viewing the bestseller lists as new evidence that merely updates our prior beliefs about authors, based on the shelf space they occupy in libraries. (Bayes' theorem is a formal way of explaining how we update our beliefs in the light of new evidence.)
But how much weight, exactly, should we give our prior? My first approach here was to say "um, I dunno, maybe fifty percent?" Which might be fine. But it turns out there's a more principled way to do this, which allows the weight of our prior to vary depending on the amount of new evidence we have for each writer. It's an approach called "empirical Bayes," because it infers a prior probability from patterns in the evidence. (Some people would argue that all Bayesian reasoning is empirical, but this notebook is not the place to hash that out!)
As David Robinson has explained, batting ratios can be imagined as a binomial distribution. Each time a batter goes up to the plate, they get a hit (1) or a miss (0), and the batting ratio is hits, divided by total at-bats. But if a batter has had very few visits to the plate, we're well advised to treat that ratio skeptically, sliding it toward our prior about the grand mean of batting averages for all hitters. It turns out that a good way to improve your estimate of a binomial distribution is to add a fixed amount to both sides of the ratio: the number of hits and the number of at-bats. If we're looking at someone who has had very few at-bats, those small fixed amounts can dramatically change their batting average, moving it toward the grand mean. Hank Aaron has had a lot of at-bats, so he's not going to slide around much.
We could view appearances on bestseller lists roughly the same way. Each time an author publishes a book, they have a chance to appear on a bestseller list. So the bestseller count, divided by total volumes preserved, is sort of roughly like hits, divided by total at-bats. I'm repeating "roughly" in italics because, of course, we aren't literally counting each book that an author published and checking how many copies it sold. We're using very rough and incomplete proxies for both variables. So the binomial assumption is probably not entirely accurate. But it may be accurate enough to improve our very spotty evidence about sales.
Well, as Robinson and Julia Silge have explained better than I can, if you've got a bunch of binomial distributions, you can often improve your estimates by assuming that the parameters \theta of those distributions are drawn from a hyperparameter that has the shape of a Beta distribution, governed by parameters \alpha and \beta. (Alpha and beta then become the fixed amounts you add to your counts of "hits" and "at-bats.") The code below is going to try to find those parameters.
There is one tricky, ad-hoc thing I add to the mix, which actually does make a big difference. One of the big problems with my evidence is that there's a huge gap between one volume on a bestseller list and no volumes. In reality, I know that market prominence is not counted by integers, and there ought to be a lot more fuzziness, which will matter especially at the bottom of the scale. To reflect this assumption I add some normally-distributed noise to the sales estimates before modeling them with a Beta distribution.
End of explanation
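# Added toy illustration of the shrinkage rule described above (hypothetical numbers,
# not authors from the data): a writer with x = 2 volumes and y = 1 list appearance is
# pulled strongly toward the prior ratio, while one with x = 200 and y = 100 barely moves.
x_toy, y_toy = 2, 1
print('raw ratio:', y_toy / x_toy, 'shrunk ratio:', (y_toy + alpha0) / (x_toy + alpha0 + beta0))
x_toy, y_toy = 200, 100
print('raw ratio:', y_toy / x_toy, 'shrunk ratio:', (y_toy + alpha0) / (x_toy + alpha0 + beta0))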
# Below we actually create a posterior estimate.
authordata['posterior'] = 0
for i in authordata.index:
y = authordata.loc[i, 'salesevidence']
x = authordata.loc[i, 'num_vols'] + 1
authordata.loc[i, 'posterior'] = np.log((x * ((y + alpha0) / (x + alpha0 + beta0))) + 1)
# Now we visualize the change.
# Two subplots, sharing the X axis
fig, axarr = plt.subplots(2, sharex = True, figsize = (10, 12))
matplotlib.rcParams.update({'font.size': 18})
axarr[0].scatter(authordata.logvols, authordata.logsales, c = 'darkorchid')
axarr[0].set_title('Log-scaled raw sales estimates')
axarr[1].scatter(authordata.logvols, authordata.posterior, c = 'seagreen', alpha = 0.7)
axarr[1].set_title('Sales estimates after applying Bayesian prior')
axarr[1].set_xlabel('log(number of career volumes)')
fig.savefig('bayesiantransform.png', bbox_inches = 'tight')
plt.show()
Explanation: The Beta distribution above
The histogram shows the number of occurrences of different ratios between salesevidence and num_vols (actually using a noisy prior instead of salesevidence itself).
The red line shows a Beta distribution fit to the histogram. Alpha is 0.0597, Beta is 8.807.
Applying the prior, and updating our posterior estimate
We inherit parameters alpha0 and beta0 from the cell above, and apply them to update salesevidence.
The logic of the updating is: first, use our prior about the distribution of sales ratios to slide the observed ratio for an author toward the prior. We do that by adding alpha0 to salesevidence and both alpha0 and beta0 to num_vols, then calculating a new ratio.
Finally, we use the recalculated ratio to adjust our original estimate of salesevidence by multiplying the new ratio times num_vols. Note that I also log-scale the posterior.
End of explanation
# Two subplots, sharing the X axis
fig, axarr = plt.subplots(2, sharex = True, figsize = (10, 12))
matplotlib.rcParams.update({'font.size': 18})
axarr[0].scatter(authordata.midcareer, authordata.logsales, c = 'darkorchid')
axarr[0].set_title('Log-scaled raw sales estimates, across time')
axarr[1].scatter(authordata.midcareer, authordata.posterior, c = 'green')
axarr[1].set_title('Sales estimates after applying Bayesian prior')
axarr[1].set_xlim((1840, 1956))
axarr[1].set_ylim((-0.2, 3.2))
fig.savefig('images/bayesianacrosstime.png', bbox_inches = 'tight')
plt.show()
Explanation: This is my favorite part of the notebook, because principled inferences turn out to be pretty.
You can see that the Bayesian prior has relatively little effect on the rankings of prominent authors in the upper right-hand corner (Charles Dickens and Ellen Wood are still going to be the two dots at the top). But it very substantially warps space at the bottom and on the left side of the graph, giving the lower image a bit more flair.
The other nice thing about this method is that it allows us to calculate uncertainty. I'm not going to do that in this notebook, but if we wanted to, we could calculate "credible intervals" for each author.
How this affects history
In this project, I'm less worried about individual authors than about the overall historical picture. How has that changed?
We can use our "midcareer" estimates for each author to plot them on a timeline.
End of explanation
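# Added sketch of the "credible intervals" mentioned above -- one way they *could* be
# computed, not part of the original analysis: treat an author's ratio as having a
# Beta(y + alpha0, max(x - y, 0) + beta0) posterior and rescale its 5th/95th percentiles.
example_author = authordata.index[0]
y_obs = authordata.loc[example_author, 'salesevidence']
x_obs = authordata.loc[example_author, 'num_vols'] + 1
interval = ss.beta.ppf([0.05, 0.95], y_obs + alpha0, max(x_obs - y_obs, 0) + beta0)
print(example_author, np.log(x_obs * interval + 1))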
authordata.sort_values(by = 'posterior', inplace = True, ascending = False)
thetop = np.array(authordata.index.tolist()[0:50])
thetop.shape = (25, 2)
thetop
Explanation: You can still see that authors not mentioned in bestseller lists are grouped together at the bottom of the frame. But they are no longer separated from the rest of the data by a rigid gulf. We can make a slightly better guess about the differences between different people in that group.
The Bayesian prior also slightly tweaks our sense of the relative prominence of bestselling authors, by using the number of volumes preserved in libraries as an additional source of evidence. But, as you can see, it doesn't make big alterations at the top of the scale.
By the way, who's actually up there? Mostly familiar names. I'm not hung up on the exact sorting of the top ten or twenty names. If that was our goal, we could certainly do a better job by checking publishers' archives for the top ten or twenty people. Even existing studies by Altick, etc, could give us a better sorting at the top of the list. But remember, we've got more than a thousand authors in this corpus! The real goal of this project is to achieve a coarse macroscopic sorting of the "top," "middle," and "bottom" of that much larger list. So here are the top fifty, in no particular order.
End of explanation
authordata['percentile'] = 0
for name in authordata.index:
mid = authordata.loc[name, 'midcareer']
subset = authordata.posterior[(authordata.midcareer > mid - 25) & ((authordata.midcareer < mid + 25))]
ranking = sorted(list(subset))
thisauth = authordata.loc[name, 'posterior']
for idx, element in enumerate(ranking):
if thisauth < element:
break
percent = idx / len(ranking)
authordata.loc[name, 'percentile'] = percent
authordata.plot.scatter(x = 'midcareer', y = 'percentile', figsize = (8,6))
authordata.to_csv('authordata.csv')
Explanation: percentile calculations
The distribution of authors across prominence is not really uniform. But for some purposes of visualization it's nice to have a uniform distribution, so let's create one. We can think of it as the author's "percentile" location within a moving 50-year window.
End of explanation
prestigery = dict()
with open('prestigedata.csv', encoding = 'utf-8') as f:
reader = csv.DictReader(f)
for row in reader:
auth = row['author']
if auth not in prestigery:
prestigery[auth] = []
prestigery[auth].append(float(row['logistic']))
authordata['prestige'] = float('nan')
for k, v in prestigery.items():
if k in authordata.index:
authordata.loc[k, 'prestige'] = sum(v) / len(v)
onlyboth = authordata.dropna(subset = ['prestige'])
onlyboth.shape
Explanation: How sales interact with critical prestige
In a separate analytical project, I've contrasted two samples of fiction (one drawn from reviews in elite periodicals, the other selected at random) to develop a loose model of literary prestige, as seen by reviewers.
Underline loose. Evidence about prestige is even fuzzier and harder to come by than evidence about sales. So this model has to rely on a couple of random samples, combined with the language of the text itself. It is only 72.5% accurate at predicting which works of fiction come from the reviewed set -- so by all means, take its predictions with a grain of salt! However, even an imperfect model can reveal loose organizing principles, and that's what we're interested in here.
Loading evidence of prestige
The model we're loading was produced by modeling each of the four quarter-centuries separately: (1850-74, 75-99, 1900-24, 25-49). Then the results of all four models were concatenated. This is going to be important when we analyze changes over time; we know that volumes in the twentieth century, for instance, were not evaluated using nineteenth-century critical standards. (In practice, the standards we can model don't change very rapidly, but it's worth making these divisions just to be on the safe side.)
End of explanation
def get_a_period(aframe, floor, ceiling):
''' Extracts a chronological slice of our data.
'''
subset = aframe[(aframe.midcareer >= floor) & (aframe.midcareer < ceiling)]
x = subset.percentile
y = subset.prestige
return x, y, subset
def get_an_author(anauthor, aframe):
''' Gets coordinates for an author in a given space.
'''
if anauthor not in aframe.index:
return 0, 0
else:
x = aframe.loc[anauthor, 'percentile']
y = aframe.loc[anauthor, 'prestige']
return x, y
def plot_author(officialname, vizname, aperiod, ax):
x, y = get_an_author(officialname, aperiod)
ax.text(x, y, vizname, fontsize = 14)
def revtocolor(number):
if number > 0.1:
return 'red'
else:
return 'blue'
def revtobinary(number):
if number > 0.1:
return 1
else:
return 0
# Let's plot the mid-Victorians
xvals, yvals, victoriana = get_a_period(onlyboth, 1840, 1875)
victoriana = victoriana.assign(samplecolor = victoriana.reviews.apply(revtocolor))
ax = victoriana.plot.scatter(x = 'percentile', y = 'prestige', s = victoriana.num_vols * 3 + 3, alpha = 0.25,
c = victoriana.samplecolor, figsize = (12,9))
authors_to_plot = {'Dickens, Charles': 'Charles Dickens', "Wood, Ellen": 'Ellen Wood',
'Ainsworth, William Harrison': 'W. H. Ainsworth',
'Lytton, Edward Bulwer Lytton': 'Edward Bulwer-Lytton',
'Eliot, George': 'George Eliot', 'Sikes, Wirt': 'Wirt Sikes',
'Collins, A. Maria': 'Maria A Collins',
'Hawthorne, Nathaniel': "Nathaniel Hawthorne",
'Southworth, Emma Dorothy Eliza Nevitte': 'E.D.E.N. Southworth',
'Helps, Arthur': 'Arthur Helps'}
for officialname, vizname in authors_to_plot.items():
plot_author(officialname, vizname, victoriana, ax)
ax.set_xlabel('percentile ranking, sales')
ax.set_ylabel('probability of review in elite journals')
ax.set_title('The literary field, 1850-74\n')
ax.text(0.5, 0.1, 'circle area = number of volumes in Hathi\nred = reviewed sample', color = 'blue', fontsize = 14)
ax.set_ylim((0,0.9))
ax.set_xlim((-0.05,1.1))
#victoriana = victoriana.assign(binaryreview = victoriana.reviews.apply(revtobinary))
#y, X = dmatrices('binaryreview ~ percentile + prestige', data=victoriana, return_type='dataframe')
#crudelm = smf.Logit(y, X).fit()
#params = crudelm.params
#theintercept = params[0] / -params[2]
#theslope = params[1] / -params[2]
#newX = np.linspace(0, 1, 20)
#abline_values = [theslope * i + theintercept for i in newX]
#ax.plot(newX, abline_values, color = 'black')
spines_to_remove = ['top', 'right']
for spine in spines_to_remove:
ax.spines[spine].set_visible(False)
plt.savefig('images/field1850noline.png', bbox_inches = 'tight')
plt.show()
Explanation: Let's explore the literary field!
Using this data, we can quantitatively recreate Pierre Bourdieu's map of social space.
To get a sense of what we're looking at, it may be helpful first of all just to see where some familiar names fall.
End of explanation
plt.savefig('images/field1900.png', bbox_inches = 'tight')
plt.show()
#print(theslope)  # theslope is only defined when the commented-out logit fit is uncommented
Explanation: The upward drift of these points reveals a fairly strong correlation between prestige and sales. It is possible to find a few high-selling authors who are predicted to lack critical prestige -- notably, for instance, the historical novelist W. H. Ainsworth and the sensation novelist Ellen Wood, author of East Lynne. It's harder to find authors who have prestige but no sales: there's not much in the northwest corner of the map. Arthur Helps, a Cambridge Apostle, is a fairly lonely figure.
Fast-forward fifty years and we see a different picture.
End of explanation
# Let's plot modernism!
xvals, yvals, modernity2 = get_a_period(onlyboth, 1925,1950)
modernity2 = modernity2.assign(samplecolor = modernity2.reviews.apply(revtocolor))
ax = modernity2.plot.scatter(x = 'percentile', y = 'prestige', s = modernity2.num_vols * 3 + 3,
c = modernity2.samplecolor, alpha = 0.3, figsize = (12,9))
authors_to_plot = {'Cain, James M': 'James M Cain', 'Faulkner, William': 'William Faulkner',
'Stein, Gertrude': 'Gertrude Stein', 'Hemingway, Ernest': 'Ernest Hemingway',
'Joyce, James': 'James Joyce', 'Forester, C. S. (Cecil Scott)': 'C S Forester',
'Spillane, Mickey': 'Mickey Spillane', 'Welty, Eudora': 'Eudora Welty',
'Howard, Robert E': 'Robert E Howard', 'Buck, Pearl S': 'Pearl S Buck',
'Shute, Nevil': 'Nevil Shute', 'Waugh, Evelyn': 'Evelyn Waugh',
'Christie, Agatha': 'Agatha Christie', 'Rhys, Jean': 'Jean Rhys',
'Wodehouse, P. G': 'P G Wodehouse', 'Wright, Richard': 'Richard Wright',
'Hurston, Zora Neale': 'Zora Neale Hurston'}
for officialname, vizname in authors_to_plot.items():
plot_author(officialname, vizname, modernity2, ax)
ax.set_xlabel('percentile ranking, sales')
ax.set_ylabel('probability of review in elite journals')
ax.set_title('The literary field, 1925-49\n')
ax.text(0.5, 0.0, 'circle area = number of volumes in Hathi\nred = reviewed sample', color = 'blue', fontsize = 12)
ax.set_ylim((-0.05,1))
ax.set_xlim((-0.05, 1.2))
#modernity2 = modernity2.assign(binaryreview = modernity2.reviews.apply(revtobinary))
#y, X = dmatrices('binaryreview ~ percentile + prestige', data=modernity2, return_type='dataframe')
#crudelm = smf.Logit(y, X).fit()
#params = crudelm.params
#theintercept = params[0] / -params[2]
#theslope = params[1] / -params[2]
#newX = np.linspace(0, 1, 20)
#abline_values = [theslope * i + theintercept for i in newX]
#ax.plot(newX, abline_values, color = 'black')
spines_to_remove = ['top', 'right']
for spine in spines_to_remove:
ax.spines[spine].set_visible(False)
plt.savefig('images/field1925noline.png', bbox_inches = 'tight')
plt.show()
#print(theslope)  # theslope is only defined when the commented-out logit fit above is uncommented
Explanation: Here, again, an upward drift reveals a loose correlation between prestige and sales. But is the correlation perhaps a bit weaker? There seem to be more authors in the "upper midwest" portion of the map now -- people like Wyndham Lewis and Sherwood Anderson, who have critical prestige but not enormous sales.
There's also a distinct "genre fiction" and "pulp fiction" world emerging in the southeast corner of this map. E. Phillips Oppenheim, British author of adventure fiction, would be right next to Edgar Rice Burroughs, if we had room to print both names.
Moreover, if you just look at the large circles (the authors we're likely to remember), you can start to see how people in this period might get the idea that sales are actually negatively correlated with critical prestige. It almost looks like a diagonal line slanting down from Sherwood Anderson to Zane Grey. If you look at the bigger picture, the slant is actually going the other direction. The people over by Elma Travis would sadly remind us that, in fact, prestige still correlates positively with sales! But you can see how that might not be obvious at the top of the market. There's a faint backward slant on the right-hand side that wasn't visible among the Victorians.
End of explanation
pears = []
pvals = []
for floor in range(1850, 1950, 25):
ceiling = floor + 25
pear = pearsonr(onlyboth.percentile[(onlyboth.midcareer >= floor) & (onlyboth.midcareer < ceiling)], onlyboth.prestige[(onlyboth.midcareer >= floor) & (onlyboth.midcareer < ceiling)])
pears.append(int(pear[0]*1000)/1000)
pvals.append(int((pear[1]+ .0001)*10000)/10000)
def get_line(subset):
lm = smf.ols(formula='prestige ~ percentile', data=subset).fit()
xpred = np.linspace(-0.1, 1.1, 50)
xpred = pd.DataFrame({'percentile': xpred})
ypred = lm.predict(xpred)
return xpred, ypred
matplotlib.rcParams.update({'font.size': 16})
f, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex='row', sharey='row', figsize = (14, 14))
x, y, subset = get_a_period(onlyboth, 1840, 1875)
ax1.scatter(x, y)
xpred, ypred = get_line(subset)
ax1.plot(xpred, ypred, linewidth = 2, color = 'red')
ax1.set_title('1850-74: r=' + str(pears[0]) + ", p<" + str(pvals[0]))
x, y, subset = get_a_period(onlyboth, 1875, 1900)
ax2.scatter(x, y)
xpred, ypred = get_line(subset)
ax2.plot(xpred, ypred, linewidth = 2, color = 'red')
ax2.set_title('1875-99: r=' + str(pears[1])+ ", p<" + str(pvals[1]))
x, y, subset = get_a_period(onlyboth, 1900, 1925)
ax3.scatter(x, y)
xpred, ypred = get_line(subset)
ax3.plot(xpred, ypred, linewidth = 2, color = 'red')
ax3.set_title('1900-24: r=' + str(pears[2])+ ", p<" + str(pvals[2]))
x, y, subset = get_a_period(onlyboth, 1925, 1950)
ax4.scatter(x, y)
xpred, ypred = get_line(subset)
ax4.plot(xpred, ypred, linewidth = 2, color = 'red')
ax4.set_title('1925-49: r=' + str(pears[3])+ ", p<" + str(pvals[3]))
f.savefig('images/fourcorrelations.png', bbox_inches = 'tight')
Explanation: In the second quarter of the twentieth century, the slope of the upper right quadrant becomes even more visible. The whole field is now, in effect, round. Scroll back up to the Victorians, and you'll see that wasn't true.
Also, the locations of my samples have moved around the map. There are a lot more blue, "random" books over on the right side now, among the bestsellers, than there were in the Victorian era. So the strength of the linguistic boundary between "reviewed" and "random" samples may be roughly the same, but its meaning has probably changed; it's becoming more properly a boundary between the prestigious and the popular, whereas in the 19c, it tended to be a boundary between the prestigious and the obscure.
The overall correlation between sales and prestige
But we don't have to rely on vague visual guesses to estimate the strength of the correlation between two variables. Let's measure the correlation, and ask how it varies over time.
End of explanation
for floor in range(1850, 1950, 25):
ceiling = floor + 25
x, y, aperiod = get_a_period(authordata, floor, ceiling)
pear = pearsonr(aperiod.posterior, aperiod.reviews)
print(floor, ceiling, pear)
Explanation: So what have we achieved?
There is a steady decline in the correlation between prestige and sales. The correlation coefficient (r) steadily declines -- and for whatever it's worth, the p value is less than 0.05 in the first three plots, but not significant in the last. It's not a huge change, but that itself may be part of what we learn using this method.
I think this decline is roughly what we might expect to see: popularity and prestige are supposed to stop correlating as we enter the twentieth century. But I'm still extremely happy to see the pattern emerge.
For one thing, we may be getting some new insights about the way well-known transformations at the top of the market relate to the less-publicized struggles of Wirt Sikes and Elma A Travis. Our received literary histories sometimes make it sound like modernism introduces a yawning chasm in the literary field — Andreas Huyssen's "Great Divide." If you focus on the upper right-hand corner of the map, that description may be valid. But if you back out for a broader picture, this starts to look like a rather subtle shift — perhaps just a rounding out of the literary field as new niches are filled in.
More fundamentally, I'm pleased to see that the method itself works. The evidence we're working with is rough and patchy; we had to make a lot of dubious inferences along the road. But the inferences seem to work: we're getting a map of the literary field that looks loosely familiar, and that changes over time roughly as we might expect it to change. This looks like a foundation we could build on: we could start to pose questions, for instance, about the way publishing houses positioned themselves in this space.
A simpler solution
However, if you're not impressed by the clouds of dots above, the truth is that we don't need my textual model of literary prestige to confirm the divergence of sales and prestige across this century. At the top of the market, the changes are quite dramatic. So a straightforward Pearson correlation between sales evidence and the number of reviews an author gets from elite publications will do the trick.
Note that I make no pretense of having counted all reviews; this is just the number of times I (and the research assistants mentioned in "Acknowledgements" above) encountered an author in our random sample of elite journals. And since it's hard to sample entirely randomly, it is possible that the number of reviews is shaped (half-consciously) by our contemporary assumptions about prestige, encouraging us to "make sure we get so-and-so." I think, however, the declining correlation is too dramatic to be explained away in that fashion.
End of explanation
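# Added check, not in the original analysis: a quick Fisher z-test on whether the drop
# from the 1850-74 correlation to the 1925-49 one is larger than sampling noise.
n_first = ((onlyboth.midcareer >= 1850) & (onlyboth.midcareer < 1875)).sum()
n_last = ((onlyboth.midcareer >= 1925) & (onlyboth.midcareer < 1950)).sum()
z_diff = (np.arctanh(pears[0]) - np.arctanh(pears[-1])) / np.sqrt(1 / (n_first - 3) + 1 / (n_last - 3))
print('two-sided p for the decline in correlation:', 2 * ss.norm.sf(abs(z_diff)))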
for floor in range(1850, 1950, 25):
ceiling = floor + 25
x, y, aperiod = get_a_period(authordata, floor, ceiling)
pear = pearsonr(aperiod.salesevidence, aperiod.reviews)
print(floor, ceiling, pear)
Explanation: You don't even have to use my posterior estimate of sales. The raw counts will work, though the correlation is not as strong.
End of explanation
authordata.to_csv('data/authordata.csv', index_label = 'author')
onlyboth.to_csv('data/pairedwithprestige.csv', index_label = 'author')
Explanation: Let's save the data so other notebooks can use it.
End of explanation |
5,617 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic UNet
Unet model using PixelShuffle ICNR upsampling that can be built on top of any pretrained architecture
Step1: Export - | Python Code:
#|export
def _get_sz_change_idxs(sizes):
"Get the indexes of the layers where the size of the activation changes."
feature_szs = [size[-1] for size in sizes]
sz_chg_idxs = list(np.where(np.array(feature_szs[:-1]) != np.array(feature_szs[1:]))[0])
return sz_chg_idxs
#|hide
test_eq(_get_sz_change_idxs([[3,64,64], [16,64,64], [32,32,32], [16,32,32], [32,32,32], [16,16]]), [1,4])
test_eq(_get_sz_change_idxs([[3,64,64], [16,32,32], [32,32,32], [16,32,32], [32,16,16], [16,16]]), [0,3])
test_eq(_get_sz_change_idxs([[3,64,64]]), [])
test_eq(_get_sz_change_idxs([[3,64,64], [16,32,32]]), [0])
#|export
class UnetBlock(Module):
"A quasi-UNet block, using `PixelShuffle_ICNR upsampling`."
@delegates(ConvLayer.__init__)
def __init__(self, up_in_c, x_in_c, hook, final_div=True, blur=False, act_cls=defaults.activation,
self_attention=False, init=nn.init.kaiming_normal_, norm_type=None, **kwargs):
self.hook = hook
self.shuf = PixelShuffle_ICNR(up_in_c, up_in_c//2, blur=blur, act_cls=act_cls, norm_type=norm_type)
self.bn = BatchNorm(x_in_c)
ni = up_in_c//2 + x_in_c
nf = ni if final_div else ni//2
self.conv1 = ConvLayer(ni, nf, act_cls=act_cls, norm_type=norm_type, **kwargs)
self.conv2 = ConvLayer(nf, nf, act_cls=act_cls, norm_type=norm_type,
xtra=SelfAttention(nf) if self_attention else None, **kwargs)
self.relu = act_cls()
apply_init(nn.Sequential(self.conv1, self.conv2), init)
def forward(self, up_in):
s = self.hook.stored
up_out = self.shuf(up_in)
ssh = s.shape[-2:]
if ssh != up_out.shape[-2:]:
up_out = F.interpolate(up_out, s.shape[-2:], mode='nearest')
cat_x = self.relu(torch.cat([up_out, self.bn(s)], dim=1))
return self.conv2(self.conv1(cat_x))
#|export
class ResizeToOrig(Module):
"Merge a shortcut with the result of the module by adding them or concatenating them if `dense=True`."
def __init__(self, mode='nearest'): self.mode = mode
def forward(self, x):
if x.orig.shape[-2:] != x.shape[-2:]:
x = F.interpolate(x, x.orig.shape[-2:], mode=self.mode)
return x
#|export
class DynamicUnet(SequentialEx):
"Create a U-Net from a given architecture."
def __init__(self, encoder, n_out, img_size, blur=False, blur_final=True, self_attention=False,
y_range=None, last_cross=True, bottle=False, act_cls=defaults.activation,
init=nn.init.kaiming_normal_, norm_type=None, **kwargs):
imsize = img_size
sizes = model_sizes(encoder, size=imsize)
sz_chg_idxs = list(reversed(_get_sz_change_idxs(sizes)))
self.sfs = hook_outputs([encoder[i] for i in sz_chg_idxs], detach=False)
x = dummy_eval(encoder, imsize).detach()
ni = sizes[-1][1]
middle_conv = nn.Sequential(ConvLayer(ni, ni*2, act_cls=act_cls, norm_type=norm_type, **kwargs),
ConvLayer(ni*2, ni, act_cls=act_cls, norm_type=norm_type, **kwargs)).eval()
x = middle_conv(x)
layers = [encoder, BatchNorm(ni), nn.ReLU(), middle_conv]
for i,idx in enumerate(sz_chg_idxs):
not_final = i!=len(sz_chg_idxs)-1
up_in_c, x_in_c = int(x.shape[1]), int(sizes[idx][1])
do_blur = blur and (not_final or blur_final)
sa = self_attention and (i==len(sz_chg_idxs)-3)
unet_block = UnetBlock(up_in_c, x_in_c, self.sfs[i], final_div=not_final, blur=do_blur, self_attention=sa,
act_cls=act_cls, init=init, norm_type=norm_type, **kwargs).eval()
layers.append(unet_block)
x = unet_block(x)
ni = x.shape[1]
if imsize != sizes[0][-2:]: layers.append(PixelShuffle_ICNR(ni, act_cls=act_cls, norm_type=norm_type))
layers.append(ResizeToOrig())
if last_cross:
layers.append(MergeLayer(dense=True))
ni += in_channels(encoder)
layers.append(ResBlock(1, ni, ni//2 if bottle else ni, act_cls=act_cls, norm_type=norm_type, **kwargs))
layers += [ConvLayer(ni, n_out, ks=1, act_cls=None, norm_type=norm_type, **kwargs)]
apply_init(nn.Sequential(layers[3], layers[-2]), init)
#apply_init(nn.Sequential(layers[2]), init)
if y_range is not None: layers.append(SigmoidRange(*y_range))
layers.append(ToTensorBase())
super().__init__(*layers)
def __del__(self):
if hasattr(self, "sfs"): self.sfs.remove()
from fastai.vision.models import resnet34
m = resnet34()
m = nn.Sequential(*list(m.children())[:-2])
tst = DynamicUnet(m, 5, (128,128), norm_type=None)
x = cast(torch.randn(2, 3, 128, 128), TensorImage)
y = tst(x)
test_eq(y.shape, [2, 5, 128, 128])
tst = DynamicUnet(m, 5, (128,128), norm_type=None)
x = torch.randn(2, 3, 127, 128)
y = tst(x)
Explanation: Dynamic UNet
Unet model using PixelShuffle ICNR upsampling that can be built on top of any pretrained architecture
End of explanation
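# A further usage sketch (added, not one of the original tests): the optional flags
# exercised here -- blur, self_attention and y_range -- are all parameters of the
# DynamicUnet defined above, reusing the same resnet34 encoder `m`.
tst_sa = DynamicUnet(m, 5, (128,128), blur=True, self_attention=True, y_range=(0,1), norm_type=None)
y_sa = tst_sa(cast(torch.randn(2, 3, 128, 128), TensorImage))
test_eq(y_sa.shape, [2, 5, 128, 128])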
#|hide
from nbdev.export import *
notebook2script()
Explanation: Export -
End of explanation |
5,618 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
California house price prediction
Load the data and explore it
Step1: Create a test set
Step2: Do stratified sampling
Assuming median_income is an important predictor, we need to categorize it. It is important to build categories such that there are a sufficient number of data points in each stratum, otherwise the estimate of that stratum's importance will be biased. To ensure this, we should not have too many strata (as we would if we stratified on raw median income directly), and each stratum should be relatively wide.
Step3: Now remove the income_cat field used for this sampling. We will learn on the median_income data instead
Step4: Exploratory data analysis
Step5: Do pairwise plot to understand how each feature is correlated to each other
Step6: Focussing on relationship between income and house value
Step7: Creating new features that are meaningful and also useful in prediction
Create the number of rooms per household, bedrooms per household, ratio of bedrooms to the rooms, number of people per household. We do this on the whole dataset, then collect the train and test datasets.
Step8: Prepare data for ML
Step9: Fill missing values using Imputer - using median values | Python Code:
import pandas as pd
housing = pd.read_csv(r"E:\GIS_Data\file_formats\CSV\housing.csv")
housing.head()
housing.info()
# find unique values in ocean proximity column
housing.ocean_proximity.value_counts()
#describe all numerical rows - basic stats
housing.describe()
%matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
Explanation: California house price prediction
Load the data and explore it
End of explanation
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(housing, test_size=0.2, random_state=42)
print(train_set.shape)
print(test_set.shape)
Explanation: Create a test set
End of explanation
# scale the median income down by dividing it by 1.5 and rounding up those which are greater than 5 to 5.0
import numpy as np
housing['income_cat'] = np.ceil(housing['median_income'] / 1.5) #up round to integers
#replace those with values > 5 with 5.0, values < 5 remain as is
housing['income_cat'].where(housing['income_cat'] < 5, 5.0, inplace=True)
housing['income_cat'].hist()
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in split.split(housing, housing['income_cat']):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
Explanation: Do stratified sampling
Assuming median_income is an important predictor, we need to categorize it. It is important to build categories such that there are a sufficient number of data points in each stratum, otherwise the estimate of that stratum's importance will be biased. To ensure this, we should not have too many strata (as we would if we stratified on raw median income directly), and each stratum should be relatively wide.
End of explanation
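# Optional sanity check (added): the income_cat proportions in the stratified test set
# should closely track those in the full dataset.
overall_props = housing['income_cat'].value_counts() / len(housing)
strat_props = strat_test_set['income_cat'].value_counts() / len(strat_test_set)
pd.DataFrame({'overall': overall_props, 'stratified test': strat_props}).sort_index()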
for _temp in (strat_test_set, strat_train_set):
_temp.drop("income_cat", axis=1, inplace=True)
# Write the train and test data to disk
strat_test_set.to_csv('./housing_strat_test.csv')
strat_train_set.to_csv('./housing_strat_train.csv')
Explanation: Now remove the income_cat field used for this sampling. We will learn on the median_income data instead
End of explanation
strat_train_set.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4, s=strat_train_set['population']/100,
label='population', figsize=(10,7), color=strat_train_set['median_house_value'],
cmap=plt.get_cmap('jet'), colorbar=True)
plt.legend()
Explanation: Exploratory data analysis
End of explanation
import seaborn as sns
sns.pairplot(data=strat_train_set[['median_house_value','median_income','total_rooms','housing_median_age']])
Explanation: Do pairwise plot to understand how each feature is correlated to each other
End of explanation
strat_train_set.plot(kind='scatter', x='median_income', y='median_house_value', alpha=0.1)
Explanation: Focussing on relationship between income and house value
End of explanation
housing['rooms_per_household'] = housing['total_rooms'] / housing['households']
housing['bedrooms_per_household'] = housing['total_bedrooms'] / housing['households']
housing['bedrooms_per_rooms'] = housing['total_bedrooms'] / housing['total_rooms']
housing['population_per_household'] = housing['population'] / housing['households']
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
housing.plot(kind='scatter', x='bedrooms_per_household',y='median_house_value', alpha=0.5)
Explanation: Creating new features that are meaningful and also useful in prediction
Create the number of rooms per household, bedrooms per household, ratio of bedrooms to the rooms, number of people per household. We do this on the whole dataset, then collect the train and test datasets.
End of explanation
#create a copy without the house value column
housing = strat_train_set.drop('median_house_value', axis=1)
#create a copy of house value column into a new series, which will be the labeled data
housing_labels = strat_train_set['median_house_value'].copy()
Explanation: Prepare data for ML
End of explanation
from sklearn.preprocessing import Imputer
housing_imputer = Imputer(strategy='median')
#drop text columns let Imputer learn
housing_numeric = housing.drop('ocean_proximity', axis=1)
housing_imputer.fit(housing_numeric)
housing_imputer.statistics_
_x = housing_imputer.transform(housing_numeric)
housing_filled = pd.DataFrame(_x, columns=housing_numeric.columns, index=housing_numeric.index)
housing_filled['ocean_proximity'] = housing['ocean_proximity']
housing_filled.head()
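# Quick sanity check (added): the imputed frame should have no missing values left.
housing_filled.isnull().sum()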
Explanation: Fill missing values using Imputer - using median values
End of explanation |
5,619 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VTK tools
Pygslib uses VTK
Step1: Functions in vtktools
Step2: Load a cube defined in an stl file and plot it
STL is a popular mesh format supported by many non-commercial and commercial software packages, for example
Step3: Ray casting to find intersections of a line with the cube
This is basically how we plan to find points inside a solid and to define blocks inside a solid
Step4: Test line on surface
Step5: Finding points
Step6: Find points inside a solid
Step7: Find points over a surface
Step8: Find points below a surface
Step9: Export points to a VTK file | Python Code:
import pygslib
import numpy as np
Explanation: VTK tools
Pygslib uses VTK:
as data format and data converting tool
to plot in 3D
as a library with some basic computational geometry functions, for example to know if a point is inside a surface
Some of the functions in VTK were obtained or modified from Adamos Kyriakou at https://pyscience.wordpress.com/
End of explanation
help(pygslib.vtktools)
Explanation: Functions in vtktools
End of explanation
#load the cube
mycube=pygslib.vtktools.loadSTL('../datasets/stl/cube.stl')
# see the information about this data... Note that it is an vtkPolyData
print mycube
# Create a VTK render containing a surface (mycube)
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.50, background=(1,1,1))
# Now we plot the render
pygslib.vtktools.vtk_show(renderer, camera_position=(-20,20,20), camera_focalpoint=(0,0,0))
Explanation: Load a cube defined in an stl file and plot it
STL is a popular mesh format supported by many non-commercial and commercial software packages, for example: Paraview, Datamine Studio, etc.
End of explanation
# we have a line, for example a block model row
# defined by two points or an infinite line passing trough a dillhole sample
pSource = [-50.0, 0.0, 0.0]
pTarget = [50.0, 0.0, 0.0]
# now we want to see how this looks like
pygslib.vtktools.addLine(renderer,pSource, pTarget, color=(0, 1, 0))
pygslib.vtktools.vtk_show(renderer) # the camera position was already defined
# now we find the point coordinates of the intersections
intersect, points, pointsVTK= pygslib.vtktools.vtk_raycasting(mycube, pSource, pTarget)
print "the line intersects? ", intersect==1
print "the line is over the surface?", intersect==-1
# list of coordinates of the points intersecting
print points
#Now we plot the intersecting points
# To do this we add the points to the renderer
for p in points:
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
pygslib.vtktools.vtk_show(renderer)
Explanation: Ray casting to find intersections of a line with the cube
This is basically how we plan to find points inside a solid and to define blocks inside a solid
End of explanation
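# Added illustration of the idea above (even-odd rule): a point is inside a closed solid
# when a ray from it crosses the surface an odd number of times, so the midpoint of the
# first pair of crossings found above lies inside the cube.
print "crossings found along the ray: ", len(points)
print "midpoint of the first pair of crossings (inside the cube): ", (np.array(points[0]) + np.array(points[1])) / 2.0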
# we have a line, for example a block model row
# defined by two points or an infinite line passing trough a dillhole sample
pSource = [-50.0, 5.01, 0]
pTarget = [50.0, 5.01, 0]
# now we find the point coordinates of the intersections
intersect, points, pointsVTK= pygslib.vtktools.vtk_raycasting(mycube, pSource, pTarget)
print "the line intersects? ", intersect==1
print "the line is over the surface?", intersect==-1
# list of coordinates of the points intersecting
print points
# now we want to see how this looks like
pygslib.vtktools.addLine(renderer,pSource, pTarget, color=(0, 1, 0))
for p in points:
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
pygslib.vtktools.vtk_show(renderer) # the camera position was already defined
# note that there is a tolerance of about 0.01
Explanation: Test line on surface
End of explanation
#using same cube but generation arbitrary random points
x = np.random.uniform(-10,10,150)
y = np.random.uniform(-10,10,150)
z = np.random.uniform(-10,10,150)
Explanation: Finding points
End of explanation
# selecting all inside the solid
# These two methods are equivalent, but test=4 also works with open surfaces
inside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=1)
inside1,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=4)
err=inside==inside1
#print inside, tuple(p)
print x[~err]
print y[~err]
print z[~err]
# here we prepare to plot the solid, the x,y,z indicator and we also
# plot the line (direction) used to ray trace
# convert the data in the STL file into a renderer and then we plot it
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))
# add indicator (r->x, g->y, b->z)
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))
# add ray to see where we are pointing
pygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(p), color=(0, 0, 0))
# here we plot the points selected and non-selected in different color and size
# add the points selected
for i in range(len(inside)):
p=[x[i],y[i],z[i]]
if inside[i]!=0:
#inside
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
else:
pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))
#lets rotate a bit this
pygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))
Explanation: Find points inside a solid
End of explanation
# selecting all over a surface (test = 2)
inside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=2)
# here we prepare to plot the solid, the x,y,z indicator and we also
# plot the line (direction) used to ray trace
# convert the data in the STL file into a renderer and then we plot it
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))
# add indicator (r->x, g->y, b->z)
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))
# add ray to see where we are pointing
pygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(-p), color=(0, 0, 0))
# here we plot the points selected and non-selected in different color and size
# add the points selected
for i in range(len(inside)):
p=[x[i],y[i],z[i]]
if inside[i]!=0:
#inside
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
else:
pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))
#lets rotate a bit this
pygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))
Explanation: Find points over a surface
End of explanation
# selecting all below a surface (test = 3)
inside,p=pygslib.vtktools.pointquering(mycube, azm=0, dip=0, x=x, y=y, z=z, test=3)
# here we prepare to plot the solid, the x,y,z indicator and we also
# plot the line (direction) used to ray trace
# convert the data in the STL file into a renderer and then we plot it
renderer = pygslib.vtktools.polydata2renderer(mycube, color=(1,0,0), opacity=0.70, background=(1,1,1))
# add indicator (r->x, g->y, b->z)
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-7,-10,-10], color=(1, 0, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-7,-10], color=(0, 1, 0))
pygslib.vtktools.addLine(renderer,[-10,-10,-10], [-10,-10,-7], color=(0, 0, 1))
# add ray to see where we are pointing
pygslib.vtktools.addLine(renderer, (0.,0.,0.), tuple(p), color=(0, 0, 0))
# here we plot the points selected and non-selected in different color and size
# add the points selected
for i in range(len(inside)):
p=[x[i],y[i],z[i]]
if inside[i]!=0:
#inside
pygslib.vtktools.addPoint(renderer, p, radius=0.5, color=(0.0, 0.0, 1.0))
else:
pygslib.vtktools.addPoint(renderer, p, radius=0.2, color=(0.0, 1.0, 0.0))
#lets rotate a bit this
pygslib.vtktools.vtk_show(renderer, camera_position=(0,0,50), camera_focalpoint=(0,0,0))
Explanation: Find points below a surface
End of explanation
data = {'inside': inside}
pygslib.vtktools.points2vtkfile('points', x,y,z, data)
Explanation: Export points to a VTK file
End of explanation |
5,620 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook describes the setup of CLdb with a set of E. coli genomes.
Notes
It is assumed that you have CLdb in your PATH
Step1: The required files are in '../ecoli_raw/'
Step2: Checking that CLdb is installed in PATH
Step3: Setting up the CLdb directory
Step4: Downloading the genome genbank files. Using the 'GIs.txt' file
GIs.txt is just a list of GIs and taxon names.
Step5: Creating/loading CLdb of E. coli CRISPR data
Step6: Making CLdb sqlite file
Step7: Setting up CLdb config
This way, the CLdb script will know where the CLdb database is located.
Otherwise, you would have to keep telling the CLdb script where the database is.
Step8: Loading loci
The next step is loading the loci table.
This table contains the user-provided info on each CRISPR-CAS system in the genomes.
Let's look at the table before loading it in CLdb
Checking out the CRISPR loci table
Step9: Notes on the loci table
Step10: Notes on loading
A lot is going on here
Step11: The summary doesn't show anything for spacers, DRs, genes or leaders!
That's because we haven't loaded that info yet...
Loading CRISPR arrays
The next step is to load the CRISPR array tables.
These are tables in 'CRISPRFinder format' that have CRISPR array info.
Let's take a look at one of the array files before loading them all.
Step12: Note
Step13: Note
Step14: Setting array sense strand
The strand that is transcribed needs to be defined in order to have the correct sequence for downstream analyses (e.g., blasting spacers and getting PAM regions)
The sense (reading) strand is defined by (order of precedence)
Step15: Spacer and DR clustering
Clustering of spacer and/or DR sequences accomplishes
Step16: Database summary | Python Code:
# path to raw files
## CHANGE THIS!
rawFileDir = "~/perl/projects/CLdb/data/Ecoli/"
# directory where the CLdb database will be created
## CHANGE THIS!
workDir = "~/t/CLdb_Ecoli/"
# viewing file links
import os
import zipfile
import csv
from IPython.display import FileLinks
# pretty viewing of tables
## get from: http://epmoyer.github.io/ipy_table/
from ipy_table import *
rawFileDir = os.path.expanduser(rawFileDir)
workDir = os.path.expanduser(workDir)
Explanation: This notebook describes the setup of CLdb with a set of E. coli genomes.
Notes
It is assumed that you have CLdb in your PATH
End of explanation
FileLinks(rawFileDir)
Explanation: The required files are in '../ecoli_raw/':
a loci table
array files
genome nucleotide sequences
genbank (preferred) or fasta format
Let's look at the provided files for this example:
End of explanation
!CLdb -h
Explanation: Checking that CLdb is installed in PATH
End of explanation
# this makes the working directory
if not os.path.isdir(workDir):
os.makedirs(workDir)
# unarchiving files in the raw folder over to the newly made working folder
files = ['array.zip','loci.zip', 'GIs.txt.zip']
files = [os.path.join(rawFileDir, x) for x in files]
for f in files:
if not os.path.isfile(f):
raise IOError, 'Cannot find file: {}'.format(f)
else:
zip = zipfile.ZipFile(f)
zip.extractall(path=workDir)
print 'unzipped raw files:'
FileLinks(workDir)
Explanation: Setting up the CLdb directory
End of explanation
# making genbank directory
genbankDir = os.path.join(workDir, 'genbank')
if not os.path.isdir(genbankDir):
os.makedirs(genbankDir)
# downloading genomes
!cd $genbankDir; \
CLdb -- accession-GI2fastaGenome -format genbank -fork 5 < ../GIs.txt
# checking files
!cd $genbankDir; \
ls -thlc *.gbk
Explanation: Downloading the genome genbank files. Using the 'GIs.txt' file
GIs.txt is just a list of GIs and taxon names.
End of explanation
!CLdb -- makeDB -h
Explanation: Creating/loading CLdb of E. coli CRISPR data
End of explanation
!cd $workDir; \
CLdb -- makeDB -r -drop
CLdbFile = os.path.join(workDir, 'CLdb.sqlite')
print 'CLdb file location: {}'.format(CLdbFile)
Explanation: Making CLdb sqlite file
End of explanation
s = 'DATABASE = ' + CLdbFile
configFile = os.path.join(os.path.expanduser('~'), '.CLdb')
with open(configFile, 'wb') as outFH:
outFH.write(s)
print 'Config file written: {}'.format(configFile)
Explanation: Setting up CLdb config
This way, the CLdb script will know where the CLdb database is located.
Otherwise, you would have to keep telling the CLdb script where the database is.
End of explanation
lociFile = os.path.join(workDir, 'loci', 'loci.txt')
# reading in file
tbl = []
with open(lociFile, 'rb') as f:
reader = csv.reader(f, delimiter='\t')
for row in reader:
tbl.append(row)
# making table
make_table(tbl)
apply_theme('basic')
Explanation: Loading loci
The next step is loading the loci table.
This table contains the user-provided info on each CRISPR-CAS system in the genomes.
Let's look at the table before loading it in CLdb
Checking out the CRISPR loci table
End of explanation
!CLdb -- loadLoci -h
!CLdb -- loadLoci < $lociFile
Explanation: Notes on the loci table:
* As you can see, not all of the fields have values. Some are not required (e.g., 'fasta_file').
* You will get an error if you try to load a table with missing values in required fields.
* For a list of required columns, see the documentation for CLdb -- loadLoci -h.
Loading loci info into database
End of explanation
# This is just a quick summary of the database
## It should show 10 loci for the 'loci' rows
!CLdb -- summary
Explanation: Notes on loading
A lot is going on here:
Various checks on the input files
Extracting the genome fasta sequence from each genbank file
the genome fasta is required
Loading of the loci information into the sqlite database
Notes on the command
Why didn't I use the 'required' -database flag for CLdb -- loadLoci???
I didn't have to use the -database flag because it is provided via the .CLdb config file that was previously created.
End of explanation
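# Optional cross-check from Python (added): open the sqlite file directly and list the
# tables created so far. Only sqlite_master is queried, so no CLdb table names are assumed.
import sqlite3
conn = sqlite3.connect(CLdbFile)
table_names = [row[0] for row in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")]
conn.close()
print 'tables currently in CLdb: ', table_names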
# an example array file (obtained from CRISPRFinder)
arrayFile = os.path.join(workDir, 'array', 'Ecoli_0157_H7_a1.txt')
!head $arrayFile
Explanation: The summary doesn't show anything for spacers, DRs, genes or leaders!
That's because we haven't loaded that info yet...
Loading CRISPR arrays
The next step is to load the CRISPR array tables.
These are tables in 'CRISPRFinder format' that have CRISPR array info.
Let's take a look at one of the array files before loading them all.
End of explanation
# loading CRISPR array info
!CLdb -- loadArrays
# This is just a quick summary of the database
!CLdb -- summary
Explanation: Note: the array file consists of 4 columns:
spacer start
spacer sequence
direct-repeat sequence
direct-repeat stop
All extra columns ignored!
End of explanation
geneDir = os.path.join(workDir, 'genes')
if not os.path.isdir(geneDir):
os.makedirs(geneDir)
!cd $geneDir; \
CLdb -- getGenesInLoci 2> CAS.log > CAS.txt
# checking output
!cd $geneDir; \
head -n 5 CAS.log; \
echo -----------; \
tail -n 5 CAS.log; \
echo -----------; \
head -n 5 CAS.txt
# loading gene table into the database
!cd $geneDir; \
CLdb -- loadGenes < CAS.txt
Explanation: Note: The output should show 75 spacer & 85 DR entries in the database
Loading CAS genes
Technically, all coding sequences in the region specified in the loci table (CAS_start, CAS_end) will be loaded.
This requires 2 subcommands:
The 1st gets the gene info
The 2nd loads the info into CLdb
End of explanation
!CLdb -- setSenseStrand
Explanation: Setting array sense strand
The strand that is transcribed needs to be defined in order to have the correct sequence for downstream analyses (e.g., blasting spacers and getting PAM regions)
The sense (reading) strand is defined by (order of precedence):
The leader region (if defined; in this case, no).
Array_start,Array_end in the loci table
The genome negative strand will be used if array_start > array_end
End of explanation
!CLdb -- clusterArrayElements -s -r
Explanation: Spacer and DR clustering
Clustering of spacer and/or DR sequences accomplishes:
A method of comparing within and between CRISPRs
Reduced redundancy for spacer and DR blasting
End of explanation
!CLdb -- summary -name -subtype
Explanation: Database summary
End of explanation |
5,621 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic PowerShell Execution
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. You can filter on powershell.exe as a host application if you want to or leave it without a filter to capture every single PowerShell host
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Looking for non-interactive PowerShell sessions might be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Looking for non-interactive PowerShell sessions might be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|
Step5: Analytic IV
Monitor for processes loading PowerShell DLL system.management.automation
| Data source | Event Provider | Relationship | Event |
|
Step6: Analytic V
Monitoring for PSHost* pipes is another interesting way to find PowerShell execution
| Data source | Event Provider | Relationship | Event |
|
Step7: Analytic VI
The "PowerShell Named Pipe IPC" event will indicate the name of the PowerShell AppDomain that started. Sign of PowerShell execution
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Basic PowerShell Execution
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/04/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging PowerShell to execute code within my environment
Technical Context
None
Offensive Tradecraft
Adversaries can use PowerShell to perform a number of actions, including discovery of information and execution of code.
Therefore, it is important to understand the basic artifacts left when PowerShell is used in your environment.
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/execution/SDWIN-190518182022.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/execution/host/empire_launcher_vbs.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/execution/host/empire_launcher_vbs.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Channel
FROM sdTable
WHERE (Channel = "Microsoft-Windows-PowerShell/Operational" OR Channel = "Windows PowerShell")
AND (EventID = 400 OR EventID = 4103)
'''
)
df.show(10,False)
Explanation: Analytic I
Within the classic PowerShell log, event ID 400 indicates when a new PowerShell host process has started. You can filter on powershell.exe as a host application if you want to or leave it without a filter to capture every single PowerShell host
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Windows PowerShell | Application host started | 400 |
| Powershell | Microsoft-Windows-PowerShell/Operational | User started Application host | 4103 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, NewProcessName, ParentProcessName
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4688
AND NewProcessName LIKE "%powershell.exe"
AND NOT ParentProcessName LIKE "%explorer.exe"
'''
)
df.show(10,False)
Explanation: Analytic II
Non-interactive PowerShell sessions can be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Security-Auditing | Process created Process | 4688 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ParentImage
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 1
AND Image LIKE "%powershell.exe"
AND NOT ParentImage LIKE "%explorer.exe"
'''
)
df.show(10,False)
Explanation: Analytic III
Non-interactive PowerShell sessions can be a sign of PowerShell being executed by another application in the background
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Process | Microsoft-Windows-Sysmon/Operational | Process created Process | 1 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, ImageLoaded
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 7
AND (lower(Description) = "system.management.automation" OR lower(ImageLoaded) LIKE "%system.management.automation%")
'''
)
df.show(10,False)
Explanation: Analytic IV
Monitor for processes loading PowerShell DLL system.management.automation
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Module | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Image, PipeName
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 17
AND lower(PipeName) LIKE "\\\\pshost%"
'''
)
df.show(10,False)
Explanation: Analytic V
Monitoring for PSHost* pipes is another interesting way to find PowerShell execution
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Named Pipe | Microsoft-Windows-Sysmon/Operational | Process created Pipe | 17 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Message
FROM sdTable
WHERE Channel = "Microsoft-Windows-PowerShell/Operational"
AND EventID = 53504
'''
)
df.show(10,False)
Explanation: Analytic VI
The "PowerShell Named Pipe IPC" event will indicate the name of the PowerShell AppDomain that started. Sign of PowerShell execution
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Powershell | Microsoft-Windows-PowerShell/Operational | Application domain started | 53504 |
End of explanation |
5,622 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Compression using Autoencoders with BPSK
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* joint compression and error protection of images by auto-encoders
* generation of BPSK symbols using stochastic quantizers
* transmission over a binary symmetric channel (BSC)
Step1: Illustration of a straight-through estimator
Step2: Import and load MNIST dataset (Preprocessing)
Dataloader are powerful instruments, which help you to prepare your data. E.g. you can shuffle your data, transform data (standardize/normalize), divide it into batches, ... For more information see https
Step3: Plot 8 random images
Step4: Specify Autoencoder
As explained in the lecture, we are using Stochstic Quantization. This means for the training process (def forward), we employ stochastic quantization in forward path but during back-propagation, we consider the quantization device as being
non-existent (.detach()). While validating and testing, use deterministic quantization (def test) <br>
Note
Step5: Helper function to get a random mini-batch of images
Step6: Perform the training
Step7: Evaluation
Compare sent and received images
Step8: Generate 8 arbitrary images just by sampling random bit strings | Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import numpy as np
from matplotlib import pyplot as plt
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("We are using the following device for learning:",device)
Explanation: Image Compression using Autoencoders with BPSK
This code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>
This code illustrates
* joint compression and error protection of images by auto-encoders
* generation of BPSK symbols using stochastic quantizers
* transmission over a binary symmetric channel (BSC)
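As a standalone illustration (mirroring what the model's forward pass does further below), a BSC with crossover probability p can be simulated by flipping each BPSK symbol independently:
def bsc(x, p):
    # x: tensor of BPSK symbols in {-1, +1}; p: bit-flip probability of the channel
    flips = torch.distributions.Bernoulli(p * torch.ones_like(x)).sample()
    return x * (1 - 2 * flips)   # multiply by -1 wherever a bit error occurs
# e.g. bsc(torch.tensor([1., -1., 1., 1.]), 0.05)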
End of explanation
def quantizer(x):
return torch.sign(x)
x = torch.tensor([-6,-4,-3,-2,0,1,2,3,4,5], dtype=torch.float, requires_grad=True, device=device)
q = quantizer(x)
y = torch.mean(2*q)
# calculate gradient of y
y.backward()
# print input vector
print('x:', x)
print('y:', y)
# print gradient, we can see that it is zero everywhere, as the quantizer is killing the gradient
print('Gradient w.r.t x', x.grad)
# quantizer without killing gradient
# the detach function detaches this part of function from the graph that is used to calculate the gradient
# The gradient is hence only computed with respect to x
qp = x + (quantizer(x) - x).detach()
yp = torch.mean(2*qp)
# compute gradient
# reset the gradient of x, needs to be done as pytorch accumulates the gradients
x.grad.zero_()
# compute gradient
yp.backward()
# print input vector
print('x\':', x)
print('y\':', yp)
# print gradient
print('Gradient w.r.t x\'', x.grad)
Explanation: Illustration of a straight-through estimator
End of explanation
batch_size_train = 60000 # Samples per Training Batch
batch_size_test = 10000 # just create one large test dataset (MNIST test dataset has 10.000 Samples)
# Get Training and Test Dataset with a Dataloader
train_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('./files/', train=True, download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])),
batch_size=batch_size_train, shuffle=True)
test_loader = torch.utils.data.DataLoader(
torchvision.datasets.MNIST('./files/', train=False, download=True,
transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor()])),
batch_size=batch_size_test, shuffle=True)
# We are only interested in the data and not in the targets
for idx, (data, targets) in enumerate(train_loader):
x_train = data[:,0,:,:]
for idx, (data, targets) in enumerate(test_loader):
x_test = data[:,0,:,:]
image_size = x_train.shape[1]
x_test_flat = torch.reshape(x_test, (x_test.shape[0], image_size*image_size))
Explanation: Import and load MNIST dataset (Preprocessing)
Dataloaders are powerful tools which help you to prepare your data. E.g. you can shuffle your data, transform it (standardize/normalize), divide it into batches, ... For more information see https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader <br>
In our case, we just use the dataloader to download the Dataset and preprocess the data on our own.
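For reference, a more typical pipeline that lets the DataLoader do the batching and normalization might look like the sketch below; the normalization constants are the commonly quoted MNIST mean/std and are not used elsewhere in this notebook:
example_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST('./files/', train=True, download=True,
        transform=torchvision.transforms.Compose([
            torchvision.transforms.ToTensor(),
            torchvision.transforms.Normalize((0.1307,), (0.3081,))])),
    batch_size=64, shuffle=True)
# images, labels = next(iter(example_loader))  # one shuffled mini-batch of 64 images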
End of explanation
plt.figure(figsize=(16,2))
for k in range(8):
plt.subplot(1,8,k+1)
plt.imshow(x_train[np.random.randint(x_train.shape[0])], interpolation='nearest', cmap='binary')
plt.xticks(())
plt.yticks(())
Explanation: Plot 8 random images
End of explanation
# target compression rate
bit_per_image = 24
# BSC error probability
Pe = 0.05
hidden_encoder_1 = 500
hidden_encoder_2 = 250
hidden_encoder_3 = 100
hidden_encoder = [hidden_encoder_1, hidden_encoder_2, hidden_encoder_3]
hidden_decoder_1 = 100
hidden_decoder_2 = 250
hidden_decoder_3 = 500
hidden_decoder = [hidden_decoder_1, hidden_decoder_2, hidden_decoder_3]
class Autoencoder(nn.Module):
def __init__(self, hidden_encoder, hidden_decoder, image_size, bit_per_image):
super(Autoencoder, self).__init__()
        # Define encoder layers: map the flattened image (image_size*image_size values) down to bit_per_image code bits
self.We1 = nn.Linear(image_size*image_size, hidden_encoder[0])
self.We2 = nn.Linear(hidden_encoder[0], hidden_encoder[1])
self.We3 = nn.Linear(hidden_encoder[1], hidden_encoder[2])
self.We4 = nn.Linear(hidden_encoder[2], bit_per_image)
        # Define decoder layers: map the bit_per_image received code back to a flattened image
self.Wd1 = nn.Linear(bit_per_image,hidden_decoder[0])
self.Wd2 = nn.Linear(hidden_decoder[0], hidden_decoder[1])
self.Wd3 = nn.Linear(hidden_decoder[1], hidden_decoder[2])
self.Wd4 = nn.Linear(hidden_decoder[2], image_size*image_size)
# Non-linearity (used in transmitter and receiver)
self.activation_function = nn.ELU()
self.sigmoid = nn.Sigmoid()
self.softsign = nn.Softsign()
def forward(self, training_data, Pe):
encoded = self.encoder(training_data)
        # binarize the encoder output for training (the straight-through trick below keeps the gradient)
ti = encoded.clone()
# stop gradient in backpropagation, in forward path we use the binarizer, in backward path, just the encoder output
compressed = ti + (self.binarizer(ti) - ti).detach()
# add error pattern (flip the bit or not)
error_tensor = torch.distributions.Bernoulli(Pe * torch.ones_like(compressed)).sample()
received = torch.mul( compressed, 1 - 2*error_tensor)
reconstructed = self.decoder(received)
return reconstructed
def test(self, valid_data, Pe):
encoded_test = self.encoder(valid_data)
compressed_test = self.binarizer(encoded_test)
error_tensor_test = torch.distributions.Bernoulli(Pe * torch.ones_like(compressed_test)).sample()
received_test = torch.mul( compressed_test, 1 - 2*error_tensor_test )
reconstructed_test = self.decoder(received_test)
loss_test = torch.mean(torch.square(valid_data - reconstructed_test))
reconstructed_test_noerror = self.decoder(compressed_test)
return reconstructed_test
def encoder(self, batch):
temp = self.activation_function(self.We1(batch))
temp = self.activation_function(self.We2(temp))
temp = self.activation_function(self.We3(temp))
output = self.softsign(self.We4(temp))
return output
def decoder(self, batch):
temp = self.activation_function(self.Wd1(batch))
temp = self.activation_function(self.Wd2(temp))
temp = self.activation_function(self.Wd3(temp))
output = self.sigmoid(self.Wd4(temp))
return output
def binarizer(self, input):
return torch.sign(input)
Explanation: Specify Autoencoder
As explained in the lecture, we are using Stochastic Quantization. This means that for the training process (def forward), we employ stochastic quantization in the forward path, but during back-propagation we consider the quantization device as being
non-existent (.detach()). While validating and testing, we use deterministic quantization (def test) <br>
Note: .detach() removes the tensor from the computation graph
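For comparison, a genuinely stochastic binarizer (sampling the sign from a Bernoulli distribution, combined with the same straight-through trick) could be sketched as below; the Autoencoder class above keeps the simpler deterministic sign():
def stochastic_binarizer(x):
    # x is the softsign output in (-1, 1); P(+1) = (1 + x) / 2
    sample = 2 * torch.bernoulli((x + 1) / 2) - 1   # random +/-1 symbols
    return x + (sample - x).detach()                # straight-through gradient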
End of explanation
def get_batch(x, batch_size):
idxs = np.random.randint(0, x.shape[0], (batch_size))
return torch.stack([torch.reshape(x[k], (-1,)) for k in idxs])
Explanation: Helper function to get a random mini-batch of images
End of explanation
batch_size = 250
model = Autoencoder(hidden_encoder, hidden_decoder, image_size, bit_per_image)
model.to(device)
# Mean Squared Error loss
loss_fn = nn.MSELoss()
# Adam Optimizer
optimizer = optim.Adam(model.parameters())
print('Start Training') # Training loop
for it in range(25000): # Original paper does 50k iterations
mini_batch = torch.Tensor(get_batch(x_train, batch_size)).to(device)
# Propagate (training) data through the net
reconstructed = model(mini_batch, Pe)
# compute loss
loss = loss_fn(mini_batch, reconstructed)
# compute gradients
loss.backward()
# Adapt weights
optimizer.step()
# reset gradients
optimizer.zero_grad()
# Evaulation with the test data
if it % 1000 == 0:
reconstructed_test = model.test(x_test_flat.to(device), Pe)
loss_test = torch.mean(torch.square(x_test_flat.to(device) - reconstructed_test))
print('It %d: Loss %1.5f' % (it, loss_test.detach().cpu().numpy().squeeze()))
print('Training finished')
Explanation: Perform the training
End of explanation
valid_images = model.test(x_test_flat.to(device), Pe).detach().cpu().numpy()
valid_binary = 0.5*(1 - model.binarizer(model.encoder(x_test_flat.to(device)))).detach().cpu().numpy() # from bipolar (BPSK) to binary
# show 8 images and their reconstructed versions
plt.figure(figsize=(16,4))
idxs = np.random.randint(x_test.shape[0],size=8)
for k in range(8):
plt.subplot(2,8,k+1)
plt.imshow(np.reshape(x_test_flat[idxs[k]], (image_size,image_size)), interpolation='nearest', cmap='binary')
plt.xticks(())
plt.yticks(())
plt.subplot(2,8,k+1+8)
plt.imshow(np.reshape(valid_images[idxs[k]], (image_size,image_size)), interpolation='nearest', cmap='binary')
plt.xticks(())
plt.yticks(())
# print binary data of the images
for k in range(8):
print('Image %d: ' % (k+1), valid_binary[idxs[k],:])
Explanation: Evaluation
Compare sent and received images
End of explanation
random_data = 1-2*np.random.randint(2,size=(8,bit_per_image))
generated_images = model.decoder(torch.Tensor(random_data).to(device)).detach().cpu().numpy()
plt.figure(figsize=(16,2))
for k in range(8):
plt.subplot(1,8,k+1)
plt.imshow(np.reshape(generated_images[k],(image_size,image_size)), interpolation='nearest', cmap='binary')
plt.xticks(())
plt.yticks(())
Explanation: Generate 8 arbitrary images just by sampling random bit strings
End of explanation |
5,623 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
author = "Peter J Usherwood"
This tutorial is an example of an application in standard Python, without non-standard packages. Because of this, the code is not the simplest or most efficient, but it is transparent and a good tool for learning both Python and machine-learning classifiers. It is also a useful application in its own right and will give the same results as other packages.
The application we are going to build here is a classification tree. A classification tree is a classical machine-learning model used to predict the class of a set of observations. The technique is used today in many commercial applications. It is a model that learns by reading many examples of data where the class is known, in order to learn the rules that assign each record to a class.
Step1: Loading the data and pre-processing
Before we start building our classification tree we need to create some helper functions. First we need to
Step2: Next we need to make the data contain the same quantity of each class. This is important for most machine-learning classifiers because otherwise the classifier would predict the mode every time, since that would give the best accuracy.
We also need to split our data into two sets
Step9: Next we can start building our tree.
The tree works by splitting the data records into groups where the distribution of classes is distinct; it does this many times until a good prediction can be made.
Each time the tree makes a split it is called a node.
To make the splits at a node, the tree takes a value of one feature and looks at what would happen to the class distribution if it split all the records on that value of that feature. If the classes end up in groups that differ more, that is good. To choose which value of which feature to use, the tree iterates through every feature of every record in the data. It then compares all the splits and chooses the best one. The measure of how well separated the classes are is the Gini index.
To start, we will create the function that performs the split at a node; we will call it obter_melhor_divisão.
Step11: Finally, we create a simple function that we will run to build the tree.
Step12: And run it!
Step13: Now we can use our tree to predict the class of new data.
Step14: Now we can make predictions using our prever function. It is best to use records from our test set because the tree has not seen them before. We can make a prediction and then compare the result to the actual class.
Step15: Next we will create a function that compares every record in our test set and gives us the accuracy. Accuracy is defined as the percentage the tree predicted correctly. | Python Code:
from random import seed
from random import randrange
import random
from csv import reader
from math import sqrt
import copy
Explanation: author = "Peter J Usherwood"
This tutorial is an example of an application in standard Python, without non-standard packages. Because of this, the code is not the simplest or most efficient, but it is transparent and a good tool for learning both Python and machine-learning classifiers. It is also a useful application in its own right and will give the same results as other packages.
The application we are going to build here is a classification tree. A classification tree is a classical machine-learning model used to predict the class of a set of observations. The technique is used today in many commercial applications. It is a model that learns by reading many examples of data where the class is known, in order to learn the rules that assign each record to a class.
End of explanation
# carregar o arquivo de CSV
def carregar_csv(nome_arquivo):
dados = list()
with open(nome_arquivo, 'r') as arquivo:
leitor_csv = reader(arquivo)
for linha in leitor_csv:
if not linha:
continue
dados.append(linha)
return dados
def str_coluna_para_int(dados, coluna):
    # converte os valores de string de uma coluna em inteiros
    valores_de_classe = [linha[coluna] for linha in dados]
    unique = set(valores_de_classe)
    lookup = dict()
    for i, value in enumerate(unique):
        lookup[value] = i
    for linha in dados:
        linha[coluna] = lookup[linha[coluna]]
return lookup
# Convert string column to float
def str_column_to_float(dataset, column):
column = column
dataset_copy = copy.deepcopy(dataset)
for row in dataset_copy:
row[column] = float(row[column].strip())
return dataset_copy
# carregar os dados
arquivo = '../../data_sets/sonar.all-data.csv'
dados = carregar_csv(arquivo)
# converte atributos de string para números inteiros
for i in range(0, len(dados[0])-1):
dados = str_column_to_float(dados, i)
dados_X = [linha[:-1] for linha in dados]
dados_Y = [linha[-1] for linha in dados]
Explanation: Loading the data and pre-processing
Before we start building our classification tree we need to create some helper functions. First we need to load the CSV file and convert its string attributes to numbers, as done above.
End of explanation
def equilibrar_as_classes(dados_X, dados_Y):
classes = set(dados_Y)
conta_min = len(dados_Y)
for classe in classes:
conta = dados_Y.count(classe)
if conta < conta_min:
conta_min = conta
dados_igual_X = []
dados_igual_Y = []
indíces = set()
for classe in classes:
while len(dados_igual_Y) < len(classes)*conta_min:
indíce = random.randint(0,len(dados_X)-1)
classe = dados_Y[indíce]
if (indíce not in indíces) and (dados_igual_Y.count(classe) < conta_min):
indíces.update([indíce])
dados_igual_X.append(dados_X[indíce])
dados_igual_Y.append(dados_Y[indíce])
return dados_igual_X, dados_igual_Y
def criar_divisão_trem_teste(dados_X, dados_Y, relação=.8):
classes = set(dados_Y)
n_classes = len(classes)
trem_classe_tamanho = int((len(dados_Y)*relação)/n_classes)
indíces_todo = set(range(len(dados_X)))
indíces_para_escolher = set(range(len(dados_X)))
indíces = set()
trem_X = []
trem_Y = []
teste_X = []
teste_Y = []
while len(trem_Y) < trem_classe_tamanho*n_classes:
indíce = random.choice(list(indíces_para_escolher))
indíces_para_escolher.remove(indíce)
classe = dados_Y[indíce]
if (trem_Y.count(classe) < trem_classe_tamanho):
indíces.update([indíce])
trem_X.append(dados_X[indíce])
trem_Y.append(dados_Y[indíce])
indíces_teste = indíces_todo - indíces
for indíce in indíces_teste:
teste_X.append(dados_X[indíce])
teste_Y.append(dados_Y[indíce])
return trem_X, trem_Y, teste_X, teste_Y
dados_igual_X, dados_igual_Y = equilibrar_as_classes(dados_X, dados_Y)
trem_X, trem_Y, teste_X, teste_Y = criar_divisão_trem_teste(dados_igual_X, dados_igual_Y, relação=.8)
Explanation: Next we need to make the data contain the same quantity of each class. This is important for most machine-learning classifiers because otherwise the classifier would predict the mode every time, since that would give the best accuracy.
We also need to split our data into two sets: a training set and a test set. We will not look at our test set; we will only use it at the end to report an accuracy. The training set is what we will use to train our tree. Typically we use 80% of our data for training and 20% for testing.
We could do these steps in the other order; the result is more or less the same.
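A quick sanity check of the balancing and the split (not part of the original code) is to count the classes before and after:
from collections import Counter
print(Counter(dados_Y))           # original, possibly unbalanced class counts
print(Counter(dados_igual_Y))     # balanced counts, equal for every class
print(len(trem_Y), len(teste_Y))  # roughly an 80/20 split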
End of explanation
def obter_melhor_divisão(dados_X, dados_Y, n_características=None):
Obter a melhor divisão pelo dados
:param dados_X: Lista, o conjuncto de dados
:param dados_Y: Lista, os classes
:param n_características: Int, o numero de características para usar, quando você está usando a árvore sozinha fica
esta entrada em None
:return: dicionário, pela melhor divisáo, o indíce da característica, o valor para dividir, e os groupos de registors
resultandos da divisão
classes = list(set(dados_Y)) #lista único de classes
b_indíce, b_valor, b_ponto, b_grupos = 999, 999, 999, None
# Addicionar os classes (dados_Y) para os registros
for i in range(len(dados_X)):
dados_X[i].append(dados_Y[i])
dados = dados_X
if n_características is None:
        n_características = len(dados_X[0]) - 1
# Faz uma lista de características únicos para usar
características = list()
while len(características) < n_características:
        indíce = randrange(len(dados_X[0]) - 1)
if indíce not in características:
características.append(indíce)
for indíce in características:
for registro in dados_X:
grupos = tentar_divisão(indíce, registro[indíce], dados_X, dados_Y)
gini = gini_indíce(grupos, classes)
if gini < b_ponto:
b_indíce, b_valor, b_ponto, b_grupos = indíce, registro[indíce], gini, grupos
return {'indíce':b_indíce, 'valor':b_valor, 'grupos':b_grupos}
def tentar_divisão(indíce, valor, dados_X, dados_Y):
Dividir o dados sobre uma característica e o valor da caracaterística dele
:param indíce: Int, o indíce da característica
:param valor: Float, o valor do indíce por um registro
:param dados_X: List, o conjuncto de dados
:param dados_Y: List, o conjuncto de classes
:return: esquerda, direitaç duas listas de registros dividou de o valor de característica
esquerda_X, esquerda_Y, direita_X, direita_Y = [], [], [], []
for linha_ix in range(len(dados_X)):
if dados_X[linha_ix][indíce] < valor:
esquerda_X.append(dados_X[linha_ix])
esquerda_Y.append(dados_Y[linha_ix])
else:
direita_X.append(dados_X[linha_ix])
direita_Y.append(dados_Y[linha_ix])
return esquerda_X, esquerda_Y, direita_X, direita_Y
def gini_indíce(grupos, classes):
Calcular o indíce-Gini pelo dados diversão
:param grupos: O grupo de registros
:param classes: O conjuncto de alvos
:return: gini, Float a pontuação de pureza
grupos_X = grupos[0], grupos[2]
grupos_Y = grupos[1], grupos[3]
gini = 0.0
for valor_alvo in classes:
for grupo_ix in [0,1]:
tomanho = len(grupos_X[grupo_ix])
if tomanho == 0:
continue
            proporção = grupos_Y[grupo_ix].count(valor_alvo) / float(tomanho)
gini += (proporção * (1.0 - proporção))
return gini
# Now that we can find the best split once, we need to do it many times and return the tree's answer
def to_terminal(grupo_Y):
Voltar o valor alvo para uma grupo no fim de uma filial
:param grupo_Y: O conjuncto de classes em um lado de uma divisão
:return: valor_de_alvo, Int
valor_de_alvo = max(set(grupo_Y), key=grupo_Y.count)
return valor_de_alvo
def dividir(nó_atual, profundidade_max, tamanho_min, n_características, depth):
Recursivo, faz subdivisões por um nó ou faz um terminal
:param nó_atual: o nó estar analisado agora, vai mudar o root
:param max_profundidade: Int, o número máximo de iterações
esquerda_X, esquerda_Y, direita_X, direita_Y = nó_atual['grupos']
del(nó_atual['grupos'])
# provar por um nó onde um dos lados tem todos os dados
if not esquerda_X or not direita_X:
nó_atual['esquerda'] = nó_atual['direita'] = to_terminal(esquerda_Y + direita_Y)
return
# provar por profundidade maximo
if depth >= profundidade_max:
nó_atual['esquerda'], nó_atual['direita'] = to_terminal(esquerda_Y), to_terminal(direita_Y)
return
# processar o lado esquerda
if len(esquerda_X) <= tamanho_min:
nó_atual['esquerda'] = to_terminal(esquerda_Y)
else:
nó_atual['esquerda'] = obter_melhor_divisão(esquerda_X, esquerda_Y, n_características)
        dividir(nó_atual['esquerda'], profundidade_max, tamanho_min, n_características, depth+1)
# processar o lado direita
if len(direita_X) <= tamanho_min:
nó_atual['direita'] = to_terminal(direita_Y)
else:
nó_atual['direita'] = obter_melhor_divisão(direita_X, direita_Y, n_características)
        dividir(nó_atual['direita'], profundidade_max, tamanho_min, n_características, depth+1)
Explanation: Next we can start building our tree.
The tree works by splitting the data records into groups where the distribution of classes is distinct; it does this many times until a good prediction can be made.
Each time the tree makes a split it is called a node.
To make the splits at a node, the tree takes a value of one feature and looks at what would happen to the class distribution if it split all the records on that value of that feature. If the classes end up in groups that differ more, that is good. To choose which value of which feature to use, the tree iterates through every feature of every record in the data. It then compares all the splits and chooses the best one. The measure of how well separated the classes are is the Gini index.
To start, we will create the function that performs the split at a node; we will call it obter_melhor_divisão.
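A tiny worked example of the Gini measure, independent of the tutorial's functions: for a candidate split that puts the classes ['R', 'R', 'M'] on the left and ['M', 'M'] on the right,
def gini_de_exemplo(grupos_de_classes):
    # Illustrative only: sum p*(1-p) over every class in every group (unweighted, as in the tutorial)
    gini = 0.0
    for grupo in grupos_de_classes:
        if not grupo:
            continue
        for classe in set(grupo):
            p = grupo.count(classe) / float(len(grupo))
            gini += p * (1.0 - p)
    return gini
print(gini_de_exemplo([['R', 'R', 'M'], ['M', 'M']]))  # ~0.444: the left group is impure, the right group is pure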
End of explanation
def criar_árvore(trem_X, trem_Y, profundidade_max, tamanho_min, n_características):
Criar árvore
:param:
root = obter_melhor_divisão(trem_X, trem_Y, n_características)
dividir(root, profundidade_max, tamanho_min, n_características, 1)
return root
Explanation: Finally, we create a simple function that we will run to build the tree.
End of explanation
n_características = len(dados_X[0])-1
profundidade_max = 10
tamanho_min = 1
árvore = criar_árvore(trem_X, trem_Y, profundidade_max, tamanho_min, n_características)
Explanation: And run it!
End of explanation
def prever(nó, linha):
if linha[nó['indíce']] < nó['valor']:
if isinstance(nó['esquerda'], dict):
return prever(nó['esquerda'], linha)
else:
return nó['esquerda']
else:
if isinstance(nó['direita'], dict):
return prever(nó['direita'], linha)
else:
return nó['direita']
Explanation: Now we can use our tree to predict the class of new data.
End of explanation
teste_ix = 9
print('A classe preveu da árvore é: ', str(prever(árvore, teste_X[teste_ix])))
print('A classe atual é: ', str(teste_Y[teste_ix][-1]))
Explanation: Now we can make predictions using our prever function. It is best to use records from our test set because the tree has not seen them before. We can make a prediction and then compare the result to the actual class.
End of explanation
def precisão(teste_X, teste_Y, árvore):
pontos = []
for teste_ix in range(len(teste_X)):
preverção = prever(árvore, teste_X[teste_ix])
if preverção == teste_Y[teste_ix]:
pontos += [1]
else:
pontos += [0]
precisão_valor = sum(pontos)/len(pontos)
return precisão_valor, pontos
precisão_valor = precisão(teste_X, teste_Y, árvore)[0]
precisão_valor
Explanation: Next we will create a function that compares every record in our test set and gives us the accuracy. Accuracy is defined as the percentage the tree predicted correctly.
End of explanation |
5,624 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Avani Goyal, Nathaniel Dirks
Step1: Load the RECS dataset into the memory.
It is loaded in two different variables to use it for two different purposes.
1. datanames
Step2: Preliminary analysis of dataset
The dataset is categorized for different regions such as 'midatlantic' (regional division - #2) and 'westsouthcentral' (regional division - #7)
Step3: 'TOTALBTU' column represents the total energy consumption including electricity and other fuels like natural gas. Each regional dataset is plotted to observe the individual trends and to get a comparative picture.
Step4: The individual trends are similar and show an almost linear horizontal line.
'MIDATLANTIC' region is selected for carrying out further analysis and build a regression model for predicting energy consumption values.
Step5: Space heating energy consumption is analyzed against the dollar cost for space heating use to observe the correlation and check if it can be used for regression modeling.
Step6: Plotting a linear least squares fit line.
The line is observed to see the trendline of the randomly distributed data.
Step7: The least square fit line is observed to be almost horizontal suggesting uniform distribution of the data across the mean value of 104,896 BTU.
Selection of highest correlated variables impacting total energy consumption.
Preliminarily, names and explanation of the variables are obtained by the 'public layout' file.
Step8: Different variables are checked for their correlation value with the total energy consumption(TOTALBTU) based on manual understanding of the variables as shown below.
Step9: Result
Step10: Multivariable regression modeling for midatlantic residential energy consumption
The top predictor variables are plotted against total the energy consumption values to visualize the trend.
Step11: Base function for making designmatrix, beta_hat and R2 coefficents are defined for multi-variable regression modeling.
Step12: To remove the outliers, 'k' is defined as the cutoff above which the data will be trimmed. A 'for' loop is run below to optimize the 'k' value to obtain the maximum value of the R2 coefficient.
Step13: Using the results from above, the final dataset is created after removing the outliers having a value below k_max
Step14: Split the final dataset into train and test data
Step15: Validation
Step16: Mean of one variable is compared for both test and train dataset to check for significant difference between them.
Step17: Cross-validation
Step18: Three pairs of train and test datasets are created for cross validation purpose using the three datasets.
Step19: Final Result
Step20: Calculate uncertainties using 95% confidence intervals corresponding to t-distribution
This is calculated using the first train dataset created and the average beta_hat matrix. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import datetime as dt
from operator import itemgetter
import math
%matplotlib inline
Explanation: Avani Goyal, Nathaniel Dirks :
12752 :
Final Project
Due: 12/13/2015
End of explanation
f= open('recs2009_public.csv','r')
datanames = np.genfromtxt(f,delimiter=',', names=True,dtype=None)
data1 = np.genfromtxt('recs2009_public.csv',delimiter=',', skip_header=1)
Explanation: Load the RECS dataset into the memory.
It is loaded in two different variables to use it for two different purposes.
1. datanames: It stores RECS dataset along with the column names in tuple format
2. data1: It stores the data into a structured array format to be used for running iterations across all columns
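A quick illustration of the difference, assuming (as the bestcorrelation function below does) that TOTALBTU sits at column index 907 of this file:
# datanames: access a column by its name
print datanames['TOTALBTU'][:5]
# data1: access the same column by its position
print data1[:5, 907]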
End of explanation
midatlantic = datanames[np.where(datanames['DIVISION']==2)]
# print midatlantic[0]
print midatlantic.shape
wesouthcen = datanames[np.where(datanames['DIVISION']==7)]
# wesouthcen[0]
print wesouthcen.shape
Explanation: Preliminary analysis of dataset
The dataset is categorized for different regions such as 'midatlantic' (regional division - #2) and 'westsouthcentral' (regional division - #7)
End of explanation
plt.plot(midatlantic['TOTALBTU'], 'rd')
plt.plot(wesouthcen['TOTALBTU'], 'bd')
Explanation: 'TOTALBTU' column represents the total energy consumption including electricity and other fuels like natural gas. Each regional dataset is plotted to observe the individual trends and to get a comparative picture.
End of explanation
plt.hist(midatlantic['TOTALBTU'],bins=100)
Explanation: The individual trends are similar and show an almost linear horizontal line.
'MIDATLANTIC' region is selected for carrying out further analysis and build a regression model for predicting energy consumption values.
End of explanation
plt.plot(midatlantic['TOTALBTUSPH'],midatlantic['TOTALDOLSPH'], 'rd')
plt.xlabel('Space Heating Energy consumption (BTU)')
plt.ylabel('Total cost for space heating ($)')
Explanation: Space heating energy consumption is analyzed against the dollar cost for space heating use to observe the correlation and check if it can be used for regression modeling.
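A single number quantifies what the scatter plot suggests; a one-line check using the same np.corrcoef call as the later analysis:
# correlation between space-heating consumption and its dollar cost
print np.corrcoef(midatlantic['TOTALBTUSPH'], midatlantic['TOTALDOLSPH'])[1,0]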
End of explanation
xi = np.arange(0,1328)
A = np.array([ xi, np.ones(1328)])
# linearly generated sequence
y = midatlantic['TOTALBTU']
# obtaining the parameters
w = np.linalg.lstsq(A.T,y)[0]
xa = np.arange(0,1328,5)
y = y[0:-1:5]
# plotting the regression line
line = w[0]*xa+w[1]
plt.plot(xa,line,'ro',xa,y)
plt.title('Linear least squares fit line')
plt.ylabel('Total energy usage (BTU)')
plt.show()
print "Average value of energy consumption (BTU):"
print np.average(y)
Explanation: Plotting a linear least squares fit line.
The fitted line shows the overall trend of the randomly distributed data.
End of explanation
names = np.genfromtxt('public_layout.csv', delimiter=',',skip_header=1,dtype=None,usecols=[1])
print names
Explanation: The least square fit line is observed to be almost horizontal suggesting uniform distribution of the data across the mean value of 104,896 BTU.
Selection of highest correlated variables impacting total energy consumption.
Preliminarily, the names and explanations of the variables are obtained from the 'public layout' file.
End of explanation
np.corrcoef(midatlantic['WINDOWS'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['TOTSQFT_EN'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['TEMPHOME'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['NWEIGHT'],midatlantic['TOTALBTU'])[1,0]
years = lambda d : ((dt.datetime.now()).year - d)
yearsold = np.array(list(map(years, midatlantic['YEARMADE'])))
midatlantic['YEARMADE']
print yearsold
np.corrcoef(midatlantic['YEARMADE'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['TOTROOMS'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['NHSLDMEM'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['MONEYPY'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['STORIES'],midatlantic['TOTALBTU'])[1,0]
np.corrcoef(midatlantic['WASHTEMP'],midatlantic['TOTALBTU'])[1,0]
Explanation: Different variables are checked for their correlation value with the total energy consumption(TOTALBTU) based on manual understanding of the variables as shown below.
End of explanation
data1_ma = data1[(np.where(data1[:,2]==2))]
def bestcorrelation(X):
vector = np.zeros((len(X.T), 2))
for i in range(len(X.T)):
vector[i,0] = int(i)
vector[i,1] = np.corrcoef(X[:,i],X[:,907])[1,0]
return vector
v = bestcorrelation(data1_ma)
plt.plot(v[:,1])
highcorr = v[(np.where(v[:,1]>=0.47))]
print "Variable with correlation values greater than 0.53: "
print highcorr
Explanation: Result: The top factors based on the manual selection of variables are 'TOTSQFT_EN', 'TOTROOMS' and 'WINDOWS' with the correlation coefficient values ranging from 0.49 - 0.55.
This is further validated by running an iteration using a 'for' loop to obtain correlation coefficient values for all 931 variables.
End of explanation
fig = plt.figure(1)
fig.set_size_inches(15, 4)
ax1 = fig.add_subplot(1,3,1)
ax1.plot((data[:,0]),(data[:,3]),'ro')
ax1.set_title("Total sqft")
ax1.set_ylabel("Energy consumption (BTU)")
ax2 = fig.add_subplot(1,3,2)
ax2.plot((data[:,1]),(data[:,3]),'bo')
ax2.set_title("Total rooms")
ax2.set_ylabel("Energy consumption (BTU)")
ax3 = fig.add_subplot(1,3,3)
ax3.plot((data[:,2]),(data[:,3]),'ro')
ax3.set_title("Total windows")
ax3.set_ylabel("Energy consumption (BTU)")
plt.show()
Explanation: Multivariable regression modeling for midatlantic residential energy consumption
The top predictor variables are plotted against the total energy consumption values to visualize the trend.
End of explanation
def designmatrix(var1, var2, var3):
designmatrix = np.vstack((var1, var2, var3))
designmatrix = designmatrix.T
return designmatrix
def beta_hat(X,Y):
dotp = np.dot(X.T,X)
Ainv = np.linalg.inv(dotp)
final = np.dot(Ainv,X.T)
final = np.dot(final,Y)
return final
def R2(X,Y,beta_hat):
m2 = Y-np.dot(X,beta_hat)
m1 = m2.T
y_avg =np.mean(Y)
n2 = Y - y_avg
n1 = n2.T
R2_value = 1 - ((np.dot(m1,m2))/(np.dot(n1,n2)))
return R2_value
Explanation: Base functions for building the design matrix, the beta_hat estimate (ordinary least squares: beta_hat = (X^T X)^-1 X^T Y) and the R2 coefficient are defined for multi-variable regression modeling.
End of explanation
R2_max = 0
for k in range(150000,400000,10000):
newdata = midatlantic[np.where(midatlantic['TOTALBTU']<k)]
data = newdata['TOTSQFT_EN'],newdata['TOTROOMS'],newdata['WINDOWS'],newdata['TOTALBTU']
data = np.transpose(data)
data_sorted = sorted(data, key=itemgetter(1))
#Divide
data = data[0:-1]
data_train = data[::2]
data_test = data[1::2]
#Train dataset
area_train = data_train[:,0]
rooms_train = data_train[:,1]
windows_train = data_train[:,2]
btu_train = data_train[:,3]
dmx1 = designmatrix(area_train,rooms_train,windows_train)
beta_hat1 = beta_hat(dmx1,btu_train)
#Test dataset
area_test = data_test[:,0]
rooms_test = data_test[:,1]
windows_test = data_test[:,2]
btu_test = data_test[:,3]
dmx2 = designmatrix(area_test,rooms_test,windows_test)
btu_pre = np.dot(dmx2,beta_hat1)
R2_val = R2(dmx2,btu_test,beta_hat1)
plt.plot(k,R2_val,'ro')
plt.title('Distribution of R2 values')
plt.xlabel('Cutoff values of outlier (k)')
plt.ylabel('R2 value')
if R2_max < R2_val:
R2_max = R2_val
k_max = k
else:
R2_max = R2_max
k_max = k_max
print "Maximum value of R2: ",R2_max
print "At k value (k_max): ",k_max
btu_test.shape
Explanation: To remove the outliers, 'k' is defined as the cutoff above which the data will be trimmed. A 'for' loop is run below to optimize the 'k' value to obtain the maximum value of the R2 coefficient.
End of explanation
newdata = midatlantic[np.where(midatlantic['TOTALBTU']<k_max)]
data = newdata['TOTSQFT_EN'],newdata['TOTROOMS'],newdata['WINDOWS'],newdata['TOTALBTU']
data = np.transpose(data)
Explanation: Using the results from above, the final dataset is created after removing the outliers having a value below k_max
End of explanation
# Data is sorted on number of total rooms
data_sorted = sorted(data, key=itemgetter(1))
# Divide alternative values are taken henceforth for train and test dataset
data_sorted = np.array(data_sorted[0:-1])
data_train1 = np.array(data_sorted[::2])
data_test1 = np.array(data_sorted[1::2])
data_sorted
Explanation: Split the final dataset into train and test data
End of explanation
def validation(data_train,data_test):
#Train dataset
btu_train = data_train[:,3]
dmx1 = designmatrix(data_train[:,0],data_train[:,1],data_train[:,2])
beta_hat1 = beta_hat(dmx1,btu_train)
#Test dataset
btu_test = data_test[:,3]
dmx2 = designmatrix(data_test[:,0],data_test[:,1],data_test[:,2])
btu_pre = np.dot(dmx2,beta_hat1)
R2_val = R2(dmx2,btu_test,beta_hat1)
print "R2 value is: ",R2_val
plt.plot(data_test[:,0],btu_test,'.b')
plt.plot(data_test[:,0],btu_pre,'.r')
plt.legend(['Actual data','Predicted data'])
plt.title('Validation of model')
print "Beta matrix:",beta_hat1
return (beta_hat1, R2_val)
beta1, R2_1 = validation(data_train1,data_test1)
Explanation: Validation:
'Validation' function is created to build the model and make predictions for the energy consumption of test dataset.
It takes train dataset and test dataset as input and returns the R2 value and beta_matrix as output.
It gives a plot to observe the comparison between actual and predicted values.
End of explanation
print np.mean(data_test1[:,0])
print np.mean(data_train1[:,0])
print np.mean(data_test1[:,1])
print np.mean(data_train1[:,1])
Explanation: Mean of one variable is compared for both test and train dataset to check for significant difference between them.
End of explanation
print data_sorted
first = np.array(data_sorted[::3])
second = np.array(data_sorted[1::3])
third = np.array(data_sorted[2::3])
print "First dataset[0]:",first[0]
print "Second dataset[0]:",second[0]
print "Third dataset[0]:",third[0]
Explanation: Cross-validation:
The data has been split into three equal parts by selecting every third value for a dataset starting at different points.
End of explanation
data_train2 = np.vstack((first,second))
data_test2 = np.array(third)
print "Second split of datasets"
print data_train2.shape
print data_test2.shape
data_train3 = np.vstack((first,third))
data_test3 = np.array(second)
print "Third split of datasets"
print data_train3.shape
print data_test3.shape
data_train4 = np.vstack((third,second))
data_test4 = np.array(first)
print "Fourth split of datasets"
print data_train4.shape
print data_test4.shape
beta2, R2_2 = validation(data_train2,data_test2)
beta3, R2_3 = validation(data_train3,data_test3)
beta4, R2_4 = validation(data_train4,data_test4)
Explanation: Three pairs of train and test datasets are created for cross validation purpose using the three datasets.
End of explanation
l = [R2_1,R2_2,R2_3,R2_4]
R2_avg = np.mean(l)
print "Mean R2 value: ",R2_avg
beta_avg = np.mean([beta1,beta2,beta3,beta4],axis=0)
print "Mean Beta_hat matrix: ",beta_avg
Explanation: Final Result: Mean values of R2 and Beta_hat matrices
End of explanation
# calculating error matrix: (Y-XB)
btu_test = data_test1[:,3]
dmx2 = designmatrix(data_test1[:,0],data_test1[:,1],data_test1[:,2])
error = btu_test - np.dot(dmx2,beta_avg)
# defining N for the number of data points in the test dataset
N = error.size
# defining the number of co-efficients in the beta_hat matrix
p = beta_avg.size
X = dmx2
print "N=",N
print "p=",p
#squaring of error matrix is calculated by multiplying by its transpose
errormatrix = (np.dot(error,error.T))/(N-p-1)
# print "Standard mean error:",errormatrix
s_var = errormatrix*(np.linalg.inv(np.dot(X.T,X)))
# print s_var
import math
sqrt = lambda d: (math.sqrt(d))
s_dev = map(sqrt,np.diag(s_var))
# s_dev
from scipy.stats import t
T_val = t.isf((1-0.95)/2,(N-p-1))
max_val = beta_avg + np.dot(T_val,s_dev)
min_val = beta_avg - np.dot(T_val,s_dev)
print "Base value: "+str(np.round(beta_avg, decimals=1))
print "Maximum value: "+str(np.round(max_val, decimals=1))
print "Minimum value: "+str(np.round(min_val, decimals=1))
Explanation: Calculate uncertainties using 95% confidence intervals corresponding to t-distribution
This is calculated using the first train dataset created and the average beta_hat matrix.
End of explanation |
5,625 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding Context to Word Frequency Counts
While the raw data from word frequency counts is compelling, it does little but describe quantitative features of the corpus. In order to determine if the statistics are indicative of a trend in word usage we must add value to the word frequencies. In this exercise we will produce a ratio of the occurences of privacy to the number of words in the entire corpus. Then we will compare the occurences of privacy to the indivudal number of transcripts within the corpus. This data will allow us identify trends that are worthy of further investigation.
Finally, we will determine the number of words in the corpus as a whole and investigate the 50 most common words by creating a frequency plot. The last statistic we will generate is the type/token ratio, which is a measure of the variability of the words used in the corpus.
Part 1
Step1: In the next piece of code we will cycle through our directory again
Step2: Here we recreate our list from the last exercise, counting the instances of the word privacy in each file.
Step3: Next we use the len function to count the total number of words in each file.
Step4: Now we can calculate the ratio of the word privacy to the total number of words in the file. To accomplish this we simply divide the two numbers.
Step5: Now our descriptive statistics concerning word frequencies have added value. We can see that there has indeed been a steady increase in the frequency of the use of the word privacy in our corpus. When we investigate the yearly usage, we can see that the frequency almost doubled between 2008 and 2009, as well as dramatic increase between 2012 and 2014. This is also apparent in the difference between the 39th and the 40th sittings of Parliament.
Let's package all of the data together so it can be displayed as a table or exported to a CSV file. First we will write our values to a list
Step6: Using the tabulate module, we will display our tuple as a table.
Step7: And finally, we will write the values to a CSV file called privacyFreqTable.
Step8: Part 2
Step9: Now, we can count the number of files in each dataset. This is also an important activity for error-checking. While it is easy to trust the numerical output of the code when it works sucessfully, we must always be sure to check that the code is actually performing in exactly the way we want it to. In this case, these numbers can be cross-referenced with the original XML data, where each transcript exists as its own file. A quick check of the directory shows that the numbers are correct.
Step10: Here is a screenshot of some of the raw data. We can see that there are <u>97</u> files in 2006, <u>117</u> in 2007 and <u>93</u> in 2008. The rest of the data is also correct.
<img src="filecount.png">
Now we can compare the amount of occurences of privacy with the number of debates occuring in each dataset.
Step11: These numbers confirm our earlier results. There is a clear indication that the usage of the term privacy is increasing, with major changes occuring between the years 2008 and 2009, as well as between 2012 and 2014. This trend is also clearly obervable between the 39th and 40th sittings of Parliament.
Part 3
Step12: Now we will combine the three lists into one large list and assign it to the variable large.
Step13: We can use the same calculations to determine the total number of occurences of privacy, as well as the total number of words in the corpus. We can also calculate the total ratio of privacy to the total number of words.
Step14: Another type of word frequency statistic we can generate is a type/token ratio. The types are the total number of unique words in the corpus, while the tokens are the total number of words. The type/token ratio is used to determine the variability of the language used in the text. The higher the ratio, the more complex the text will be. First we'll determine the total number of types, using <i>Python's</i> set function.
Step15: Now we can divide the types by the tokens to determine the ratio.
Step16: Finally, we will use the NLTK module to create a graph that shows the top 50 most frequent words in the Hansard corpus. Although privacy will not appear in the graph, it's always interesting to see what types of words are most common, and what their distribution is. NLTK will be introduced with more detail in the next section featuring concordance outputs, but here all we need to know is that we assign our variable large to the NLTK function Text in order to work with the corpus data. From there we can determine the frequency distribution for the whole text.
Step17: Here we will assign the frequency distribution to the plot function to produce a graph. While it's a little hard to read, the most commonly used word in the Hansard corpus is the, with a frequency just over 400,000 occurences. The next most frequent word is to, which only has a frequency of about 225,000 occurences, almost half of the first most common word. The first 10 most frequent words appear with a much greater frequency than any of the other words in the corpus.
Step18: Another feature of the NLTK frequency distribution function is the generation of a list of hapaxes. These are words that appear only once in the entire corpus. While not meaningful for this study, it's an interesting way to explore the data. | Python Code:
# This is where the modules are imported
import nltk
from os import listdir
from os.path import splitext
from os.path import basename
from tabulate import tabulate
# These functions iterate through the directory and create a list of filenames
def list_textfiles(directory):
"Return a list of filenames ending in '.txt'"
textfiles = []
for filename in listdir(directory):
if filename.endswith(".txt"):
textfiles.append(directory + "/" + filename)
return textfiles
def remove_ext(filename):
"Removes the file extension, such as .txt"
name, extension = splitext(filename)
return name
def remove_dir(filepath):
"Removes the path from the file name"
name = basename(filepath)
return name
def get_filename(filepath):
"Removes the path and file extension from the file name"
filename = remove_ext(filepath)
name = remove_dir(filename)
return name
# These functions work on the content of the files
def read_file(filename):
"Read the contents of FILENAME and return as a string."
infile = open(filename)
contents = infile.read()
infile.close()
return contents
def count_in_list(item_to_count, list_to_search):
"Counts the number of a specified word within a list of words"
number_of_hits = 0
for item in list_to_search:
if item == item_to_count:
number_of_hits += 1
return number_of_hits
Explanation: Adding Context to Word Frequency Counts
While the raw data from word frequency counts is compelling, it does little but describe quantitative features of the corpus. In order to determine if the statistics are indicative of a trend in word usage we must add value to the word frequencies. In this exercise we will produce a ratio of the occurrences of privacy to the number of words in the entire corpus. Then we will compare the occurrences of privacy to the number of individual transcripts within the corpus. These data will allow us to identify trends that are worthy of further investigation.
Finally, we will determine the number of words in the corpus as a whole and investigate the 50 most common words by creating a frequency plot. The last statistic we will generate is the type/token ratio, which is a measure of the variability of the words used in the corpus.
Part 1: Determining a ratio
To add context to our word frequency counts, we can work with the corpus in a number of different ways. One of the easiest is to compare the number of words in the entire corpus to the frequency of the word we are investigating.
Let's begin by calling on all the <span style="cursor:help;" title="a set of instructions that performs a specific task"><b>functions</b></span> we will need. Remember that the first few sentences are calling on pre-installed <i>Python</i> <span style="cursor:help;" title="packages of functions and code that serve specific purposes"><b>modules</b></span>, and anything with a def at the beginning is a custom function built specifically for these exercises. The text in red describes the purpose of the function.
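As a toy illustration of the ratio we are after (using the count_in_list helper defined above):
toy_words = "the cat sat on the mat".split()
print(count_in_list("the", toy_words) / float(len(toy_words)))  # 0.333..., i.e. "the" makes up a third of the tokens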
End of explanation
filenames = []
for files in list_textfiles('../Counting Word Frequencies/data'):
files = get_filename(files)
filenames.append(files)
corpus = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
text = read_file(filename)
words = text.split()
clean = [w.lower() for w in words if w.isalpha()]
corpus.append(clean)
Explanation: In the next piece of code we will cycle through our directory again: first assigning readable names to our files and storing them as a list in the variable filenames; then we will remove the case and punctuation from the text, split the words into a list of tokens, and assign the words in each file to a list in the variable corpus.
End of explanation
for words, names in zip(corpus, filenames):
print("Instances of the word \'privacy\' in", names, ":", count_in_list("privacy", words))
Explanation: Here we recreate our list from the last exercise, counting the instances of the word privacy in each file.
End of explanation
for files, names in zip(corpus, filenames):
print("There are", len(files), "words in", names)
Explanation: Next we use the len function to count the total number of words in each file.
End of explanation
print("Ratio of instances of privacy to total number of words in the corpus:")
for words, names in zip(corpus, filenames):
print('{:.6f}'.format(float(count_in_list("privacy", words))/(float(len(words)))),":",names)
Explanation: Now we can calculate the ratio of the word privacy to the total number of words in the file. To accomplish this we simply divide the two numbers.
End of explanation
raw = []
for i in range(len(corpus)):
raw.append(count_in_list("privacy", corpus[i]))
ratio = []
for i in range(len(corpus)):
ratio.append('{:.3f}'.format((float(count_in_list("privacy", corpus[i]))/(float(len(corpus[i])))) * 100))
table = list(zip(filenames, raw, ratio))  # materialize the rows so the table can be both printed and written to CSV
Explanation: Now our descriptive statistics concerning word frequencies have added value. We can see that there has indeed been a steady increase in the frequency of the use of the word privacy in our corpus. When we investigate the yearly usage, we can see that the frequency almost doubled between 2008 and 2009, as well as dramatic increase between 2012 and 2014. This is also apparent in the difference between the 39th and the 40th sittings of Parliament.
Let's package all of the data together so it can be displayed as a table or exported to a CSV file. First we will write our values to a list: raw contains the raw frequencies, and ratio contains the ratios. Then we will create a <span style="cursor:help;" title="a type of list where the values are permanent"><b>tuple</b></span> that contains the filename variable and includes the corresponding raw and ratio variables. Here we'll generate the ratio as a percentage.
End of explanation
print(tabulate(table, headers = ["Filename", "Raw", "Ratio %"], floatfmt=".3f", numalign="left"))
Explanation: Using the tabulate module, we will display our tuple as a table.
End of explanation
import csv
with open('privacyFreqTable.csv','w', newline='') as f:
w = csv.writer(f)
w.writerows(table)
Explanation: And finally, we will write the values to a CSV file called privacyFreqTable.
End of explanation
corpus_1 = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
text = read_file(filename)
words = text.split(" OFFICIAL REPORT (HANSARD)")
corpus_1.append(words)
Explanation: Part 2: Counting the number of transcripts
Another way we can provide context is to process the corpus in a different way. Instead of splitting the data by word, we will split it in larger chunks pertaining to each individual transcript. Each transcript corresponds to a unique debate but starts with exactly the same formatting, making the files easy to split. The text below shows the beginning of a transcript. The first words are OFFICIAL REPORT (HANSARD).
<img src="hansardText.png">
Here we will pass the files to another variable, called corpus_1. Instead of removing capitalization and punctuation, all we will do is split the files at every occurrence of OFFICIAL REPORT (HANSARD).
End of explanation
for files, names in zip(corpus_1, filenames):
print("There are", len(files), "files in", names)
Explanation: Now, we can count the number of files in each dataset. This is also an important activity for error-checking. While it is easy to trust the numerical output of the code when it works sucessfully, we must always be sure to check that the code is actually performing in exactly the way we want it to. In this case, these numbers can be cross-referenced with the original XML data, where each transcript exists as its own file. A quick check of the directory shows that the numbers are correct.
End of explanation
for names, files, words in zip(filenames, corpus_1, corpus):
print("In", names, "there were", len(files), "debates. The word privacy was said", \
count_in_list('privacy', words), "times.")
Explanation: Here is a screenshot of some of the raw data. We can see that there are <u>97</u> files in 2006, <u>117</u> in 2007 and <u>93</u> in 2008. The rest of the data is also correct.
<img src="filecount.png">
Now we can compare the amount of occurences of privacy with the number of debates occuring in each dataset.
End of explanation
corpus_3 = []
for filename in list_textfiles('../Counting Word Frequencies/data2'):
text = read_file(filename)
words = text.split()
clean = [w.lower() for w in words if w.isalpha()]
corpus_3.append(clean)
Explanation: These numbers confirm our earlier results. There is a clear indication that the usage of the term privacy is increasing, with major changes occuring between the years 2008 and 2009, as well as between 2012 and 2014. This trend is also clearly obervable between the 39th and 40th sittings of Parliament.
Part 3: Looking at the corpus as a whole
While chunking the corpus into pieces can help us understand the distribution or dispersion of words throughout the corpus, it's valuable to look at the corpus as a whole. Here we will create a third corpus variable corpus_3 that only contains the files named 39, 40, and 41. Note the new directory named data2. We only need these files; if we used all of the files we would literally duplicate the results.
End of explanation
large = list(sum(corpus_3, []))
Explanation: Now we will combine the three lists into one large list and assign it to the variable large.
End of explanation
print("There are", count_in_list('privacy', large), "occurences of the word 'privacy' and a total of", \
len(large), "words.")
print("The ratio of instances of privacy to total number of words in the corpus is:", \
'{:.6f}'.format(float(count_in_list("privacy", large))/(float(len(large)))), "or", \
'{:.3f}'.format((float(count_in_list("privacy", large))/(float(len(large)))) * 100),"%")
Explanation: We can use the same calculations to determine the total number of occurences of privacy, as well as the total number of words in the corpus. We can also calculate the total ratio of privacy to the total number of words.
End of explanation
print("There are", (len(set(large))), "unique words in the Hansard corpus.")
Explanation: Another type of word frequency statistic we can generate is a type/token ratio. The types are the total number of unique words in the corpus, while the tokens are the total number of words. The type/token ratio is used to determine the variability of the language used in the text. The higher the ratio, the more complex the text will be. First we'll determine the total number of types, using <i>Python's</i> set function.
End of explanation
print("The type/token ratio is:", ('{:.6f}'.format(len(set(large))/(float(len(large))))), "or",\
'{:.3f}'.format(len(set(large))/(float(len(large)))*100),"%")
Explanation: Now we can divide the types by the tokens to determine the ratio.
End of explanation
text = nltk.Text(large)
fd = nltk.FreqDist(text)
Explanation: Finally, we will use the NLTK module to create a graph that shows the top 50 most frequent words in the Hansard corpus. Although privacy will not appear in the graph, it's always interesting to see what types of words are most common, and what their distribution is. NLTK will be introduced with more detail in the next section featuring concordance outputs, but here all we need to know is that we assign our variable large to the NLTK function Text in order to work with the corpus data. From there we can determine the frequency distribution for the whole text.
End of explanation
%matplotlib inline
fd.plot(50,cumulative=False)
Explanation: Here we will assign the frequency distribution to the plot function to produce a graph. While it's a little hard to read, the most commonly used word in the Hansard corpus is the, with a frequency just over 400,000 occurences. The next most frequent word is to, which only has a frequency of about 225,000 occurences, almost half of the first most common word. The first 10 most frequent words appear with a much greater frequency than any of the other words in the corpus.
End of explanation
fd.hapaxes()
Explanation: Another feature of the NLTK frequency distribution function is the generation of a list of hapaxes. These are words that appear only once in the entire corpus. While not meaningful for this study, it's an interesting way to explore the data.
End of explanation |
5,626 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Excercises Electric Machinery Fundamentals
Chapter 6
Animation
Step1: Look at a superposition of two sinusoidal signals
Step2: This product can be rewritten as a sum using trigonometric equalities. SymPy has a special function for those
Step3: Now let us take an example with two different angular frequencies
Step4: Now plot the product of both frequencies | Python Code:
from sympy import init_session
init_session()
%matplotlib notebook
Explanation: Excercises Electric Machinery Fundamentals
Chapter 6
Animation: Determining the rotor-slip with a compass
End of explanation
f=sin(x)*sin(y)
f
Explanation: Look at a superposition of two sinusoidal signals:
End of explanation
from sympy.simplify.fu import *
g=TR8(f) # TR8 is a trigonometric expression function from Fu paper
Eq(f, g)
Explanation: This product can be rewritten as a sum using trigonometric equalities. SymPy has a special function for those:
End of explanation
s = 0.03 # slip
fs = 50 # stator frequency in Hz
fr = (1-s)*fs # rotor frequency in Hz
fr
alpha=2*pi*fs*t
beta=2*pi*fr*t
Explanation: Now let us take an example with two different angular frequencies:
End of explanation
# Create the plot of the stator rotation frequency in blue:
p1=plot(sin(alpha), (t, 0, 1), show=False, line_color='b', adaptive=False, nb_of_points=5000)
# Create the plot of the rotor rotation frequency in green:
p2=plot(0.5*sin(beta), (t, 0, 1), show=False, line_color='g', adaptive=False, nb_of_points=5000)
# Create the plot of the combined flux in red:
p3=plot(0.5*f.subs([(x, alpha), (y, beta)]), (t, 0, 1),
show=False, line_color='r', adaptive=False, nb_of_points=5000)
# Make the second and third one a part of the first one.
p1.extend(p2)
p1.extend(p3)
# Display the modified plot object.
p1.show()
Explanation: Now plot the product of both frequencies:
End of explanation |
5,627 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Duhamel Integral
Problem Data
Step1: Natural Frequency, Damped Frequency
Step2: Computation
Preliminaries
We chose a time step and we compute a number of constants of the integration procedure that depend on the time step
Step3: We initialize a time variable
Step4: We compute the load, the sines and the cosines of $\omega_D t$ and their products
Step5: The main (and only) loop in our code, we initialize A, B and a container for saving the deflections x,
then we compute the next values of A and B, the next value of x is eventually appended to the container.
Step6: It is necessary to plot the response. | Python Code:
M = 600000
T = 0.6
z = 0.10
p0 = 400000
t0, t1, t2, t3 = 0.0, 1.0, 3.0, 6.0
Explanation: Duhamel Integral
Problem Data
End of explanation
wn = 2*np.pi/T
wd = wn*np.sqrt(1-z**2)
Explanation: Natural Frequency, Damped Frequency
End of explanation
dt = 0.05
edt = np.exp(-z*wn*dt)
fac = dt/(2*M*wd)
Explanation: Computation
Preliminaries
We chose a time step and we compute a number of constants of the integration procedure that depend on the time step
End of explanation
t = dt*np.arange(1+int(t3/dt))
Explanation: We initialize a time variable
End of explanation
p = np.where(t<=t1, p0*(t-t0)/(t1-t0), np.where(t<t2, p0*(1-(t-t1)/(t2-t1)), 0))
s = np.sin(wd*t)
c = np.cos(wd*t)
sp = s*p
cp = c*p
plt.plot(t, p/1000)
plt.xlabel('Time/s')
plt.ylabel('Force/kN')
plt.xlim((t0,t3))
plt.grid();
Explanation: We compute the load, the sines and the cosines of $\omega_D t$ and their products
End of explanation
A, B, x = 0, 0, [0]
for i, _ in enumerate(t[1:], 1):
A = A*edt+fac*(cp[i-1]*edt+cp[i])
B = B*edt+fac*(sp[i-1]*edt+sp[i])
x.append(A*s[i]-B*c[i])
Explanation: The main (and only) loop in our code, we initialize A, B and a container for saving the deflections x,
then we compute the next values of A and B, the next value of x is eventually appended to the container.
End of explanation
x = np.array(x)
plt.plot(t, x*1000)
plt.xlabel('Time/s')
plt.ylabel('Deflection/mm')
plt.xlim((t0,t3))
plt.grid()
plt.show();
Explanation: It is necessary to plot the response.
End of explanation |
5,628 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optymalizacja i propagacja wsteczna (backprop)
Zaczniemy od prostego przykładu. Funkcji kwadratowej
Step1: Funkcja ta ma swoje minimum w punkcie $x = 0$. Jak widać na powyższym rysunku, gdy pochodna jest dodatnia (co oznacza, że funkcja jest rosnąca) lub gdy pochodna jest ujemna (gdy funkcja jest malejąca), żeby zminimalizować wartość funkcji potrzebujemy wykonywać krok optymalizacji w kierunku przeciwnym do tego wyznaczanego przez gradient.
Przykładowo gdybyśmy byli w punkcie $x = 2$, gradient wynosiłby $4$. Ponieważ jest dodatni, żeby zbliżyć się do minimum potrzebujemy przesunąć naszą pozycje w kierunku przeciwnym czyli w stonę ujemną.
Ponieważ gradient nie mówi nam dokładnie jaki krok powinniśmy wykonać, żeby dotrzeć do minimum a raczej wskazuje kierunek. Żeby nie "przeskoczyć" minimum zwykle skaluje się krok o pewną wartość $\alpha$ nazywaną krokiem uczenia (ang. learning rate).
Prosty przykład optymalizacji $f(x) = x^2$ przy użyciu gradient descent.
Sprawdź różne wartości learning_step, w szczególności [0.1, 1.0, 1.1].
Step2: Backprop - propagacja wsteczna - metoda liczenia gradientów przy pomocy reguły łańcuchowej (ang. chain rule)
Rozpatrzymy przykład minimalizacji troche bardziej skomplikowanej jednowymiarowej funkcji
$$f(x) = \frac{x \cdot \sigma(x)}{x^2 + 1}$$
Do optymalizacji potrzebujemy gradientu funkcji. Do jego wyliczenia skorzystamy z chain rule oraz grafu obliczeniowego.
Chain rule mówi, że
Step3: Jeśli do węzła przychodzi więcej niż jedna krawędź (np. węzeł x), gradienty sumujemy.
Step4: Posiadając gradient możemy próbować optymalizować funkcję, podobnie jak poprzenio.
Sprawdź różne wartości parametru x_, który oznacza punkt startowy optymalizacji. W szczególności zwróć uwagę na wartości [-5.0, 1.3306, 1.3307, 1.330696146306314]. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn
%matplotlib inline
x = np.linspace(-3, 3, 100)
plt.plot(x, x**2, label='f(x)') # optymalizowana funkcja
plt.plot(x, 2 * x, label='pochodna -- f\'(x)') # pochodna
plt.legend()
plt.show()
Explanation: Optymalizacja i propagacja wsteczna (backprop)
Zaczniemy od prostego przykładu. Funkcji kwadratowej: $f(x) = x^2$
Spróbujemy ją zoptymalizować (znaleźć minimum) przy pomocy metody zwanej gradient descent. Polega ona na wykorzystaniu gradientu (wartości pierwszej pochodnej) przy wykonywaniu kroku optymalizacji.
End of explanation
learning_rate = ...
nb_steps = 10
x_ = 1
steps = [x_]
for _ in range(nb_steps):
x_ -= learning_rate * (2 * x_) # learning_rate * pochodna
steps += [x_]
plt.plot(x, x**2, alpha=0.7)
plt.plot(steps, np.array(steps)**2, 'r-', alpha=0.7)
plt.xlim(-3, 3)
plt.ylim(-1, 10)
plt.show()
Explanation: Funkcja ta ma swoje minimum w punkcie $x = 0$. Jak widać na powyższym rysunku, gdy pochodna jest dodatnia (co oznacza, że funkcja jest rosnąca) lub gdy pochodna jest ujemna (gdy funkcja jest malejąca), żeby zminimalizować wartość funkcji potrzebujemy wykonywać krok optymalizacji w kierunku przeciwnym do tego wyznaczanego przez gradient.
Przykładowo gdybyśmy byli w punkcie $x = 2$, gradient wynosiłby $4$. Ponieważ jest dodatni, żeby zbliżyć się do minimum potrzebujemy przesunąć naszą pozycje w kierunku przeciwnym czyli w stonę ujemną.
Ponieważ gradient nie mówi nam dokładnie jaki krok powinniśmy wykonać, żeby dotrzeć do minimum a raczej wskazuje kierunek. Żeby nie "przeskoczyć" minimum zwykle skaluje się krok o pewną wartość $\alpha$ nazywaną krokiem uczenia (ang. learning rate).
Prosty przykład optymalizacji $f(x) = x^2$ przy użyciu gradient descent.
Sprawdź różne wartości learning_step, w szczególności [0.1, 1.0, 1.1].
End of explanation
def sigmoid(x):
pass
def forward_pass(x):
pass
Explanation: Backprop - propagacja wsteczna - metoda liczenia gradientów przy pomocy reguły łańcuchowej (ang. chain rule)
Rozpatrzymy przykład minimalizacji troche bardziej skomplikowanej jednowymiarowej funkcji
$$f(x) = \frac{x \cdot \sigma(x)}{x^2 + 1}$$
Do optymalizacji potrzebujemy gradientu funkcji. Do jego wyliczenia skorzystamy z chain rule oraz grafu obliczeniowego.
Chain rule mówi, że:
$$ \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} \cdot \frac{\partial y}{\partial x}$$
Żeby łatwiej zastosować chain rule stworzymy z funkcji graf obliczeniowy, którego wykonanie zwróci nam wynik funkcji.
End of explanation
def backward_pass(x):
# kopia z forward_pass, ponieważ potrzebujemy wartości
# pośrednich by wyliczyć pochodne
# >>>
...
# <<<
pass
x = np.linspace(-10, 10, 200)
plt.plot(x, forward_pass(x), label='f(x)')
plt.plot(x, backward_pass(x), label='poochodna -- f\'(x)')
plt.legend()
plt.show()
Explanation: Jeśli do węzła przychodzi więcej niż jedna krawędź (np. węzeł x), gradienty sumujemy.
End of explanation
learning_rate = 1
nb_steps = 100
x_ = ...
steps = [x_]
for _ in range(nb_steps):
x_ -= learning_rate * backward_pass(x_)
steps += [x_]
plt.plot(x, forward_pass(x), alpha=0.7)
plt.plot(steps, forward_pass(np.array(steps)), 'r-', alpha=0.7)
plt.show()
Explanation: Posiadając gradient możemy próbować optymalizować funkcję, podobnie jak poprzenio.
Sprawdź różne wartości parametru x_, który oznacza punkt startowy optymalizacji. W szczególności zwróć uwagę na wartości [-5.0, 1.3306, 1.3307, 1.330696146306314].
End of explanation |
5,629 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to numerical simulations
Step1: Now we will define the physical constants of our system, which will also establish the unit system we have chosen. We'll use SI units here. Below, I've already created the constants. Make sure you understand what they are before moving on.
Step2: Next, we will need parameters for the simulation. These are known as initial condititons. For a 2 body gravitation problem, we'll need to know the masses of the two objects, the starting posistions of the two objects, and the starting velocities of the two objects.
Below, I've included the initial conditions for the earth (a) and the Sun (b) at the average distance from the sun and the average velocity around the sun. We also need a starting time, and ending time for the simulation, and a "time-step" for the system. Feel free to adjust all of these as you see fit once you have built the system!
<br>
<br>
<br>
<br>
a note on dt
Step3: It will be nice to create a function for the force between Ma and Mb. Below is the physics for the force of Ma on Mb. How the physics works here is not important for the moment. Right now, I want to make sure you can translate the math shown into a python function. (I'll show a picture of the physics behind this math for those interested.)
$$\vec{F_g}=\frac{-GM_aM_b}{r^3}\vec{r}$$
and
$$\vec{r}=(x_b-x_a)\hat{x}+ (y_b-y_a)\hat{y}$$
$$r^3=((x_b-x_a)^2+(y_b-y_a)^2)^{3/2}$$
If we break Fg into the x and y componets we get
Step4: Now that we have our force function, we will make a new function which does the whole simulation for a set of initial conditions. We call this function 'simulate' and it will take all the initial conditions as inputs. It will loop over each time step and call the force function to find the new positions for the asteroids at each time step.
The first part of our simulate function will be to initialize the loop and choose a loop type, for or while. Below is the general outline for how each type of loop can go.
<br>
<br>
<br>
For loop
Step5: Now we will call our simulate function with the initial conditions we defined earlier! We will take the output of simulate and store the x and y positions of the two particles.
Step6: Now for the fun part (or not so fun part if your simulation has an issue), plot your results! This is something well covered in previous lectures. Show me a plot of (xa,ya) and (xb,yb). Does it look sort of familiar? Hopefully you get something like the below image (in units of AU).
Step7: Challenge #1
Step8: We now wish to draw a random sample of asteroid masses from this distribution (Hint
Step9: Now let's loop over our random asteroid sample, run simulate and plot the results, for each one!
Step10: Going further
Step11: Challenge #2
Step12: Additionally, publications won't always be printed in color, and not all readers have the ability to distinguish colors or text size in the same way, so differences in style improve accessibility as well.
Luckily, Matplotlib can do all of this and more! Let's experiment with some variations in how we can make our plots. We can use the 'marker =' argument in plt.plot to choose a marker for every datapoint. We can use the 'linestyle = ' argument to have a dotted line instead of a solid line. Try experimenting with the extra arguments in the below plotting code to make it look good to you!
Now, add some plotting arguments to your loop like those you experimented with above. Can you make your plot more interesting and clear by changing the plotting parameters, or adding new plotting commands?
See the jupyter notebook called Plotting Demos in this same folder for some more examples of ways to make your plots pop! | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Introduction to numerical simulations: The 2 Body Problem
Many problems in statistical physics and astrophysics require solving problems consisting of many particles at once (sometimes on the order of thousands or more!) This can't be done by the traditional pen and paper techniques you would encounter in a physics class. Instead, we must implement numerical solutions to these problems.
Today, you will create your own numerical simulation for a simple problem is that solvable by pen and paper already, the 2 body problem in 2D. In this problem, we will describe the motion between two particles that share a force between them (such as Gravity). We'll design the simulation from an astronomer's mindset with astronomical units in mind. This simulation will be used to confirm the general motion of the earth around the Sun, and later will be used to predict the motion between two stars within relatively close range.
<br>
<br>
<br>
We will guide you through the physics and math required to create this simulation.
First, a brief review of the kinematic equations (remembering Order of Operations or PEMDAS, and that values can be positive or negative depending on the reference frame):
new time = old time + time change ($t = t_0 + \Delta t$)
new position = old position + velocity x time change ($x = x_0 + v \times \Delta t$)
new velocity = old velocity + acceleration x time change ($v = v_0 + a \times \Delta t$)
The problem here is designed to use the knowledge of scientific python you have been developing this week.
Like any code in python, The first thing we need to do is import the libraries we need. Go ahead and import Numpy and Pyplot below as np and plt respectively. Don't forget to put matplotlib inline to get everything within the notebook.
End of explanation
#Physical Constants (SI units)
G=6.67e-11 #Universal Gravitational constant in m^3 per kg per s^2
AU=1.5e11 #Astronomical Unit in meters = Distance between sun and earth
daysec=24.0*60*60 #seconds in a day
Explanation: Now we will define the physical constants of our system, which will also establish the unit system we have chosen. We'll use SI units here. Below, I've already created the constants. Make sure you understand what they are before moving on.
End of explanation
#####run specific constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time set for simulation
tend=300*daysec #Time where simulation ends
#Initial conditions (position [m] and velocities [m/s] in x,y,z coordinates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
Explanation: Next, we will need parameters for the simulation. These are known as initial condititons. For a 2 body gravitation problem, we'll need to know the masses of the two objects, the starting posistions of the two objects, and the starting velocities of the two objects.
Below, I've included the initial conditions for the earth (a) and the Sun (b) at the average distance from the sun and the average velocity around the sun. We also need a starting time, and ending time for the simulation, and a "time-step" for the system. Feel free to adjust all of these as you see fit once you have built the system!
<br>
<br>
<br>
<br>
a note on dt:
As already stated, numeric simulations are approximations. In our case, we are approximating how time flows. We know it flows continiously, but the computer cannot work with this. So instead, we break up our time into equal chunks called "dt". The smaller the chunks, the more accurate you will become, but at the cost of computer time.
End of explanation
#Function to compute the force between the two objects
def Fg(Ma,Mb,G,xa,xb,ya,yb):
#Compute rx and ry between Ma and Mb
rx=xb-xa
ry=yb-ya#Write it in
#compute r^3, remembering r=sqrt(rx^2+ry^2)
r3=np.sqrt(rx**2+ry**2)**3 #Write in r^3 using the equation above. Make use of np.sqrt()
#Compute the force in Newtons. Use the equations above as a Guide!
fx=-G*Ma*Mb*rx/r3 #Write it in
fy=-G*Ma*Mb*ry/r3 #Write it in
return fx,fy #What do we return?
Explanation: It will be nice to create a function for the force between Ma and Mb. Below is the physics for the force of Ma on Mb. How the physics works here is not important for the moment. Right now, I want to make sure you can translate the math shown into a python function. (I'll show a picture of the physics behind this math for those interested.)
$$\vec{F_g}=\frac{-GM_aM_b}{r^3}\vec{r}$$
and
$$\vec{r}=(x_b-x_a)\hat{x}+ (y_b-y_a)\hat{y}$$
$$r^3=((x_b-x_a)^2+(y_b-y_a)^2)^{3/2}$$
If we break Fg into the x and y componets we get:
$$F_x=\frac{-GM_aM_b}{r^3}r_x$$
$$F_y=\frac{-GM_aM_b}{r^3}r_y$$
<br><br>So, $Fg$ will only need to be a function of xa, xb, ya, and yb. The velocities of the bodies will not be needed. Create a function that calculates the force between the bodies given the positions of the bodies. My recommendation here will be to feed the inputs as separate components and also return the force in terms of components (say, fx and fy). This will make your code easier to write and easier to read.
End of explanation
def simulate(Ma,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb):
t=0
#Run a loop for the simulation. Keep track of Ma and Mb posistions and velocites
#Initialize vectors (otherwise there is nothing to append to!)
xaAr=np.array([])
yaAr=np.array([])
vxaAr=np.array([])
vyaAr=np.array([])
xbAr=np.array([])#Write it in for Particle B
ybAr=np.array([])#Write it in for Particle B
vxbAr=np.array([])
vybAr=np.array([])
#using while loop method with appending. Can also be done with for loops
while t<tend: #Write the end condition here.
#Compute current force on Ma and Mb. Ma recieves the opposite force of Mb
fx,fy=Fg(Ma,Mb,G,xa,xb,ya,yb)
#Update the velocities and positions of the particles
vxa=vxa-fx*dt/Ma
vya=vya-fy*dt/Ma#Write it in for y
vxb=vxb+fx*dt/Mb#Write it in for x
vyb=vyb+fy*dt/Mb
xa=xa+vxa*dt
ya=ya+vya*dt#Write it in for y
xb=xb+vxb*dt#Write it in for x
yb=yb+vyb*dt
#Save data to lists
xaAr=np.append(xaAr,xa)
yaAr=np.append(yaAr,ya)
xbAr=np.append(xbAr,xb)#How will we append it here?
ybAr=np.append(ybAr,yb)
#update the time by one time step, dt
t=t+dt
return(xaAr,yaAr,xbAr,ybAr)
Explanation: Now that we have our force function, we will make a new function which does the whole simulation for a set of initial conditions. We call this function 'simulate' and it will take all the initial conditions as inputs. It will loop over each time step and call the force function to find the new positions for the asteroids at each time step.
The first part of our simulate function will be to initialize the loop and choose a loop type, for or while. Below is the general outline for how each type of loop can go.
<br>
<br>
<br>
For loop:
initialize position and velocity arrays with np.zeros or np.linspace for the amount of steps needed to go through the simulation (which is numSteps=(tend-t)/dt the way we have set up the problem). The for loop condition is based off time and should read rough like: for i in range(numSteps)
<br>
<br>
<br>
While loop:
initialize posistion and velocity arrays with np.array([]) and use np.append() to tact on new values at each step like so, xaArray=np.append(xaArray,NEWVALUE). The while condition should read, while t<tend
My preference here is while since it keeps my calculations and appending separate. But, feel free to use which ever feels best for you!
Now for the actual simulation. This is the hardest part to code in. The general idea behind our loop is that as we step through time, we calculate the force, then calculate the new velocity, then the new position for each particle. At the end, we must update our arrays to reflect the new changes and update the time of the system. The time is super important! If we don't change the time (say in a while loop), the simulation would never end and we would never get our result. :(
Outline for the loop (order matters here)
Calculate the force with the last known positions (use your function!)
Calculate the new velocities using the approximation: vb = vb + dt*fg/Mb and va= va - dt*fg/Ma Note the minus sign here, and the need to do this for the x and y directions!
Calculate the new positions using the approximation: xb = xb + dt*Vb (same for a and for y's. No minus problem here)
Update the arrays to reflect our new values
Update the time using t=t+dt
<br>
<br>
<br>
<br>
Now when the loop closes back in, the cycle repeats in a logical way. Go one step at a time when creating this loop and use comments to help guide yourself. Ask for help if it gets tricky!
End of explanation
#####Reminder of specific constants. Change as needed#####
#Masses in kg
Ma=6.0e24 #always set as smaller mass
Mb=2.0e30 #always set as larger mass
#Time settings
t=0.0 #Starting time
dt=.01*daysec #Time set for simulation
tend=300*daysec #Time where simulation ends
#Intial conditions (posistion [m] and velocities [m/s] in x,y,z coordinates)
#For Ma
xa=1.0*AU
ya=0.0
vxa=0.0
vya=30000.0
#For Mb
xb=0.0
yb=0.0
vxb=0.0
vyb=0.0
#Do simulation with these parameters
xaAr,yaAr,xbAr,ybAr = simulate(Ma,Mb,G,xa,ya,vxa,vya,xb,yb,vxb,vyb)#Insert the variable for y position of B particle)
Explanation: Now we will call our simulate function with the initial conditions we defined earlier! We will take the output of simulate and store the x and y positions of the two particles.
End of explanation
from IPython.display import Image
Image("Earth-Sun-averageResult.jpg")
plt.figure()
plt.plot(xaAr/AU,yaAr/AU)
plt.plot(xbAr/AU,ybAr/AU)#Add positions for B particle)
plt.show()
Explanation: Now for the fun part (or not so fun part if your simulation has an issue), plot your results! This is something well covered in previous lectures. Show me a plot of (xa,ya) and (xb,yb). Does it look sort of familiar? Hopefully you get something like the below image (in units of AU).
End of explanation
#Mass distribution parameters
Mave=7.0e24 #The average asteroid mass
Msigma=1.0e24 #The standard deviation of asteroid masses
Size=3 #The number of asteroids we wish to simulate
Explanation: Challenge #1: Random Sampling of Initial Simulation Conditions
Now let's try to plot a few different asteroids with different initial conditions at once! Let's first produce the orbits of three asteroids with different masses. Suppose the masses of all asteroids in the main asteroid belt follow a Gaussian distribution. The parameters of the distribution of asteroid masses are defined below.
End of explanation
#Draw 3 masses from normally distributed asteroid mass distribution
MassAr = Msigma * np.random.randn(Size) + Mave #Add your normal a.k.a. Gaussian distribution function, noting that the input to your numpy random number generator function will be: (Size)
Explanation: We now wish to draw a random sample of asteroid masses from this distribution (Hint: Look back at Lecture #3).
End of explanation
plt.figure()
for mass in MassAr:#What array should we loop over?:
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vyb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass) #Provide labels for each asteroid mass so we can generate a legend.
#Pro tip: The percent sign replaces '%.2e' in the string with the variable formatted the way we want!
plt.legend()
plt.show()
Explanation: Now let's loop over our random asteroid sample, run simulate and plot the results, for each one!
End of explanation
#draw 5 normally distributed mass values using the above parameters:
Size=5
MassAr = Msigma * np.random.randn(Size) + Mave
plt.figure()
for mass in MassAr:
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vyb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass)
plt.legend()
plt.show()
#Draw 3 velocities from normally distributed asteroid mass distribution
Size = 3
Dimensions = 2
Vave=20000 #The average asteroid velocity in m
Vsigma=6000 #The standard deviation of asteroid velocities in m
#You can make normal arrays with different dimensions! See: https://docs.scipy.org/doc/numpy-1.15.1/reference/generated/numpy.random.randn.html
VelAr = Vsigma * np.random.randn(Size,Dimensions) + Vave #a 2D array
for v in VelAr:
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,v[0],v[1],xb,yb,vxb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Velocity of Ma: vx = %.2e, vy = %.2e'%(v[0],v[1]))
plt.legend()
plt.show()
Explanation: Going further:
Can you make a plot with 5 asteroid masses instead of 3?
<b>
If you've got some extra time, now is a great chance to experiment with plotting various initial conditions and how the orbits change! What happens if we draw some random initial velocities instead of random masses, for example?
End of explanation
from IPython.display import Image
Image(filename="fig_example.jpg")
Explanation: Challenge #2: Fancy Plotting Fun!
When showing off your results to people unfamiliar with your research, it helps to make them more easy to understand through different visualization techniques (like legends, labels, patterns, different shapes, and sizes). You may have found that textbooks or news articles are more fun and easy when concepts are illustrated colorfully yet clearly, such as the example figure below, which shows different annotations in the form of text:
End of explanation
SMALL_SIZE = 12
MEDIUM_SIZE = 15
BIGGER_SIZE = 20
plt.rc('font', size=SMALL_SIZE) # controls default text sizes
plt.rc('axes', titlesize=BIGGER_SIZE) # fontsize of the figure title
plt.rc('axes', labelsize=MEDIUM_SIZE) # fontsize of the x and y labels
plt.rc('xtick', labelsize=SMALL_SIZE) # fontsize of the x number labels
plt.rc('ytick', labelsize=SMALL_SIZE) # fontsize of the y numer labels
plt.rc('legend', fontsize=MEDIUM_SIZE) # legend fontsize
colors=['black','blue','orange']
markers=['x','*','+']
styles=['--','-',':']
plt.figure(figsize=(8,6))
dt=10*daysec #Increase time set for simulation to better show markers individually
for mass,color,mrk,sty in zip(MassAr,colors,markers,styles):
xaAr,yaAr,xbAr,ybAr=simulate(mass,Mb,G,xa,ya,vxa,vya,xb,yb,vyb,vyb)
plt.plot(xaAr/AU,yaAr/AU,label='Mass = %.2e'%mass, color=color, marker=mrk,linestyle=sty,linewidth=mass/Mave) #weighting width of lines by mass
plt.legend()
plt.title('Asteroid Trajectories')
plt.xlabel('x position (m)')
plt.ylabel('y position (m)')
plt.show()
Explanation: Additionally, publications won't always be printed in color, and not all readers have the ability to distinguish colors or text size in the same way, so differences in style improve accessibility as well.
Luckily, Matplotlib can do all of this and more! Let's experiment with some variations in how we can make our plots. We can use the 'marker =' argument in plt.plot to choose a marker for every datapoint. We can use the 'linestyle = ' argument to have a dotted line instead of a solid line. Try experimenting with the extra arguments in the below plotting code to make it look good to you!
Now, add some plotting arguments to your loop like those you experimented with above. Can you make your plot more interesting and clear by changing the plotting parameters, or adding new plotting commands?
See the jupyter notebook called Plotting Demos in this same folder for some more examples of ways to make your plots pop!
End of explanation |
5,630 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I'm looking into doing a delta_sigma emulator. This is testing if the cat side works. Then i'll make an emulator for it.
Step1: Load up a snapshot at a redshift near the center of this bin.
Step2: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
Step3: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
Step4: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Perform the below integral in each theta bin
Step5: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
Step6: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason. | Python Code:
from pearce.mocks import cat_dict
import numpy as np
from os import path
from astropy.io import fits
import matplotlib
#matplotlib.use('Agg')
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
z_bins = np.array([0.15, 0.3, 0.45, 0.6, 0.75, 0.9])
zbin=1
a = 0.81120
z = 1.0/a - 1.0
Explanation: I'm looking into doing a delta_sigma emulator. This is testing if the cat side works. Then i'll make an emulator for it.
End of explanation
print z
cosmo_params = {'simname':'chinchilla', 'Lbox':400.0, 'scale_factors':[a]}
cat = cat_dict[cosmo_params['simname']](**cosmo_params)#construct the specified catalog!
cat.load_catalog(a, particles=True, tol = 0.01, downsample_factor=1e-2)
cat.load_model(a, 'redMagic')
params = cat.model.param_dict.copy()
#params['mean_occupation_centrals_assembias_param1'] = 0.0
#params['mean_occupation_satellites_assembias_param1'] = 0.0
params['logMmin'] = 13.4
params['sigma_logM'] = 0.1
params['f_c'] = 1.0
params['alpha'] = 1.0
params['logM1'] = 14.0
params['logM0'] = 12.0
print params
cat.populate(params)
nd_cat = cat.calc_analytic_nd()
print nd_cat
rp_bins = np.logspace(-1.1, 1.5, 9) #binning used in buzzard mocks
rpoints = (rp_bins[1:]+rp_bins[:-1])/2
rpoints
ds = np.loadtxt('/u/ki/swmclau2/Git/pearce/bin/ds.npy')
plt.plot(rpoints, ds)
plt.loglog();
#plt.xscale('log')
from astropy import constants as c
rpoints.shape
Explanation: Load up a snapshot at a redshift near the center of this bin.
End of explanation
r_bins = np.logspace(-1.1, 1.6, 14)
rbc = (r_bins[1:] + r_bins[:-1])/2.0
xi = cat.calc_xi_gm(r_bins)
xi
plt.plot(rbc, xi)
plt.loglog();
rbc
Explanation: Use my code's wrapper for halotools' xi calculator. Full source code can be found here.
End of explanation
import george
from george.kernels import ExpSquaredKernel
kernel = ExpSquaredKernel(0.05)
gp = george.GP(kernel)
gp.compute(np.log10(rpoints))
print xi
xi[xi<=0] = 1e-2 #ack
Explanation: Interpolate with a Gaussian process. May want to do something else "at scale", but this is quick for now.
End of explanation
from scipy.interpolate import interp1d
import pyccl as ccl
from astropy import units
xi_interp = interp1d(np.log10(rbc), np.log10(xi))
'''
names, vals = cat._get_cosmo_param_names_vals()
param_dict = { n:v for n,v in zip(names, vals)}
if 'Omega_c' not in param_dict:
param_dict['Omega_c'] = param_dict['Omega_m'] - param_dict['Omega_b']
del param_dict['Omega_m']
cosmo = ccl.Cosmology(**param_dict)
big_rbins = np.logspace(1, 2.1, 21)
big_rbc = (big_rbins[1:] + big_rbins[:-1])/2.0
xi_mm = ccl.correlation_3d(cosmo, cat.a, big_rbc)
#bias2 = np.mean(xi[-3:]/xi_mm[-3:]) #estimate the large scale bias from the box
#note i don't use the bias builtin cuz i've already computed xi_gg.
xi_mm_interp = interp1d(np.log10(big_rbc), np.log10(xi_mm))
bias2 = np.power(10, xi_interp(1.2)-xi_mm_interp(1.2))
'''
'''
theta_bins = np.logspace(np.log10(2.5), np.log10(250), 21)/60 #binning used in buzzard mocks
tpoints = (theta_bins[1:] + theta_bins[:-1])/2.0
ds = np.zeros_like(tpoints)
x = cat.cosmology.angular_diameter_distance(cat.z)/cat.h
print tpoints[0]*x.to("Mpc").value/cat.h, rp_bins[0]
assert tpoints[0]*x.to("Mpc").value/cat.h >= rp_bins[0]
#ubins = np.linspace(10**-6, 10**4.0, 1001)
ubins = np.logspace(-6, 2.0, 501)
ubc = (ubins[1:]+ubins[:-1])/2.0
'''
rhocrit = cat.cosmology.critical_density(0).to('Msun/(Mpc^3)').value
print rhocrit
def integrand_medium_scales(lRz, Rp, xi_interp):
#integrand_params pars=*(integrand_params*)params;
Rz = np.exp(lRz)
#print Rz
out= Rz * 10**xi_interp(np.log10(Rz*Rz + Rp*Rp)*0.5)
print Rz, out
return out
from scipy.integrate import quad
np.log10(np.exp(-10))
def Sigma_at_R_arr(R, Rxi, xi, Om):
rhom = Om*rhocrit*1e-12 #SM h^2/pc^2/Mpc; integral is over Mpc/h
Rxi0, Rxi_max = Rxi[0], Rxi[-1]
Sigma = np.zeros_like(R)
for i, Rp in enumerate(R):
print Rp
ln_z_max = np.log(np.sqrt(Rxi_max*Rxi_max - Rp*Rp))#; //Max distance to integrate to
result2 = quad(integrand_medium_scales, -10, ln_z_max, args=(Rp, xi))[0]
print result2
Sigma[i] = result2#(result1+result2)*rhom*2;
return rhom*2*Sigma
print rpoints
sigma = Sigma_at_R_arr(rpoints, rbc, xi_interp, cat.cosmology.Om(0))
rhom = cat.cosmology.Om(0)*rhocrit*1e-12 #SM h^2/pc^2/Mpc; integral is over Mpc/h
print rhom
plt.plot(rpoints, sigma)
#plt.xscale('log')
plt.loglog();
print rpoints
sigma
def DS_integrand_medium_scales(lR, sigma_interp):
R = np.exp(lR);
return R * R * sigma_interp(np.log10(R))
small_scales = r_bins<1.
np.sum(~small_scales)
def DeltaSigma_at_R_arr(R, Rs, Sigma):
lrmin = np.log(Rs[0]);
DeltaSigma = np.zeros_like(R)
sigma_interp = interp1d(np.log10(Rs), Sigma)
print R
for i,r in enumerate(R):
result2 = quad(DS_integrand_medium_scales, lrmin, np.log(r), args= (sigma_interp,))[0]
#print result2*2/(r**2), sigma_interp(np.log10(r));
#print result2, r, sigma_interp(np.log10(r))
DeltaSigma[i] = result2*2/(r**2) - sigma_interp(np.log10(r));
return cat.h*DeltaSigma
rpoints
rpoints2 = np.logspace(np.log10(rpoints[0]), np.log10(rpoints[-1]), 500)
ds2 = DeltaSigma_at_R_arr(rpoints, rpoints, sigma)
print ds2
plt.plot(rpoints, ds, label = 'Halotools')
plt.plot(rpoints2, ds2, label = 'Integral')
plt.loglog();
plt.legend(loc = 'best')
ds2
# TODO this is like this cause of a half-assed attempt at parraleization
# this is unesscary now, and could be discarded for a simpler for loop
def integrate_xi(bin_no):#, w_theta, bin_no, ubc, ubins)
int_xi1, int_xi2 = 0, 0
#t_med = np.radians(tpoints[bin_no])
rp = rpoints[bin_no]
dr = ((rp-rpoints[0])/500)*units.Mpc/cat.h
for _r in np.linspace(rbc[0]+1e-5, rp, 500):
if _r < rp:
continue
print _r
r = _r*units.Mpc/cat.h
for ubin_no, _u in enumerate(ubc):
_du = ubins[ubin_no+1]-ubins[ubin_no]
u = _u*units.Mpc/cat.h
du = _du*units.Mpc/cat.h
rad = np.sqrt(r**2 + u**2 )#*cat.h#not sure about the h
if rad >= (units.Mpc)*rpoints[-1]:
try:
int_xi1+=du*r*dr*bias2*(np.power(10, \
xi_mm_interp(np.log10(rad.value))))
except ValueError:
#interpolation failed
int_xi1+=du*0*dr*r
else:
int_xi1+=du*r*dr*(np.power(10, \
xi_interp(np.log10(rad.value))))
for ubin_no, _u in enumerate(ubc):
_du = ubins[ubin_no+1]-ubins[ubin_no]
u = _u*units.Mpc/cat.h
du = _du*units.Mpc/cat.h
rad = np.sqrt((rp*units.Mpc)**2 + u**2 )#*cat.h#not sure about the h
if rad >= (units.Mpc)*rpoints[-1]:
try:
int_xi2+=du*bias2*(np.power(10, \
xi_mm_interp(np.log10(rad.value))))
except ValueError:
#interpolation failed
int_xi2+=du*0
else:
int_xi2+=du*(np.power(10, \
xi_interp(np.log10(rad.value))))
print int_xi1, int_xi2
ds[bin_no] = ( (4/( (rp*units.Mpc)**2))*int_xi1 - 2*int_xi2).value
#Currently this doesn't work cuz you can't pickle the integrate_xi function.
#I'll just ignore for now. This is why i'm making an emulator anyway
#p = Pool(n_cores)
map(integrate_xi, range(rpoints.shape[0]))
#p.map(integrate_xi, range(tpoints.shape[0]))
#p.terminate()
print ds
Explanation: This plot looks bad on large scales. I will need to implement a linear bias model for larger scales; however I believe this is not the cause of this issue. The overly large correlation function at large scales if anything should increase w(theta).
This plot shows the regimes of concern. The black lines show the value of r for u=0 in the below integral for each theta bin. The red lines show the maximum value of r for the integral I'm performing.
Perform the below integral in each theta bin:
$$ \Delta \Sigma(\theta) = \int_0^\infty du \xi_{gm} \left(r = \sqrt{(u \theta)^2 + (u-D_A(z))^2} \right) $$
Where $\bar{x}$ is the median comoving distance to z.
End of explanation
plt.plot(tpoints, ds, label = 'My Calculation')
plt.plot(tpoints_rm, ds, label = 'Halotools')
#plt.plot(tpoints_rm, W.to("1/Mpc").value*mathematica_calc, label = 'Mathematica Calc')
#plt.plot(tpoints, wt_analytic(m,10**b, np.radians(tpoints), x),label = 'Mathematica Calc' )
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
wt_redmagic/(W.to("1/Mpc").value*mathematica_calc)
import cPickle as pickle
with open('/u/ki/jderose/ki23/bigbrother-addgals/bbout/buzzard-flock/buzzard-0/buzzard0_lb1050_xigg_ministry.pkl') as f:
xi_rm = pickle.load(f)
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].mbins
xi_rm.metrics[0].cbins
#plt.plot(np.log10(rpoints), b2+(np.log10(rpoints)*m2))
#plt.plot(np.log10(rpoints), 90+(np.log10(rpoints)*(-2)))
plt.scatter(rpoints, xi)
for i in xrange(3):
for j in xrange(3):
plt.plot(xi_rm.metrics[0].rbins[:-1], xi_rm.metrics[0].xi[:,i,j,0])
plt.loglog();
plt.subplot(211)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
#plt.ylim([0,10])
plt.subplot(212)
plt.plot(tpoints_rm, wt_redmagic/wt)
plt.xscale('log')
plt.ylim([2.0,4])
xi_rm.metrics[0].xi.shape
xi_rm.metrics[0].rbins #Mpc/h
Explanation: The below plot shows the problem. There appears to be a constant multiplicative offset between the redmagic calculation and the one we just performed. The plot below it shows their ratio. It is near-constant, but there is some small radial trend. Whether or not it is significant is tough to say.
End of explanation
x = cat.cosmology.comoving_distance(z)*a
#ubins = np.linspace(10**-6, 10**2.0, 1001)
ubins = np.logspace(-6, 2.0, 51)
ubc = (ubins[1:]+ubins[:-1])/2.0
#NLL
def liklihood(params, wt_redmagic,x, tpoints):
#print _params
#prior = np.array([ PRIORS[pname][0] < v < PRIORS[pname][1] for v,pname in zip(_params, param_names)])
#print param_names
#print prior
#if not np.all(prior):
# return 1e9
#params = {p:v for p,v in zip(param_names, _params)}
#cat.populate(params)
#nd_cat = cat.calc_analytic_nd(parmas)
#wt = np.zeros_like(tpoints_rm[:-5])
#xi = cat.calc_xi(r_bins, do_jackknife=False)
#m,b,_,_,_ = linregress(np.log10(rpoints), np.log10(xi))
#if np.any(xi < 0):
# return 1e9
#kernel = ExpSquaredKernel(0.05)
#gp = george.GP(kernel)
#gp.compute(np.log10(rpoints))
#for bin_no, t_med in enumerate(np.radians(tpoints_rm[:-5])):
# int_xi = 0
# for ubin_no, _u in enumerate(ubc):
# _du = ubins[ubin_no+1]-ubins[ubin_no]
# u = _u*unit.Mpc*a
# du = _du*unit.Mpc*a
#print np.sqrt(u**2+(x*t_med)**2)
# r = np.sqrt((u**2+(x*t_med)**2))#*cat.h#not sure about the h
#if r > unit.Mpc*10**1.7: #ignore large scales. In the full implementation this will be a transition to a bias model.
# int_xi+=du*0
#else:
# the GP predicts in log, so i predict in log and re-exponate
# int_xi+=du*(np.power(10, \
# gp.predict(np.log10(xi), np.log10(r.value), mean_only=True)[0]))
# int_xi+=du*(10**b)*(r.to("Mpc").value**m)
#print (((int_xi*W))/wt_redmagic[0]).to("m/m")
#break
# wt[bin_no] = int_xi*W.to("1/Mpc")
wt = wt_analytic(params[0],params[1], tpoints, x.to("Mpc").value)
chi2 = np.sum(((wt - wt_redmagic[:-5])**2)/(1e-3*wt_redmagic[:-5]) )
#chi2=0
#print nd_cat
#print wt
#chi2+= ((nd_cat-nd_mock.value)**2)/(1e-6)
#mf = cat.calc_mf()
#HOD = cat.calc_hod()
#mass_bin_range = (9,16)
#mass_bin_size = 0.01
#mass_bins = np.logspace(mass_bin_range[0], mass_bin_range[1], int( (mass_bin_range[1]-mass_bin_range[0])/mass_bin_size )+1 )
#mean_host_mass = np.sum([mass_bin_size*mf[i]*HOD[i]*(mass_bins[i]+mass_bins[i+1])/2 for i in xrange(len(mass_bins)-1)])/\
# np.sum([mass_bin_size*mf[i]*HOD[i] for i in xrange(len(mass_bins)-1)])
#chi2+=((13.35-np.log10(mean_host_mass))**2)/(0.2)
print chi2
return chi2 #nll
print nd_mock
print wt_redmagic[:-5]
import scipy.optimize as op
results = op.minimize(liklihood, np.array([-2.2, 10**1.7]),(wt_redmagic,x, tpoints_rm[:-5]))
results
#plt.plot(tpoints_rm, wt, label = 'My Calculation')
plt.plot(tpoints_rm, wt_redmagic, label = 'Buzzard Mock')
plt.plot(tpoints_rm, wt_analytic(-1.88359, 2.22353827e+03,tpoints_rm, x.to("Mpc").value), label = 'Mathematica Calc')
plt.ylabel(r'$w(\theta)$')
plt.xlabel(r'$\theta \mathrm{[degrees]}$')
plt.loglog();
plt.legend(loc='best')
plt.plot(np.log10(rpoints), np.log10(2.22353827e+03)+(np.log10(rpoints)*(-1.88)))
plt.scatter(np.log10(rpoints), np.log10(xi) )
np.array([v for v in params.values()])
Explanation: The below cell calculates the integrals jointly instead of separately. It doesn't change the results significantly, but is quite slow. I've disabled it for that reason.
End of explanation |
5,631 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow图相关知识简介
参考
Step1: tensorflow/python/framework/ops.py
+ Tensor
Step2: Operator
Step3: 实际上,对变量的读是通过tf.identity算子得到:
python
c = tf.add(b, tf.identity(v))
Variable
Step4: graph.pbtxt部份示意:v1 + 1 | Python Code:
a = tf.constant(1)
b = a * 2
b
b.op
b.consumers()
a.op
a.consumers()
Explanation: TensorFlow图相关知识简介
参考: https://www.tensorflow.org/programmers_guide/graphs
tf.Graph
op, tensor
variable
name_scope, variable_scop, collection
save and restore
0. tf.Graph
tf.Graph: GraphDef => *.pb文件
+ Graph structure: Operator, Tensor-like object, 连接关系
+ Graph collections: metadata
tf.Session():
+ 本地
+ 分布式:master (worker_0)
python
with tf.Session("grpc://example.org:2222"):
pass
状态(Variable) => *.ckpt文件
1. 算子与Tensor
End of explanation
b.op.outputs
list(b.op.inputs)
print(b.op.inputs[0])
print(a)
list(a.op.inputs)
Explanation: tensorflow/python/framework/ops.py
+ Tensor:
- device
- graph
- op
- consumers
- _override_operator:
数学算子:math_op.add 重载 __add__
End of explanation
v = tf.Variable([0])
c = b + v
c
list(c.op.inputs)
c.op.inputs[1].op
list(c.op.inputs[1].op.inputs)
v
Explanation: Operator: NodeDef
device
inputs
outputs
graph
node_def
op_def
run
traceback
Operator和Tensor构成无向图
```python
run
sess.run([b])
```
参考:
+ tf.Tensor: https://www.tensorflow.org/versions/master/api_docs/python/tf/Tensor
+ tf.Operator: https://www.tensorflow.org/versions/master/api_docs/python/tf/Operation
2. 变量
End of explanation
graph_a = tf.Graph()
with graph_a.as_default():
v1 = tf.get_variable("v1", shape=[3], initializer = tf.zeros_initializer)
print(v1)
inc_v1 = v1.assign(v1+1)
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
sess.run(init_op)
inc_v1.op.run()
save_path = saver.save(sess, "./tmp/model.ckpt", write_meta_graph=True)
print("Model saved in path: %s" % save_path)
pb_path = tf.train.write_graph(graph_a.as_graph_def(), "./tmp/", "graph.pbtxt", as_text=True)
print("Graph saved in path: %s" % pb_path)
Explanation: 实际上,对变量的读是通过tf.identity算子得到:
python
c = tf.add(b, tf.identity(v))
Variable: act like Tensor
ops
VariableV2
ResourceVariable
_AsTensor -> g.as_graph_element
value: Identity(variable) -> Tensor
assign
init_op: Assign(self, init_value)
to_proto: VariableDef
参考:https://www.tensorflow.org/versions/master/api_docs/python/tf/Variable
3. collections
collections: 按作用分组
Variable: global_varialbe
更多见tf.GraphKeys
name_scope: Operator, Tensor
variable_scope: Variable
伴生name_scope
python
class Layer:
def build(self):
pass
def call(self, inputs):
pass
参考:https://www.tensorflow.org/programmers_guide/summaries_and_tensorboard
4. 保存与恢复
End of explanation
graph_b = tf.Graph()
with graph_b.as_default():
with tf.Session() as sess:
saver = tf.train.import_meta_graph('./tmp/model.ckpt.meta')
saver.restore(sess, "./tmp/model.ckpt")
print(graph_b.get_operations())
v1 = graph_b.get_tensor_by_name("v1:0")
print("------------------")
print("v1 : %s" % v1.eval(session=sess))
Explanation: graph.pbtxt部份示意:v1 + 1:
bash
node {
name: "add"
op: "Add"
input: "v1/read"
input: "add/y"
attr {
key: "T"
value {
type: DT_FLOAT
}
}
}
End of explanation |
5,632 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Error Estimation for Survey Data
the issue we have is the following
Step1: Closed Form Approximation
of course we could have done this analytically using Normal approximation
Step2: Thats the one-standard deviation range about the estimator. For example
Step3: that's the same relationship as a plot
Step4: For reference, the 2-sided tail probabilites as a function of $z$ (the way to read it is as follows
Step5: Using Monte Carlo
we also need some additional parameters for our Monte Carlo
Step6: We do some intermediate calculations...
Step7: ...and then generate our random numbers...
Step8: ...that we then reduce in one dimension (ie, over that people in the sample) to obtain our estimator for the probas for males and females as well as the difference. On the differences finally we look at the mean (should be zero-ish) and the standard deviation (should be consistent with the numbers above) | Python Code:
N_people = 500
ratio_female = 0.30
proba = 0.40
Explanation: Error Estimation for Survey Data
the issue we have is the following: we are drawing indendent random numbers from a binary distribution of probability $p$ (think: the probability of a certain person liking the color blue) and we have two groups (think: male and female). Those two groups dont necessarily have the same size.
The question we ask is what difference we can expect in the spread of the ex-post estimation of $p$
We first define our population parameters
End of explanation
def the_sd(N, p, r):
N = float(N)
p = float(p)
r = float(r)
return sqrt(1.0/N*(p*(1.0-p))/(r*(1.0-r)))
def sd_func_factory(N,r):
def func(p):
return the_sd(N,p,r)
return func
f = sd_func_factory(N_people, ratio_female)
f2 = sd_func_factory(N_people/2, ratio_female)
Explanation: Closed Form Approximation
of course we could have done this analytically using Normal approximation: we have two independent Normal random variables, both with expectation $p$. The variance of the $male$ variable is $p(1-p)/N_{male}$ and the one of the female one accordingly. The overall variance of the difference (or sum, it does not matter here because they are uncorrelated) is
$$
var = p(1-p)\times \left(\frac{1}{N_{male}} + \frac{1}{N_{female}}\right)
$$
Using the female/male ratio $r$ instead we can write for the standard deviation
$$
sd = \sqrt{var} = \sqrt{\frac{1}{N}\frac{p(1-p)}{r(1-r)}}
$$
meaning that we expect the difference in estimators for male and female of the order of $sd$
End of explanation
p = linspace(0,0.25,5)
f = sd_func_factory(N_people, ratio_female)
f2 = sd_func_factory(N_people/2, ratio_female)
sd = list(map(f, p))
sd2 = list(map(f2, p))
pd.DataFrame(data= {'p':p, 'sd':sd, 'sd2':sd2})
Explanation: Thats the one-standard deviation range about the estimator. For example: if the underlying probability is $0.25=25\%$ then the difference between the estimators for the male and the female group is $4.2\%$ for the full group (sd), or $5.9\%$ for if only half of the people replied (sd2)
End of explanation
p = linspace(0,0.25,50)
sd = list(map(f, p))
sd2 = list(map(f2, p))
plot (p,p, 'k')
plot (p,p-sd, 'g--')
plot (p,p+sd, 'g--')
plot (p,p-sd2, 'r--')
plot (p,p+sd2, 'r--')
grid(b=True, which='major', color='k', linestyle='--')
Explanation: that's the same relationship as a plot
End of explanation
z=linspace(1.,3,100)
plot(z,1. - (norm.cdf(z)-norm.cdf(-z)))
grid(b=True, which='major', color='k', linestyle='--')
plt.title("Probability of being beyond Z (2-sided) vs Z")
Explanation: For reference, the 2-sided tail probabilites as a function of $z$ (the way to read it is as follows: the probability of a Normal distribution being 2 standard deviations away from its mean to either side is about 0.05, or 5%). Saying it the other way round, a two-standard-deviation difference corresponds to about 95% confidence
End of explanation
number_of_tries = 1000
Explanation: Using Monte Carlo
we also need some additional parameters for our Monte Carlo
End of explanation
N_female = int (N_people * ratio_female)
N_male = N_people - N_female
Explanation: We do some intermediate calculations...
End of explanation
data_male = np.random.binomial(n=1, p=proba, size=(number_of_tries, N_male))
data_female = np.random.binomial(n=1, p=proba, size=(number_of_tries, N_female))
Explanation: ...and then generate our random numbers...
End of explanation
proba_male = map(numpy.mean, data_male)
proba_female = map(numpy.mean, data_female)
proba_diff = list((pm-pf) for pm,pf in zip(proba_male, proba_female))
np.mean(proba_diff), np.std(proba_diff)
Explanation: ...that we then reduce in one dimension (ie, over that people in the sample) to obtain our estimator for the probas for males and females as well as the difference. On the differences finally we look at the mean (should be zero-ish) and the standard deviation (should be consistent with the numbers above)
End of explanation |
5,633 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 2
Step1: This is called an "if / else" statement. It basically allows you to create a "fork" in the flow of your program based on a condition that you define. If the condition is True, the "if"-block of code is executed. If the condition is False, the else-block is executed.
Here, our condition is simply the value of the variable construction. Since we defined this variable to quite literally hold the value False (this is a special data type called a Boolean, more on that in a minute), this means that we skip over the if-block and only execute the else-block. If instead we had set construction to True, we would have executed only the if-block.
Let's define Booleans and if / else statements more formally now.
[ Definition ] Booleans
A Boolean ("bool") is a type of variable, like a string, int, or float.
However, a Boolean is much more restricted than these other data types because it is only allowed to take two values: True or False.
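For example, you could store Booleans like this (the variable names and values here are just made up for illustration):
is_raining = True
have_umbrella = False
print type(is_raining)    # <type 'bool'>
print is_raining          # True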
Step2: So what types of conditionals are we allowed to use in an if / else statement? Anything that can be evaluated as True or False! For example, in natural language we might ask the following true/false questions
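In code, a true/false question like "is x bigger than 5?" becomes an expression that evaluates to a bool. A quick sketch (the numbers are arbitrary):
x = 7
print x > 5       # "is x greater than 5?"    -> True
print x == 10     # "is x exactly 10?"        -> False
print x != 10     # "is x different from 10?" -> True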
Step3: Since the line print "Goodbye now!" is not indented, it is NOT considered part of the if-statement.
Therefore, it is always printed regardless of whether the if-statement was True or False.
Step4: Since a and b are not both True, the conditional statement "a and b" as a whole is False. Therefore, we execute the else-block.
Step5: By using "not" before b, we negate its current value (False), making b True. Thus the entire conditional as a whole becomes True, and we execute the if-block.
Step6: "not" only applies to the variable directly in front of it (in this case, a). So here, a becomes False, so the conditional as a whole becomes False.
Step7: When we use parentheses in a conditional, whatever is within the parentheses is evaluated first. So here, the evaluation proceeds like this
Step8: As you would probably expect, when we use "or", we only need a or b to be True in order for the whole conditional to be True.
Step9: Ok, this one is a little bit much! Try to avoid complex conditionals like this if possible, since it can be difficult to tell if they're actually testing what you think they're testing. If you do need to use a complex conditional, use parentheses to make it more obvious which terms will be evaluated first!
Note on indentation
Indentation is very important in Python; it’s how Python tells what code belongs to which control statements
Consecutive lines of code with the same indenting are sometimes called "blocks"
Indenting should only be done in specific circumstances (if statements are one example, and we'll see a few more soon). Indent anywhere else and you'll get an error.
You can indent by however much you want, but you must be consistent. Pick one indentation scheme (e.g. 1 tab per indent level, or 4 spaces) and stick to it.
[ Check yourself! ] if/else practice
Think you got it? In the code block below, write an if/else statement to print a different message depending on whether x is positive or negative.
Step10: 2. Built-in functions
Python provides some useful built-in functions that perform specific tasks. What makes them "built-in"? Simply that you don’t have to "import" anything in order to use them -- they're always available. This is in contrast the the non-built-in functions, which are packaged into modules of similar functions (e.g. "math") that you must import before using. More on this in a minute!
We've already seen some examples of built-in functions, such as print, int(), float(), and str(). Now we'll look at a few more that are particularly useful
Step11: [ Definition ] len()
Description
Step12: [ Definition ] abs()
Description
Step13: [ Definition ] round()
Description
Step14: If you want to learn more built in functions, go here
Step15: [ Definition ] The random module
Description
Step16: 4. Test your understanding | Python Code:
construction = False
print "Turn right onto Main Street"
print "Turn left onto Maple Ave"
if construction:
print "Continue straight on Maple Ave"
print "Turn right onto Cat Lane"
print "Turn left onto Fake Street"
else:
print "Cut through the empty lot to Fake Street"
print "Go straight on Fake Street until house 123"
Explanation: Lesson 2: if / else and Functions
Table of Contents
Conditionals I: The "if / else" statement
Built-in functions
Modules
Test your understanding: practice set 2
1. Conditionals I: The "if / else" statement
Programming is a lot like giving someone instructions or directions. For example, if I wanted to give you directions to my house, I might say...
Turn right onto Main Street
Turn left onto Maple Ave
If there is construction, continue straight on Maple Ave, turn right on Cat Lane, and left on Fake Street; else, cut through the empty lot to Fake Street
Go straight on Fake Street until house 123
The same directions, but in code:
End of explanation
x = 5
if (x > 0):
print "x is positive"
else:
print "x is negative"
Explanation: This is called an "if / else" statement. It basically allows you to create a "fork" in the flow of your program based on a condition that you define. If the condition is True, the "if"-block of code is executed. If the condition is False, the else-block is executed.
Here, our condition is simply the value of the variable construction. Since we defined this variable to quite literally hold the value False (this is a special data type called a Boolean, more on that in a minute), this means that we skip over the if-block and only execute the else-block. If instead we had set construction to True, we would have executed only the if-block.
Let's define Booleans and if / else statements more formally now.
[ Definition ] Booleans
A Boolean ("bool") is a type of variable, like a string, int, or float.
However, a Boolean is much more restricted than these other data types because it is only allowed to take two values: True or False.
In Python, True and False are always capitalized and never in quotes.
Don't think of True and False as words! You can't treat them like you would strings. To Python, they're actually interpreted as the numbers 1 and 0, respectively.
Booleans are most often used to create the "conditional statements" used in if / else statements and loops.
[ Definition ] The if / else statement
Purpose: creates a fork in the flow of the program based on whether a conditional statement is True or False.
Syntax:
if (conditional statement):
this code is executed
else:
this code is executed
Notes:
Based on the Boolean (True / False) value of a conditional statement, either executes the if-block or the else-block
The "blocks" are indicated by indentation.
The else-block is optional.
Colons are required after the if condition and after the else.
All code that is part of the if or else blocks must be indented.
Example:
End of explanation
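Before moving on, here is a tiny sketch (not part of the original examples) of the earlier point that True and False are really just the numbers 1 and 0 under the hood:

```python
print True + True    # prints 2, because True acts like 1
print False * 10     # prints 0, because False acts like 0
```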
a = True
if a:
print "Hooray, a was true!"
a = True
if a:
print "Hooray, a was true!"
print "Goodbye now!"
a = False
if a:
print "Hooray, a was true!"
print "Goodbye now!"
Explanation: So what types of conditionals are we allowed to use in an if / else statement? Anything that can be evaluated as True or False! For example, in natural language we might ask the following true/false questions:
is a True?
is a less than b?
is a equal to b?
is a equal to "ATGCTG"?
is (a greater than b) and (b greater than c)?
To ask these questions in our code, we need to use a special set of symbols/words. These are called the logical operators, because they allow us to form logical (true/false) statements. Below is a chart that lists the most common logical operators:
Most of these are pretty intuitive. The big one people tend to mess up on in the beginning is ==. Just remember: a single equals sign means assignment, and a double equals means is the same as/is equal to. You will NEVER use a single equals sign in a conditional statement because assignment is not allowed in a conditional! Only True / False questions are allowed!
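In case the operator chart doesn't render here, below is a small illustrative sketch of a few of these operators in action (the values are chosen just for this example):

```python
a = 10
b = 3
print a == b    # False ("is equal to")
print a != b    # True ("is not equal to")
print a > b     # True
print a <= b    # False
```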
if / else statements in action
Below are several examples of code using if / else statements. For each code block, first try to guess what the output will be, and then run the block to see the answer.
End of explanation
a = True
b = False
if a and b:
print "Apple"
else:
print "Banana"
Explanation: Since the line print "Goodbye now!" is not indented, it is NOT considered part of the if-statement.
Therefore, it is always printed regardless of whether the if-statement was True or False.
End of explanation
a = True
b = False
if a and not b:
print "Apple"
else:
print "Banana"
Explanation: Since a and b are not both True, the conditional statement "a and b" as a whole is False. Therefore, we execute the else-block.
End of explanation
a = True
b = False
if not a and b:
print "Apple"
else:
print "Banana"
Explanation: By using "not" before b, we negate its current value (False), making b True. Thus the entire conditional as a whole becomes True, and we execute the if-block.
End of explanation
a = True
b = False
if not (a and b):
print "Apple"
else:
print "Banana"
Explanation: "not" only applies to the variable directly in front of it (in this case, a). So here, a becomes False, so the conditional as a whole becomes False.
End of explanation
a = True
b = False
if a or b:
print "Apple"
else:
print "Banana"
Explanation: When we use parentheses in a conditional, whatever is within the parentheses is evaluated first. So here, the evaluation proceeds like this:
First Python decides how to evaluate (a and b). As we saw above, this must be False because a and b are not both True.
Then Python applies the "not", which flips that False into a True. So then the final answer is True!
End of explanation
cat = "Mittens"
if cat == "Mittens":
print "Awwww"
else:
print "Get lost, cat"
a = 5
b = 10
if (a == 5) and (b > 0):
print "Apple"
else:
print "Banana"
a = 5
b = 10
if ((a == 1) and (b > 0)) or (b == (2 * a)):
print "Apple"
else:
print "Banana"
Explanation: As you would probably expect, when we use "or", we only need a or b to be True in order for the whole conditional to be True.
End of explanation
x = 6 * -5 - 4 * 2 + -7 * -8 + 3
# ******add your code here!*********
Explanation: Ok, this one is a little bit much! Try to avoid complex conditionals like this if possible, since it can be difficult to tell if they're actually testing what you think they're testing. If you do need to use a complex conditional, use parentheses to make it more obvious which terms will be evaluated first!
Note on indentation
Indentation is very important in Python; it’s how Python tells what code belongs to which control statements
Consecutive lines of code with the same indenting are sometimes called "blocks"
Indenting should only be done in specific circumstances (if statements are one example, and we'll see a few more soon). Indent anywhere else and you'll get an error.
You can indent by however much you want, but you must be consistent. Pick one indentation scheme (e.g. 1 tab per indent level, or 4 spaces) and stick to it.
[ Check yourself! ] if/else practice
Think you got it? In the code block below, write an if/else statement to print a different message depending on whether x is positive or negative.
End of explanation
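If you get stuck on the practice above, here is one possible solution (just a sketch; try writing your own version first):

```python
x = 6 * -5 - 4 * 2 + -7 * -8 + 3
if x >= 0:
    print "x is positive (or zero)"
else:
    print "x is negative"
```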
name = raw_input("Your name: ")
print "Hi there", name, "!"
age = int(raw_input("Your age: ")) #convert input to an int
print "Wow, I can't believe you're only", age
Explanation: 2. Built-in functions
Python provides some useful built-in functions that perform specific tasks. What makes them "built-in"? Simply that you don’t have to "import" anything in order to use them -- they're always available. This is in contrast the the non-built-in functions, which are packaged into modules of similar functions (e.g. "math") that you must import before using. More on this in a minute!
We've already seen some examples of built-in functions, such as print, int(), float(), and str(). Now we'll look at a few more that are particularly useful: raw_input(), len(), abs(), and round().
[ Definition ] raw_input()
Description: A built-in function that allows user input to be read from the terminal.
Syntax:
raw_input("Optional prompt: ")
Notes:
The execution of the code will pause when it reaches the raw_input() function and wait for the user to input something.
The input ends when the user hits "enter".
The user input that is read by raw_input() can then be stored in a variable and used in the code.
Important: This function always returns a string, even if the user entered a number! You must convert the input with int() or float() if you expect a number input.
Examples:
End of explanation
print len("cat")
print len("hi there")
seqLength = len("ATGGTCGCAT")
print seqLength
Explanation: [ Definition ] len()
Description: Returns the length of a string (also works on certain data structures). Doesn’t work on numerical types.
Syntax:
len(string)
Examples:
End of explanation
print abs(-10)
print abs(int("-10"))
positiveNum = abs(-23423)
print positiveNum
Explanation: [ Definition ] abs()
Description: Returns the absolute value of a numerical value. Doesn't accept strings.
Syntax:
abs(number)
Examples:
End of explanation
print round(10.12345)
print round(10.12345, 2)
print round(10.9999, 2)
Explanation: [ Definition ] round()
Description: Rounds a float to the indicated number of decimal places. If no number of decimal places is indicated, rounds to zero decimal places.
Syntax:
round(someNumber, numDecimalPlaces)
Examples:
End of explanation
import math
print math.sqrt(4)
print math.log10(1000)
print math.sin(1)
print math.cos(0)
Explanation: If you want to learn more built in functions, go here: https://docs.python.org/2/library/functions.html
3. Modules
Modules are groups of additional functions that come with Python, but unlike the built-in functions we just saw, these functions aren't accessible until you import them. Why aren’t all functions just built-in? Basically, it improves speed and memory usage to only import what is needed (there are some other considerations, too, but we won't get into it here).
The functions in a module are usually all related to a certain kind of task or subject area. For example, there are modules for doing advanced math, generating random numbers, running code in parallel, accessing your computer's file system, and so on. We’ll go over just two modules today: math and random. See the full list here: https://docs.python.org/2.7/py-modindex.html
How to use a module
Using a module is very simple. First you import the module. Add this to the top of your script:
import <moduleName>
Then, to use a function of the module, you prefix the function name with the name of the module (using a period between them):
<moduleName>.<functionName>
(Replace <moduleName> with the name of the module you want, and <functionName> with the name of a function in the module.)
The <moduleName>.<functionName> syntax is needed so that Python knows where the function comes from. Sometimes, especially when using user-created modules, there can be a function with the same name as a function that's already part of Python. Using this syntax prevents functions from overwriting each other or causing ambiguity.
[ Definition ] The math module
Description: Contains many advanced math-related functions.
See full list of functions here: https://docs.python.org/2/library/math.html
Examples:
End of explanation
import random
print random.random() # Return a random floating point number in the range [0.0, 1.0)
print random.randint(0, 10) # Return a random integer between the specified range (inclusive)
print random.gauss(5, 2) # Draw from the normal distribution given a mean and standard deviation
# this code will output something different every time you run it!
Explanation: [ Definition ] The random module
Description: contains functions for generating random numbers.
See full list of functions here: https://docs.python.org/2/library/random.html
Examples:
End of explanation
# RUN THIS BLOCK FIRST TO SET UP VARIABLES!
a = True
b = False
x = 2
y = -2
cat = "Mittens"
print a
print (not a)
print (a == b)
print (a != b)
print (x == y)
print (x > y)
print (x = 2)
print (a and b)
print (a and not b)
print (a or b)
print (not b or a)
print not (b or a)
print (not b) or a
print (not b and a)
print not (b and a)
print (not b) and a
print (x == abs(y))
print len(cat)
print cat + x
print cat + str(x)
print float(x)
print ("i" in cat)
print ("g" in cat)
print ("Mit" in cat)
if (x % 2) == 0:
print "x is even"
else:
print "x is odd"
if (x - 4*y) < 0:
print "Invalid!"
else:
print "Banana"
if "Mit" in cat:
print "Hey Mits!"
else:
print "Where's Mits?"
x = "C"
if x == "A" or "B":
print "yes"
else:
print "no"
x = "C"
if (x == "A") or (x == "B"):
print "yes"
else:
print "no"
Explanation: 4. Test your understanding: practice set 2
For the following blocks of code, first try to guess what the output will be, and then run the code yourself. These examples may introduce some ideas and common pitfalls that were not explicitly covered in the text above, so be sure to complete this section.
The first block below holds the variables that will be used in the problems. Since variables are shared across blocks in Jupyter notebooks, you just need to run this block once and then those variables can be used in any other code block.
End of explanation |
5,634 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spectra in (optical) Astronomy
Here we introduce a simple spectrum, the example taken from the optical, where it all started.
To quote Roger Wesson, the author of this particular spectrum
Step1: Reading the data
The astropy package has an I/O package to simplify reading and writing a number of popular formats common in astronomy.
Step2: We obtained a very simple ascii table with two columns, wavelength and intensity, of a spectrum at a particular point in the sky. The astropy module has some classes to deal with such ascii tables. Although this is not a very reliable way to store data, it is very simple and portable. Almost any software will be able to deal with such ascii files. Your favorite spreadsheet program probably prefers "csv" (comma-separated-values), where the space is replaced by a comma.
Step3: Plotting the basics
Step4: There is a very strong line around 4865 Angstrom dominating this plot, and many weaker lines. If we rescale this plot and look what is near the baseline, we get to see these weaker lines much better | Python Code:
%matplotlib inline
# python 2-3 compatibility
from __future__ import print_function
Explanation: Spectra in (optical) Astronomy
Here we introduce a simple spectrum, the example taken from the optical, where it all started.
To quote Roger Wesson, the author of this particular spectrum:
The sample spectrum was extracted from a FORS2 observation of a planetary nebula with strong recombination lines. The extract covered Hβ, some collisionally excited lines of [Ar IV], and a number of recombination lines of O II, N II, N III and He II. I chose the spectrum so that it would contain some strong isolated lines and some weak blended lines, such that codes could be compared on the strong lines where user input should make minimal difference, and on the weak lines where subjective considerations come into play.
This particular spectrum was used in a comparison of codes that identify spectral lines
End of explanation
from astropy.io import ascii
Explanation: Reading the data
The astropy package has an I/O package to simplify reading and writing a number of popular formats common in astronomy.
End of explanation
data = ascii.read('../data/extract.dat')
print(data)
x = data['col1']
y = data['col2']
Explanation: We obtained a very simple ascii table with two columns, wavelength and intensity, of a spectrum at a particular point in the sky. The astropy module has some classes to deal with such ascii tables. Although this is not a very reliable way to store data, it is very simple and portable. Almost any software will be able to deal with such ascii files. Your favorite spreadsheet program probably prefers "csv" (comma-separated-values), where the space is replaced by a comma.
End of explanation
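As an aside, the same table could be written back out in CSV form; a minimal sketch using astropy's ascii writer (the output filename here is just an example):

```python
ascii.write(data, 'extract.csv', format='csv')
```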
import matplotlib.pyplot as plt
plt.plot(x,y)
Explanation: Plotting the basics
End of explanation
plt.plot(x,y)
z = y*0.0
plt.plot(x,z)
plt.ylim(-0.5,2.0)
Explanation: There is a very strong line around 4865 Angstrom dominating this plot, and many weaker lines. If we rescale this plot and look what is near the baseline, we get to see these weaker lines much better:
End of explanation |
5,635 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using the graph from figure 10.1 from the textbook (http
Step1: http
Step2: Let's play with some algorithms in class | Python Code:
#Example Small social network as a connection matrix
sc1 = ([(0, 1, 1, 0, 0, 0, 0),
(1, 0, 1, 1, 0, 0, 0),
(1, 1, 0, 0, 0, 0, 0),
(0, 1, 0, 0, 1, 1, 1),
(0, 0, 0, 1, 0, 1, 0),
(0, 0, 0, 1, 1, 0, 1),
(0, 0, 0, 1, 0, 1, 0)])
Explanation: Using the graph from figure 10.1 from the textbook (http://infolab.stanford.edu/~ullman/mmds/book.pdf) to demonstrate
<img src="images/smallsocialgraph.png">
End of explanation
import networkx as nx
G1 = nx.Graph()
G1.add_nodes_from(['A','B','C','D','E','F','G'])
G1.nodes()
G1.add_edges_from([('A','B'),('A','C')])
G1.add_edges_from([('B','C'),('B','D')])
G1.add_edges_from([('D','E'),('D','F'),('D','G')])
G1.add_edges_from([('E','F')])
G1.add_edges_from([('F','G')])
G1.number_of_edges()
G1.edges()
G1.neighbors('D')
import matplotlib.pyplot as plt
#drawing the graph
%matplotlib inline
nx.draw(G1)
pos=nx.spring_layout(G1)
nx.draw(G1,pos,node_color='y', edge_color='r', node_size=600, width=3.0)
nx.draw_networkx_labels(G1,pos,color='W',font_size=20,font_family='sans-serif')
#https://networkx.github.io/documentation/latest/reference/generated/networkx.drawing.nx_pylab.draw_networkx.html
#Some parameters to play with
Explanation: http://networkx.github.io/documentation/latest/install.html
pip install networkx
to install networkx into your python modules
pip install networkx --upgrade
to upgrade your previously installed version
You may have to restart your IPython kernel to pick up the new module
Test your installation by running the next code cell locally
A full tutorial is available at:
http://networkx.github.io/documentation/latest/tutorial/index.html
There are other packages for graph manipulation for python:
python-igraph (http://igraph.org/python/#pydoc1)
Graph-tool (https://graph-tool.skewed.de)
I picked networkx because it took little effort to install.
End of explanation
#Enumeration of all cliques
list(nx.enumerate_all_cliques(G1))
list(nx.cliques_containing_node(G1,'A'))
list(nx.cliques_containing_node(G1,'D'))
Explanation: Let's play with some algorithms in class:
https://networkx.github.io/documentation/latest/reference/algorithms.html
Social Graph analysis algorithms
Betweenness:
https://networkx.github.io/documentation/latest/reference/algorithms.centrality.html#betweenness
EigenVector centrality
https://networkx.github.io/documentation/latest/reference/algorithms.centrality.html#eigenvector
Clustering per node
https://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.cluster.clustering.html
All Shortest Paths
https://networkx.github.io/documentation/latest/reference/generated/networkx.algorithms.shortest_paths.generic.all_shortest_paths.html
Laplacian matrix
https://networkx.github.io/documentation/latest/reference/generated/networkx.linalg.laplacianmatrix.laplacian_matrix.html#networkx.linalg.laplacianmatrix.laplacian_matrix
Students should pair up and work on demonstrating a networkx algorithm
Create a new IPython notebook and submit it
End of explanation |
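For instance, here is a quick sketch of a few of the algorithms listed above applied to the graph G1 built earlier (only an illustration, not the assigned demonstration):

```python
print(nx.betweenness_centrality(G1))   # B and D should score highest; they bridge the two clusters
print(nx.eigenvector_centrality(G1))
print(nx.clustering(G1))                # clustering coefficient per node
```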
5,636 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First, some code. Scroll down.
Step1: Initialize some feature-locations
Step2: Issue
Step3: We're testing L2 in isolation, so these "A", "B", etc. patterns are L4 representations, i.e. "feature-locations".
Train an array of 5 columns to recognize these objects, then show it Object 1. It will randomly move its sensors to different feature-locations on the object. It will never put two sensors on the same feature-location at the same time.
Step4: Print what just happened
Step5: Each column is activating a union of cells. Column 2 sees input G, so it knows this isn't "Object 2", but multiple other columns are including "Object 2" in their unions, so Column 2's voice gets drowned out.
How does this vary with number of columns? | Python Code:
import itertools
import random
from htmresearch.algorithms.column_pooler import ColumnPooler
INPUT_SIZE = 10000
def createFeatureLocationPool(size=10):
duplicateFound = False
for _ in xrange(5):
candidateFeatureLocations = [frozenset(random.sample(xrange(INPUT_SIZE), 40))
for featureNumber in xrange(size)]
# Sanity check that they're pretty unique.
duplicateFound = False
for pattern1, pattern2 in itertools.combinations(candidateFeatureLocations, 2):
if len(pattern1 & pattern2) >= 5:
duplicateFound = True
break
if not duplicateFound:
break
if duplicateFound:
raise ValueError("Failed to generate unique feature-locations")
featureLocationPool = {}
for i, featureLocation in enumerate(candidateFeatureLocations):
if i < 26:
name = chr(ord('A') + i)
else:
name = "Feature-location %d" % i
featureLocationPool[name] = featureLocation
return featureLocationPool
def getLateralInputs(columnPoolers):
cellsPerColumnPooler = columnPoolers[0].numberOfCells()
assert all(column.numberOfCells() == cellsPerColumnPooler
for column in columnPoolers)
inputsByColumn = []
for recipientColumnIndex in xrange(len(columnPoolers)):
columnInput = []
for inputColumnIndex, column in enumerate(columnPoolers):
if inputColumnIndex == recipientColumnIndex:
continue
elif inputColumnIndex < recipientColumnIndex:
relativeIndex = inputColumnIndex
elif inputColumnIndex > recipientColumnIndex:
relativeIndex = inputColumnIndex - 1
offset = relativeIndex * cellsPerColumnPooler
columnInput.extend(cell + offset
for cell in column.getActiveCells())
inputsByColumn.append(columnInput)
return inputsByColumn
def getColumnPoolerParams(inputWidth, numColumns):
cellCount = 2048
return {
"inputWidth": inputWidth,
"lateralInputWidth": cellCount * (numColumns - 1),
"columnDimensions": (cellCount,),
"activationThresholdDistal": 13,
"initialPermanence": 0.41,
"connectedPermanence": 0.50,
"minThresholdProximal": 10,
"minThresholdDistal": 10,
"maxNewProximalSynapseCount": 20,
"maxNewDistalSynapseCount": 20,
"permanenceIncrement": 0.10,
"permanenceDecrement": 0.10,
"predictedSegmentDecrement": 0.0,
"synPermProximalInc": 0.1,
"synPermProximalDec": 0.001,
"initialProximalPermanence": 0.6,
"seed": 42,
"numActiveColumnsPerInhArea": 40,
"maxSynapsesPerProximalSegment": inputWidth,
}
def experiment(objects, numColumns):
#
# Initialize
#
params = getColumnPoolerParams(INPUT_SIZE, numColumns)
columnPoolers = [ColumnPooler(**params) for _ in xrange(numColumns)]
#
# Learn
#
columnObjectRepresentations = [{} for _ in xrange(numColumns)]
for objectName, objectFeatureLocations in objects.iteritems():
for featureLocationName in objectFeatureLocations:
pattern = featureLocationPool[featureLocationName]
for _ in xrange(10):
lateralInputs = getLateralInputs(columnPoolers)
for i, column in enumerate(columnPoolers):
column.compute(feedforwardInput=pattern,
lateralInput = lateralInputs[i],
learn=True)
for i, column in enumerate(columnPoolers):
columnObjectRepresentations[i][objectName] = frozenset(column.getActiveCells())
column.reset()
objectName = "Object 1"
objectFeatureLocations = objects[objectName]
success = False
featureLocationLog = []
activeCellsLog = []
for attempt in xrange(60):
featureLocations = random.sample(objectFeatureLocations, numColumns)
featureLocationLog.append(featureLocations)
# Give the feedforward input 3 times so that the lateral inputs have time to spread.
for _ in xrange(3):
lateralInputs = getLateralInputs(columnPoolers)
for i, column in enumerate(columnPoolers):
pattern = featureLocationPool[featureLocations[i]]
column.compute(feedforwardInput=pattern,
lateralInput=lateralInputs[i],
learn=False)
allActiveCells = [set(column.getActiveCells()) for column in columnPoolers]
activeCellsLog.append(allActiveCells)
if all(set(column.getActiveCells()) == columnObjectRepresentations[i][objectName]
for i, column in enumerate(columnPoolers)):
success = True
print "Converged after %d steps" % (attempt + 1)
break
if not success:
print "Failed to converge after %d steps" % (attempt + 1)
return (objectName, columnPoolers, featureLocationLog, activeCellsLog, columnObjectRepresentations)
Explanation: First, some code. Scroll down.
End of explanation
featureLocationPool = createFeatureLocationPool(size=8)
Explanation: Initialize some feature-locations
End of explanation
objects = {"Object 1": set(["A", "B", "C", "D", "E", "F", "G"]),
"Object 2": set(["A", "B", "C", "D", "E", "F", "H"]),
"Object 3": set(["A", "B", "C", "D", "E", "G", "H"]),
"Object 4": set(["A", "B", "C", "D", "F", "G", "H"]),
"Object 5": set(["A", "B", "C", "E", "F", "G", "H"]),
"Object 6": set(["A", "B", "D", "E", "F", "G", "H"]),
"Object 7": set(["A", "C", "D", "E", "F", "G", "H"]),
"Object 8": set(["B", "C", "D", "E", "F", "G", "H"])}
Explanation: Issue: One column spots a difference, but its voice is drowned out
Create 8 objects, each with 7 feature-locations. Each object is 1 different from each other object.
End of explanation
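A small sanity check of that claim (a sketch that reuses the itertools import from the code above):

```python
# every pair of objects should differ by exactly one feature-location each,
# so the symmetric difference of their feature sets has size 2
for (name1, obj1), (name2, obj2) in itertools.combinations(objects.items(), 2):
    assert len(obj1 ^ obj2) == 2
```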
(testObject,
columnPoolers,
featureLocationLog,
activeCellsLog,
columnObjectRepresentations) = experiment(objects, numColumns=5)
Explanation: We're testing L2 in isolation, so these "A", "B", etc. patterns are L4 representations, i.e. "feature-locations".
Train an array of 5 columns to recognize these objects, then show it Object 1. It will randomly move its sensors to different feature-locations on the object. It will never put two sensors on the same feature-location at the same time.
End of explanation
columnContentsLog = []
for timestep, allActiveCells in enumerate(activeCellsLog):
columnContents = []
for columnIndex, activeCells in enumerate(allActiveCells):
contents = {}
for objectName, objectCells in columnObjectRepresentations[columnIndex].iteritems():
containsRatio = len(activeCells & objectCells) / float(len(objectCells))
if containsRatio >= 0.20:
contents[objectName] = containsRatio
columnContents.append(contents)
columnContentsLog.append(columnContents)
for timestep in xrange(len(featureLocationLog)):
allFeedforwardInputs = featureLocationLog[timestep]
allActiveCells = activeCellsLog[timestep]
allColumnContents = columnContentsLog[timestep]
print "Step %d" % timestep
for columnIndex in xrange(len(allFeedforwardInputs)):
feedforwardInput = allFeedforwardInputs[columnIndex]
activeCells = allActiveCells[columnIndex]
columnContents = allColumnContents[columnIndex]
print "Column %d: Input: %s, Active cells: %d %s" % (columnIndex,
allFeedforwardInputs[columnIndex],
len(activeCells),
columnContents)
print
Explanation: Print what just happened
End of explanation
for numColumns in xrange(2, 8):
print "With %d columns:" % numColumns
experiment(objects, numColumns)
print
Explanation: Each column is activating a union of cells. Column 2 sees input G, so it knows this isn't "Object 2", but multiple other columns are including "Object 2" in their unions, so Column 2's voice gets drowned out.
How does this vary with number of columns?
End of explanation |
5,637 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Plotting with matplotlib
You can show matplotlib figures directly in the notebook by using the %matplotlib notebook and %matplotlib inline magic commands.
%matplotlib notebook provides an interactive environment.
Step1: Let's see how to make a plot without using the scripting layer.
Step2: We can use html cell magic to display the image.
Step3: Scatterplots
Step4: Line Plots
Step5: Let's try working with dates!
Step6: Let's try using pandas
Step7: Bar Charts | Python Code:
%matplotlib notebook
import matplotlib as mpl
mpl.get_backend()
import matplotlib.pyplot as plt
plt.plot?
# because the default is the line style '-',
# nothing will be shown if we only pass in one point (3,2)
plt.plot(3, 2)
# we can pass in '.' to plt.plot to indicate that we want
# the point (3,2) to be indicated with a marker '.'
plt.plot(3, 2, '.')
Explanation: Basic Plotting with matplotlib
You can show matplotlib figures directly in the notebook by using the %matplotlib notebook and %matplotlib inline magic commands.
%matplotlib notebook provides an interactive environment.
End of explanation
# First let's set the backend without using mpl.use() from the scripting layer
from matplotlib.backends.backend_agg import FigureCanvasAgg
from matplotlib.figure import Figure
# create a new figure
fig = Figure()
# associate fig with the backend
canvas = FigureCanvasAgg(fig)
# add a subplot to the fig
ax = fig.add_subplot(111)
# plot the point (3,2)
ax.plot(3, 2, '.')
# save the figure to test.png
# you can see this figure in your Jupyter workspace afterwards by going to
# https://hub.coursera-notebooks.org/
canvas.print_png('test.png')
Explanation: Let's see how to make a plot without using the scripting layer.
End of explanation
%%html
<img src='test.png' />
# create a new figure
plt.figure()
# plot the point (3,2) using the circle marker
plt.plot(3, 2, 'o')
# get the current axes
ax = plt.gca()
# Set axis properties [xmin, xmax, ymin, ymax]
ax.axis([0,6,0,10])
# create a new figure
plt.figure()
# plot the point (1.5, 1.5) using the circle marker
plt.plot(1.5, 1.5, 'o')
# plot the point (2, 2) using the circle marker
plt.plot(2, 2, 'o')
# plot the point (2.5, 2.5) using the circle marker
plt.plot(2.5, 2.5, 'o')
# get current axes
ax = plt.gca()
# get all the child objects the axes contains
ax.get_children()
Explanation: We can use html cell magic to display the image.
End of explanation
import numpy as np
x = np.array([1,2,3,4,5,6,7,8])
y = x
plt.figure()
plt.scatter(x, y) # similar to plt.plot(x, y, '.'), but the underlying child objects in the axes are not Line2D
import numpy as np
x = np.array([1,2,3,4,5,6,7,8])
y = x
# create a list of colors for each point to have
# ['green', 'green', 'green', 'green', 'green', 'green', 'green', 'red']
colors = ['green']*(len(x)-1)
colors.append('red')
plt.figure()
# plot the point with size 100 and chosen colors
plt.scatter(x, y, s=100, c=colors)
# convert the two lists into a list of pairwise tuples
zip_generator = zip([1,2,3,4,5], [6,7,8,9,10])
print(list(zip_generator))
# the above prints:
# [(1, 6), (2, 7), (3, 8), (4, 9), (5, 10)]
zip_generator = zip([1,2,3,4,5], [6,7,8,9,10])
# The single star * unpacks a collection into positional arguments
print(*zip_generator)
# the above prints:
# (1, 6) (2, 7) (3, 8) (4, 9) (5, 10)
# use zip to convert 5 tuples with 2 elements each to 2 tuples with 5 elements each
print(list(zip((1, 6), (2, 7), (3, 8), (4, 9), (5, 10))))
# the above prints:
# [(1, 2, 3, 4, 5), (6, 7, 8, 9, 10)]
zip_generator = zip([1,2,3,4,5], [6,7,8,9,10])
# let's turn the data back into 2 lists
x, y = zip(*zip_generator) # This is like calling zip((1, 6), (2, 7), (3, 8), (4, 9), (5, 10))
print(x)
print(y)
# the above prints:
# (1, 2, 3, 4, 5)
# (6, 7, 8, 9, 10)
plt.figure()
# plot a data series 'Tall students' in red using the first two elements of x and y
plt.scatter(x[:2], y[:2], s=100, c='red', label='Tall students')
# plot a second data series 'Short students' in blue using the last three elements of x and y
plt.scatter(x[2:], y[2:], s=100, c='blue', label='Short students')
# add a label to the x axis
plt.xlabel('The number of times the child kicked a ball')
# add a label to the y axis
plt.ylabel('The grade of the student')
# add a title
plt.title('Relationship between ball kicking and grades')
# add a legend (uses the labels from plt.scatter)
plt.legend()
# add the legend to loc=4 (the lower right hand corner), also gets rid of the frame and adds a title
plt.legend(loc=4, frameon=False, title='Legend')
# get children from current axes (the legend is the second to last item in this list)
plt.gca().get_children()
# get the legend from the current axes
legend = plt.gca().get_children()[-2]
# you can use get_children to navigate through the child artists
legend.get_children()[0].get_children()[1].get_children()[0].get_children()
# import the artist class from matplotlib
from matplotlib.artist import Artist
def rec_gc(art, depth=0):
if isinstance(art, Artist):
# increase the depth for pretty printing
print(" " * depth + str(art))
for child in art.get_children():
rec_gc(child, depth+2)
# Call this function on the legend artist to see what the legend is made up of
rec_gc(plt.legend())
Explanation: Scatterplots
End of explanation
import numpy as np
linear_data = np.array([1,2,3,4,5,6,7,8])
exponential_data = linear_data**2
plt.figure()
# plot the linear data and the exponential data
plt.plot(linear_data, '-o', exponential_data, '-o')
# plot another series with a dashed red line
plt.plot([22,44,55], '--r')
plt.xlabel('Some data')
plt.ylabel('Some other data')
plt.title('A title')
# add a legend with legend entries (because we didn't have labels when we plotted the data series)
plt.legend(['Baseline', 'Competition', 'Us'])
# fill the area between the linear data and exponential data
plt.gca().fill_between(range(len(linear_data)),
linear_data, exponential_data,
facecolor='blue',
alpha=0.25)
Explanation: Line Plots
End of explanation
plt.figure()
observation_dates = np.arange('2017-01-01', '2017-01-09', dtype='datetime64[D]')
plt.plot(observation_dates, linear_data, '-o', observation_dates, exponential_data, '-o')
Explanation: Let's try working with dates!
End of explanation
import pandas as pd
plt.figure()
observation_dates = np.arange('2017-01-01', '2017-01-09', dtype='datetime64[D]')
observation_dates = map(pd.to_datetime, observation_dates) # trying to plot a map will result in an error
plt.plot(observation_dates, linear_data, '-o', observation_dates, exponential_data, '-o')
plt.figure()
observation_dates = np.arange('2017-01-01', '2017-01-09', dtype='datetime64[D]')
observation_dates = list(map(pd.to_datetime, observation_dates)) # convert the map to a list to get rid of the error
plt.plot(observation_dates, linear_data, '-o', observation_dates, exponential_data, '-o')
x = plt.gca().xaxis
# rotate the tick labels for the x axis
for item in x.get_ticklabels():
item.set_rotation(45)
# adjust the subplot so the text doesn't run off the image
plt.subplots_adjust(bottom=0.25)
ax = plt.gca()
ax.set_xlabel('Date')
ax.set_ylabel('Units')
ax.set_title('Exponential vs. Linear performance')
# you can add mathematical expressions in any text element
ax.set_title("Exponential ($x^2$) vs. Linear ($x$) performance")
Explanation: Let's try using pandas
End of explanation
plt.figure()
xvals = range(len(linear_data))
plt.bar(xvals, linear_data, width = 0.3)
new_xvals = []
# plot another set of bars, adjusting the new xvals to make up for the first set of bars plotted
for item in xvals:
new_xvals.append(item+0.3)
plt.bar(new_xvals, exponential_data, width = 0.3 ,color='red')
from random import randint
linear_err = [randint(0,15) for x in range(len(linear_data))]
# This will plot a new set of bars with errorbars using the list of random error values
plt.bar(xvals, linear_data, width = 0.3, yerr=linear_err)
# stacked bar charts are also possible
plt.figure()
xvals = range(len(linear_data))
plt.bar(xvals, linear_data, width = 0.3, color='b')
plt.bar(xvals, exponential_data, width = 0.3, bottom=linear_data, color='r')
# or use barh for horizontal bar charts
plt.figure()
xvals = range(len(linear_data))
plt.barh(xvals, linear_data, height = 0.3, color='b')
plt.barh(xvals, exponential_data, height = 0.3, left=linear_data, color='r')
import matplotlib.pyplot as plt
import numpy as np
plt.figure()
languages =['Python', 'SQL', 'Java', 'C++', 'JavaScript']
pos = np.arange(len(languages))
popularity = [56, 39, 34, 34, 29]
plt.bar(pos, popularity, align='center')
plt.xticks(pos, languages)
plt.ylabel('% Popularity')
plt.title('Top 5 Languages for Math & Data \nby % popularity on Stack Overflow', alpha=0.8)
#TODO: remove all the ticks (both axes), and tick labels on the Y axis
ax = plt.gca()
ax.set_yticklabels([])
ax.tick_params(top="off", bottom="off", left="off", right="off", labelleft="off", labelbottom="on")
plt.show()
import matplotlib.pyplot as plt
import numpy as np
plt.figure()
languages =['Python', 'SQL', 'Java', 'C++', 'JavaScript']
pos = np.arange(len(languages))
popularity = [56, 39, 34, 34, 29]
plt.bar(pos, popularity, align='center')
plt.xticks(pos, languages)
plt.ylabel('% Popularity')
plt.title('Top 5 Languages for Math & Data \nby % popularity on Stack Overflow', alpha=0.8)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.show()
import matplotlib.pyplot as plt
import numpy as np
plt.figure()
languages =['Python', 'SQL', 'Java', 'C++', 'JavaScript']
pos = np.arange(len(languages))
popularity = [56, 39, 34, 34, 29]
# change the bar colors to be less bright blue
bars = plt.bar(pos, popularity, align='center', linewidth=0, color='lightslategrey')
# make one bar, the python bar, a contrasting color
bars[0].set_color('#1F77B4')
# soften all labels by turning grey
plt.xticks(pos, languages, alpha=0.8)
plt.ylabel('% Popularity', alpha=0.8)
plt.title('Top 5 Languages for Math & Data \nby % popularity on Stack Overflow', alpha=0.8)
# remove all the ticks (both axes), and tick labels on the Y axis
plt.tick_params(top='off', bottom='off', left='off', right='off', labelleft='off', labelbottom='on')
# remove the frame of the chart
for spine in plt.gca().spines.values():
spine.set_visible(False)
plt.show()
Explanation: Bar Charts
End of explanation |
5,638 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
02 - Example - Handling Duplicates and Missing Values
This notebook presents how to eliminate duplicates and solve the missing values.
By
Step1: Load the dataset that will be used
Using the data from the previous unit (07-data-diagnostics)
Step2: Dealing with found issues
Time to deal with the issues previously found.
1) Duplicated data
Drop the duplicated rows (which have all column values the same), check the YPUQAPSOYJ row above. Let us use the drop_duplicates to help us with that by keeping only the first of the duplicated rows.
Step3: You could also consider a duplicate a row with the same index and same age only by setting data.drop_duplicates(subset=['age'], keep='first'), but in our case it would lead to the same result. Note that in general it is not a recommended programming practice to use the argument 'inplace=True' (e.g., data.drop_duplicates(subset=['age'], keep='first', inplace=True)) --> may lead to unnexpected results.
2) Missing Values
This one of the major, if not the biggest, data problems that we faced with. There are several ways to deal with them, e.g.
Step4: That is not terrible to the point of fully dropping a column/feature due to the amount of missing values. Nevertheless, the action to do that would be data.drop('age', axis=1). The missing_data variable is our mask for the missing values
Step5: Drop rows with missing values
This can be done with dropna(), for instance
Step6: Fill missing values with a specific value (e.g., 0)
This can be done with fillna(), for instance
Step7: So, what happened with our dataset? Let's take a look where we had missing values before
Step8: Looks like what we did was not the most appropriate. For instance, we create a new category in the gender column
Step9: Fill missing values with mean/mode/median
Step10: age - filling missing values with median
Step11: gender - filling missing values with mode
Remember we had a small problem with the data of this feature (the MALE word instead of male)? Typing problems are very common and they can be hidden problems. That's why it is so important to take a look at the data.
Step12: Let's replace MALE by male to harmonize our feature.
Step13: Now we don't have the MALE entry anymore. Let us fill the missing values with the mode
Step14: Final check
Always a good idea... | Python Code:
import pandas as pd
import numpy as np
% matplotlib inline
from matplotlib import pyplot as plt
Explanation: 02 - Example - Handling Duplicates and Missing Values
This notebook presents how to eliminate duplicates and handle missing values.
By: Hugo Lopes
Learning Unit 08
Some initial imports:
End of explanation
data = pd.read_csv('../data/data_with_problems.csv', index_col=0)
print('Our dataset has %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))
data.head(15)
Explanation: Load the dataset that will be used
Using the data from the previous unit (07-data-diagnostics)
End of explanation
mask_duplicated = data.duplicated(keep='first')
mask_duplicated.head(10)
data = data.drop_duplicates(keep='first')
print('Our dataset has now %d columns (features) and %d rows (people).' % (data.shape[1], data.shape[0]))
data.head(10)
Explanation: Dealing with found issues
Time to deal with the issues previously found.
1) Duplicated data
Drop the duplicated rows (which have all column values the same), check the YPUQAPSOYJ row above. Let us use the drop_duplicates to help us with that by keeping only the first of the duplicated rows.
End of explanation
missing_data = data.isnull()
print('Number of missing values (NaN) per column/feature:')
print(missing_data.sum())
print('And we currently have %d rows.' % data.shape[0])
Explanation: You could also consider as duplicates rows with the same index and the same age only, by setting data.drop_duplicates(subset=['age'], keep='first'), but in our case it would lead to the same result. Note that in general it is not a recommended programming practice to use the argument 'inplace=True' (e.g., data.drop_duplicates(subset=['age'], keep='first', inplace=True)), as it may lead to unexpected results.
2) Missing Values
This is one of the major, if not the biggest, data problems that we are faced with. There are several ways to deal with them, e.g.:
- drop rows which contain missing values.
- fill missing values with zero.
- fill missing values with the mean of the column in which the missing value is located.
- (more advanced) use decision trees to predict the missing values.
End of explanation
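Just to sketch what the last, more advanced option could look like (assuming scikit-learn is available; the column choice is only illustrative and we do not actually apply this below):

```python
from sklearn.tree import DecisionTreeRegressor

# predict missing 'age' values from 'height' with a small decision tree
known = data[data['age'].notnull() & data['height'].notnull()]
unknown = data[data['age'].isnull() & data['height'].notnull()]
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(known[['height']], known['age'])
if len(unknown) > 0:
    predicted_ages = tree.predict(unknown[['height']])
```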
missing_data.head(8)
Explanation: The amount of missing values here is not so severe as to justify fully dropping a column/feature. Nevertheless, the action to do that would be data.drop('age', axis=1). The missing_data variable is our mask for the missing values:
End of explanation
data_aux = data.dropna(how='any')
print('Dataset now with %d columns (features) and %d rows (people).' % (data_aux.shape[1], data_aux.shape[0]))
Explanation: Drop rows with missing values
This can be done with dropna(), for instance:
End of explanation
data_aux = data.fillna(value=0)
print('Dataset has %d columns (features) and %d rows (people).' % (data_aux.shape[1], data_aux.shape[0]))
Explanation: Fill missing values with a specific value (e.g., 0)
This can be done with fillna(), for instance:
End of explanation
data_aux[missing_data['age']]
data_aux[missing_data['height']]
data_aux[missing_data['gender']]
Explanation: So, what happened with our dataset? Let's take a look at where we had missing values before:
End of explanation
data_aux['gender'].value_counts()
Explanation: Looks like what we did was not the most appropriate. For instance, we created a new category in the gender column:
End of explanation
data['height'] = data['height'].replace(np.nan, data['height'].mean())
data[missing_data['height']]
Explanation: Fill missing values with mean/mode/median:
This is one of the most common approaches.
height - filling missing values with the mean
End of explanation
data.loc[missing_data['age'], 'age'] = data['age'].median()
data[missing_data['age']]
Explanation: age - filling missing values with median
End of explanation
data['gender'].value_counts(dropna=False)
Explanation: gender - filling missing values with mode
Remember we had a small problem with the data of this feature (the MALE word instead of male)? Typing problems are very common and they can be hidden problems. That's why it is so important to take a look at the data.
End of explanation
mask = data['gender'] == 'MALE'
data.loc[mask, 'gender'] = 'male'
# validate we don't have MALE:
data['gender'].value_counts(dropna=False)
Explanation: Let's replace MALE by male to harmonize our feature.
End of explanation
the_mode = data['gender'].mode()
# note that mode() returns a Series, so we take its first element below
the_mode
data['gender'] = data['gender'].replace(np.nan, data['gender'].mode()[0])
data[missing_data['gender']]
Explanation: Now we don't have the MALE entry anymore. Let us fill the missing values with the mode:
End of explanation
data.isnull().sum()
Explanation: Final check
Always a good idea...
End of explanation |
5,639 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Callbacks
author
Step5: Using a callback
Let's first take a look at how to use the built-in callbacks. We'll start off with the History callback, which is already automatically created and updated during training. You can return the history object with the data stored in it using the return_history parameter during training.
Step6: After training we can use the history object to make several useful plots. An intuitive plot is the log probability of the data set given the model $P(D|M)$ over the number of epochs of training.
Step7: As we expected, the log probability of the data set goes up during training, because the model is being explicitly fit to the data set.
Now let's look at how to pass in additional callbacks. Let's take a look at the CSV logger. All we have to do is create the CSVLogger object by passing in the name of the file to save to and then pass that object in to the fit function.
Step10: The CSV will now contain the information that the History object stores, but in a convenient written format. Note that some of the columns will correspond to information that isn't particularly useful for normal training, such as "learning rate." While conceptually similar to the learning rate used in training neural networks, EM does not necessarily benefit in the same way that gradient descent does from tuning it.
Implementing a custom callback
Now let's look at an example of creating a custom callback. This callback will take in a training and a validation set and output both the training and validation set log probabilities. Currently, pomegranate does not allow for a user to pass a validation set in to the fit function and monitor performance that way, so this custom callback is an easy way around that limitation.
Step11: The above code seems fairly simple. All we do is store the data sets that are passed in and then calculate their respective log probabilities at the end of each epoch and print that out to the screen. Let's see how it works on a data set. | Python Code:
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
from pomegranate import *
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
Explanation: Callbacks
author: Jacob Schreiber <br>
contact: [email protected]
It is sometimes convenient to be able to implement custom code at certain points in the training process. A "callback" is one way of doing this. Essentially, a callback is an object that has certain methods implemented. When the object is passed in to the fit method of a pomegranate object, these methods will be automatically called at predetermined points during training. These methods can implement a wide variety of functionality, but a few common callbacks include model checkpointing, where the model is written out to disk after each epoch, early stopping, where the training of a model stops early based on performance on a validation set, and even TensorBoard, which displays the results of training of multiple models.
Callbacks are implemented in pomegranate using a similar approach to that of keras. The base callback looks like the following:
```python
class Callback(object):
    """An object that adds functionality during training.

    A callback is a function or group of functions that can be executed during
    the training process for any of pomegranate's models that have iterative
    training procedures. A callback can be called at three stages-- the
    beginning of training, at the end of each epoch (or iteration), and at
    the end of training. Users can define any functions that they wish in
    the corresponding functions.
    """

    def __init__(self):
        self.model = None
        self.params = None

    def on_training_begin(self):
        """Functionality to add to the beginning of training.

        This method will be called at the beginning of each model's training
        procedure.
        """
        pass

    def on_training_end(self, logs):
        """Functionality to add to the end of training.

        This method will be called at the end of each model's training
        procedure.
        """
        pass

    def on_epoch_end(self, logs):
        """Functionality to add to the end of each epoch.

        This method will be called at the end of each epoch during the model's
        iterative training procedure.
        """
        pass
```
During the training process the self.model attribute gets set to the model that is being trained, allowing users to interact with it and use the methods.
A user can define a custom callback simply by inheriting from this object (in pomegranate.callbacks) and implementing the methods that they care about. This doesn't have to be all of the methods.
There are a few callbacks that are built-in to pomegranate:
1. ModelCheckpoint: This callback will save a copy of the model every few epochs in case the user is performing a long-running job.
2. History: This callback will save the logs generated during training and return them in a convenient object.
3. CSVLogger: This callback is similar to History except that it will save the logs to a given CSV file
4. LambdaCallback: This callback allows one to pass in function objects for each of the three methods to execute rather than define ones own custom callback.
End of explanation
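As a sketch of how the LambdaCallback mentioned above might be wired up (the keyword argument name here is an assumption on my part; check pomegranate.callbacks for the exact signature before relying on it):

```python
from pomegranate.callbacks import LambdaCallback

# 'on_epoch_end' keyword name is assumed; the logs dict carries at least the epoch number
epoch_printer = LambdaCallback(on_epoch_end=lambda logs: print("finished epoch", logs['epoch']))
```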
X = numpy.random.randn(10000, 13)
d1 = MultivariateGaussianDistribution(numpy.zeros(13), numpy.eye(13))
d2 = MultivariateGaussianDistribution(numpy.ones(13), numpy.eye(13))
model = GeneralMixtureModel([d1, d2])
_, history = model.fit(X, return_history=True)
Explanation: Using a callback
Let's first take a look at how to use the built-in callbacks. We'll start off with the History callback, which is already automatically created and updated during training. You can return the history object with the data stored in it using the return_history parameter during training.
End of explanation
plt.plot(history.epochs, history.log_probabilities)
plt.xlabel("Epoch", fontsize=12)
plt.ylabel("Log Probability", fontsize=12)
Explanation: After training we can use the history object to make several useful plots. An intuitive plot is the log probability of the data set given the model $P(D|M)$ over the number of epochs of training.
End of explanation
import pandas
from pomegranate.callbacks import CSVLogger
d1 = MultivariateGaussianDistribution(numpy.zeros(13), numpy.eye(13))
d2 = MultivariateGaussianDistribution(numpy.ones(13), numpy.eye(13))
model = GeneralMixtureModel([d1, d2])
model.fit(X, callbacks=[CSVLogger("logs.csv")])
logs = pandas.read_csv("logs.csv")
logs.head()
Explanation: As we expected, the log probability of the data set goes up during training, because the model is being explicitly fit to the data set.
Now let's look at how to pass in additional callbacks. Let's take a look at the CSV logger. All we have to do is create the CSVLogger object by passing in the name of the file to save to and then pass that object in to the fit function.
End of explanation
from pomegranate.callbacks import Callback
class ValidationSetCallback(Callback):
    """This callback evaluates a validation set after each epoch."""

    def __init__(self, X_train, X_valid):
        self.X_train = X_train
        self.X_valid = X_valid
        self.model = None
        self.params = None

    def on_epoch_end(self, logs):
        """Functionality to add to the end of each epoch.

        This method will be called at the end of each epoch during the model's
        iterative training procedure.
        """
        epoch = logs['epoch']
        train_logp = self.model.log_probability(self.X_train).sum()
        valid_logp = self.model.log_probability(self.X_valid).sum()
        print("Epoch {} -- Training LogP: {:4.4} -- Validation LogP: {:4.4}".format(epoch, train_logp, valid_logp))
Explanation: The CSV will now contain the information that the History object stores, but in a convenient written format. Note that some of the columns will correspond to information that isn't particularly useful for normal training, such as "learning rate." While conceptually similar to the learning rate used in training neural networks, EM does not necessarily benefit in the same way that gradient descent does from tuning it.
Implementing a custom callback
Now let's look at an example of creating a custom callback. This callback will take in a training and a validation set and output both the training and validation set log probabilities. Currently, pomegranate does not allow for a user to pass a validation set in to the fit function and monitor performance that way, so this custom callback is an easy way around that limitation.
End of explanation
numpy.random.seed(0)
X_train = numpy.concatenate([
numpy.random.normal(0, 1.0, size=(500, 5)),
numpy.random.normal(0.3, 0.8, size=(500, 5)),
numpy.random.normal(-0.3, 0.4, size=(500, 5))
])
idx = numpy.arange(X_train.shape[0])
numpy.random.shuffle(idx)
X_train = X_train[idx]
X_valid = X_train[:500]
X_train = X_train[500:]
callback = ValidationSetCallback(X_train, X_valid)
d1 = MultivariateGaussianDistribution(numpy.zeros(5), numpy.eye(5))
d2 = MultivariateGaussianDistribution(numpy.ones(5), numpy.eye(5))
d3 = MultivariateGaussianDistribution(-numpy.ones(5), numpy.eye(5))
model = GeneralMixtureModel([d1, d2, d3])
_ = model.fit(X_train, callbacks=[callback])
Explanation: The above code seems fairly simple. All we do is store the data sets that are passed in and then calculate their respective log probabilities at the end of each epoch and print that out to the screen. Let's see how it works on a data set.
End of explanation |
5,640 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data discovery using FITS (FIeld Time Series) database - for sites
In this notebook we will look at discovering what data exists for a site in the FITS (FIeld Time Series) database. Again some functionality from previous notebooks in this series will be imported as functions.
To begin we will create a list of all the different data types available in the FITS database
Step1: Now we will specify the site(s) we want to query for available data types.
Step2: The next code segment will query the FITS database for data of all types at the given site(s). The output will be a list of data types available at the site(s) specified.
Step3: While a list of available data types is useful, we need a fast way to view data of a given type and decide if it's what we want. The next code segment deals with plotting some data for each site. As there is a limit to how much data can be displayed on a plot, we will specify which data types we want to display for the site(s). Up to 9 data types can be specified. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
import datetime
import numpy as np
# Create list of all typeIDs available in the FITS database
all_type_URL = 'https://fits.geonet.org.nz/type'
all_types = pd.read_json(all_type_URL).iloc[:,0]
all_typeIDs= []
for row in all_types:
all_typeIDs.append(row['typeID'])
Explanation: Data discovery using FITS (FIeld Time Series) database - for sites
In this notebook we will look at discovering what data exists for a site in the FITS (FIeld Time Series) database. Again some functionality from previous notebooks in this series will be imported as functions.
To begin we will create a list of all the different data types available in the FITS database:
End of explanation
# Specify site(s) to get data for
sites = ['RU001', 'WI222']
Explanation: Now we will specify the site(s) we want to query for available data types.
End of explanation
# Ensure list format to sites
if type(sites) != list:
site = sites
sites = []
sites.append(site)
# Prepare data lists
site_data = [[] for j in range(len(sites))]
site_data_types = [[] for j in range(len(sites))]
# Load data from FITS database and parse into data lists
for j in range(len(sites)):
for i in range(len(all_typeIDs)):
# Build query for site, typeID combination
query_suffix = 'siteID=%s&typeID=%s' % (sites[j], all_typeIDs[i])
URL = 'https://fits.geonet.org.nz/observation?' + query_suffix
# Try to load data of the given typeID for the site, if it fails then the data doesn't exist
try:
data = pd.read_csv(URL, names=['date-time', all_typeIDs[i], 'error'], header=0, parse_dates=[0], index_col=0)
if len(data.values) > 1:
site_data[j].append(data)
site_data_types[j].append(all_typeIDs[i])
except:
pass
# Return information to the operator
for i in range(len(site_data_types)):
print('Data types available for ' + sites[i] + ':\n')
for j in range(len(site_data_types[i])):
print(site_data_types[i][j])
print('\n')
Explanation: The next code segment will query the FITS database for data of all types at the given site(s). The output will be a list of data types available at the site(s) specified.
End of explanation
plot_data_types = ['t', 'ph', 'NH3-w']
# Determine number and arrangement of subplots (max 9, less for greater legibility)
subplot_number = len(plot_data_types)
if subplot_number / 3 > 1: # if there are more than 3 subplots
rows = '3'
if subplot_number / 6 > 1: # if there are more than 6 subplots
cols = '3'
else:
cols = '2'
else:
rows = str(subplot_number)
cols = '1'
ax = [[] for i in range(len(plot_data_types))]
# Plot data
plt.figure(figsize = (10,8))
for i in range(len(site_data)): # i is site index
for j in range(len(plot_data_types)): # j is data type index
k = site_data_types[i].index(plot_data_types[j]) # match data type chosen to position in data list
if i == 0:
ax[j] = plt.subplot(int(rows + cols + str(j + 1)))
if ((i == 0) and (j == 0)):
# Set initial min/max times
minmintime = min(site_data[i][k].index.values)
maxmaxtime = max(site_data[i][k].index.values)
# Do not plot empty DataFrames (and avoid cluttering the figure legend)
if len(site_data[i][k].values) < 1:
continue
try:
ax[j].plot(site_data[i][k].loc[:, plot_data_types[j]], label = sites[i],
marker='o', linestyle=':', markersize = 1)
except:
continue
# Get min, max times of dataset
mintime = min(site_data[i][k].index.values)
maxtime = max(site_data[i][k].index.values)
# Set y label
ax[j].set_ylabel(plot_data_types[j], rotation = 90, labelpad = 5, fontsize = 12)
if ((i == 1) and (j == 0)):
# Set legend
plot, labels = ax[j].get_legend_handles_labels()
# ^ due to repetitive nature of plotting, only need to do this once
ax[j].legend(plot, labels, fontsize = 12, bbox_to_anchor=(-0.2, 1.3))
# Note: the legend may extend off the figure if there are many sites
# Set title
plot_data_typesstr = ''
for k in range(len(plot_data_types)): plot_data_typesstr += plot_data_types[k] + ', '
plot_data_typesstr = plot_data_typesstr[:-2]
ax[j].set_title('All site data for plot data types : ' + plot_data_typesstr, loc = 'left', y = 1.03,
fontsize = 16)
# Get min, max times of all data
minmintime = min(mintime, minmintime)
maxmaxtime = max(maxtime, maxmaxtime)
# Add x label
plt.xlabel('Time', rotation = 0, labelpad = 5, fontsize = 12)
# Tidy up plot extent and x-axis
for j in range(len(plot_data_types)):
ax[j].set_xlim([minmintime, maxmaxtime])
ax[j].set_xticks(np.arange(minmintime, maxmaxtime + 1000, (maxmaxtime - minmintime) / 3))
plt.show()
# Optionally, save the figure to the current working directory
import os
plt.savefig(os.getcwd() + '/test.png', format = 'png')
Explanation: While a list of available data types is useful, we need a fast way to view data of a given type and decide if it's what we want. The next code segment deals with plotting some data for each site. As there is a limit to how much data can be displayed on a plot, we will specify which data types we want to display for the site(s). Up to 9 data types can be specified.
End of explanation |
5,641 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution
Step1: Load the data
We use here EEG data from the BCI dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
Step2: Setup source space and compute forward
Step3: From here on, standard inverse imaging methods can be used!
Infant MRI surrogates
We don't have a sample infant dataset for MNE, so let's fake a 10-20 one
Step4: Get an infant MRI template
To use an infant head model for M/EEG data, you can use
Step5: It comes with several helpful built-in files, including a 10-20 montage
in the MRI coordinate frame, which can be used to compute the
MRI<->head transform trans
Step6: There are also BEM and source spaces
Step7: You can ensure everything is as expected by plotting the result | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Joan Massich <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
import mne
from mne.datasets import eegbci
from mne.datasets import fetch_fsaverage
# Download fsaverage files
fs_dir = fetch_fsaverage(verbose=True)
subjects_dir = op.dirname(fs_dir)
# The files live in:
subject = 'fsaverage'
trans = 'fsaverage' # MNE has a built-in fsaverage transformation
src = op.join(fs_dir, 'bem', 'fsaverage-ico-5-src.fif')
bem = op.join(fs_dir, 'bem', 'fsaverage-5120-5120-5120-bem-sol.fif')
Explanation: EEG forward operator with a template MRI
This tutorial explains how to compute the forward operator from EEG data
using the standard template MRI subject fsaverage.
.. caution:: Source reconstruction without an individual T1 MRI from the
subject will be less accurate. Do not over interpret
activity locations which can be off by multiple centimeters.
Adult template MRI (fsaverage)
First we show how fsaverage can be used as a surrogate subject.
End of explanation
raw_fname, = eegbci.load_data(subject=1, runs=[6])
raw = mne.io.read_raw_edf(raw_fname, preload=True)
# Clean channel names to be able to use a standard 1005 montage
new_names = dict(
(ch_name,
ch_name.rstrip('.').upper().replace('Z', 'z').replace('FP', 'Fp'))
for ch_name in raw.ch_names)
raw.rename_channels(new_names)
# Read and set the EEG electrode locations, which are already in fsaverage's
# space (MNI space) for standard_1020:
montage = mne.channels.make_standard_montage('standard_1005')
raw.set_montage(montage)
raw.set_eeg_reference(projection=True) # needed for inverse modeling
# Check that the locations of EEG electrodes are correct with respect to MRI
mne.viz.plot_alignment(
raw.info, src=src, eeg=['original', 'projected'], trans=trans,
show_axes=True, mri_fiducials=True, dig='fiducials')
Explanation: Load the data
We use here EEG data from the BCI dataset.
<div class="alert alert-info"><h4>Note</h4><p>See `plot_montage` to view all the standard EEG montages
available in MNE-Python.</p></div>
End of explanation
fwd = mne.make_forward_solution(raw.info, trans=trans, src=src,
bem=bem, eeg=True, mindist=5.0, n_jobs=1)
print(fwd)
Explanation: Setup source space and compute forward
End of explanation
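# A minimal sketch of the "standard inverse imaging methods" mentioned below,
# assuming an ad-hoc noise covariance purely to keep the snippet self-contained
# (a covariance estimated from real data would normally be preferred).
from mne.minimum_norm import make_inverse_operator, apply_inverse_raw
noise_cov = mne.make_ad_hoc_cov(raw.info)
inv = make_inverse_operator(raw.info, fwd, noise_cov, verbose=False)
stc = apply_inverse_raw(raw.copy().crop(0, 10), inv, lambda2=1. / 9., method='MNE')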
ch_names = \
'Fz Cz Pz Oz Fp1 Fp2 F3 F4 F7 F8 C3 C4 T7 T8 P3 P4 P7 P8 O1 O2'.split()
data = np.random.RandomState(0).randn(len(ch_names), 1000)
info = mne.create_info(ch_names, 1000., 'eeg')
raw = mne.io.RawArray(data, info)
Explanation: From here on, standard inverse imaging methods can be used!
Infant MRI surrogates
We don't have a sample infant dataset for MNE, so let's fake a 10-20 one:
End of explanation
subject = mne.datasets.fetch_infant_template('6mo', subjects_dir, verbose=True)
Explanation: Get an infant MRI template
To use an infant head model for M/EEG data, you can use
:func:mne.datasets.fetch_infant_template to download an infant template:
End of explanation
fname_1020 = op.join(subjects_dir, subject, 'montages', '10-20-montage.fif')
mon = mne.channels.read_dig_fif(fname_1020)
mon.rename_channels(
{f'EEG{ii:03d}': ch_name for ii, ch_name in enumerate(ch_names, 1)})
trans = mne.channels.compute_native_head_t(mon)
raw.set_montage(mon)
print(trans)
Explanation: It comes with several helpful built-in files, including a 10-20 montage
in the MRI coordinate frame, which can be used to compute the
MRI<->head transform trans:
End of explanation
bem_dir = op.join(subjects_dir, subject, 'bem')
fname_src = op.join(bem_dir, f'{subject}-oct-6-src.fif')
src = mne.read_source_spaces(fname_src)
print(src)
fname_bem = op.join(bem_dir, f'{subject}-5120-5120-5120-bem-sol.fif')
bem = mne.read_bem_solution(fname_bem)
Explanation: There are also BEM and source spaces:
End of explanation
fig = mne.viz.plot_alignment(
raw.info, subject=subject, subjects_dir=subjects_dir, trans=trans,
src=src, bem=bem, coord_frame='mri', mri_fiducials=True, show_axes=True,
surfaces=('white', 'outer_skin', 'inner_skull', 'outer_skull'))
mne.viz.set_3d_view(fig, 25, 70, focalpoint=[0, -0.005, 0.01])
Explanation: You can ensure everything is as expected by plotting the result:
End of explanation |
5,642 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
vocab_to_int = {w:i for i, w in enumerate(set(text))}
int_to_vocab = {i:w for i, w in enumerate(set(text))}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_mark||',
'?': '||Question_mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
"\n": '||Return||'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
learning_rate = tf.placeholder(dtype=tf.float32, name='learning_rate')
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size, keep_prob=0.8, layers=3):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
multi = tf.contrib.rnn.MultiRNNCell([cell] * layers)
init_state = multi.zero_state(batch_size, tf.float32)
init_state = tf.identity(init_state, 'initial_state')
return multi, init_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
embeddings = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embeddings, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, 'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
embed = get_embed(input_data, vocab_size, rnn_size)
outputs, final_state = build_rnn(cell, embed)
logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = len(int_text) // (batch_size * seq_length)
result = []
for i in range(n_batches):
inputs = []
targets = []
for j in range(batch_size):
idx = i * seq_length + j * seq_length
inputs.append(int_text[idx:idx + seq_length])
targets.append(int_text[idx + 1:idx + seq_length + 1])
result.append([inputs, targets])
result=np.array(result)
print(result.shape)
print(result[1])
print(n_batches)
print(batch_size)
print(seq_length)
# (number of batches, 2, batch size, sequence length).
return np.array(result)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
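# Quick sanity check: reproducing the worked example above with a small list
# (the variable name example_batches is arbitrary).
example_batches = get_batches(list(range(1, 16)), 2, 3)
print(example_batches)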
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 50
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches after which the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
inputs = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return inputs, init_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
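# A hedged alternative sketch: pick_word above always takes the argmax, which
# makes generation deterministic and repetitive. Sampling from the distribution
# is a common variation (the function name is illustrative and unused below).
def pick_word_sampled(probabilities, int_to_vocab):
    # draw a word id with probability given by the network's softmax output
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]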
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
5,643 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3
Step1: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories
Step2: Implementation
Step3: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint
Step4: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint
Step5: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint
Step6: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
Step7: Implementation
Step8: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer
Step9: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint
Step10: Implementation
Step11: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
Step12: Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer
Step13: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer
Step14: Implementation
Step15: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint
Step16: Answer | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
import renders as rs
from IPython.display import display # Allows the use of display() for DataFrames
# Show matplotlib plots inline (nicely formatted in the notebook)
%matplotlib inline
# Load the wholesale customers dataset
try:
data = pd.read_csv("customers.csv")
data.drop(['Region', 'Channel'], axis = 1, inplace = True)
print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape)
except:
print "Dataset could not be loaded. Is the dataset missing?"
Explanation: Machine Learning Engineer Nanodegree
Unsupervised Learning
Project 3: Creating Customer Segments
Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!
In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.
Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.
Getting Started
In this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.
The dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.
Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
End of explanation
# Display a description of the dataset
display(data.describe())
Explanation: Data Exploration
In this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.
Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.
End of explanation
# TODO: Select three indices of your choice you wish to sample from the dataset
indices = [10, 47, 53]
# Create a DataFrame of the chosen samples
samples = pd.DataFrame(data.loc[indices], columns = data.keys())#.reset_index(drop = True)
print "Chosen samples of wholesale customers dataset:"
display(samples)
print "Deviation from the mean:"
display(samples - data.mean().round())
print "Deviation from the median:"
display(samples - data.median().round())
Explanation: Implementation: Selecting Samples
To get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
End of explanation
# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature
feature = 'Grocery'
new_data = data.drop([feature], axis=1)
# TODO: Split the data into training and testing sets using the given feature as the target
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(new_data, data[feature], test_size = 0.2, random_state=0)
# TODO: Create a decision tree regressor and fit it to the training set
from sklearn.tree import DecisionTreeRegressor
regressor = DecisionTreeRegressor()
regressor.fit(X_train, y_train)
# TODO: Report the score of the prediction using the testing set
score = regressor.score(X_test, y_test)
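# Report the score explicitly, as the instructions ask (the message wording is illustrative)
print "R^2 score for predicting '{}': {:.3f}".format(feature, score)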
Explanation: Question 1
Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.
What kind of establishment (customer) could each of the three samples you've chosen represent?
Hint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying "McDonalds" when describing a sample customer as a restaurant.
Answer:
As suggested in a review, I make use of the deviations of these points with respect to the mean and median values explicitly.
Customer 10: He is balanced in most categories but Fresh, where he is well below both the mean and the median. He stands out especially in Grocery, Detergents_Paper and Frozen. I think he may be a medium-sized retailer.
Customer 47: He is well above the mean and median in all features. In my opinion this may be an example of a large retailer.
Customer 53: He purchases amounts of Fresh, Frozen and Delicatessen products well below the mean and also the median. On the other hand he stands out in Milk, Grocery and Detergents_Paper. It may be a bar or a retailer specialized in these products.
Implementation: Feature Relevance
One interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.
In the code block below, you will need to implement the following:
- Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.
- Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.
- Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.
- Import a decision tree regressor, set a random_state, and fit the learner to the training data.
- Report the prediction score of the testing set using the regressor's score function.
End of explanation
# Produce a scatter matrix for each pair of features in the data
pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 2
Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?
Hint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.
Answer:
I chose Grocery and, without tuning the model parameters, I get R^2 = 0.623, which shows that a certain degree of correlation with the other variables is present. This feature may therefore not be necessary for identifying customer spending habits.
Visualize Feature Distributions
To get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.
End of explanation
# TODO: Scale the data using the natural logarithm
log_data = data.apply(np.log)
# TODO: Scale the sample data using the natural logarithm
log_samples = samples.apply(np.log)
# Produce a scatter matrix for each pair of newly-transformed features
pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');
Explanation: Question 3
Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?
Hint: Is the data normally distributed? Where do most of the data points lie?
Answer:
The Grocery and Detergents_Paper features are clearly linearly correlated. This explains the R^2 value found above and confirms the suspicion that the Grocery feature may not be strictly necessary. One can also see a certain degree of correlation between Grocery and Milk, and between Milk and Detergents_Paper; these three variables seem to be connected (in fact, see the first PCA component below).
The distributions are long-tailed and not Gaussian, since they are highly positively skewed. Most data points lie close to the origin, but the maximum of the density appears at a finite non-zero value.
Data Preprocessing
In this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.
Implementation: Feature Scaling
If data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.
In the code block below, you will need to implement the following:
- Assign a copy of the data to log_data after applying a logarithm scaling. Use the np.log function for this.
- Assign a copy of the sample data to log_samples after applying a logarithm scaling. Again, use np.log.
End of explanation
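# A quick check of the correlation claims above: pairwise Pearson correlations
# of the raw features (Grocery vs Detergents_Paper should stand out).
display(data.corr())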
# Display the log-transformed sample data
display(log_samples)
Explanation: Observation
After applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).
Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
End of explanation
opf = {} #outliers per feature
# For each feature find the data points with extreme high or low values
for feature in log_data.keys():
# TODO: Calculate Q1 (25th percentile of the data) for the given feature
Q1 = np.percentile(log_data[feature], 25)
# TODO: Calculate Q3 (75th percentile of the data) for the given feature
Q3 = np.percentile(log_data[feature], 75)
# TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)
step = (Q3-Q1)*1.5
# Display the outliers
#print "Data points considered outliers for the feature '{}':".format(feature)
#display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])
opf[feature] = list(log_data[~((log_data[feature] >= Q1 - step)
& (log_data[feature] <= Q3 + step))].index)
# OPTIONAL: Select the indices for data points you wish to remove
method = 'pair'
def outliers_remover(out_dic, method='no'):
outliers = []
if method == 'no':
print 'No outlier removed'
elif method == 'all':
for f in out_dic.keys():
outliers += opf[f]
print 'Removed all outliers according to the interquartile method range'
elif method == 'pair':
for i in range(len(out_dic.keys())):
for j in range(i+1, len(out_dic.keys())):
#print data.keys()[i] , data.keys()[j], np.intersect1d(opf[data.keys()[i]], opf[data.keys()[j]])
outliers += list(np.intersect1d(out_dic[out_dic.keys()[i]], out_dic[out_dic.keys()[j]]))
print 'Removed all outliers common to at least two features'
return outliers
outliers = sorted(list(set(outliers_remover(opf, method))))
print len(outliers)
# Remove the outliers, if any were specified
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Explanation: Implementation: Outlier Detection
Detecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.
In the code block below, you will need to implement the following:
- Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.
- Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.
- Assign the calculation of an outlier step for the given feature to step.
- Optionally remove data points from the dataset by adding indices to the outliers list.
NOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!
Once you have performed this implementation, the dataset will be stored in the variable good_data.
End of explanation
# TODO: Apply PCA by fitting the good data with the same number of dimensions as features
from sklearn.decomposition import PCA
pca = PCA(n_components=6)
pca.fit(good_data)
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Generate PCA results plot
pca_results = rs.pca_results(good_data, pca)
Explanation: Question 4
Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why.
Answer:
Yes, there are 5 points ([65, 66, 75, 128, 154]) considered outliers for more than one feature. The set of points considered outliers for at least one feature is much larger (42 points). Whether or not these points are removed may affect the clustering methods below, especially the Gaussian Mixture Model. I performed some tests and saw that the results obtained by removing only the 5 points or the whole 42 points are very similar, meaning that the important outliers are those flagged by more than one feature. If these are not removed, the GMM clustering scores much worse. For these reasons I decided to limit the removal to the points [65, 66, 75, 128, 154].
Feature Transformation
In this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.
Implementation: PCA
Now that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.
In the code block below, you will need to implement the following:
- Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.
- Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after having a PCA transformation applied
display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
Explanation: Question 5
How much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.
Hint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.
Answer:
- The first two components explain around 71% (0.7068) of the variance
- The first four components explain around 93% (0.9311) of the variance
The first component represents a combination of Detergents_Paper, Grocery and Milk in terms of spending.
The second component represents a combination of Fresh, Frozen and Delicatessen in terms of spending.
Note that these two components split the six products into two disjoint groups, hinting that two composite variables built from those groups dominate the spending patterns.
The third component represents a combination of Fresh, Delicatessen (with negative weight) and Frozen in terms of spending.
The fourth component represents a combination of Frozen and Delicatessen (with negative weight) in terms of spending.
Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
End of explanation
# TODO: Apply PCA by fitting the good data with only two dimensions
pca = PCA(n_components=2)
pca.fit(good_data)
# TODO: Transform the good data using the PCA fit above
reduced_data = pca.transform(good_data)
# TODO: Transform the sample log-data using the PCA fit above
pca_samples = pca.transform(log_samples)
# Create a DataFrame for the reduced data
reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])
Explanation: Implementation: Dimensionality Reduction
When using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.
In the code block below, you will need to implement the following:
- Assign the results of fitting PCA in two dimensions with good_data to pca.
- Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.
- Apply a PCA transformation of the sample log-data log_samples using pca.transform, and assign the results to pca_samples.
End of explanation
# Display sample log-data after applying PCA transformation in two dimensions
display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
Explanation: Observation
Run the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.
End of explanation
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
# TODO: Apply your clustering algorithm of choice to the reduced data
kmeans = KMeans(n_clusters=2, random_state=0)
clusterer = kmeans.fit(reduced_data)
# TODO: Predict the cluster for each data point
preds = kmeans.predict(reduced_data)
# TODO: Find the cluster centers
centers = kmeans.cluster_centers_
# TODO: Predict the cluster for each transformed sample data point
sample_preds = kmeans.predict(pca_samples)
# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen
score = silhouette_score(reduced_data, preds)
Explanation: Clustering
In this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale.
Question 6
What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?
Answer:
- K-Means is rigid, providing a clear separation of cluster domains. It is also more sensitive to outliers than the Gaussian Mixture Model. In the absence of a clear idea of the clusters, however, it is usually used as a first attempt among clustering techniques.
- Gaussian Mixture Model, instead, allows for soft (probabilistic) assignment of points to clusters. It contains K-Means as a non-probabilistic limit case, but it is usually much slower to converge. K-Means may be used beforehand in order to find a suitable initialization of the GMM.
Given that PCA shows that 2 components are dominant and observing the distribution of points in this 2D plane, I see that the data are quite uniformly distributed and no clear structure is present. Moreover, the outliers have been removed. For this reason I chose to perform the analysis with K-Means. A posteriori, I find that the two techniques reach very similar conclusions.
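For reference, a sketch of the a-posteriori GMM comparison mentioned above, using scikit-learn's GaussianMixture (older scikit-learn versions expose a similar GMM class); treat it as illustrative rather than the exact code used:
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score
gmm = GaussianMixture(n_components=2, random_state=0).fit(reduced_data)
gmm_preds = gmm.predict(reduced_data)
print silhouette_score(reduced_data, gmm_preds)  # compare with the K-Means score below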
Implementation: Creating Clusters
Depending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the "goodness" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.
In the code block below, you will need to implement the following:
- Fit a clustering algorithm to the reduced_data and assign it to clusterer.
- Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.
- Find the cluster centers using the algorithm's respective attribute and assign them to centers.
- Predict the cluster for each sample data point in pca_samples and assign them to sample_preds.
- Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.
- Assign the silhouette score to score and print the result.
End of explanation
# Display the results of the clustering from implementation
rs.cluster_results(reduced_data, preds, centers, pca_samples)
Explanation: Question 7
Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?
Answer:
For the K-Means Algorithm
n_clusters = 2, score = 0.426 (BEST SCORE)
n_clusters = 3, score = 0.397
n_clusters = 4, score = 0.331
n_clusters = 5, score = 0.349
n_clusters = 6, score = 0.367
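A sketch of the loop that could produce these numbers, reusing reduced_data from the cell above:
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
for n in range(2, 7):
    labels = KMeans(n_clusters=n, random_state=0).fit_predict(reduced_data)
    print n, round(silhouette_score(reduced_data, labels), 3)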
Cluster Visualization
Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
End of explanation
# TODO: Inverse transform the centers
log_centers = pca.inverse_transform(centers)
# TODO: Exponentiate the centers
true_centers = np.exp(log_centers)
# Display the true centers
segments = ['Segment {}'.format(i) for i in range(0,len(centers))]
true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())
true_centers.index = segments
display(true_centers)
print "Deviation from the mean:"
display(true_centers - data.mean().round())
print "Deviation from the median:"
display(true_centers - data.median().round())
Explanation: Implementation: Data Recovery
Each cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.
In the code block below, you will need to implement the following:
- Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.
- Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.
End of explanation
# Display the predictions
for i, pred in enumerate(sample_preds):
print "Sample point", i, "predicted to be in Cluster", pred
print "\nWe diplay again the centroids coordinates in the original feature space"
display(true_centers)
print "and the sample points in the original feature space"
display(samples)
Explanation: Question 8
Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?
Hint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.
Answer:
As suggested in a review we make use of the deviation of the representative centroids in respect to the data means and medians.
In Segment 0, looking at the centroid values, we find customers who spend a relatively low amount of money in all categories apart from Fresh, and only moderately in the Frozen section. We can see this from the fact that the centroid has values below the data mean in all categories, and below the median in all but the Fresh and Frozen features. This segment may be called The Rest segment, made of small activities (bars, restaurants, small markets...).
In Segment 1, instead, we have more balance between the products. Still, the centroid identifies examples with below-average Fresh, Frozen and Delicatessen values and below-median Fresh and Frozen values. This segment may represent the Retailer segment.
Question 9
For each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?
Run the code block below to find which cluster each sample point is predicted to be.
End of explanation
# Display the clustering results based on 'Channel' data
rs.channel_results(reduced_data, outliers, pca_samples)
Explanation: Answer:
The following analysis refers to the two tables above:
For Sample '0' (index 10), the values of all features but the Frozen one are very close to the cluster 1 centroid. This point is therefore well represented by the centroid itself and hence by the corresponding cluster.
For Sample '1' (index 47): this is a high-selling customer. It did not appear in any of the outlier categories calculated, and therefore we cannot consider it an outlier. On the other hand, looking at the relative values of all the features with respect to those of the centroids, we can safely state that it belongs to cluster 1, since apart from the Frozen and Fresh features, the centroid of cluster 1 seems to represent the high-selling instances more accurately than the centroid of cluster 0. Sample 1 can therefore be considered an extreme (boundary) point of cluster 1.
For Sample '2' (index 53), the values of Milk, Grocery, Detergents and Frozen are close to the cluster 1 centroid. A relatively small deviation exists for the Delicatessen feature and a strong one for the Fresh feature. Overall this sample is relatively well represented by the cluster 1 centroid and therefore we can safely assign it to cluster 1.
To conclude: the clustering algorithm assigned all the points to Cluster 1, i.e. the Retailer cluster. In two cases my preliminary analysis was correct (Samples 0 and 1 belong to a retailer class). In the third case I opted for a small activity, a bar, but the analysis shows that this example should also belong to the retailer class.
Conclusion
In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.
Question 10
Companies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?
Hint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?
Answer:
Assuming the clustering is correct, the two types of customers may have different needs and therefore react differently to the change. In order to test whether the decision to change the service is correct, the company will have to separate the two classes and run the A/B test in each of them separately. The results in each class may in fact be different, and a combined result therefore misleading. Once the tests are conducted in A/B form, the company may draw conclusions on whether the change is useful for customers belonging to cluster 0 and whether it is useful for those belonging to cluster 1.
Question 11
Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.
How can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?
Hint: A supervised learner could be used to train on the original customers. What would be the target variable?
Answer:
We could run a supervised classification algorithm using the customer segment data as the training set, with the segment label as the target variable. Within the training phase we can make use of cross-validation and the other usual methods to obtain a suitable model. The prediction phase will then take the estimated product spending as input and output the classification, i.e. the segment to which each new customer most likely belongs.
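A minimal sketch of that pipeline (the names preds, good_data and new_customer_spending are assumptions: preds are the segment labels from the clustering step, good_data the log-scaled training features, and new_customer_spending a DataFrame with the ten spending estimates):
from sklearn.ensemble import RandomForestClassifier
# Train on existing customers: spending features -> engineered segment label
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(good_data, preds)
# Apply the same log transform used on the training data, then classify
new_segments = clf.predict(np.log(new_customer_spending))
print new_segments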
Visualizing Underlying Distributions
At the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.
Run the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
End of explanation |
5,644 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Box Plots
The following illustrates some options for the boxplot in statsmodels. These include violin_plot and bean_plot.
Step1: Bean Plots
The following example is taken from the docstring of beanplot.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
Step3: Group age by party ID, and create a violin plot with it
Step4: Advanced Box Plots
Based of example script example_enhanced_boxplots.py (by Ralf Gommers) | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
Explanation: Box Plots
The following illustrates some options for the boxplot in statsmodels. These include violin_plot and bean_plot.
End of explanation
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
Explanation: Bean Plots
The following example is taken from the docstring of beanplot.
We use the American National Election Survey 1996 dataset, which has Party
Identification of respondents as independent variable and (among other
data) age as dependent variable.
End of explanation
plt.rcParams['figure.subplot.bottom'] = 0.23 # keep labels visible
plt.rcParams['figure.figsize'] = (10.0, 8.0) # make plot larger in notebook
age = [data.exog['age'][data.endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
#plt.show()
def beanplot(data, plot_opts={}, jitter=False):
"""Helper function to try out different plot options."""
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts_ = {'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30}
plot_opts_.update(plot_opts)
sm.graphics.beanplot(data, ax=ax, labels=labels,
jitter=jitter, plot_opts=plot_opts_)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
fig = beanplot(age, jitter=True)
fig = beanplot(age, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, plot_opts={'bean_size': 0.2, 'violin_width': 0.75, 'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_fc':'#66c2a5'})
fig = beanplot(age, jitter=True, plot_opts={'violin_width': 0.5, 'violin_fc':'#66c2a5'})
Explanation: Group age by party ID, and create a violin plot with it:
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
# Necessary to make horizontal axis labels fit
plt.rcParams['figure.subplot.bottom'] = 0.23
data = sm.datasets.anes96.load_pandas()
party_ID = np.arange(7)
labels = ["Strong Democrat", "Weak Democrat", "Independent-Democrat",
"Independent-Independent", "Independent-Republican",
"Weak Republican", "Strong Republican"]
# Group age by party ID.
age = [data.exog['age'][data.endog == id] for id in party_ID]
# Create a violin plot.
fig = plt.figure()
ax = fig.add_subplot(111)
sm.graphics.violinplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a bean plot.
fig2 = plt.figure()
ax = fig2.add_subplot(111)
sm.graphics.beanplot(age, ax=ax, labels=labels,
plot_opts={'cutoff_val':5, 'cutoff_type':'abs',
'label_fontsize':'small',
'label_rotation':30})
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create a jitter plot.
fig3 = plt.figure()
ax = fig3.add_subplot(111)
plot_opts={'cutoff_val':5, 'cutoff_type':'abs', 'label_fontsize':'small',
'label_rotation':30, 'violin_fc':(0.8, 0.8, 0.8),
'jitter_marker':'.', 'jitter_marker_size':3, 'bean_color':'#FF6F00',
'bean_mean_color':'#009D91'}
sm.graphics.beanplot(age, ax=ax, labels=labels, jitter=True,
plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Create an asymmetrical jitter plot.
ix = data.exog['income'] < 16 # incomes < $30k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_lower_income = [age[endog == id] for id in party_ID]
ix = data.exog['income'] >= 20 # incomes > $50k
age = data.exog['age'][ix]
endog = data.endog[ix]
age_higher_income = [age[endog == id] for id in party_ID]
fig = plt.figure()
ax = fig.add_subplot(111)
plot_opts['violin_fc'] = (0.5, 0.5, 0.5)
plot_opts['bean_show_mean'] = False
plot_opts['bean_show_median'] = False
plot_opts['bean_legend_text'] = 'Income < \$30k'
plot_opts['cutoff_val'] = 10
sm.graphics.beanplot(age_lower_income, ax=ax, labels=labels, side='left',
jitter=True, plot_opts=plot_opts)
plot_opts['violin_fc'] = (0.7, 0.7, 0.7)
plot_opts['bean_color'] = '#009D91'
plot_opts['bean_legend_text'] = 'Income > \$50k'
sm.graphics.beanplot(age_higher_income, ax=ax, labels=labels, side='right',
jitter=True, plot_opts=plot_opts)
ax.set_xlabel("Party identification of respondent.")
ax.set_ylabel("Age")
ax.set_title("US national election '96 - Age & Party Identification")
# Show all plots.
#plt.show()
Explanation: Advanced Box Plots
Explanation: Advanced Box Plots
Based on the example script example_enhanced_boxplots.py (by Ralf Gommers)
End of explanation |
5,645 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
State space modeling
Step2: To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from dismalpy.ssm.MLEModel. There are a number of things that must be specified
Step3: Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4., modeling motor vehicle fatalities in Finland.
Step4: Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
from scipy.stats import norm
import dismalpy as dp
import matplotlib.pyplot as plt
Explanation: State space modeling: Local Linear Trends
This notebook describes how to extend the state space classes to create and estimate a custom model. Here we develop a local linear trend model.
The Local Linear Trend model has the form (see Durbin and Koopman 2012, Chapter 3.2 for all notation and details):
$$
\begin{align}
y_t & = \mu_t + \varepsilon_t \qquad & \varepsilon_t \sim
N(0, \sigma_\varepsilon^2) \
\mu_{t+1} & = \mu_t + \nu_t + \xi_t & \xi_t \sim N(0, \sigma_\xi^2) \
\nu_{t+1} & = \nu_t + \zeta_t & \zeta_t \sim N(0, \sigma_\zeta^2)
\end{align}
$$
It is easy to see that this can be cast into state space form as:
$$
\begin{align}
y_t & = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} \mu_t \ \nu_t \end{pmatrix} + \varepsilon_t \
\begin{pmatrix} \mu_{t+1} \ \nu_{t+1} \end{pmatrix} & = \begin{bmatrix} 1 & 1 \ 0 & 1 \end{bmatrix} \begin{pmatrix} \mu_t \ \nu_t \end{pmatrix} + \begin{pmatrix} \xi_t \ \zeta_t \end{pmatrix}
\end{align}
$$
Notice that much of the state space representation is composed of known values; in fact the only parts in which parameters to be estimated appear are in the variance / covariance matrices:
$$
\begin{align}
H_t & = \begin{bmatrix} \sigma_\varepsilon^2 \end{bmatrix} \
Q_t & = \begin{bmatrix} \sigma_\xi^2 & 0 \ 0 & \sigma_\zeta^2 \end{bmatrix}
\end{align}
$$
End of explanation
# Univariate Local Linear Trend Model
class LocalLinearTrend(dp.ssm.MLEModel):
def __init__(self, endog):
# Model order
k_states = k_posdef = 2
# Initialize the statespace
super(LocalLinearTrend, self).__init__(
endog, k_states=k_states, k_posdef=k_posdef,
initialization='approximate_diffuse',
loglikelihood_burn=k_states
)
# Initialize the matrices
self['design'] = np.array([1, 0])
self['transition'] = np.array([[1, 1],
[0, 1]])
self['selection'] = np.eye(k_states)
# Cache some indices
self._state_cov_idx = ('state_cov',) + np.diag_indices(k_posdef)
@property
def param_names(self):
return ['sigma2.measurement', 'sigma2.level', 'sigma2.trend']
@property
def start_params(self):
return [np.std(self.endog)]*3
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, constrained):
return constrained**0.5
def update(self, params, transformed=True):
params = super(LocalLinearTrend, self).update(params, transformed)
# Observation covariance
self['obs_cov',0,0] = params[0]
# State covariance
self[self._state_cov_idx] = params[1:]
Explanation: To take advantage of the existing infrastructure, including Kalman filtering and maximum likelihood estimation, we create a new class which extends from dismalpy.ssm.MLEModel. There are a number of things that must be specified:
k_states, k_posdef: These two parameters must be provided to the base classes in initialization. They inform the statespace model about the size of, respectively, the state vector, above $\begin{pmatrix} \mu_t & \nu_t \end{pmatrix}'$, and the state error vector, above $\begin{pmatrix} \xi_t & \zeta_t \end{pmatrix}'$. Note that the dimension of the endogenous vector does not have to be specified, since it can be inferred from the endog array.
update: The method update, with argument params, must be specified (it is used when fit() is called to calculate the MLE). It takes the parameters and fills them into the appropriate state space matrices. For example, below, the params vector contains variance parameters $\begin{pmatrix} \sigma_\varepsilon^2 & \sigma_\xi^2 & \sigma_\zeta^2\end{pmatrix}$, and the update method must place them in the observation and state covariance matrices. More generally, the parameter vector might be mapped into many different places in all of the statespace matrices.
statespace matrices: by default, all state space matrices (obs_intercept, design, obs_cov, state_intercept, transition, selection, state_cov) are set to zeros. Values that are fixed (like the ones in the design and transition matrices here) can be set in initialization, whereas values that vary with the parameters should be set in the update method. Note that it is easy to forget to set the selection matrix, which is often just the identity matrix (as it is here), but not setting it will lead to a very different model (one where there is not a stochastic component to the transition equation).
start params: start parameters must be set, even if it is just a vector of zeros, although often good start parameters can be found from the data. Maximum likelihood estimation by gradient methods (as employed here) can be sensitive to the starting parameters, so it is important to select good ones if possible. Here it does not matter too much (although, as variances, they shouldn't be set to zero).
initialization: in addition to defined state space matrices, all state space models must be initialized with the mean and variance for the initial distribution of the state vector. If the distribution is known, initialize_known(initial_state, initial_state_cov) can be called, or if the model is stationary (e.g. an ARMA model), initialize_stationary can be used. Otherwise, initialize_approximate_diffuse is a reasonable generic initialization (exact diffuse initialization is not yet available). Since the local linear trend model is not stationary (it is composed of random walks) and since the distribution is not generally known, we use initialize_approximate_diffuse below.
The above are the minimum necessary for a successful model. There are also a number of things that do not have to be set, but which may be helpful or important for some applications:
transform / untransform: when fit is called, the optimizer in the background will use gradient methods to select the parameters that maximize the likelihood function. By default it uses unbounded optimization, which means that it may select any parameter value. In many cases, that is not the desired behavior; variances, for example, cannot be negative. To get around this, the transform method takes the unconstrained vector of parameters provided by the optimizer and returns a constrained vector of parameters used in likelihood evaluation. untransform provides the reverse operation.
param_names: this internal method can be used to set names for the estimated parameters so that e.g. the summary provides meaningful names. If not present, parameters are named param0, param1, etc.
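As a quick sanity check of the class above (not part of the original tutorial), one can simulate data from the local linear trend recursions with numpy/scipy, which are already imported, and verify that fit() recovers variances of roughly the right magnitude:
np.random.seed(1234)
nobs = 300
true_var = {'eps': 1.0, 'xi': 0.5, 'zeta': 0.1}
mu, nu = 0.0, 0.0
sim = np.zeros(nobs)
for t in range(nobs):
    # y_t = mu_t + eps_t; then advance the level and slope
    sim[t] = mu + norm.rvs(scale=true_var['eps']**0.5)
    mu = mu + nu + norm.rvs(scale=true_var['xi']**0.5)
    nu = nu + norm.rvs(scale=true_var['zeta']**0.5)
sim_res = LocalLinearTrend(sim).fit()
print(sim_res.summary())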
End of explanation
# Load Dataset
# Note: dataset from http://www.ssfpack.com/CKbook.html
df = pd.read_table('data/NorwayFinland.txt', skiprows=1, header=None)
df.columns = ['date', 'nf', 'ff']
df.index = pd.date_range(start='%d-01-01' % df.date[0], end='%d-01-01' % df.iloc[-1, 0], freq='AS')
# Log transform
df['lff'] = np.log(df['ff'])
# Setup the model
mod = LocalLinearTrend(df['lff'])
# Fit it using MLE (recall that we are fitting the three variance parameters)
res = mod.fit()
print(res.summary())
Explanation: Using this simple model, we can estimate the parameters from a local linear trend model. The following example is from Commandeur and Koopman (2007), section 3.4., modeling motor vehicle fatalities in Finland.
End of explanation
# Perform prediction and forecasting
predict = res.get_prediction()
forecast = res.get_forecast('2014')
fig, ax = plt.subplots(figsize=(10,4))
# Plot the results
df['lff'].plot(ax=ax, style='k.', label='Observations')
predict.predicted_mean.plot(ax=ax, label='One-step-ahead Prediction')
predict_ci = predict.conf_int(alpha=0.05)
ax.fill_between(predict_ci.index[2:], predict_ci.ix[2:, 0], predict_ci.ix[2:, 1], alpha=0.1)
forecast.predicted_mean.plot(ax=ax, style='r', label='Forecast')
forecast_ci = forecast.conf_int()
ax.fill_between(forecast_ci.index, forecast_ci.ix[:, 0], forecast_ci.ix[:, 1], alpha=0.1)
# Cleanup the image
ax.set_ylim((4, 8));
legend = ax.legend(loc='lower left');
Explanation: Finally, we can do post-estimation prediction and forecasting. Notice that the end period can be specified as a date.
End of explanation |
5,646 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LDA Model
Introduces Gensim's LDA model and demonstrates its use on the NIPS corpus.
Step1: The purpose of this tutorial is to demonstrate how to train and tune an LDA model.
In this tutorial we will
Step2: So we have a list of 1740 documents, where each document is a Unicode string.
If you're thinking about using your own corpus, then you need to make sure
that it's in the same format (list of Unicode strings) before proceeding
with the rest of this tutorial.
Step3: Pre-process and vectorize the documents
As part of preprocessing, we will
Step4: We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a
stemmer in this case because it produces more readable words. Output that is
easy to read is very desirable in topic modelling.
Step5: We find bigrams in the documents. Bigrams are sets of two adjacent words.
Using bigrams we can get phrases like "machine_learning" in our output
(spaces are replaced with underscores); without bigrams we would only get
"machine" and "learning".
Note that in the code below, we find bigrams and then add them to the
original data, because we would like to keep the words "machine" and
"learning" as well as the bigram "machine_learning".
.. Important
Step6: We remove rare words and common words based on their document frequency.
Below we remove words that appear in less than 20 documents or in more than
50% of the documents. Consider trying to remove words only based on their
frequency, or maybe combining that with this approach.
Step7: Finally, we transform the documents to a vectorized form. We simply compute
the frequency of each word, including the bigrams.
Step8: Let's see how many tokens and documents we have to train on.
Step9: Training
We are ready to train the LDA model. We will first discuss how to set some of
the training parameters.
First of all, the elephant in the room
Step10: We can compute the topic coherence of each topic. Below we display the
average topic coherence and print the topics in order of topic coherence.
Note that we use the "Umass" topic coherence measure here (see | Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: LDA Model
Introduces Gensim's LDA model and demonstrates its use on the NIPS corpus.
End of explanation
import io
import os.path
import re
import tarfile
import smart_open
def extract_documents(url='https://cs.nyu.edu/~roweis/data/nips12raw_str602.tgz'):
with smart_open.open(url, "rb") as file:
with tarfile.open(fileobj=file) as tar:
for member in tar.getmembers():
if member.isfile() and re.search(r'nipstxt/nips\d+/\d+\.txt', member.name):
member_bytes = tar.extractfile(member).read()
yield member_bytes.decode('utf-8', errors='replace')
docs = list(extract_documents())
Explanation: The purpose of this tutorial is to demonstrate how to train and tune an LDA model.
In this tutorial we will:
Load input data.
Pre-process that data.
Transform documents into bag-of-words vectors.
Train an LDA model.
This tutorial will not:
Explain how Latent Dirichlet Allocation works
Explain how the LDA model performs inference
Teach you all the parameters and options for Gensim's LDA implementation
If you are not familiar with the LDA model or how to use it in Gensim, I (Olavur Mortensen)
suggest you read up on that before continuing with this tutorial. Basic
understanding of the LDA model should suffice. Examples:
Introduction to Latent Dirichlet Allocation <http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation>_
Gensim tutorial: sphx_glr_auto_examples_core_run_topics_and_transformations.py
Gensim's LDA model API docs: :py:class:gensim.models.LdaModel
I would also encourage you to consider each step when applying the model to
your data, instead of just blindly applying my solution. The different steps
will depend on your data and possibly your goal with the model.
Data
I have used a corpus of NIPS papers in this tutorial, but if you're following
this tutorial just to learn about LDA I encourage you to consider picking a
corpus on a subject that you are familiar with. Qualitatively evaluating the
output of an LDA model is challenging and can require you to understand the
subject matter of your corpus (depending on your goal with the model).
NIPS (Neural Information Processing Systems) is a machine learning conference
so the subject matter should be well suited for most of the target audience
of this tutorial. You can download the original data from Sam Roweis'
website <http://www.cs.nyu.edu/~roweis/data.html>_. The code below will
also do that for you.
.. Important::
The corpus contains 1740 documents, and not particularly long ones.
So keep in mind that this tutorial is not geared towards efficiency, and be
careful before applying the code to a large dataset.
End of explanation
print(len(docs))
print(docs[0][:500])
Explanation: So we have a list of 1740 documents, where each document is a Unicode string.
If you're thinking about using your own corpus, then you need to make sure
that it's in the same format (list of Unicode strings) before proceeding
with the rest of this tutorial.
End of explanation
# Tokenize the documents.
from nltk.tokenize import RegexpTokenizer
# Split the documents into tokens.
tokenizer = RegexpTokenizer(r'\w+')
for idx in range(len(docs)):
docs[idx] = docs[idx].lower() # Convert to lowercase.
docs[idx] = tokenizer.tokenize(docs[idx]) # Split into words.
# Remove numbers, but not words that contain numbers.
docs = [[token for token in doc if not token.isnumeric()] for doc in docs]
# Remove words that are only one character.
docs = [[token for token in doc if len(token) > 1] for doc in docs]
Explanation: Pre-process and vectorize the documents
As part of preprocessing, we will:
Tokenize (split the documents into tokens).
Lemmatize the tokens.
Compute bigrams.
Compute a bag-of-words representation of the data.
First we tokenize the text using a regular expression tokenizer from NLTK. We
remove numeric tokens and tokens that are only a single character, as they
don't tend to be useful, and the dataset contains a lot of them.
.. Important::
This tutorial uses the nltk library for preprocessing, although you can
replace it with something else if you want.
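For example, a rough NLTK-free alternative could use Gensim's own helper; note it lowercases and drops very short/long tokens but does not lemmatize (raw_docs stands for the untokenized strings returned by extract_documents above):
from gensim.utils import simple_preprocess
# Drop-in replacement for the tokenization cell above
docs_alt = [simple_preprocess(raw_doc, deacc=True) for raw_doc in raw_docs]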
End of explanation
# Lemmatize the documents.
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
docs = [[lemmatizer.lemmatize(token) for token in doc] for doc in docs]
Explanation: We use the WordNet lemmatizer from NLTK. A lemmatizer is preferred over a
stemmer in this case because it produces more readable words. Output that is
easy to read is very desirable in topic modelling.
End of explanation
# Compute bigrams.
from gensim.models import Phrases
# Add bigrams and trigrams to docs (only ones that appear 20 times or more).
bigram = Phrases(docs, min_count=20)
for idx in range(len(docs)):
for token in bigram[docs[idx]]:
if '_' in token:
# Token is a bigram, add to document.
docs[idx].append(token)
Explanation: We find bigrams in the documents. Bigrams are sets of two adjacent words.
Using bigrams we can get phrases like "machine_learning" in our output
(spaces are replaced with underscores); without bigrams we would only get
"machine" and "learning".
Note that in the code below, we find bigrams and then add them to the
original data, because we would like to keep the words "machine" and
"learning" as well as the bigram "machine_learning".
.. Important::
Computing n-grams of large dataset can be very computationally
and memory intensive.
End of explanation
# Remove rare and common tokens.
from gensim.corpora import Dictionary
# Create a dictionary representation of the documents.
dictionary = Dictionary(docs)
# Filter out words that occur less than 20 documents, or more than 50% of the documents.
dictionary.filter_extremes(no_below=20, no_above=0.5)
Explanation: We remove rare words and common words based on their document frequency.
Below we remove words that appear in less than 20 documents or in more than
50% of the documents. Consider trying to remove words only based on their
frequency, or maybe combining that with this approach.
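A sketch of filtering on total corpus frequency instead (the threshold of 20 occurrences is arbitrary):
from collections import Counter
# Total frequency of every token across the tokenized documents
token_counts = Counter(token for doc in docs for token in doc)
rare_ids = [dictionary.token2id[tok] for tok, cnt in token_counts.items()
            if tok in dictionary.token2id and cnt < 20]
dictionary.filter_tokens(bad_ids=rare_ids)
dictionary.compactify()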
End of explanation
# Bag-of-words representation of the documents.
corpus = [dictionary.doc2bow(doc) for doc in docs]
Explanation: Finally, we transform the documents to a vectorized form. We simply compute
the frequency of each word, including the bigrams.
End of explanation
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
Explanation: Let's see how many tokens and documents we have to train on.
End of explanation
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 10
chunksize = 2000
passes = 20
iterations = 400
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make an index to word dictionary.
temp = dictionary[0] # This is only to "load" the dictionary.
id2word = dictionary.id2token
model = LdaModel(
corpus=corpus,
id2word=id2word,
chunksize=chunksize,
alpha='auto',
eta='auto',
iterations=iterations,
num_topics=num_topics,
passes=passes,
eval_every=eval_every
)
Explanation: Training
We are ready to train the LDA model. We will first discuss how to set some of
the training parameters.
First of all, the elephant in the room: how many topics do I need? There is
really no easy answer for this, it will depend on both your data and your
application. I have used 10 topics here because I wanted to have a few topics
that I could interpret and "label", and because that turned out to give me
reasonably good results. You might not need to interpret all your topics, so
you could use a large number of topics, for example 100.
chunksize controls how many documents are processed at a time in the
training algorithm. Increasing chunksize will speed up training, at least as
long as the chunk of documents fits easily into memory. I've set chunksize =
2000, which is more than the number of documents, so I process all the
data in one go. Chunksize can however influence the quality of the model, as
discussed in Hoffman and co-authors [2], but the difference was not
substantial in this case.
passes controls how often we train the model on the entire corpus.
Another word for passes might be "epochs". iterations is somewhat
technical, but essentially it controls how often we repeat a particular loop
over each document. It is important to set the number of "passes" and
"iterations" high enough.
I suggest the following way to choose iterations and passes. First, enable
logging (as described in many Gensim tutorials), and set eval_every = 1
in LdaModel. When training the model look for a line in the log that
looks something like this::
2016-06-21 15:40:06,753 - gensim.models.ldamodel - DEBUG - 68/1566 documents converged within 400 iterations
If you set passes = 20 you will see this line 20 times. Make sure that by
the final passes, most of the documents have converged. So you want to choose
both passes and iterations to be high enough for this to happen.
We set alpha = 'auto' and eta = 'auto'. Again this is somewhat
technical, but essentially we are automatically learning two parameters in
the model that we usually would have to specify explicitly.
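A sketch of the diagnostic run described above: raise the LDA module's logger to DEBUG and set eval_every=1 so the '... documents converged' lines appear (a one-off check, not the final training configuration):
import logging
logging.getLogger('gensim.models.ldamodel').setLevel(logging.DEBUG)
diagnostic_model = LdaModel(
    corpus=corpus, id2word=id2word, chunksize=chunksize,
    alpha='auto', eta='auto', iterations=iterations,
    num_topics=num_topics, passes=passes, eval_every=1
)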
End of explanation
top_topics = model.top_topics(corpus)
# Average topic coherence is the sum of topic coherences of all topics, divided by the number of topics.
avg_topic_coherence = sum([t[1] for t in top_topics]) / num_topics
print('Average topic coherence: %.4f.' % avg_topic_coherence)
from pprint import pprint
pprint(top_topics)
Explanation: We can compute the topic coherence of each topic. Below we display the
average topic coherence and print the topics in order of topic coherence.
Note that we use the "Umass" topic coherence measure here (see
:py:func:gensim.models.ldamodel.LdaModel.top_topics), Gensim has recently
obtained an implementation of the "AKSW" topic coherence measure (see
accompanying blog post, http://rare-technologies.com/what-is-topic-coherence/).
If you are familiar with the subject of the articles in this dataset, you can
see that the topics below make a lot of sense. However, they are not without
flaws. We can see that there is substantial overlap between some topics,
others are hard to interpret, and most of them have at least some terms that
seem out of place. If you were able to do better, feel free to share your
methods on the blog at http://rare-technologies.com/lda-training-tips/ !
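If you want to try the newer measure mentioned above, here is a sketch with Gensim's CoherenceModel (the 'c_v' option; it needs the tokenized texts rather than the bag-of-words corpus):
from gensim.models import CoherenceModel
cm = CoherenceModel(model=model, texts=docs, dictionary=dictionary, coherence='c_v')
print('c_v coherence: %.4f' % cm.get_coherence())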
End of explanation |
5,647 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mm', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MM
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
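A hedged illustration only (not part of the official template): BOOLEAN properties take a bare True or False via DOC.set_value. The line below is left commented out so the TODO above still reflects that no value has been entered.
# Illustrative call pattern only.
# DOC.set_value(False)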
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
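A hedged illustration only (not part of the official template, and assuming DOC.set_value may be called once per applicable choice for a 1.N property): several of the valid choices listed above can be recorded together. The lines below are commented placeholders, not recommendations for any particular model.
# Illustrative call pattern only -- uncomment and adapt to the methods your model actually uses.
# DOC.set_value("Prescribed CMIP6")
# DOC.set_value("Interactive")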
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
5,648 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KubeFlow Pipelines
Step1: Enter your gateway and the auth token
Use this extension on chrome to get token
Update values for the ingress gateway and auth session
Step2: Set the Log bucket and Tensorboard Image
Step3: Set the client and create the experiment
Step4: Set the Inference parameters
Step5: Load the the components yaml files for setting up the components
Step10: Define the pipeline
Step11: Compile the pipeline
Step12: Execute the pipeline
Step13: Wait for inference service below to go to READY True state
Step14: Get the Inference service name
Step15: Use the deployed model for prediction request and save the output into a json
Step16: Use the deployed model for explain request and save the output into a json
Step17: Model Interpretation using Captum Vis and Insights
Install dependencies for Captum Insights
Step18: import the necessary packages
Step19: Read the prediction, explanation, and the class mapping file which saved during the prediction and expalain requests.
Step20: Captum Insights can also be used for visualization
Define the minio client for downloading the artifactes from minio storage ( model pth file and training file)
Step21: Load the downloaded model pth file and classifer
Step22: Captum Insights output image
Clean up
Delete Viewers, Inference Services and Completed pods | Python Code:
! pip uninstall -y kfp
! pip install kfp
import kfp
import json
import os
from kfp.onprem import use_k8s_secret
from kfp import components
from kfp.components import load_component_from_file, load_component_from_url
from kfp import dsl
from kfp import compiler
import numpy as np
import logging
kfp.__version__
Explanation: KubeFlow Pipelines : Pytorch Cifar10 Image classification
This notebook shows PyTorch CIFAR10 end-to-end classification example using Kubeflow Pipelines.
An example notebook that demonstrates how to:
Get different tasks needed for the pipeline
Create a Kubeflow pipeline
Include Pytorch KFP components to preprocess, train, visualize and deploy the model in the pipeline
Submit a job for execution
Query(prediction and explain) the final deployed model
Interpretation of the model using the Captum Insights
import the necessary packages
End of explanation
INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'
AUTH="<enter your auth token>"
NAMESPACE="kubeflow-user-example-com"
COOKIE="authservice_session="+AUTH
EXPERIMENT="Default"
Explanation: Enter your gateway and the auth token
Use this extension on chrome to get token
Update values for the ingress gateway and auth session
End of explanation
MINIO_ENDPOINT="http://minio-service.kubeflow:9000"
LOG_BUCKET="mlpipeline"
TENSORBOARD_IMAGE="public.ecr.aws/pytorch-samples/tboard:latest"
Explanation: Set the Log bucket and Tensorboard Image
End of explanation
client = kfp.Client(host=INGRESS_GATEWAY+"/pipeline", cookies=COOKIE)
client.create_experiment(EXPERIMENT)
experiments = client.list_experiments(namespace=NAMESPACE)
my_experiment = experiments.experiments[0]
my_experiment
Explanation: Set the client and create the experiment
End of explanation
DEPLOY_NAME="torchserve"
MODEL_NAME="cifar10"
ISVC_NAME=DEPLOY_NAME+"."+NAMESPACE+"."+"example.com"
INPUT_REQUEST="https://raw.githubusercontent.com/kubeflow/pipelines/master/samples/contrib/pytorch-samples/cifar10/input.json"
Explanation: Set the Inference parameters
End of explanation
! python utils/generate_templates.py cifar10/template_mapping.json
prepare_tensorboard_op = load_component_from_file("yaml/tensorboard_component.yaml")
prep_op = components.load_component_from_file(
"yaml/preprocess_component.yaml"
)
train_op = components.load_component_from_file(
"yaml/train_component.yaml"
)
deploy_op = load_component_from_file("yaml/deploy_component.yaml")
pred_op = load_component_from_file("yaml/prediction_component.yaml")
minio_op = components.load_component_from_file(
"yaml/minio_component.yaml"
)
Explanation: Load the components yaml files for setting up the components
End of explanation
@dsl.pipeline(
name="Training Cifar10 pipeline", description="Cifar 10 dataset pipeline"
)
def pytorch_cifar10( # pylint: disable=too-many-arguments
minio_endpoint=MINIO_ENDPOINT,
log_bucket=LOG_BUCKET,
log_dir=f"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}",
mar_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store",
config_prop_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/config",
model_uri=f"s3://mlpipeline/mar/{dsl.RUN_ID_PLACEHOLDER}",
tf_image=TENSORBOARD_IMAGE,
deploy=DEPLOY_NAME,
isvc_name=ISVC_NAME,
model=MODEL_NAME,
namespace=NAMESPACE,
confusion_matrix_log_dir=f"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/",
checkpoint_dir="checkpoint_dir/cifar10",
input_req=INPUT_REQUEST,
cookie=COOKIE,
ingress_gateway=INGRESS_GATEWAY,
):
def sleep_op(seconds):
Sleep for a while.
return dsl.ContainerOp(
name="Sleep " + str(seconds) + " seconds",
image="python:alpine3.6",
command=["sh", "-c"],
arguments=[
'python -c "import time; time.sleep($0)"',
str(seconds)
],
)
This method defines the pipeline tasks and operations
pod_template_spec = json.dumps({
"spec": {
"containers": [{
"env": [
{
"name": "AWS_ACCESS_KEY_ID",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "accesskey",
}
},
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "secretkey",
}
},
},
{
"name": "AWS_REGION",
"value": "minio"
},
{
"name": "S3_ENDPOINT",
"value": f"{minio_endpoint}",
},
{
"name": "S3_USE_HTTPS",
"value": "0"
},
{
"name": "S3_VERIFY_SSL",
"value": "0"
},
]
}]
}
})
prepare_tb_task = prepare_tensorboard_op(
log_dir_uri=f"s3://{log_bucket}/{log_dir}",
image=tf_image,
pod_template_spec=pod_template_spec,
).set_display_name("Visualization")
prep_task = (
prep_op().after(prepare_tb_task
).set_display_name("Preprocess & Transform")
)
confusion_matrix_url = f"minio://{log_bucket}/{confusion_matrix_log_dir}"
script_args = f"model_name=resnet.pth," \
f"confusion_matrix_url={confusion_matrix_url}"
# For GPU, set number of gpus and accelerator type
ptl_args = f"max_epochs=1, gpus=0, accelerator=None, profiler=pytorch"
train_task = (
train_op(
input_data=prep_task.outputs["output_data"],
script_args=script_args,
ptl_arguments=ptl_args
).after(prep_task).set_display_name("Training")
)
# For GPU uncomment below line and set GPU limit and node selector
# ).set_gpu_limit(1).add_node_selector_constraint
# ('cloud.google.com/gke-accelerator','nvidia-tesla-p4')
(
minio_op(
bucket_name="mlpipeline",
folder_name=log_dir,
input_path=train_task.outputs["tensorboard_root"],
filename="",
).after(train_task).set_display_name("Tensorboard Events Pusher")
)
(
minio_op(
bucket_name="mlpipeline",
folder_name=checkpoint_dir,
input_path=train_task.outputs["checkpoint_dir"],
filename="",
).after(train_task).set_display_name("checkpoint_dir Pusher")
)
minio_mar_upload = (
minio_op(
bucket_name="mlpipeline",
folder_name=mar_path,
input_path=train_task.outputs["checkpoint_dir"],
filename="cifar10_test.mar",
).after(train_task).set_display_name("Mar Pusher")
)
(
minio_op(
bucket_name="mlpipeline",
folder_name=config_prop_path,
input_path=train_task.outputs["checkpoint_dir"],
filename="config.properties",
).after(train_task).set_display_name("Conifg Pusher")
)
model_uri = str(model_uri)
# pylint: disable=unused-variable
isvc_yaml = """
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: {}
namespace: {}
spec:
predictor:
serviceAccountName: sa
pytorch:
storageUri: {}
resources:
requests:
cpu: 4
memory: 16Gi
limits:
cpu: 4
memory: 16Gi
""".format(
deploy, namespace, model_uri
)
# For GPU inference use below yaml with gpu count and accelerator
gpu_count = "1"
accelerator = "nvidia-tesla-p4"
isvc_gpu_yaml = """ # pylint: disable=unused-variable
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: {}
namespace: {}
spec:
predictor:
serviceAccountName: sa
pytorch:
storageUri: {}
resources:
requests:
cpu: 4
memory: 16Gi
limits:
cpu: 4
memory: 16Gi
nvidia.com/gpu: {}
nodeSelector:
cloud.google.com/gke-accelerator: {}
""".format(deploy, namespace, model_uri, gpu_count, accelerator)
# Update inferenceservice_yaml for GPU inference
deploy_task = (
deploy_op(action="apply", inferenceservice_yaml=isvc_yaml
).after(minio_mar_upload).set_display_name("Deployer")
)
# Wait here for model to be loaded in torchserve for inference
sleep_task = sleep_op(5).after(deploy_task).set_display_name("Sleep")
# Make Inference request
pred_task = (
pred_op(
host_name=isvc_name,
input_request=input_req,
cookie=cookie,
url=ingress_gateway,
model=model,
inference_type="predict",
).after(sleep_task).set_display_name("Prediction")
)
(
pred_op(
host_name=isvc_name,
input_request=input_req,
cookie=cookie,
url=ingress_gateway,
model=model,
inference_type="explain",
).after(pred_task).set_display_name("Explanation")
)
dsl.get_pipeline_conf().add_op_transformer(
use_k8s_secret(
secret_name="mlpipeline-minio-artifact",
k8s_secret_key_to_env={
"secretkey": "MINIO_SECRET_KEY",
"accesskey": "MINIO_ACCESS_KEY",
},
)
)
Explanation: Define the pipeline
End of explanation
compiler.Compiler().compile(pytorch_cifar10, 'pytorch.tar.gz', type_check=True)
Explanation: Compile the pipeline
End of explanation
run = client.run_pipeline(my_experiment.id, 'pytorch-cifar10', 'pytorch.tar.gz')
Explanation: Execute the pipeline
End of explanation
!kubectl get isvc $DEPLOY_NAME -n $NAMESPACE
Explanation: Wait for inference service below to go to READY True state
End of explanation
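As a hedged convenience (not part of the original notebook, and assuming the InferenceService resource reports a standard Ready condition), one can block until the service is ready instead of re-running the cell above by hand:
# Wait for the InferenceService to become Ready (times out after 10 minutes).
! kubectl wait --for=condition=Ready isvc/$DEPLOY_NAME -n $NAMESPACE --timeout=600s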
INFERENCE_SERVICE_LIST = ! kubectl get isvc {DEPLOY_NAME} -n {NAMESPACE} -o json | python3 -c "import sys, json; print(json.load(sys.stdin)['status']['url'])"| tr -d '"' | cut -d "/" -f 3
INFERENCE_SERVICE_NAME = INFERENCE_SERVICE_LIST[0]
INFERENCE_SERVICE_NAME
Explanation: Get the Inference service name
End of explanation
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:predict" -d @./cifar10/input.json > cifar10_prediction_output.json
! cat cifar10_prediction_output.json
Explanation: Use the deployed model for prediction request and save the output into a json
End of explanation
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:explain" -d @./cifar10/input.json > cifar10_explanation_output.json
Explanation: Use the deployed model for explain request and save the output into a json
End of explanation
!./install-dependencies.sh
Explanation: Model Interpretation using Captum Vis and Insights
Install dependencies for Captum Insights
End of explanation
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
import torchvision.transforms as transforms
import torch
import torch.nn.functional as F
import json
import captum
from captum.attr import LayerAttribution
from captum.attr import visualization as viz
import base64
import os
import io
Explanation: import the necessary packages
End of explanation
prediction_json = json.loads(open("./cifar10_prediction_output.json", "r").read())
explainations_json = json.loads(open("./cifar10_explanation_output.json", "r").read())
labels_path = './cifar10/class_mapping.json'
with open(labels_path) as json_data:
idx_to_labels = json.load(json_data)
count = 0
for i in range(0, len(explainations_json["explanations"])):
image = base64.b64decode(explainations_json["explanations"][i]["b64"])
fileName = 'captum_kitten_{}.jpeg'.format(count)
imagePath = ( os.getcwd() +"/" + fileName)
img = Image.open(io.BytesIO(image))
img = img.convert('RGB')
img.save(imagePath, 'jpeg', quality=100)
print("Saving ", imagePath)
count += 1
from IPython.display import Image
Image(filename='captum_kitten_0.jpeg')
Image(filename='captum_kitten_1.jpeg')
Image(filename='captum_kitten_2.jpeg')
Explanation: Read the prediction, explanation, and the class mapping file which were saved during the prediction and explain requests.
End of explanation
from minio import Minio
from kubernetes import client, config
import base64
config.load_incluster_config()
v1 = client.CoreV1Api()
sec = v1.read_namespaced_secret("mlpipeline-minio-artifact", NAMESPACE).data
minio_accesskey = base64.b64decode(sec["accesskey"]).decode('UTF-8')
minio_secretkey = base64.b64decode(sec["secretkey"]).decode('UTF-8')
minio_config = {
"HOST": "minio-service.kubeflow:9000",
"ACCESS_KEY": minio_accesskey,
"SECRET_KEY": minio_secretkey,
"BUCKET": "mlpipeline",
"FOLDER": "checkpoint_dir/cifar10"}
def _initiate_minio_client(minio_config):
minio_host = minio_config["HOST"]
access_key = minio_config["ACCESS_KEY"]
secret_key = minio_config["SECRET_KEY"]
client = Minio(minio_host, access_key=access_key, secret_key=secret_key, secure=False)
return client
client= _initiate_minio_client(minio_config)
client
def download_artifact_from_minio(folder: str, artifact: str):
artifact_name = artifact.split("/")[-1]
result = client.fget_object(
minio_config["BUCKET"],
os.path.join(folder, artifact_name),
artifact,
)
download_artifact_from_minio(minio_config["FOLDER"],"resnet.pth")
print("[INFO] Downloaded the Model Pth File.....")
download_artifact_from_minio(minio_config["FOLDER"],"cifar10_train.py")
print("[INFO] Downloaded the Model Classifier File.....")
Explanation: Captum Insights can also be used for visualization
Define the minio client for downloading the artifacts from minio storage (model pth file and training file)
End of explanation
from cifar10_train import CIFAR10Classifier
model = CIFAR10Classifier()
model_pt_path ="./resnet.pth"
model.load_state_dict(torch.load(model_pt_path,map_location=torch.device('cpu')))
model.eval()
# Let's read two test images, make predictions, and use these images for Captum Insights.
imgs = ['./cifar10/kitten.png',"./cifar10/horse.png"]
for img in imgs:
img = Image.open(img)
transformed_img = transform(img)
input_img = transform_normalize(transformed_img)
input_img = input_img.unsqueeze(0) # the model requires a dummy batch dimension
output = model(input_img)
output = F.softmax(output, dim=1)
prediction_score, pred_label_idx = torch.topk(output, 1)
pred_label_idx.squeeze_()
predicted_label = idx_to_labels[str(pred_label_idx.squeeze_().item())]
print('Predicted:', predicted_label, '/', pred_label_idx.item(), ' (', prediction_score.squeeze().item(), ')')
from captum.insights import AttributionVisualizer, Batch
from captum.insights.attr_vis.features import ImageFeature
# Baseline is all-zeros input - this may differ depending on your data
def baseline_func(input):
return input * 0
# merging our image transforms from above
def full_img_transform(input):
i = Image.open(input)
i = transform(i)
i = transform_normalize(i)
i = i.unsqueeze(0)
i.requires_grad = True
return i
input_imgs = torch.cat(list(map(lambda i: full_img_transform(i), imgs)), 0)
visualizer = AttributionVisualizer(
models=[model],
score_func=lambda o: torch.nn.functional.softmax(o, 1),
classes=list(map(lambda k: idx_to_labels[k], idx_to_labels.keys())),
features=[
ImageFeature(
"Photo",
baseline_transforms=[baseline_func],
input_transforms=[],
)
],
dataset=[Batch(input_imgs, labels=[3,7])]
)
visualizer.serve(debug=True,port=6080)
Explanation: Load the downloaded model pth file and classifier
End of explanation
! kubectl delete --all isvc -n $NAMESPACE
! kubectl delete pod --field-selector=status.phase==Succeeded -n $NAMESPACE
Explanation: Captum Insights output image
Clean up
Delete Viewers, Inference Services and Completed pods
End of explanation |
5,649 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The Cirq Developers
Step1: Ion Device Class
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Defining an IonDevice
To define an IonDevice, we specify
The set of qubits in the device,
The duration of single-qubit gates,
The duration of two-qubit gates, and
The duration of measurement gates.
The code below creates an IonDevice with four qubits in a linear array. The durations we use for each type of gate are reasonable order-of-magnitude estimates, though they will differ for different trapped ion computers.
Step5: We can view some properties of the ion_device as shown below.
Step7: Native Gate Set
An IonDevice can implement single-qubit rotations about the $X$, $Y$, and $Z$ axes of the Bloch sphere
Step9: One can also validate operations and circuits with IonDevice.validate_operation and IonDevice.validate_circuit, respectively.
We can get the duration of valid operations as follows.
Step11: Decomposing Operations and Circuits
Operations which are not valid on the device can be decomposed into a set of valid operations. For example, a CNOT gate is not supported but can be implemented with the following decomposition.
Step13: Circuits can also be decomposed in a similar manner using IonDevice.decompose_circuit. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install cirq --quiet
print("installed cirq.")
import cirq
import numpy as np
Explanation: Ion Device Class
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/educators/ion_device"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/educators/ion_device.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/educators/ion_device.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/educators/ion_device.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
The IonDevice represents a trapped ion quantum computer with all-to-all qubit connectivity. The number of qubits as well as the duration of gates and measurements are specified by the user when creating an ion device.
Two-qubit gates are implemented by an Ising-type coupling known as the Mølmer–Sørensen gate. The Mølmer–Sørensen gate couples ions through the shared motional modes of the ion chain. The ion motion and internal state decouples at the end of each gate. The IonDevice class assumes this decoupling is perfect and does not explicitly model the ion motion.
End of explanation
"""Create an IonDevice."""
ion_device = cirq.IonDevice(
qubits=cirq.LineQubit.range(4),
oneq_gates_duration=cirq.Duration(micros=10),
twoq_gates_duration=cirq.Duration(micros=200),
measurement_duration=cirq.Duration(micros=100)
)
Explanation: Defining an IonDevice
To define an IonDevice, we specify
The set of qubits in the device,
The duration of single-qubit gates,
The duration of two-qubit gates, and
The duration of measurement gates.
The code below creates an IonDevice with four qubits in a linear array. The durations we use for each type of gate are reasonable order-of-magnitude estimates, though they will differ for different trapped ion computers.
End of explanation
"""View some properties of the device."""
# Display the ion device.
print("Ion Device:\n", ion_device)
# Get all qubits in the device.
print("\nQubits in the IonDevice:\n", sorted(ion_device.qubits))
# Get a qubit at a certain position (if present).
pos = 2
print(f"\nQubit at position {pos}:\n", ion_device.at(pos))
Explanation: We can view some properties of the ion_device as shown below.
End of explanation
"""Check if gates are valid. Invalid gates raise a ValueError."""
# Single-qubit X rotation of any angle is supported.
ion_device.validate_gate(cirq.rx(np.pi / 7))
# Single-qubit Z rotation of any angle is supported.
ion_device.validate_gate(cirq.rz(np.pi / 5))
# Mølmer–Sørensen gate of any angle is supported.
ion_device.validate_gate(cirq.ms(np.pi / 4))
Explanation: Native Gate Set
An IonDevice can implement single-qubit rotations about the $X$, $Y$, and $Z$ axes of the Bloch sphere: namely, cirq.rx, cirq.ry, and cirq.rz.
An IonDevice can implement the two-qubit Mølmer–Sørensen gate, a rotation about the $XX$ axis in the two-qubit Bloch sphere defined as
\begin{equation}
\exp(-i t XX) = \left[ \begin{matrix}
\cos t & 0 & 0 & -i \sin t \
0 & \cos t & -i \sin t & 0 \
0 & -i \sin t & \cos t & 0 \
-i \sin t & 0 & 0 & \cos t
\end{matrix} \right] .
\end{equation}
The Mølmer–Sørensen gate is defined in Cirq as cirq.ms.
One can check if a given gate is valid with IonDevice.validate_gate. This method raises an error if the gate is invalid (not supported by the device) and does nothing if the gate is valid (supported by the device).
End of explanation
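A quick, purely illustrative numerical check (not in the original notebook): printing the unitary of cirq.ms lets you compare it directly against the $\exp(-i t XX)$ matrix written above.
# Unitary of the Mølmer–Sørensen gate at t = pi/4, rounded for readability.
print(np.round(cirq.unitary(cirq.ms(np.pi / 4)), 3))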
"""Get the duration of valid operations."""
# Duration of a single-qubit operation.
ion_device.duration_of(cirq.ry(np.pi / 2).on(ion_device.at(0)))
Explanation: One can also validate operations and circuits with IonDevice.validate_operation and IonDevice.validate_circuit, respectively.
We can get the duration of valid operations as follows.
End of explanation
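A small hedged illustration (not in the original notebook) of the validation methods mentioned above; the commented line shows an operation the device would reject.
# Operations (a gate applied to specific qubits) are validated the same way as gates.
ion_device.validate_operation(cirq.rx(np.pi / 3).on(ion_device.at(1)))
# An unsupported operation such as a bare CNOT would raise a ValueError:
# ion_device.validate_operation(cirq.CNOT(ion_device.at(0), ion_device.at(1)))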
"""Decompose a CNOT operation into valid IonDevice operations."""
# Get a CNOT operation.
op = cirq.CNOT(ion_device.at(0), ion_device.at(1))
# Decompose it for the IonDevice.
ion_device_ops = cirq.ConvertToIonGates().convert_one(op)
# Print the sequence of operations to implement a CNOT.
print("Sequence of IonDevice operations for a CNOT:\n")
print(cirq.Circuit(ion_device_ops))
Explanation: Decomposing Operations and Circuits
Operations which are not valid on the device can be decomposed into a set of valid operations. For example, a CNOT gate is not supported but can be implemented with the following decomposition.
End of explanation
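As a hedged sanity check (not in the original notebook), the decomposed operations can be compared against CNOT up to a global phase:
# True if the decomposition implements CNOT up to global phase.
print(cirq.allclose_up_to_global_phase(cirq.unitary(cirq.Circuit(ion_device_ops)), cirq.unitary(cirq.CNOT)))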
"""Decompose a circuit into IonDevice operations."""
# Example circuit to decompose.
circuit = cirq.Circuit(
cirq.H(cirq.LineQubit(0)),
cirq.CNOT(cirq.LineQubit(0), cirq.LineQubit(1)),
cirq.CNOT(cirq.LineQubit(0), cirq.LineQubit(2))
)
# Display it.
print("Circuit to decompose:\n")
print(circuit)
# Decompose the circuit.
ion_device_circuit = ion_device.decompose_circuit(circuit)
# Display the decomposed circuit.
print("\nIonDevice circuit:\n")
print(ion_device_circuit)
Explanation: Circuits can also be decomposed in a similar manner using IonDevice.decompose_circuit.
End of explanation |
5,650 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 10
Step1: Create some dictionarys with parameters for cell
Step2: Create an helper function to instantiate a cell object given a set of parameters
Step3: Instantiate a LFPy.Cell object
Step4: Create an electrode using a commercially available design from Neuronexus
Step5: Rotate the probe and move it so that it is in the xz plane and 50 $\mu$m away from the soma
Step6: Create a pulse stimulation current
Step7: Create LFPy electrode object
Step8: Enable extracellular stimulation for the cell using stimulating currents of the electrode object
Step9: Run the simulation with electrode as input to cell.simulate()
Step10: Then plot the somatic potential, the extracellular field and the LFP
from electrode object
Step11: Positive pulses close to the soma location cause an hyperpolarization in the cell. Let's try something else!
Step12: Use the probe field in the electrode object created before to overwrite currents
Step13: Now the membrane potential is depolarizing, but stimulation is not strong enough to elicit an action potential.
Try to crank up the stimulation current to 50$\mu A$
Step14: Finally we got two spikes. We can maybe get the same effect with smaller currents and higher stimulation frequencies / number of pulses / pulse width. Try to increase the pulse width | Python Code:
import LFPy
import MEAutility as mu
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.gridspec import GridSpec
Explanation: Example 10: Extracellular stimulation of neurons
This is an example of LFPy running in a Jupyter notebook. To run through this example code and produce output, press <shift-Enter> in each code block below.
The first step is to import LFPy and other packages for analysis and plotting:
End of explanation
cellParameters = {
'morphology' : 'morphologies/L5_Mainen96_LFPy.hoc',
'tstart' : -50, # ignore startup transients
'tstop' : 100,
'dt' : 2**-4,
'v_init' : -60,
'passive' : False,
}
Explanation: Create some dictionaries with parameters for the cell:
End of explanation
def instantiate_cell(cellParameters):
cell = LFPy.Cell(**cellParameters, delete_sections=True)
cell.set_pos(x=0, y=0, z=0)
cell.set_rotation(x=4.98919, y=-4.33261, z=np.pi) # Align apical dendrite with z-axis
# insert hh mechanism in everywhere, reduced density elsewhere
for sec in cell.allseclist:
sec.insert('hh')
if not 'soma' in sec.name():
# reduce density of Na- and K-channels to 5% in dendrites
sec.gnabar_hh = 0.006
sec.gkbar_hh = 0.0018
return cell
def plot_results(cell, electrode):
fig = plt.figure(figsize=(10, 6))
gs = GridSpec(2, 2)
ax = fig.add_subplot(gs[0, 1])
im = ax.pcolormesh(np.array(cell.t_ext), cell.z.mean(axis=-1), np.array(cell.v_ext),
cmap='RdBu', vmin=-100, vmax=100,
shading='auto')
ax.set_title('Applied extracellular potential')
ax.set_ylabel('z (um)', labelpad=0)
rect = np.array(ax.get_position().bounds)
rect[0] += rect[2] + 0.01
rect[2] = 0.01
cax = fig.add_axes(rect)
cbar = fig.colorbar(im, cax=cax, extend='both')
cbar.set_label('(mV)', labelpad=0)
ax = fig.add_subplot(gs[1, 1], sharex=ax)
ax.plot(cell.tvec, cell.somav, 'k')
ax.set_title('somatic voltage')
ax.set_ylabel('(mV)', labelpad=0)
ax.set_xlabel('t (ms)')
ax.set_ylim([-90, 20])
ax.set_xlim(cell.tvec[0], cell.tvec[-1])
ax = fig.add_subplot(gs[:, 0])
for sec in cell.allseclist:
idx = cell.get_idx(sec.name())
ax.plot(cell.x[idx], cell.z[idx],
color='k')
if 'soma' in sec.name():
ax.plot(cell.x[idx], cell.z[idx], color='b', lw=5)
ax.plot(electrode.x, electrode.z, marker='o', color='g', markersize=3)
ax.plot(electrode.x[stim_elec], electrode.z[stim_elec], marker='o', color='r', markersize=5)
ax.axis([-500, 500, -400, 1200])
Explanation: Create a helper function to instantiate a cell object given a set of parameters:
End of explanation
cell = instantiate_cell(cellParameters)
Explanation: Instantiate a LFPy.Cell object:
End of explanation
probe = mu.return_mea('Neuronexus-32')
Explanation: Create an electrode using a commercially available design from Neuronexus:
End of explanation
probe.rotate(axis=[0, 0, 1], theta=90)
probe.move([0, 100, 0])
Explanation: Rotate the probe and move it so that it is in the xz plane and 50 $\mu$m away from the soma:
End of explanation
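An optional, hedged check (not in the original notebook, and assuming the MEAutility MEA object exposes electrode coordinates through a positions attribute): printing a few electrode positions confirms where the probe ended up relative to the cell.
# First three electrode positions (x, y, z) in um.
print(probe.positions[:3])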
amp = 20000
n_pulses = 2
interpulse = 10
width = 2
dt = cell.dt
t_stop = cell.tstop
t_start = 20
stim_elec = 15
current, t_ext = probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt, t_stop=t_stop,
t_start=t_start, n_pulses=n_pulses, interpulse=interpulse)
plt.figure(figsize=(10, 6))
plt.plot(t_ext, current)
plt.title("Stimulating current")
plt.xlabel('t (ms)')
plt.ylabel('(nA)')
plt.xlim(0, cell.tstop)
Explanation: Create a pulse stimulation current:
End of explanation
electrode = LFPy.RecExtElectrode(cell=cell, probe=probe)
Explanation: Create LFPy electrode object:
End of explanation
v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext)
Explanation: Enable extracellular stimulation for the cell using stimulating currents of the electrode object:
End of explanation
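An optional, hedged check (not in the original notebook, assuming the returned v_ext holds the applied extracellular potential per compartment and time step): inspect its shape and peak magnitude before running the simulation.
# Shape (n_compartments, n_timesteps) and maximum absolute applied potential (mV).
v_ext_arr = np.array(v_ext)
print(v_ext_arr.shape, np.abs(v_ext_arr).max())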
cell.simulate(probes=[electrode], rec_vmem=True)
Explanation: Run the simulation with electrode as input to cell.simulate()
End of explanation
plot_results(cell, electrode)
Explanation: Then plot the somatic potential, the extracellular field and the LFP
from electrode object:
End of explanation
cell = instantiate_cell(cellParameters)
Explanation: Positive pulses close to the soma location cause hyperpolarization in the cell. Let's try something else!
End of explanation
amp = -20000
n_pulses = 2
interpulse = 10
width = 2
dt = cell.dt
t_stop = cell.tstop
t_start = 20
stim_elec = 15
electrode = LFPy.RecExtElectrode(cell=cell, probe=probe)
current, t_ext = electrode.probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt,
t_stop=t_stop, t_start=t_start, n_pulses=n_pulses,
interpulse=interpulse)
v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext)
cell.simulate(probes=[electrode], rec_vmem=True)
plot_results(cell, electrode)
Explanation: Use the probe field in the electrode object created before to overwrite currents:
End of explanation
amp = -75000
electrode = LFPy.RecExtElectrode(cell=cell, probe=probe)
current, t_ext = electrode.probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt,
t_stop=t_stop, t_start=t_start, n_pulses=n_pulses,
interpulse=interpulse)
cell = instantiate_cell(cellParameters)
v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext)
cell.simulate(probes=[electrode], rec_vmem=True)
plot_results(cell, electrode)
Explanation: Now the membrane potential is depolarizing, but stimulation is not strong enough to elicit an action potential.
Try to crank up the stimulation current to 50$\mu A$
End of explanation
amp = -30000
n_pulses = 1
interpulse = 10
width = 15
dt = cell.dt
t_stop = cell.tstop
t_start = 20
stim_elec = 15
electrode = LFPy.RecExtElectrode(cell=cell, probe=probe)
current, t_ext = electrode.probe.set_current_pulses(el_id=stim_elec, amp1=amp, width1=width, dt=dt,
t_stop=t_stop, t_start=t_start, n_pulses=n_pulses,
interpulse=interpulse)
cell = instantiate_cell(cellParameters)
v_ext = cell.enable_extracellular_stimulation(electrode, t_ext=t_ext)
cell.simulate(probes=[electrode], rec_vmem=True)
plot_results(cell, electrode)
Explanation: Finally we got two spikes. We can maybe get the same effect with smaller currents and higher stimulation frequencies / number of pulses / pulse width. Try to increase the pulse width:
End of explanation |
5,651 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fast Bayesian estimation of SARIMAX models
Introduction
This notebook will show how to use fast Bayesian methods to estimate SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous regressors) models. These methods can also be parallelized across multiple cores.
Here, fast methods means a version of Hamiltonian Monte Carlo called the No-U-Turn Sampler (NUTS) developed by Hoffmann and Gelman
Step1: 2. Download and plot the data on US CPI
We'll get the data from FRED
Step2: 3. Fit the model with maximum likelihood
Statsmodels does all of the hard work of this for us - creating and fitting the model takes just two lines of code. The model order parameters correspond to auto-regressive, difference, and moving average orders respectively.
Step3: It's a good fit. We can also get the series of one-step ahead predictions and plot it next to the actual data, along with a confidence band.
Step4: 4. Helper functions to provide tensors to the library doing Bayesian estimation
We're almost on to the magic but there are a few preliminaries. Feel free to skip this section if you're not interested in the technical details.
Technical Details
PyMC3 is a Bayesian estimation library ("Probabilistic Programming in Python
Step5: 5. Bayesian estimation with NUTS
The next step is to set the parameters for the Bayesian estimation, specify our priors, and run it.
Step6: Now for the fun part! There are three parameters to estimate
Step7: Note that the NUTS sampler is auto-assigned because we provided gradients. PyMC3 will use Metropolis or Slicing samplers if it does not find that gradients are available. There are an impressive number of draws per second for a "block box" style computation! However, note that if the model can be represented directly by PyMC3 (like the AR(p) models mentioned above), then computation can be substantially faster.
Inference is complete, but are the results any good? There are a number of ways to check. The first is to look at the posterior distributions (with lines showing the MLE values)
Step8: The estimated posteriors clearly peak close to the parameters found by MLE. We can also see a summary of the estimated values
Step9: Here $\hat{R}$ is the Gelman-Rubin statistic. It tests for lack of convergence by comparing the variance between multiple chains to the variance within each chain. If convergence has been achieved, the between-chain and within-chain variances should be identical. If $\hat{R}<1.2$ for all model parameters, we can have some confidence that convergence has been reached.
Additionally, the highest posterior density interval (the gap between the two values of HPD in the table) is small for each of the variables.
6. Application of Bayesian estimates of parameters
We'll now re-instigate a version of the model but using the parameters from the Bayesian estimation, and again plot the one-step-ahead forecasts.
Step10: Appendix A. Application to UnobservedComponents models
We can reuse the Loglike and Score wrappers defined above to consider a different state space model. For example, we might want to model inflation as the combination of a random walk trend and autoregressive error term
Step11: As noted earlier, the Theano wrappers (Loglike and Score) that we created above are generic, so we can re-use essentially the same code to explore the model with Bayesian methods.
Step12: And as before we can plot the marginal posteriors. In contrast to the SARIMAX example, here the posterior modes are somewhat different from the MLE estimates.
Step13: One benefit of this model is that it gives us an estimate of the underlying "level" of inflation, using the smoothed estimate of $\mu_t$, which we can access as the "level" column in the results objects' states.smoothed attribute. In this case, because the Bayesian posterior mean of the level's variance is larger than the MLE estimate, its estimated level is a little more volatile. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pymc3 as pm
import statsmodels.api as sm
import theano
import theano.tensor as tt
from pandas.plotting import register_matplotlib_converters
from pandas_datareader.data import DataReader
plt.style.use("seaborn")
register_matplotlib_converters()
Explanation: Fast Bayesian estimation of SARIMAX models
Introduction
This notebook will show how to use fast Bayesian methods to estimate SARIMAX (Seasonal AutoRegressive Integrated Moving Average with eXogenous regressors) models. These methods can also be parallelized across multiple cores.
Here, fast methods means a version of Hamiltonian Monte Carlo called the No-U-Turn Sampler (NUTS) developed by Hoffmann and Gelman: see Hoffman, M. D., & Gelman, A. (2014). The No-U-Turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15(1), 1593-1623.. As they say, "the cost of HMC per independent sample from a target distribution of dimension $D$ is roughly $\mathcal{O}(D^{5/4})$, which stands in sharp contrast with the $\mathcal{O}(D^{2})$ cost of random-walk Metropolis". So for problems of larger dimension, the time-saving with HMC is significant. However it does require the gradient, or Jacobian, of the model to be provided.
This notebook will combine the Python libraries statsmodels, which does econometrics, and PyMC3, which is for Bayesian estimation, to perform fast Bayesian estimation of a simple SARIMAX model, in this case an ARMA(1, 1) model for US CPI.
Note that, for simple models like AR(p), base PyMC3 is a quicker way to fit a model; there's an example here. The advantage of using statsmodels is that it gives access to methods that can solve a vast range of statespace models.
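For comparison, a pure-PyMC3 AR(1) fit might look roughly like the sketch below; the prior choices and the placeholder name series (standing in for any 1-D array of observations) are illustrative only and not part of this notebook's workflow.
with pm.Model():
    rho = pm.Uniform("rho", -0.99, 0.99)
    sigma = pm.HalfNormal("sigma", 1.0)
    pm.AR("obs", rho=rho, sigma=sigma, observed=series)  # "series" is a placeholder 1-D array
    trace_ar1 = pm.sample(1000, tune=500)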
The model we'll solve is given by
$$
y_t = \phi y_{t-1} + \varepsilon_t + \theta_1 \varepsilon_{t-1}, \qquad \varepsilon_t \sim N(0, \sigma^2)
$$
with 1 auto-regressive term and 1 moving average term. In statespace form it is written as:
$$
\begin{align}
y_t & = \underbrace{\begin{bmatrix} 1 & \theta_1 \end{bmatrix}}_{Z} \underbrace{\begin{bmatrix} \alpha_{1,t} \\ \alpha_{2,t} \end{bmatrix}}_{\alpha_t} \\
\begin{bmatrix} \alpha_{1,t+1} \\ \alpha_{2,t+1} \end{bmatrix} & = \underbrace{\begin{bmatrix}
\phi & 0 \\
1 & 0
\end{bmatrix}}_{T} \begin{bmatrix} \alpha_{1,t} \\ \alpha_{2,t} \end{bmatrix} +
\underbrace{\begin{bmatrix} 1 \\ 0 \end{bmatrix}}_{R} \underbrace{\varepsilon_{t+1}}_{\eta_t}
\end{align}
$$
The code will follow these steps:
1. Import external dependencies
2. Download and plot the data on US CPI
3. Simple maximum likelihood estimation (MLE) as an example
4. Definitions of helper functions to provide tensors to the library doing Bayesian estimation
5. Bayesian estimation via NUTS
6. Application to US CPI series
Finally, Appendix A shows how to re-use the helper functions from step (4) to estimate a different state space model, UnobservedComponents, using the same Bayesian methods.
1. Import external dependencies
End of explanation
cpi = DataReader("CPIAUCNS", "fred", start="1971-01", end="2018-12")
cpi.index = pd.DatetimeIndex(cpi.index, freq="MS")
# Define the inflation series that we'll use in analysis
inf = np.log(cpi).resample("QS").mean().diff()[1:] * 400
inf = inf.dropna()
print(inf.head())
# Plot the series
fig, ax = plt.subplots(figsize=(9, 4), dpi=300)
ax.plot(inf.index, inf, label=r"$\Delta \log CPI$", lw=2)
ax.legend(loc="lower left")
plt.show()
Explanation: 2. Download and plot the data on US CPI
We'll get the data from FRED:
End of explanation
# Create an SARIMAX model instance - here we use it to estimate
# the parameters via MLE using the `fit` method, but we can
# also re-use it below for the Bayesian estimation
mod = sm.tsa.statespace.SARIMAX(inf, order=(1, 0, 1))
res_mle = mod.fit(disp=False)
print(res_mle.summary())
Explanation: 3. Fit the model with maximum likelihood
Statsmodels does all of the hard work of this for us - creating and fitting the model takes just two lines of code. The model order parameters correspond to auto-regressive, difference, and moving average orders respectively.
End of explanation
predict_mle = res_mle.get_prediction()
predict_mle_ci = predict_mle.conf_int()
lower = predict_mle_ci["lower CPIAUCNS"]
upper = predict_mle_ci["upper CPIAUCNS"]
# Graph
fig, ax = plt.subplots(figsize=(9, 4), dpi=300)
# Plot data points
inf.plot(ax=ax, style="-", label="Observed")
# Plot predictions
predict_mle.predicted_mean.plot(ax=ax, style="r.", label="One-step-ahead forecast")
ax.fill_between(predict_mle_ci.index, lower, upper, color="r", alpha=0.1)
ax.legend(loc="lower left")
plt.show()
Explanation: It's a good fit. We can also get the series of one-step ahead predictions and plot it next to the actual data, along with a confidence band.
End of explanation
class Loglike(tt.Op):
itypes = [tt.dvector] # expects a vector of parameter values when called
otypes = [tt.dscalar] # outputs a single scalar value (the log likelihood)
def __init__(self, model):
self.model = model
self.score = Score(self.model)
def perform(self, node, inputs, outputs):
(theta,) = inputs # contains the vector of parameters
llf = self.model.loglike(theta)
outputs[0][0] = np.array(llf) # output the log-likelihood
def grad(self, inputs, g):
# the method that calculates the gradients - it actually returns the
# vector-Jacobian product - g[0] is a vector of parameter values
(theta,) = inputs # our parameters
out = [g[0] * self.score(theta)]
return out
class Score(tt.Op):
itypes = [tt.dvector]
otypes = [tt.dvector]
def __init__(self, model):
self.model = model
def perform(self, node, inputs, outputs):
(theta,) = inputs
outputs[0][0] = self.model.score(theta)
Explanation: 4. Helper functions to provide tensors to the library doing Bayesian estimation
We're almost on to the magic but there are a few preliminaries. Feel free to skip this section if you're not interested in the technical details.
Technical Details
PyMC3 is a Bayesian estimation library ("Probabilistic Programming in Python: Bayesian Modeling and Probabilistic Machine Learning with Theano") that is a) fast and b) optimized for Bayesian machine learning, for instance Bayesian neural networks. To do all of this, it is built on top of Theano, a library that aims to evaluate tensors very efficiently and provide symbolic differentiation (necessary for any kind of deep learning). It is the symbolic differentiation that means PyMC3 can use NUTS on any problem formulated within PyMC3.
We are not formulating a problem directly in PyMC3; we're using statsmodels to specify the statespace model and solve it with the Kalman filter. So we need to put the plumbing of statsmodels and PyMC3 together, which means wrapping the statsmodels SARIMAX model object in a Theano-flavored wrapper before passing information to PyMC3 for estimation.
Because of this, we can't use the Theano auto-differentiation directly. Happily, statsmodels SARIMAX objects have a method to return the Jacobian evaluated at the parameter values. We'll be making use of this to provide gradients so that we can use NUTS.
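To make the wiring concrete, these are the two statsmodels methods the wrappers above delegate to (shown purely as an illustration; the parameter vector follows the order reported by mod.param_names):
theta_mle = res_mle.params.values     # e.g. [ar.L1, ma.L1, sigma2]
print(mod.loglike(theta_mle))         # scalar log-likelihood
print(mod.score(theta_mle))           # score (Jacobian) vector used for the gradients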
Defining helper functions to translate models into a PyMC3 friendly form
First, we'll create the Theano wrappers. They will be in the form of 'Ops', operation objects, that 'perform' particular tasks. They are initialized with a statsmodels model instance.
Although this code may look somewhat opaque, it is generic for any state space model in statsmodels.
End of explanation
# Set sampling params
ndraws = 3000 # number of draws from the distribution
nburn = 600 # number of "burn-in points" (which will be discarded)
Explanation: 5. Bayesian estimation with NUTS
The next step is to set the parameters for the Bayesian estimation, specify our priors, and run it.
End of explanation
# Construct an instance of the Theano wrapper defined above, which
# will allow PyMC3 to compute the likelihood and Jacobian in a way
# that it can make use of. Here we are using the same model instance
# created earlier for MLE analysis (we could also create a new model
# instance if we preferred)
loglike = Loglike(mod)
with pm.Model() as m:
# Priors
arL1 = pm.Uniform("ar.L1", -0.99, 0.99)
maL1 = pm.Uniform("ma.L1", -0.99, 0.99)
sigma2 = pm.InverseGamma("sigma2", 2, 4)
# convert variables to tensor vectors
theta = tt.as_tensor_variable([arL1, maL1, sigma2])
# use a DensityDist (the Loglike Op is callable, so it can be passed directly as the log-probability)
pm.DensityDist("likelihood", loglike, observed=theta)
# Draw samples
trace = pm.sample(
ndraws,
tune=nburn,
return_inferencedata=True,
cores=1,
compute_convergence_checks=False,
)
Explanation: Now for the fun part! There are three parameters to estimate: $\phi$, $\theta_1$, and $\sigma^2$. We'll use uninformative uniform priors for the first two, and an inverse gamma for the last one. Then we'll run the inference; here we use a single core (cores=1), although the sampling can be parallelized across as many cores as are available.
End of explanation
plt.tight_layout()
# Note: the syntax here for the lines argument is required for
# PyMC3 versions >= 3.7
# For version <= 3.6 you can use lines=dict(res_mle.params) instead
_ = pm.plot_trace(
trace,
lines=[(k, {}, [v]) for k, v in dict(res_mle.params).items()],
combined=True,
figsize=(12, 12),
)
Explanation: Note that the NUTS sampler is auto-assigned because we provided gradients. PyMC3 will fall back to Metropolis or Slicing samplers if it does not find that gradients are available. There is an impressive number of draws per second for a "black box" style computation! However, note that if the model can be represented directly by PyMC3 (like the AR(p) models mentioned above), then computation can be substantially faster.
Inference is complete, but are the results any good? There are a number of ways to check. The first is to look at the posterior distributions (with lines showing the MLE values):
End of explanation
pm.summary(trace)
Explanation: The estimated posteriors clearly peak close to the parameters found by MLE. We can also see a summary of the estimated values:
End of explanation
# Retrieve the posterior means
params = pm.summary(trace)["mean"].values
# Construct results using these posterior means as parameter values
res_bayes = mod.smooth(params)
predict_bayes = res_bayes.get_prediction()
predict_bayes_ci = predict_bayes.conf_int()
lower = predict_bayes_ci["lower CPIAUCNS"]
upper = predict_bayes_ci["upper CPIAUCNS"]
# Graph
fig, ax = plt.subplots(figsize=(9, 4), dpi=300)
# Plot data points
inf.plot(ax=ax, style="-", label="Observed")
# Plot predictions
predict_bayes.predicted_mean.plot(ax=ax, style="r.", label="One-step-ahead forecast")
ax.fill_between(predict_bayes_ci.index, lower, upper, color="r", alpha=0.1)
ax.legend(loc="lower left")
plt.show()
Explanation: Here $\hat{R}$ is the Gelman-Rubin statistic. It tests for lack of convergence by comparing the variance between multiple chains to the variance within each chain. If convergence has been achieved, the between-chain and within-chain variances should be identical. If $\hat{R}<1.2$ for all model parameters, we can have some confidence that convergence has been reached.
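In rough terms, a textbook form of the statistic is (PyMC3's implementation may differ in detail, e.g. newer releases use a rank-normalized variant):
$$\hat{R} = \sqrt{\frac{\tfrac{N-1}{N}\, W + \tfrac{1}{N}\, B}{W}},$$
where $W$ is the average within-chain variance, $B$ the between-chain variance, and $N$ the number of draws per chain; it approaches 1 as the chains mix.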
Additionally, the highest posterior density interval (the gap between the two values of HPD in the table) is small for each of the variables.
6. Application of Bayesian estimates of parameters
We'll now re-instantiate a version of the model, but using the parameters from the Bayesian estimation, and again plot the one-step-ahead forecasts.
End of explanation
# Construct the model instance
mod_uc = sm.tsa.UnobservedComponents(inf, "rwalk", autoregressive=1)
# Fit the model via maximum likelihood
res_uc_mle = mod_uc.fit()
print(res_uc_mle.summary())
Explanation: Appendix A. Application to UnobservedComponents models
We can reuse the Loglike and Score wrappers defined above to consider a different state space model. For example, we might want to model inflation as the combination of a random walk trend and autoregressive error term:
$$
\begin{aligned}
y_t & = \mu_t + \varepsilon_t \\
\mu_t & = \mu_{t-1} + \eta_t \\
\varepsilon_t & = \phi \varepsilon_{t-1} + \zeta_t
\end{aligned}
$$
This model can be constructed in Statsmodels with the UnobservedComponents class using the rwalk and autoregressive specifications. As before, we can fit the model using maximum likelihood via the fit method.
End of explanation
# Set sampling params
ndraws = 3000 # number of draws from the distribution
nburn = 600 # number of "burn-in points" (which will be discarded)
# Here we follow the same procedure as above, but now we instantiate the
# Theano wrapper `Loglike` with the UC model instance instead of the
# SARIMAX model instance
loglike_uc = Loglike(mod_uc)
with pm.Model():
# Priors
sigma2level = pm.InverseGamma("sigma2.level", 1, 1)
sigma2ar = pm.InverseGamma("sigma2.ar", 1, 1)
arL1 = pm.Uniform("ar.L1", -0.99, 0.99)
# convert variables to tensor vectors
theta_uc = tt.as_tensor_variable([sigma2level, sigma2ar, arL1])
# use a DensityDist (the Loglike Op is callable, so it can be passed directly as the log-probability)
pm.DensityDist("likelihood", loglike_uc, observed=theta_uc)
# Draw samples
trace_uc = pm.sample(
ndraws,
tune=nburn,
return_inferencedata=True,
cores=1,
compute_convergence_checks=False,
)
Explanation: As noted earlier, the Theano wrappers (Loglike and Score) that we created above are generic, so we can re-use essentially the same code to explore the model with Bayesian methods.
End of explanation
plt.tight_layout()
# Note: the syntax here for the lines argument is required for
# PyMC3 versions >= 3.7
# For version <= 3.6 you can use lines=dict(res_mle.params) instead
_ = pm.plot_trace(
trace_uc,
lines=[(k, {}, [v]) for k, v in dict(res_uc_mle.params).items()],
combined=True,
figsize=(12, 12),
)
pm.summary(trace_uc)
# Retrieve the posterior means
params = pm.summary(trace_uc)["mean"].values
# Construct results using these posterior means as parameter values
res_uc_bayes = mod_uc.smooth(params)
Explanation: And as before we can plot the marginal posteriors. In contrast to the SARIMAX example, here the posterior modes are somewhat different from the MLE estimates.
End of explanation
# Graph
fig, ax = plt.subplots(figsize=(9, 4), dpi=300)
# Plot data points
inf["CPIAUCNS"].plot(ax=ax, style="-", label="Observed data")
# Plot estimate of the level term
res_uc_mle.states.smoothed["level"].plot(ax=ax, label="Smoothed level (MLE)")
res_uc_bayes.states.smoothed["level"].plot(ax=ax, label="Smoothed level (Bayesian)")
ax.legend(loc="lower left");
Explanation: One benefit of this model is that it gives us an estimate of the underlying "level" of inflation, using the smoothed estimate of $\mu_t$, which we can access as the "level" column in the results object's states.smoothed attribute. In this case, because the Bayesian posterior mean of the level's variance is larger than the MLE estimate, its estimated level is a little more volatile.
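For instance, the smoothed level series can be pulled out directly (the column name matches the state name used by this model):
res_uc_bayes.states.smoothed["level"].head()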
End of explanation |
5,652 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conditional Entropy
Step1: Problem 1b
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
Step2: Problem 1c
Generate a signal with $A = 2$, $p = \pi$, and Gaussian noise with variance = 0.01 over a regular grid between 0 and 10. Plot the phase-folded results (and make sure the results behave as you would expect).
Hint - your simulated signal should have at least 100 data points.
Step3: Note a couple changes from the previous helper function –– we have added a grid to the plot (this will be useful for visualizing the entropy), and we have also normalized the brightness measurements from 0 to 1.
Problem 2) The Shannon entropy
As noted above, to calculate the Shannon entropy we need to sum the data over partitions in the normalized $(\phi, m)$ plane.
This is straightforward using histogram2d from numpy.
Problem 2a
Write a function shannon_entropy to calculate the Shannon entropy, $H_0$, for a timeseries, $m(t_i)$, at a given period, p.
Hint - use histogram2d and a 10 x 10 grid (as plotted above).
Step4: Problem 2b
What is the Shannon entropy for the simulated signal at periods = 1, $\pi$-0.05, and $\pi$?
Do these results make sense given your understanding of the Shannon entropy?
Step5: We know the correct period of the simulated data is $\pi$, so it makes sense that this period minimizes the Shannon entropy.
Problem 2c
Write a function, se_periodogram to calculate the Shannon entropy for observations $m$, $t$ over a frequency grid f_grid.
Step6: Problem 2d
Plot the Shannon entropy periodogram, and return the best-fit period from the periodogram.
Hint - recall what we learned about frequency grids earlier today.
Step7: Problem 3) The Conditional Entropy
The CE is very similar to the Shannon entropy, though we need to condition the calculation on the occupation probability of the partitions in phase.
Problem 3a
Write a function conditional_entropy to calculate the CE, $H_c$, for a timeseries, $m(t_i)$, at a given period, p.
Hint - if you use histogram2d be sure to sum along the correct axes
Hint 2 - recall from session 8 that we want to avoid for loops, try to vectorize your calculation.
Step8: Problem 3b
What is the conditional entropy for the simulated signal at periods = 1, $\pi$-0.05, and $\pi$?
Do these results make sense given your understanding of CE?
Step9: Problem 3c
Write a function, ce_periodogram, to calculate the conditional entropy for observations $m$, $t$ over a frequency grid f_grid.
Step10: Problem 3d
Plot the conditional entropy periodogram, and return the best-fit period from the periodogram.
Step11: The Shannon entropy and CE return nearly identical results for a simulated sinusoidal signal. Now we will examine how each performs with actual astronomical observations.
Problem 4) SE vs. CE for real observations
Problem 4a
Load the data from our favorite eclipsing binary from this morning's LS exercise. Plot the light curve.
Hint - if you haven't already, download the example light curve.
Step12: Problem 4b
Using the Shannon entropy, determine the best period for this light curve.
Hint - recall this morning's discussion about the optimal grid for a period search
Step13: Problem 4c
Plot the Shannon entropy periodogram.
Step14: Problem 4d
Plot the light curve phase-folded on the best-fit period, as measured by the Shannon entropy periodogram.
Does this look reasonable? Why or why not?
Hint - it may be helpful to zoom in on the periodogram.
Step15: Problem 4e
Using the conditional entropy, determine the best period for this light curve.
Step16: Problem 4f
Plot the CE periodogram.
Step17: Problem 4g
Plot the light curve phase-folded on the best-fit period, as measured by the CE periodogram.
Does this look reasonable? If not - can you make it look better? | Python Code:
def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):
'''Generate periodic data given the function inputs
y = A*sin(2*pi*x/period - phase) + noise
Parameters
----------
x : array-like
input values to evaluate the array
period : float (default=1)
period of the periodic signal
amplitude : float (default=1)
amplitude of the periodic signal
phase : float (default=0)
phase offset of the periodic signal
noise : float (default=0)
variance of the noise term added to the periodic signal
Returns
-------
y : array-like
Periodic signal evaluated at all points x
'''
y = amplitude*np.sin(2*np.pi*x/(period) - phase) + np.random.normal(0, np.sqrt(noise), size=len(x))
return y
Explanation: Conditional Entropy: Can Information Theory Beat the L-S Periodogram?
Version 0.2
By AA Miller
23 Sep 2021
Lecture IV focused on alternative methods to Lomb-Scargle when searching for periodic signals in astronomical time series. In this notebook you will develop the software necessary to search for periodicity via Conditional Entropy (my personal favorite method).
Conditional Entropy
Conditional Entropy (CE; Graham et al. 2013), and other entropy-based methods, aim to minimize the entropy in binned (normalized magnitude, phase) space. CE, in particular, is good at suppressing spurious signals due to the window function.
When tested on real observations, CE outperforms most of the alternatives (e.g., LS, PDM, etc).
<img style="display: block; margin-left: auto; margin-right: auto" src="./images/CE.png" align="middle">
<div align="right"> <font size="-3">(credit: Graham et al. 2013) </font></div>
Conditional Entropy
The focus of today's exercise is conditional entropy (CE), which uses information theory and thus, in principle, works better in the presence of noise and outliers. Furthermore, CE does not make any assumptions about the underlying shape of the signal, which is useful when looking for non-sinusoidal patterns (such as transiting planets or eclipsing binaries).
For full details on the CE algorithm, see Graham et al. (2013).
Briefly, CE is based on the Shannon entropy (Cincotta et al. 1995), which is determined as follows:
Normalize the time series data $m(t_i)$ to occupy a uniform square over phase, $\phi$, and magnitude, $m$, at a given trial period, $p$.
Calculate the Shannon entropy, $H_0$, over the $k$ partitions in $(\phi, m)$:
$$H_0 = - \sum_{i=1}^{k} \mu_i \ln{(\mu_i)}\;\; \forall \mu_i \ne 0,$$
where $\mu_i$ is the occupation probability for the $i^{th}$ partition, which is just the number of data points in that partition divided by the total number of points in the data set.
Iterate over multiple periods, and identify the period that minimizes the entropy (recall that entropy measures a lack of information)
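As a minimal sketch of that recipe (the function name and the 10 x 10 binning are illustrative choices, matching the hint used later in this notebook):
import numpy as np

def shannon_entropy_sketch(m, t, p, bins=10):
    m_norm = (m - np.min(m)) / (np.max(m) - np.min(m))      # normalized magnitudes in [0, 1]
    phases = (t / p) % 1                                     # phase-fold at trial period p
    H, _, _ = np.histogram2d(phases, m_norm, bins=bins, range=[[0, 1], [0, 1]])
    mu = H[H > 0] / len(m)                                   # occupation probabilities of non-empty cells
    return -np.sum(mu * np.log(mu))                          # H_0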
As discussed in Graham et al. (2013), minimizing the Shannon entropy can be influenced by the window function, so they introduce the conditional entropy, $H_c(m|\phi)$, to help mitigate these effects. The CE can be calculated as:
$$H_c = \sum_{i,j} p(m_i, \phi_j) \ln \left( \frac{p(\phi_j)}{p(m_i, \phi_j)} \right), $$
where $p(m_i, \phi_j)$ is the occupation probability for the $i^{th}$ partition in normalized magnitude and the $j^{th}$
partition in phase and $p(\phi_j)$ is the occupation probability of the $j^{th}$ phase partition
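The same sketch carries over to the conditional version with one extra marginalization over the magnitude bins (again purely illustrative):
def conditional_entropy_sketch(m, t, p, bins=10):
    m_norm = (m - np.min(m)) / (np.max(m) - np.min(m))
    phases = (t / p) % 1
    H, _, _ = np.histogram2d(phases, m_norm, bins=bins, range=[[0, 1], [0, 1]])
    p_mphi = H / len(m)                        # p(m_i, phi_j); rows are phase bins
    p_phi = p_mphi.sum(axis=1, keepdims=True)  # p(phi_j), summed over magnitude bins
    occ = p_mphi > 0
    p_phi_b = np.broadcast_to(p_phi, p_mphi.shape)
    return np.sum(p_mphi[occ] * np.log(p_phi_b[occ] / p_mphi[occ]))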
In this problem we will first calculate the Shannon entropy, then the CE to find the best-fit period of the eclipsing binary from the LS lecture.
Problem 1) Helper Functions
Problem 1a
Create a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions:
$$ y = A\cos\left(\frac{2\pi x}{P} - \phi\right) + \sigma_y$$
where $A, P, \phi$ are inputs to the function. gen_periodic_data should include Gaussian noise, $\sigma_y$, for each output $y_i$.
End of explanation
def phase_plot(x, y, period, y_unc = 0.0):
'''Create phase-folded plot of input data x, y
Parameters
----------
x : array-like
data values along abscissa
y : array-like
data values along ordinate
period : float
period to fold the data
y_unc : array-like
uncertainty of the y values
'''
phases = (x/period) % 1
if type(y_unc) == float:
y_unc = np.zeros_like(x)
plot_order = np.argsort(phases)
norm_y = (y - np.min(y))/(np.max(y) - np.min(y))
norm_y_unc = (y_unc)/(np.max(y) - np.min(y))
plt.rc('grid', linestyle=":", color='0.8')
fig, ax = plt.subplots()
ax.errorbar(phases[plot_order], norm_y[plot_order], norm_y_unc[plot_order],
fmt='o', mec="0.2", mew=0.1)
ax.set_xlabel("phase")
ax.set_ylabel("signal")
ax.set_yticks(np.linspace(0,1,11))
ax.set_xticks(np.linspace(0,1,11))
ax.grid()
fig.tight_layout()
Explanation: Problem 1b
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
End of explanation
x = np.linspace( # complete
y = # complete
# complete plot
Explanation: Problem 1c
Generate a signal with $A = 2$, $p = \pi$, and Gaussian noise with variance = 0.01 over a regular grid between 0 and 10. Plot the phase-folded results (and make sure the results behave as you would expect).
Hint - your simulated signal should have at least 100 data points.
End of explanation
def shannon_entropy(m, t, p):
'''Calculate the Shannon entropy
Parameters
----------
m : array-like
brightness measurements of the time-series data
t : array-like
timestamps corresponding to the brightness measurements
p : float
period of the periodic signal
Returns
-------
H0 : float
Shannon entropy for m(t) at period p
'''
m_norm = # complete
phases = # complete
H, _, _ = np.histogram2d( # complete
occupied = np.where(H > 0)
H0 = # complete
return H0
Explanation: Note a couple changes from the previous helper function –– we have added a grid to the plot (this will be useful for visualizing the entropy), and we have also normalized the brightness measurements from 0 to 1.
Problem 2) The Shannon entropy
As noted above, to calculate the Shannon entropy we need to sum the data over partitions in the normalized $(\phi, m)$ plane.
This is straightforward using histogram2d from numpy.
Problem 2a
Write a function shannon_entropy to calculate the Shannon entropy, $H_0$, for a timeseries, $m(t_i)$, at a given period, p.
Hint - use histogram2d and a 10 x 10 grid (as plotted above).
End of explanation
print('For p = 1, \t\tH_0 = {:.5f}'.format( # complete
print('For p = pi - 0.05, \tH_0 = {:.5f}'.format( # complete
print('For p = pi, \t\tH_0 = {:.5f}'.format( # complete
Explanation: Problem 2b
What is the Shannon entropy for the simulated signal at periods = 1, $\pi$-0.05, and $\pi$?
Do these results make sense given your understanding of the Shannon entropy?
End of explanation
def se_periodogram(m, t, f_grid):
'''Calculate the Shannon entropy at every freq in f_grid
Parameters
----------
m : array-like
brightness measurements of the time-series data
t : array-like
timestamps corresponding to the brightness measurements
f_grid : array-like
trial periods for the periodic signal
Returns
-------
se_p : array-like
Shannon entropy for m(t) at every trial freq
'''
# complete
for # complete in # complete
# complete
return se_p
Explanation: We know the correct period of the simulated data is $\pi$, so it makes sense that this period minimizes the Shannon entropy.
Problem 2c
Write a function, se_periodogram to calculate the Shannon entropy for observations $m$, $t$ over a frequency grid f_grid.
End of explanation
f_grid = # complete
se_p = # complete
fig,ax = plt.subplots()
# complete
# complete
# complete
print("The best fit period is: {:.4f}".format( # complete
Explanation: Problem 2d
Plot the Shannon entropy periodogram, and return the best-fit period from the periodogram.
Hint - recall what we learned about frequency grids earlier today.
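One common heuristic for building that grid, assuming x is the regular time grid from Problem 1c (this is only an illustration; the oversampling factor is a judgment call):
baseline = x.max() - x.min()          # ~10 for the simulated data above
f_min, f_max = 1 / baseline, 2.0      # f_max chosen to comfortably cover 1/pi ~ 0.32
delta_f = 1 / (10 * baseline)         # oversample the natural resolution 1/baseline
f_grid = np.arange(f_min, f_max, delta_f)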
End of explanation
def conditional_entropy(m, t, p):
'''Calculate the conditional entropy
Parameters
----------
m : array-like
brightness measurements of the time-series data
t : array-like
timestamps corresponding to the brightness measurements
p : float
period of the periodic signal
Returns
-------
Hc : float
Conditional entropy for m(t) at period p
'''
m_norm = # complete
phases = # complete
# complete
# complete
# complete
Hc = # complete
return Hc
Explanation: Problem 3) The Conditional Entropy
The CE is very similar to the Shannon entropy, though we need to condition the calculation on the occupation probability of the partitions in phase.
Problem 3a
Write a function conditional_entropy to calculate the CE, $H_c$, for a timeseries, $m(t_i)$, at a given period, p.
Hint - if you use histogram2d be sure to sum along the correct axes
Hint 2 - recall from session 8 that we want to avoid for loops, try to vectorize your calculation.
End of explanation
print('For p = 1, \t\tH_c = {:.5f}'.format( # complete
print('For p = pi - 0.05, \tH_c = {:.5f}'.format( # complete
print('For p = pi, \t\tH_c = {:.5f}'.format( # complete
Explanation: Problem 3b
What is the conditional entropy for the simulated signal at periods = 1, $\pi$-0.05, and $\pi$?
Do these results make sense given your understanding of CE?
End of explanation
def ce_periodogram(m, t, f_grid):
'''Calculate the conditional entropy at every freq in f_grid
Parameters
----------
m : array-like
brightness measurements of the time-series data
t : array-like
timestamps corresponding to the brightness measurements
f_grid : array-like
trial periods for the periodic signal
Returns
-------
ce_p : array-like
conditional entropy for m(t) at every trial freq
'''
# complete
for # complete in # complete
# complete
return ce_p
Explanation: Problem 3c
Write a function, ce_periodogram, to calculate the conditional entropy for observations $m$, $t$ over a frequency grid f_grid.
End of explanation
f_grid = # complete
ce_p = # complete
fig,ax = plt.subplots()
# complete
# complete
# complete
print("The best fit period is: {:.4f}".format( # complete
Explanation: Problem 3d
Plot the conditional entropy periodogram, and return the best-fit period from the periodogram.
End of explanation
data = pd.read_csv("example_asas_lc.dat")
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('HJD (d)')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
Explanation: The Shannon entropy and CE return nearly identical results for a simulated sinusoidal signal. Now we will examine how each performs with actual astronomical observations.
Problem 4) SE vs. CE for real observations
Problem 4a
Load the data from our favorite eclipsing binary from this morning's LS exercise. Plot the light curve.
Hint - if you haven't already, download the example light curve.
End of explanation
f_min = # complete
f_max = # complete
delta_f = # complete
f_grid = # complete
se_p = # complete
print("The best fit period is: {:.9f}".format( # complete
Explanation: Problem 4b
Using the Shannon entropy, determine the best period for this light curve.
Hint - recall this morning's discussion about the optimal grid for a period search
End of explanation
fig, ax = plt.subplots()
# complete
# complete
# complete
Explanation: Problem 4c
Plot the Shannon entropy periodogram.
End of explanation
phase_plot(# complete
Explanation: Problem 4d
Plot the light curve phase-folded on the best-fit period, as measured by the Shannon entropy periodogram.
Does this look reasonable? Why or why not?
Hint - it may be helpful to zoom in on the periodogram.
End of explanation
ce_p = # complete
print("The best fit period is: {:.9f}".format( # complete
Explanation: Problem 4e
Using the conditional entropy, determine the best period for this light curve.
End of explanation
fig, ax = plt.subplots()
# complete
# complete
# complete
Explanation: Problem 4f
Plot the CE periodogram.
End of explanation
phase_plot( # complete
Explanation: Problem 4g
Plot the light curve phase-folded on the best-fit period, as measured by the CE periodogram.
Does this look reasonable? If not - can you make it look better?
End of explanation |
5,653 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RNN from scratch using TensorFlow
<img src="http
Step1: First, we make some training data. To keep things simple, we'll only pick numbers between 0 and $2^6$, so that the sum of the two numbers are less than $2^7$.
Step2: The next step is to convert all the decimal numbers to their binary array representations. For this we'll use the numpy.unpackbits function.
Step3: First y. The 5000 y values become a $1 \times 8 \times 5000$ 2D array.
Step4: Similarly, the 5000 x1 and x2 values become an $8 \times 5000 \times$ 2 3D array. | Python Code:
import numpy as np
import pandas as pd
import tensorflow as tf
%pylab inline
pylab.style.use('ggplot')
Explanation: RNN from scratch using TensorFlow
<img src="http://d3kbpzbmcynnmx.cloudfront.net/wp-content/uploads/2015/09/rnn.jpg">
In this example, we'll build a simple RNN using TensorFlow and we'll train the RNN to add two binary numbers.
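Written out, the recurrence built in the graph below is (our notation; $\sigma$ is the logistic sigmoid, and the previous step's output is fed back in as the recurrent state):
$$h_t = \sigma\left(x_t W_h + \hat{y}_{t-1} U_h + b_h\right), \qquad \hat{y}_t = \sigma\left(h_t W_f + b_f\right),$$
applied bit by bit from the least-significant to the most-significant position.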
End of explanation
max_digits = 6
n_samples = 10000
ints = np.random.randint(low=0, high=np.power(2, max_digits), size=[n_samples, 2])
data_df = pd.DataFrame(ints, columns=['x1', 'x2'])
data_df.head()
data_df = data_df.assign(y=data_df.x1 + data_df.x2)
data_df.head()
Explanation: First, we make some training data. To keep things simple, we'll only pick numbers between 0 and $2^6$, so that the sum of the two numbers is less than $2^7$.
End of explanation
np.unpackbits(np.array(10, dtype=np.uint8))
Explanation: The next step is to convert all the decimal numbers to their binary array representations. For this we'll use the numpy.unpackbits function.
End of explanation
y_data = np.unpackbits(data_df.y.astype(np.uint8))
y_data =y_data.astype(np.float64).reshape(n_samples, 8, 1)
y_data = np.transpose(y_data, axes=[1, 0, 2])
np.packbits(y_data[:, 0, :].astype(np.int64))
y_data.shape
Explanation: First y. The 10000 y values become an $8 \times 10000 \times 1$ 3D array.
End of explanation
x_data = np.zeros([2, n_samples, 8], dtype=np.uint8)
x_data[0, :, :] = np.unpackbits(data_df.x1.astype(np.uint8)).reshape(n_samples, 8)
x_data[1, :, :] = np.unpackbits(data_df.x2.astype(np.uint8)).reshape(n_samples, 8)
x_data = x_data.astype(np.float64)
x_data = np.transpose(x_data, axes=[2, 1, 0])
np.packbits(x_data[:, 0, :].T.astype(np.int64))
x_data.shape
# Build the RNN graph
hidden_dim = 3
tf.reset_default_graph()
with tf.variable_scope('input'):
x_in = tf.placeholder(shape=(8, n_samples, 2), dtype=np.float64, name='x')
y_in = tf.placeholder(shape=(8, n_samples, 1), dtype=np.float64, name='y')
with tf.variable_scope('hidden'):
# Check dimensions
w_f = tf.get_variable(shape=[hidden_dim, 1], dtype=np.float64,
initializer=tf.truncated_normal_initializer(),
name='w_f')
w_h = tf.get_variable(shape=[2, hidden_dim], dtype=np.float64,
initializer=tf.truncated_normal_initializer(),
name='w_h')
u_h = tf.get_variable(shape=[1, hidden_dim], dtype=np.float64,
initializer=tf.truncated_normal_initializer(),
name='u_h')
b_f = tf.get_variable(shape=[1, 1], dtype=np.float64,
initializer=tf.zeros_initializer(),
name='b_f')
b_h = tf.get_variable(shape=[1, hidden_dim], dtype=np.float64,
initializer=tf.zeros_initializer(),
name='b_h')
with tf.variable_scope('output'):
y_t = tf.get_variable(shape=(n_samples, 1), dtype=np.float64,
initializer=tf.zeros_initializer(), name='y_t')
y_out = []
x_pos = tf.unstack(x_in, axis=0)
# x_pos is a list of 8 tensors (one per bit position), each of shape n_samples * 2
y_pos = tf.unstack(y_in, axis=0)
# y_pos is a list of 8 tensors, each of shape n_samples * 1
# reverse both x_pos and y_pos because
# we want to start at the LSB and work our way to the MSB
for x, y in zip(reversed(x_pos), reversed(y_pos)):
# dim check
# x: [n_samples, 2], w_h: [2, 3] -> tf.matmul(x, w_h): [n_samples, 3]
# y_t: [n_samples, 1], u_h: [1, 3] -> tf.matmul(y_t, u_h): [n_samples, 3]
# b_h: [1, 3] is broadcast into the sum
# finally, h_t: [n_samples, 3]
h_t = tf.nn.sigmoid(tf.matmul(x, w_h) + tf.matmul(y_t, u_h) + b_h, name='h_t')
# dim check
# w_f: [3, 1] -> tf.matmul(h_t, w_f): [n_samples, 1]
# b_f is again broadcast
y_t = tf.nn.sigmoid(tf.matmul(h_t, w_f) + b_f, name='y_t')
y_out.append(y_t)
with tf.variable_scope('loss'):
losses = []
for y_calc, y_actual in zip(y_out, reversed(y_pos)):
loss = tf.squared_difference(y_calc, y_actual)
losses.append(loss)
optimizer = tf.train.AdamOptimizer(learning_rate=0.04)
mean_loss = tf.reduce_mean(losses, name='ms_loss')
train_op = optimizer.minimize(mean_loss, name='minimization')
init = tf.global_variables_initializer()
n_training_iters = 2000
with tf.Session() as sess:
sess.run(init)
for i in range(1, n_training_iters+1):
_, loss_val = sess.run([train_op, mean_loss], feed_dict={x_in: x_data, y_in: y_data})
if i == 1 or i % 100 == 0:
print(i, loss_val)
y_out_vals = sess.run(y_out, feed_dict={x_in: x_data, y_in: y_data})
len(y_out_vals)
y_out_t = np.array(y_out_vals)[:, :, 0]
y_out_f = np.fliplr(y_out_t.T)
y_out_int = np.where(y_out_f > 0.5, 1, 0).astype(np.uint8)
nums = np.packbits(y_out_int)
nums
results = pd.DataFrame({'sum_actual': data_df.y, 'sum_predicted': nums})
results.sample(20).plot(kind='bar')
results.tail(20)
results.corr()
Explanation: Similarly, the 10000 x1 and x2 values become an $8 \times 10000 \times 2$ 3D array.
End of explanation |
5,654 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding data with python-fmrest
This is a short example on finding records with python-fmrest.
Step1: Login
Step2: Specify find queries and retrieve foundset and record
We want to find records in Contacts where the name field matches 'John Doe'.
Step3: We have our foundset and can iterate through it. fetched_records=0 means that we haven't consumed the Foundset yet.
Step4: Looks like Record 44 is our only record in the foundset. Let's look at it
Step5: Above we can see all available keys and values. If we want to access a value, we can use the Record's attributes
Step6: Using wildcards
The FileMaker Data API supports the same operators FileMaker Pro does. So let's go ahead and broaden our search.
Step7: Using the wildcard we match all the Johns.
Multiple find requests
Multiple find requests are also supported. Again, just like in FileMaker Pro.
Let's find all Johns and Joes, but not John Does.
Step8: Sorting the result
You can control the order of the results by specifying the sort parameter.
Step9: Limiting the result
To create more efficient requests, you can limit the data being returned.
Offset
Only return from the second record.
Step10: Limit
Only return one record.
Step11: Of course, you can combine these parameters as you like. Defaults are offset 1 (which is the first record), limit 100.
Limit data returned by portals
By specifying portals, you can prevent certain portal data to be returned, even if portals are present on your layout. | Python Code:
import fmrest
Explanation: Finding data with python-fmrest
This is a short example on finding records with python-fmrest.
End of explanation
fms = fmrest.Server('https://10.211.55.15',
user='admin',
password='admin',
database='Contacts',
layout='Demo',
verify_ssl=False
)
fms.login()
Explanation: Login
End of explanation
find_query = [{'name': 'John Doe'}]
foundset = fms.find(find_query)
foundset
Explanation: Specify find queries and retrieve foundset and record
We want to find records in Contacts where the name field matches 'John Doe'.
End of explanation
for record in foundset:
print(record)
foundset
Explanation: We have our foundset and can iterate through it. fetched_records=0 means that we haven't consumed the Foundset yet.
End of explanation
record = foundset[0]
print(record.keys()) # all the field names on the layout, including portals
print(record.values()) # all the value corresponding to the field names
Explanation: Looks like Record 44 is our only record in the foundset. Let's look at it:
End of explanation
record.name
record.drink
Explanation: Above we can see all available keys and values. If we want to access a value, we can use the Record's attributes:
End of explanation
find_query = [{'name': 'John*'}]
foundset = fms.find(find_query)
for record in foundset:
print(record.id, record.name)
Explanation: Using wildcards
The FileMaker Data API supports the same operators FileMaker Pro does. So let's go ahead and broaden our search.
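A few other query shapes the find mode accepts, for illustration (the date field here is hypothetical, and exact operator behaviour is worth checking against your FileMaker Server version):
find_query = [{'name': '==John Doe'}]                      # exact whole-field match
find_query = [{'name': 'J*'}]                              # wildcard
find_query = [{'date_created': '1/1/2018...6/30/2018'}]    # range query (hypothetical field)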
End of explanation
find_query = [
{'name': 'John'},
{'name': 'Joe'},
{'name': 'John Doe', 'omit': 'true'}
]
foundset = fms.find(find_query)
for record in foundset:
print(record.name)
Explanation: Using the wildcard we match all the Johns.
Multiple find requests
Multiple find requests are also supported. Again, just like in FileMaker Pro.
Let's find all Johns and Joes, but not John Does.
End of explanation
order_by = [{'fieldName': 'name', 'sortOrder': 'descend'}] #descending
foundset = fms.find(find_query, sort=order_by)
for record in foundset:
print(record.name)
print('---')
order_by = [{'fieldName': 'name', 'sortOrder': 'ascend'}] #ascending
foundset = fms.find(find_query, sort=order_by)
for record in foundset:
print(record.name)
Explanation: Sorting the result
You can control the order of the results by specifying the sort parameter.
End of explanation
foundset = fms.find(find_query, sort=order_by, offset=2)
for record in foundset:
print(record.name)
Explanation: Limiting the result
To create more efficient requests, you can limit the data being returned.
Offset
Only return from the second record.
End of explanation
foundset = fms.find(find_query, sort=order_by, limit=1)
foundset[0].name
Explanation: Limit
Only return one record.
End of explanation
portals = [{'name':'notes', 'offset':1, 'limit': 1}]
foundset = fms.find(find_query, portals=portals)
for row in foundset[0].portal_notes:
print(row['Notes::note'])
Explanation: Of course, you can combine these parameters as you like. Defaults are offset 1 (which is the first record), limit 100.
Limit data returned by portals
By specifying portals, you can prevent certain portal data from being returned, even if portals are present on your layout.
End of explanation |
5,655 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting Glider data with Python tools
In this notebook we demonstrate how to obtain and plot glider data using iris and cartopy. We will explore data from the Rutgers University RU29 Challenger glider that was launched from Ubatuba, Brazil on June 23, 2015 to travel across the Atlantic Ocean. After 282 days at sea, the Challenger was picked up off the coast of South Africa, on March 31, 2016. For more information on this ground breaking excusion see
Step1: Iris requires the data to adhere strictly to the CF-1.6 data model.
That is why we see all those warnings about Missing CF-netCDF ancillary data variable.
Note that if the data is not CF at all iris will refuse to load it!
The other hand, the advantage of following the CF-1.6 conventions,
is that the iris cube has the proper metadata is attached it.
We do not need to extract the coordinates or any other information separately .
All we need to do is to request the phenomena we want, in this case sea_water_density, sea_water_temperature and sea_water_salinity.
Step2: Glider data is not something trivial to visualize. The very first thing to do is to plot the glider track to check its path.
Step3: One might be interested in a the individual profiles of each dive. Lets extract the deepest dive and plot it.
Step4: We can also visualize the whole track as a cross-section. | Python Code:
# See https://github.com/Unidata/netcdf-c/issues/1299 for the explanation of `#fillmismatch`.
url = (
"https://data.ioos.us/thredds/dodsC/deployments/rutgers/"
"ru29-20150623T1046/ru29-20150623T1046.nc3.nc#fillmismatch"
)
import iris
iris.FUTURE.netcdf_promote = True
glider = iris.load(url)
print(glider)
Explanation: Plotting Glider data with Python tools
In this notebook we demonstrate how to obtain and plot glider data using iris and cartopy. We will explore data from the Rutgers University RU29 Challenger glider that was launched from Ubatuba, Brazil on June 23, 2015 to travel across the Atlantic Ocean. After 282 days at sea, the Challenger was picked up off the coast of South Africa, on March 31, 2016. For more information on this ground breaking excusion see: https://marine.rutgers.edu/main/announcements/the-challenger-glider-mission-south-atlantic-mission-complete
Data collected from this glider mission are available on the IOOS Glider DAC THREDDS via OPeNDAP.
End of explanation
temp = glider.extract_strict("sea_water_temperature")
salt = glider.extract_strict("sea_water_salinity")
dens = glider.extract_strict("sea_water_density")
print(temp)
Explanation: Iris requires the data to adhere strictly to the CF-1.6 data model.
That is why we see all those warnings about Missing CF-netCDF ancillary data variable.
Note that if the data is not CF at all iris will refuse to load it!
The other hand, the advantage of following the CF-1.6 conventions,
is that the iris cube has the proper metadata is attached it.
We do not need to extract the coordinates or any other information separately .
All we need to do is to request the phenomena we want, in this case sea_water_density, sea_water_temperature and sea_water_salinity.
End of explanation
import numpy.ma as ma
T = temp.data.squeeze()
S = salt.data.squeeze()
D = dens.data.squeeze()
x = temp.coord(axis="X").points.squeeze()
y = temp.coord(axis="Y").points.squeeze()
z = temp.coord(axis="Z")
t = temp.coord(axis="T")
vmin, vmax = z.attributes["actual_range"]
z = ma.masked_outside(z.points.squeeze(), vmin, vmax)
t = t.units.num2date(t.points.squeeze())
location = y.mean(), x.mean() # Track center.
locations = list(zip(y, x)) # Track points.
import folium
tiles = (
"http://services.arcgisonline.com/arcgis/rest/services/"
"World_Topo_Map/MapServer/MapServer/tile/{z}/{y}/{x}"
)
m = folium.Map(location, tiles=tiles, attr="ESRI", zoom_start=4)
folium.CircleMarker(locations[0], fill_color="green", radius=10).add_to(m)
folium.CircleMarker(locations[-1], fill_color="red", radius=10).add_to(m)
line = folium.PolyLine(
locations=locations,
color="orange",
weight=8,
opacity=0.6,
popup="Slocum Glider ru29 Deployed on 2015-06-23",
).add_to(m)
m
Explanation: Glider data is not something trivial to visualize. The very first thing to do is to plot the glider track to check its path.
End of explanation
import numpy as np
# Find the deepest profile.
idx = np.nonzero(~T[:, -1].mask)[0][0]
%matplotlib inline
import matplotlib.pyplot as plt
ncols = 3
fig, (ax0, ax1, ax2) = plt.subplots(
sharey=True, sharex=False, ncols=ncols, figsize=(3.25 * ncols, 5)
)
kw = dict(linewidth=2, color="cornflowerblue", marker=".")
ax0.plot(T[idx], z[idx], **kw)
ax1.plot(S[idx], z[idx], **kw)
ax2.plot(D[idx] - 1000, z[idx], **kw)
def spines(ax):
ax.spines["right"].set_color("none")
ax.spines["bottom"].set_color("none")
ax.xaxis.set_ticks_position("top")
ax.yaxis.set_ticks_position("left")
[spines(ax) for ax in (ax0, ax1, ax2)]
ax0.set_ylabel("Depth (m)")
ax0.set_xlabel("Temperature ({})".format(temp.units))
ax0.xaxis.set_label_position("top")
ax1.set_xlabel("Salinity ({})".format(salt.units))
ax1.xaxis.set_label_position("top")
ax2.set_xlabel("Density ({})".format(dens.units))
ax2.xaxis.set_label_position("top")
ax0.invert_yaxis()
Explanation: One might be interested in the individual profiles of each dive. Let's extract the deepest dive and plot it.
End of explanation
import numpy as np
import seawater as sw
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
def distance(x, y, units="km"):
dist, pha = sw.dist(x, y, units=units)
return np.r_[0, np.cumsum(dist)]
def plot_glider(
x, y, z, t, data, cmap=plt.cm.viridis, figsize=(9, 3.75), track_inset=False
):
fig, ax = plt.subplots(figsize=figsize)
dist = distance(x, y, units="km")
z = np.abs(z)
dist, z = np.broadcast_arrays(dist[..., np.newaxis], z)
cs = ax.pcolor(dist, z, data, cmap=cmap, snap=True)
kw = dict(orientation="vertical", extend="both", shrink=0.65)
cbar = fig.colorbar(cs, **kw)
if track_inset:
axin = inset_axes(ax, width="25%", height="30%", loc=4)
axin.plot(x, y, "k.")
start, end = (x[0], y[0]), (x[-1], y[-1])
kw = dict(marker="o", linestyle="none")
axin.plot(*start, color="g", **kw)
axin.plot(*end, color="r", **kw)
axin.axis("off")
ax.invert_yaxis()
ax.set_xlabel("Distance (km)")
ax.set_ylabel("Depth (m)")
return fig, ax, cbar
from palettable import cmocean
haline = cmocean.sequential.Haline_20.mpl_colormap
thermal = cmocean.sequential.Thermal_20.mpl_colormap
dense = cmocean.sequential.Dense_20.mpl_colormap
fig, ax, cbar = plot_glider(x, y, z, t, S, cmap=haline, track_inset=False)
cbar.ax.set_xlabel("(g kg$^{-1}$)")
cbar.ax.xaxis.set_label_position("top")
ax.set_title("Salinity")
fig, ax, cbar = plot_glider(x, y, z, t, T, cmap=thermal, track_inset=False)
cbar.ax.set_xlabel(r"($^\circ$C)")
cbar.ax.xaxis.set_label_position("top")
ax.set_title("Temperature")
fig, ax, cbar = plot_glider(x, y, z, t, D - 1000, cmap=dense, track_inset=False)
cbar.ax.set_xlabel(r"(kg m$^{-3}$C)")
cbar.ax.xaxis.set_label_position("top")
ax.set_title("Density")
print("Data collected from {} to {}".format(t[0], t[-1]))
Explanation: We can also visualize the whole track as a cross-section.
End of explanation |
5,656 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Re-creating Capillary Hysteresis in Neutrally Wettable Fibrous Media
Step1: Now we set some key variables for the simulation, $\theta$ is the contact angle in each phase and without contact hysteresis sums to 180. The fiber radius is 5 $\mu m$ for this particular material and this used in the pore-scale capillary pressure models.
Step2: Experimental Data
The experimental data we are matching is taken from the 2009 paper for uncompressed Toray 090D which has had some treatment with PTFE to make it non-wetting to water. However, the material also seems to be non-wetting to air, once filled with water as reducing the pressure once invaded with water does not lead to spontaneous uptake of air.
Step3: New Geometric Parameters
The following code block cleans up the data a bit. The conduit_lengths are a new addition to openpnm to be able to apply different conductances along the length of the conduit for each section. Conduits in OpenPNM are considered to be comprised of a throat and the two half-pores either side and the length of each element is somewhat subjective for a converging, diverging profile such as a sphere pack or indeed fibrous media such as the GDL. We will effectively apply the conductance of the throat to the entire conduit length by setting the pore sections to be very small. For these highly porous materials the cross-sectional area of a throat is similar to that of the pore and so this is a reasonable assumption. It also helps to account for anisotropy of the material as the throats have vectors whereas pores do not.
Boundary pores also need to be handled with care. These are placed on the faces of the domain and have zero volume but need other properties for the conductance models to work. They are mainly used for defining the inlets and outlets of the percolation simulations and effective transport simulations. However, they are kind of fictitious so we do not want them contributing resistance to flow and therefore set their areas to be the highest in the network. The boundary pores are aligned with the planar faces of the domain, which is necessary for the effective transport property calculations which consider the transport through an effective medium of defined size and shape.
Step4: Phase Setup
Now we set up the phases and apply the contact angles.
Step5: Physics Setup
Now we set up the physics for each phase. The default capillary pressure model from the Standard physics class is the Washburn model which applies to straight capillary tubes and we must override it here with the Purcell model. We add the model to both phases and also add a value for pore.entry_pressure making sure that it is less than any of the throat.entry_pressure values. This is done because the MixedInvasionPercolation model invades pores and throats separately and for now we just want to consider the pores to be invaded as soon as their connecting throats are.
Step6: We apply the following late pore filling model
Step7: Finally we add the meniscus model for cooperative pore filling. The model mechanics are explained in greater detail in part c of this tutorial but the process is shown in the animation below. The brown fibrous cage structure represents the fibers surrounding and defining a single pore in the network. The shrinking spheres represent the invading phase present at each throat. The cooperative pore filling sequence for a single pore in the network then goes as follow
Step8: Percolation Algorithms
Now all the physics is defined we can setup and run two algorithms for water injection and withdrawal and compare to the experimental data.
NOTE
Step9: Let's take a look at the data plotted in the above cell
Step10: Saving the output
OpenPNM manages the simulation projects with the Workspace manager class which is a singleton and instantied when OpenPNM is first imported. We can print it to take a look at the contents
Step11: The project is saved for part b of this tutorial | Python Code:
import pickle
import numpy as np
import openpnm as op
from pathlib import Path
import matplotlib.pyplot as plt
from openpnm.models import physics as pm
%matplotlib inline
ws = op.Workspace()
ws.settings["loglevel"] = 50
ws.clear()
np.random.seed(10)
path = Path('../../fixtures/hysteresis_paper_network.pnm')
save_net = pickle.load(open(path, "rb"))
prj = op.Project(name='hysteresis_paper')
pn = op.network.GenericNetwork(project=prj, name='network')
pn.update(save_net)
print(pn.Np, pn.Nt)
Explanation: Re-creating Capillary Hysteresis in Neutrally Wettable Fibrous Media: A Pore Network Study of a Fuel Cell Electrode
Part A: Percolation
Introduction
In this tutorial, we will use the MixedInvasionPercolation algorithm to examine capillary hysteresis in a fibrous media with neutral wettability to water and air as detailed in Tranter et al. 2017. Part (a) performs the percolation simulation, part (b) uses the output of part (a) and computes relative diffusivity and part (c) takes a deeper look into the meniscus model.
The paper reproduces data gathered by Gostick et al. 2009 for the Toray 090 carbon fiber paper commonly used as a gas diffusion layer (GDL) in Polymer Electrolyte Fuel Cells (PEFCs). Fuel cells are electrochemical devices that convert hydrogen and oxygen into water as part of a redox reaction involving two half-steps. At the anode hydrogen oxidizes into protons and electrons, the protons migrate through a semi-permeable polymer membrane with a water content dependent conductivity. The electrons flow in the opposite direction, around an external cicuit producing work and recombine with the protons and oxygen in the cathode to form water. The reactants are both in gaseous form and the product water will typically saturate the cell and must be managed, keeping the membrane hydrated but not flooding the cell completley leading to blocking of reactant gases. The GDL is therefore an important component in the cell operation as it helps to remove water and aids diffusion to regions of the cell which may become starved of reactant. Understanding of the capillary properties of the GDL for multiphase transport is therefore essential for improving and maintaining fuel cell operation.
The experimental data shows a strong hysteresis in the capillary pressure defined as $P_c = P_{water} - P_{air}$. Whereby, on injecting water into an air-filled GDL, positive capillary pressure is required, whereas negative capillary pressure is required to withdraw the water. This signifies that the material is neither wetting to water nor air and was previously explained by contact angle hysteresis. However, by considering the shape of the interface as it moves in-between fibers a more logical explanation can be found. The constrictions between fibers are modelled as toroidal or donut shaped, as first explained by Purcell. With changing capillary pressure, as the mensicus contact line moves it conforms to the converging and diverging geometry and this modifies the effective contact angle. An inflection always occurs, irrespective of the intrinsic contact angle, signifying a change in pressure from negative to positive in the invading phase.
The invasion mechanism in highly porous fibrous media is complex and considering invasion of a single throat in isolation is not appropriate. The entry pressure in the simple isolated case is just the maximum pressure experienced by the meniscus as it transitions through the throat: termed burst pressure. However, the model allows for the bulge of the mensiscus to protrude quite far into the neighboring pore before the burst pressure is exceeded. The model is used in a new meniscus class of OpenPNM which supplies information about the position, size and shape of the mensicus at a given capillary pressure. These details are used by the MixedInvasionPercolation class to determine which type of invasion event may occur as it is now possible to determine whether an individual meniscus inside a throat will interact with solid features in neighboring pores (touch pressure) or even neighboring meniscii (cooperative pore filling). As nearby mensicii may grow simulataneously, they may coalesce at much lower pressure than the burst pressure, thus changing the characteristic capillary behaviour and saturation profile within the material.
The network is generated using the VoronoiFibers class, for which there is a tutorial in the topology folder. For speed and convenience we provide a pickled dictionary with the network properties because the domain is quite large and would take 30 mins to generate from scratch.
Model Setup
First we import the required python modules and load the network file
End of explanation
theta_w = 110
theta_a = 70
fiber_rad = 5e-6
Explanation: Now we set some key variables for the simulation: $\theta$ is the contact angle in each phase and without contact hysteresis sums to 180. The fiber radius is 5 $\mu m$ for this particular material and this is used in the pore-scale capillary pressure models.
End of explanation
data = np.array([[-1.95351934e+04, 0.00000000e+00], [-1.79098945e+04, 1.43308300e-03], [-1.63107500e+04, 1.19626000e-03],
[-1.45700654e+04, 9.59437000e-04], [-1.30020859e+04, 7.22614000e-04], [-1.14239746e+04, 4.85791000e-04],
[-9.90715234e+03, 2.48968000e-04], [-8.45271973e+03, 1.68205100e-03], [-7.01874170e+03, 1.44522800e-03],
[-5.61586768e+03, 2.87831100e-03], [-4.27481055e+03, 4.44633600e-03], [-3.52959229e+03, 5.81363400e-03],
[-2.89486523e+03, 5.51102700e-03], [-2.25253784e+03, 8.26249200e-03], [-1.59332751e+03, 9.32718400e-03],
[-9.93971252e+02, 1.03918750e-02], [-3.52508118e+02, 1.31433410e-02], [ 2.55833755e+02, 1.90500850e-02],
[ 8.10946533e+02, 1.12153247e-01], [ 1.44181152e+03, 1.44055799e-01], [ 2.02831689e+03, 1.58485811e-01],
[ 2.56954688e+03, 1.68051842e-01], [ 3.22414917e+03, 1.83406543e-01], [ 3.81607397e+03, 2.00111675e-01],
[ 4.35119043e+03, 2.20173487e-01], [ 4.93044141e+03, 2.50698356e-01], [ 5.44759180e+03, 2.70760168e-01],
[ 5.97326611e+03, 3.02663131e-01], [ 6.49410010e+03, 3.83319515e-01], [ 7.05238232e+03, 5.06499276e-01],
[ 7.54107031e+03, 6.63817501e-01], [ 8.08143408e+03, 7.67864788e-01], [ 8.54633203e+03, 8.26789866e-01],
[ 9.03138965e+03, 8.62470191e-01], [ 9.53165723e+03, 8.84504516e-01], [ 1.00119375e+04, 9.01529123e-01],
[ 1.19394492e+04, 9.32130571e-01], [ 1.37455771e+04, 9.43415425e-01], [ 1.54468594e+04, 9.54111932e-01],
[ 1.71077578e+04, 9.59966386e-01], [ 1.87670996e+04, 9.66241521e-01], [ 2.02733223e+04, 9.70728677e-01],
[ 2.17321895e+04, 9.75215832e-01], [ 2.30644336e+04, 9.79820651e-01], [ 2.44692598e+04, 9.81254145e-01],
[ 2.56992520e+04, 9.88778094e-01], [ 2.69585078e+04, 9.93080716e-01], [ 2.81848105e+04, 9.92843893e-01],
[ 2.93189434e+04, 9.99000955e-01], [ 3.04701816e+04, 1.00180134e+00], [ 2.94237266e+04, 1.00323442e+00],
[ 2.82839531e+04, 1.00132769e+00], [ 2.70130059e+04, 1.00109128e+00], [ 2.57425723e+04, 1.00085404e+00],
[ 2.43311738e+04, 1.00047148e+00], [ 2.29761172e+04, 1.00023466e+00], [ 2.15129902e+04, 9.99997838e-01],
[ 2.00926621e+04, 9.98091109e-01], [ 1.85019902e+04, 9.97854286e-01], [ 1.70299883e+04, 9.95947557e-01],
[ 1.53611387e+04, 9.95710734e-01], [ 1.36047275e+04, 9.93804005e-01], [ 1.18231387e+04, 9.93567182e-01],
[ 9.87990430e+03, 9.91660453e-01], [ 9.40066016e+03, 9.89671072e-01], [ 8.89503516e+03, 9.89368465e-01],
[ 8.39770508e+03, 9.89065857e-01], [ 7.89161768e+03, 9.88763250e-01], [ 7.37182080e+03, 9.86790737e-01],
[ 6.87028369e+03, 9.86488130e-01], [ 6.28498584e+03, 9.85882915e-01], [ 5.80695361e+03, 9.85580308e-01],
[ 5.23104834e+03, 9.85277701e-01], [ 4.68521338e+03, 9.84975094e-01], [ 4.11333887e+03, 9.84672487e-01],
[ 3.59290625e+03, 9.84369879e-01], [ 2.96803101e+03, 9.84067272e-01], [ 2.41424536e+03, 9.82094759e-01],
[ 1.82232153e+03, 9.81792152e-01], [ 1.22446594e+03, 9.79819639e-01], [ 6.63709351e+02, 9.79517032e-01],
[ 7.13815610e+01, 9.79214424e-01], [-5.23247498e+02, 9.75437063e-01], [-1.19633813e+03, 9.73464550e-01],
[-1.81142188e+03, 9.66162844e-01], [-2.46475146e+03, 9.42637411e-01], [-3.08150562e+03, 8.98736764e-01],
[-3.72976978e+03, 7.06808493e-01], [-4.36241846e+03, 3.18811069e-01], [-5.10291357e+03, 2.13867093e-01],
[-5.77698242e+03, 1.76544863e-01], [-6.47121728e+03, 1.62546665e-01], [-7.23913574e+03, 1.49192478e-01],
[-7.89862988e+03, 1.45550059e-01], [-8.60248633e+03, 1.43577546e-01], [-9.35398340e+03, 1.39800185e-01],
[-1.00623330e+04, 1.37827671e-01], [-1.15617539e+04, 1.37590848e-01], [-1.31559434e+04, 1.37354025e-01],
[-1.48024961e+04, 1.35430429e-01], [-1.63463340e+04, 1.33523700e-01], [-1.80782656e+04, 1.33286877e-01],
[-1.98250000e+04, 1.31380148e-01], [-2.15848105e+04, 1.31143325e-01], [-2.34678457e+04, 1.29236596e-01]])
#NBVAL_IGNORE_OUTPUT
plt.figure();
plt.plot(data[:, 0], data[:, 1], 'g--');
plt.xlabel('Capillary Pressure \n (P_water - P_air) [Pa]');
plt.ylabel('Saturation \n Porous Volume Fraction occupied by water');
Explanation: Experimental Data
The experimental data we are matching is taken from the 2009 paper for uncompressed Toray 090D, which has had some treatment with PTFE to make it non-wetting to water. However, the material also seems to be non-wetting to air once filled with water, as reducing the pressure after water invasion does not lead to spontaneous uptake of air.
End of explanation
net_health = pn.check_network_health()
if len(net_health['trim_pores']) > 0:
op.topotools.trim(network=pn, pores=net_health['trim_pores'])
Ps = pn.pores()
Ts = pn.throats()
geom = op.geometry.GenericGeometry(network=pn, pores=Ps, throats=Ts, name='geometry')
geom['throat.conduit_lengths.pore1'] = 1e-12
geom['throat.conduit_lengths.pore2'] = 1e-12
geom['throat.conduit_lengths.throat'] = geom['throat.length'] - 2e-12
# Handle Boundary Pores - Zero Volume for saturation but not zero diam and area
# For flow calculations
pn['pore.diameter'][pn['pore.diameter'] == 0.0] = pn['pore.diameter'].max()
pn['pore.area'][pn['pore.area'] == 0.0] = pn['pore.area'].max()
Explanation: New Geometric Parameters
The following code block cleans up the data a bit. The conduit_lengths are a new addition to openpnm to be able to apply different conductances along the length of the conduit for each section. Conduits in OpenPNM are considered to be comprised of a throat and the two half-pores either side and the length of each element is somewhat subjective for a converging, diverging profile such as a sphere pack or indeed fibrous media such as the GDL. We will effectively apply the conductance of the throat to the entire conduit length by setting the pore sections to be very small. For these highly porous materials the cross-sectional area of a throat is similar to that of the pore and so this is a reasonable assumption. It also helps to account for anisotropy of the material as the throats have vectors whereas pores do not.
Boundary pores also need to be handled with care. These are placed on the faces of the domain and have zero volume but need other properties for the conductance models to work. They are mainly used for defining the inlets and outlets of the percolation simulations and effective transport simulations. However, they are kind of fictitious so we do not want them contributing resistance to flow and therefore set their areas to be the highest in the network. The boundary pores are aligned with the planar faces of the domain, which is necessary for the effective transport property calculations which consider the transport through an effective medium of defined size and shape.
End of explanation
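# Conceptual sketch (not OpenPNM internals): the pore1-throat-pore2 sections of a
# conduit behave like conductances in series, so shrinking the pore section
# lengths to ~1e-12 m leaves the throat controlling the conduit conductance.
# The area A and lengths below are assumed, illustrative values only.
A = 1e-10                                    # assumed cross-sectional area [m^2]
L_pore1, L_throat, L_pore2 = 1e-12, 1e-5, 1e-12
g_p1, g_t, g_p2 = A / L_pore1, A / L_throat, A / L_pore2
g_conduit = 1.0 / (1.0 / g_p1 + 1.0 / g_t + 1.0 / g_p2)
print(abs(g_conduit - g_t) / g_t)            # tiny: the throat term dominates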
air = op.phases.Air(network=pn, name='air')
water = op.phases.Water(network=pn, name='water')
air['pore.contact_angle'] = theta_a
air["pore.surface_tension"] = water["pore.surface_tension"]
water['pore.contact_angle'] = theta_w
water["pore.temperature"] = 293.7
water.regenerate_models()
Explanation: Phase Setup
Now we set up the phases and apply the contact angles.
End of explanation
phys_air = op.physics.Standard(network=pn, phase=air, geometry=geom, name='phys_air')
phys_water = op.physics.Standard(network=pn, phase=water, geometry=geom, name='phys_water')
throat_diam = 'throat.diameter'
pore_diam = 'pore.indiameter'
pmod = pm.capillary_pressure.purcell
phys_water.add_model(propname='throat.entry_pressure',
model=pmod,
r_toroid=fiber_rad,
diameter=throat_diam)
phys_air.add_model(propname='throat.entry_pressure',
model=pmod,
r_toroid=fiber_rad,
diameter=throat_diam)
# Ignore the pore entry pressures
phys_air['pore.entry_pressure'] = -999999
phys_water['pore.entry_pressure'] = -999999
print("Mean Water Throat Pc:",str(np.mean(phys_water["throat.entry_pressure"])))
print("Mean Air Throat Pc:",str(np.mean(phys_air["throat.entry_pressure"])))
Explanation: Physics Setup
Now we set up the physics for each phase. The default capillary pressure model from the Standard physics class is the Washburn model which applies to straight capillary tubes and we must override it here with the Purcell model. We add the model to both phases and also add a value for pore.entry_pressure making sure that it is less than any of the throat.entry_pressure values. This is done because the MixedInvasionPercolation model invades pores and throats separately and for now we just want to consider the pores to be invaded as soon as their connecting throats are.
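For comparison, the straight-tube Washburn relation being replaced is simply (up to sign convention)
$$P_c = -\frac{2\sigma\cos\theta}{r}$$
with no dependence on meniscus position, which is why it cannot capture the converging-diverging geometry between fibers.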
End of explanation
lpf = 'pore.late_filling'
phys_water.add_model(propname='pore.pc_star',
model=op.models.misc.from_neighbor_throats,
throat_prop='throat.entry_pressure',
mode='min')
phys_water.add_model(propname=lpf,
model=pm.multiphase.late_filling,
pressure='pore.pressure',
Pc_star='pore.pc_star',
Swp_star=0.25,
eta=2.5)
Explanation: We apply the following late pore filling model:
$$S_{res} = S_{wp}^{*}\left(\frac{P_c^{*}}{P_c}\right)^{\eta}$$
This is a heuristic model that adjusts the phase occupancy inside an individual pore after it has been invaded, reproducing the gradual expansion of the invading phase into smaller sub-pore-scale features such as cracks and fiber intersections.
End of explanation
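# A minimal numerical sketch of the relation above (an illustration, not the
# OpenPNM source): once a pore has been invaded (Pc > Pc_star), the residual
# wetting-phase fraction left behind decays as Swp_star * (Pc_star / Pc)**eta.
import numpy as np
def residual_wetting_fraction(Pc, Pc_star, Swp_star=0.25, eta=2.5):
    Pc = np.asarray(Pc, dtype=float)
    return np.where(Pc > Pc_star, Swp_star * (Pc_star / Pc) ** eta, Swp_star)
print(residual_wetting_fraction([2e3, 5e3, 2e4], Pc_star=2e3))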
phys_air.add_model(propname='throat.meniscus',
model=op.models.physics.meniscus.purcell,
mode='men',
r_toroid=fiber_rad,
target_Pc=5000)
Explanation: Finally we add the meniscus model for cooperative pore filling. The model mechanics are explained in greater detail in part c of this tutorial, but the process is shown in the animation below. The brown fibrous cage structure represents the fibers surrounding and defining a single pore in the network. The shrinking spheres represent the invading phase present at each throat. The cooperative pore filling sequence for a single pore then goes as follows: as pressure increases the phase is squeezed further into the pore and the curvature of each meniscus increases. While no menisci overlap inside the pore they are coloured blue; when meniscus spheres begin to intersect (inside the pore) they are coloured green. When a sphere's curvature reaches the maximum required to transition through its throat it is coloured red. Larger throats allow smaller curvature and therefore lower pressure. Not all spheres transition from blue to green before going red; these represent a burst occurring before coalescence, regardless of phase occupancy. Meniscus interactions are assessed for every throat and all the neighboring throats of each pore as a pre-processing step to determine the coalescence pressure. Then, once the percolation algorithm is running, coalescence is triggered if the phase is present at the corresponding throat pairs and the coalescence pressure is lower than the burst pressure.
End of explanation
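# Conceptual geometric check (an assumption for illustration, not the OpenPNM
# implementation): two menisci, idealised as spheres of radius r1 and r2 centred
# at c1 and c2 inside a pore, become candidates for coalescence once the spheres
# intersect.
import numpy as np
def menisci_intersect(c1, c2, r1, r2):
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < (r1 + r2)
print(menisci_intersect([0, 0, 0], [5e-6, 0, 0], 3e-6, 3e-6))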
#NBVAL_IGNORE_OUTPUT
inv_points = np.arange(-15000, 15100, 10)
IP_injection = op.algorithms.MixedInvasionPercolation(network=pn, name='injection')
IP_injection.setup(phase=water)
IP_injection.set_inlets(pores=pn.pores('bottom_boundary'))
IP_injection.settings['late_pore_filling'] = 'pore.late_filling'
IP_injection.run()
injection_data = IP_injection.get_intrusion_data(inv_points=inv_points)
IP_withdrawal = op.algorithms.MixedInvasionPercolationCoop(network=pn, name='withdrawal')
IP_withdrawal.setup(phase=air)
IP_withdrawal.set_inlets(pores=pn.pores('top_boundary'))
IP_withdrawal.setup(cooperative_pore_filling='throat.meniscus')
coop_points = np.arange(0, 1, 0.1)*inv_points.max()
IP_withdrawal.setup_coop_filling(inv_points=coop_points)
IP_withdrawal.run()
IP_withdrawal.set_outlets(pores=pn.pores(['bottom_boundary']))
IP_withdrawal.apply_trapping()
withdrawal_data = IP_withdrawal.get_intrusion_data(inv_points=inv_points)
plt.figure()
plt.plot(injection_data.Pcap, injection_data.S_tot, 'r*-')
plt.plot(-withdrawal_data.Pcap, 1-withdrawal_data.S_tot, 'b*-')
plt.plot(data[:, 0], data[:, 1], 'g--')
plt.xlabel('Capillary Pressure \n (P_water - P_air) [Pa]')
plt.ylabel('Saturation \n Porous Volume Fraction occupied by water')
plt.show()
Explanation: Percolation Algorithms
Now that all the physics is defined, we can set up and run two algorithms, for water injection and withdrawal, and compare to the experimental data.
NOTE: THIS NEXT STEP MIGHT TAKE SEVERAL MINUTES.
End of explanation
print(f"Injection - capillary pressure (Pa):\n {injection_data.Pcap}")
print(f"Injection - Saturation:\n {injection_data.S_tot}")
print(f"Withdrawal - capillary pressure (Pa):\n {-withdrawal_data.Pcap}")
print(f"Withdrawal - Saturation:\n {1-withdrawal_data.S_tot}")
Explanation: Let's take a look at the data plotted in the above cell:
End of explanation
#NBVAL_IGNORE_OUTPUT
print(ws)
Explanation: Saving the output
OpenPNM manages the simulation projects with the Workspace manager class, which is a singleton instantiated when OpenPNM is first imported. We can print it to take a look at the contents.
End of explanation
ws.save_project(prj, '../../fixtures/hysteresis_paper_project')
Explanation: The project is saved for part b of this tutorial
End of explanation |
5,657 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python for Bioinformatics
This Jupyter notebook is intented to be used alongside the book Python for Bioinformatics
Chapter 19
Step1: Listing 19.1
Step2: Listing 19.2 | Python Code:
!curl https://raw.githubusercontent.com/Serulab/Py4Bio/master/samples/samples.tar.bz2 -o samples.tar.bz2
!mkdir samples
!tar xvfj samples.tar.bz2 -C samples
Explanation: Python for Bioinformatics
This Jupyter notebook is intended to be used alongside the book Python for Bioinformatics
Chapter 19: Filtering Out Specific Fields from a GenBank File
Note: Before opening the file, it must be accessible from this Jupyter notebook. To that end, the following commands download the files from GitHub and extract them into a directory called samples.
End of explanation
from Bio import SeqIO, SeqRecord, Seq
from Bio.Alphabet import IUPAC
GB_FILE = 'samples/NC_006581.gb'
OUT_FILE = 'nadh.fasta'
with open(GB_FILE) as gb_fh:
record = SeqIO.read(gb_fh, 'genbank')
seqs_for_fasta = []
for feature in record.features:
# Each Genbank record may have several features, the program
# will walk over all of them.
qualifier = feature.qualifiers
# Each feature has several parameters
# Pick selected parameters.
if 'NADH' in qualifier.get('product',[''])[0] and \
'product' in qualifier and 'translation' in qualifier:
id_ = qualifier['db_xref'][0][3:]
desc = qualifier['product'][0]
# nadh_sq is a NADH protein sequence
nadh_sq = Seq.Seq(qualifier['translation'][0], IUPAC.protein)
# 'srec' is a SeqRecord object from nadh_sq sequence.
srec = SeqRecord.SeqRecord(nadh_sq, id=id_, description=desc)
# Add this SeqRecord object into seqsforfasta list.
seqs_for_fasta.append(srec)
with open(OUT_FILE, 'w') as outf:
# Write all the sequences as a FASTA file.
SeqIO.write(seqs_for_fasta, outf, 'fasta')
Explanation: Listing 19.1: genbank1.py: Extract sequences from a Genbank file
End of explanation
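# Optional sanity check (illustrative): read back the FASTA file just written and
# count how many NADH records it contains.
from Bio import SeqIO
print(sum(1 for _ in SeqIO.parse('nadh.fasta', 'fasta')), 'sequences written to nadh.fasta')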
from Bio import SeqIO
from Bio.SeqRecord import SeqRecord
GB_FILE = 'samples/NC_006581.gb'
OUT_FILE = 'tg.fasta'
with open(GB_FILE) as gb_fh:
record = SeqIO.read(gb_fh, 'genbank')
seqs_for_fasta = []
tg = (['cox2'],['atp6'],['atp9'],['cob'])
for feature in record.features:
if feature.qualifiers.get('gene') in tg and feature.type=='gene':
# Get the name of the gene
genename = feature.qualifiers.get('gene')
# Get the start position
startpos = feature.location.start.position
# Get the required slice
newfrag = record.seq[startpos-1000: startpos]
# Build a SeqRecord object
newrec = SeqRecord(newfrag, genename[0] + ' 1000bp upstream',
'','')
seqs_for_fasta.append(newrec)
with open(OUT_FILE,'w') as outf:
# Write all the sequences as a FASTA file.
SeqIO.write(seqs_for_fasta, outf, 'fasta')
Explanation: Listing 19.2: genbank2.py: Extract upstream regions
End of explanation |
5,658 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphes - correction
Correction des exercices sur les graphes avec matplotlib.
Pour avoir des graphiques inclus dans le notebook, il faut ajouter cette ligne et l'exécuter en premier.
Step1: On change le style pour un style plus moderne, celui de ggplot
Step2: Données
élections
Pour tous les exemples qui suivent, on utilise les résultat élection présidentielle de 2012. Si vous n'avez pas le module actuariat_python, il vous suffit de recopier le code de la fonction elections_presidentielles qui utilise la fonction read_excel
Step3: localisation des villes
Step4: exercice 1
Step5: exercice 2
Step6: On ne retient que les villes de plus de 100.000 habitants. Toutes les villes ne font pas partie de la métropole
Step7: Saint-Denis est à la Réunion. On l'enlève de l'ensemble
Step8: On dessine la carte souhaitée en ajoutant un marqueur pour chaque ville dont la surface dépend du nombre d'habitant. Sa taille doit être proportionnelle à à la racine carrée du nombre d'habitants.
Step9: rappel
Step10: On l'utilise souvent de cette manière
Step11: Sans la fonction zip
Step12: Ou encore
Step13: exercice 3
Step14: Il y a 63 départements où Hollande est vainqueur.
Step15: On récupère les formes de chaque département
Step16: Le problème est que les codes sont difficiles à associer aux résultats des élections. La page Wikipedia de Bas-Rhin lui associe le code 67. Le Bas-Rhin est orthographié BAS RHIN dans la liste des résultats. Le code du département n'apparaît pas dans les shapefile récupérés. Il faut matcher sur le nom du département. On met tout en minuscules et on enlève espaces et tirets.
Step17: Et comme il faut aussi remplacer les accents, on s'inspire de la fonction remove_diacritic
Step18: Puis on utilise le code de l'énoncé en changeant la couleur. Pas de couleur indique les départements pour lesquels on ne sait pas.
Step19: La fonction fait encore une erreur pour la Corse du Sud... Je la laisse en guise d'exemple.
exercice 3 avec les shapefile etalab
Les données sont disponibles sur GEOFLA® Départements mais vous pouvez reprendre le code ce-dessus pour les télécharger.
Step20: exercice 4 | Python Code:
%matplotlib inline
Explanation: Graphs - worked solutions
Worked solutions for the plotting exercises with matplotlib.
To get plots displayed inline in the notebook, add this line and run it first.
End of explanation
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Switch to a more modern style, the ggplot one:
End of explanation
from actuariat_python.data import elections_presidentielles
dict_df = elections_presidentielles(local=True, agg="dep")
def cleandep(s):
if isinstance(s, str):
r = s.lstrip('0')
else:
r = str(s)
return r
dict_df["dep1"]["Code du département"] = dict_df["dep1"]["Code du département"].apply(cleandep)
dict_df["dep2"]["Code du département"] = dict_df["dep2"]["Code du département"].apply(cleandep)
deps = dict_df["dep1"].merge(dict_df["dep2"],
on="Code du département",
suffixes=("T1", "T2"))
deps["rHollandeT1"] = deps['François HOLLANDE (PS)T1'] / (deps["VotantsT1"] - deps["Blancs et nulsT1"])
deps["rSarkozyT1"] = deps['Nicolas SARKOZY (UMP)T1'] / (deps["VotantsT1"] - deps["Blancs et nulsT1"])
deps["rNulT1"] = deps["Blancs et nulsT1"] / deps["VotantsT1"]
deps["rHollandeT2"] = deps["François HOLLANDE (PS)T2"] / (deps["VotantsT2"] - deps["Blancs et nulsT2"])
deps["rSarkozyT2"] = deps['Nicolas SARKOZY (UMP)T2'] / (deps["VotantsT2"] - deps["Blancs et nulsT2"])
deps["rNulT2"] = deps["Blancs et nulsT2"] / deps["VotantsT2"]
data = deps[["Code du département", "Libellé du départementT1",
"VotantsT1", "rHollandeT1", "rSarkozyT1", "rNulT1",
"VotantsT2", "rHollandeT2", "rSarkozyT2", "rNulT2"]]
data_elections = data  # keep a copy, since `data` is sometimes overwritten later
data.head()
Explanation: Data
elections
All the following examples use the results of the 2012 French presidential election. If you do not have the actuariat_python module, simply copy the code of the elections_presidentielles function, which relies on the read_excel function:
End of explanation
from pyensae.datasource import download_data
download_data("villes_france.csv", url="http://sql.sh/ressources/sql-villes-france/")
cols = ["ncommune", "numero_dep", "slug", "nom", "nom_simple", "nom_reel", "nom_soundex", "nom_metaphone", "code_postal",
"numero_commune", "code_commune", "arrondissement", "canton", "pop2010", "pop1999", "pop2012",
"densite2010", "surface", "superficie", "dlong", "dlat", "glong", "glat", "slong", "slat", "alt_min", "alt_max"]
import pandas
villes = pandas.read_csv("villes_france.csv", header=None,low_memory=False, names=cols)
Explanation: city locations
End of explanation
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-5, 10, 38, 52])
ax.add_feature(cfeature.OCEAN.with_scale('50m'))
ax.add_feature(cfeature.RIVERS.with_scale('50m'))
ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
ax.set_title('France');
Explanation: exercise 1: center the map on France
We re-center the map. The only change is the extent [-5, 10, 38, 52].
End of explanation
def carte_france(figsize=(7, 7)):
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(1, 1, 1, projection=ccrs.PlateCarree())
ax.set_extent([-5, 10, 38, 52])
ax.add_feature(cfeature.OCEAN.with_scale('50m'))
ax.add_feature(cfeature.RIVERS.with_scale('50m'))
ax.add_feature(cfeature.BORDERS.with_scale('50m'), linestyle=':')
ax.set_title('France');
return ax
carte_france();
Explanation: exercise 2: place the largest French cities on the map
We reuse the carte_france function given in the exercise statement, modified with the result of the previous question.
End of explanation
grosses_villes = villes[villes.pop2012 > 100000][["dlong","dlat","nom", "pop2012"]]
grosses_villes.describe()
grosses_villes.sort_values("dlat").head()
Explanation: We keep only the cities with more than 100,000 inhabitants. Not all of them belong to metropolitan France:
End of explanation
grosses_villes = villes[(villes.pop2012 > 100000) & (villes.dlat > 40)] \
[["dlong","dlat","nom", "pop2012"]]
Explanation: Saint-Denis is on Réunion Island. We remove it from the set:
End of explanation
import matplotlib.pyplot as plt
ax = carte_france()
def affiche_ville(ax, x, y, nom, pop):
ax.plot(x, y, 'ro', markersize=pop**0.5/50)
ax.text(x, y, nom)
for lon, lat, nom, pop in zip(grosses_villes["dlong"],
grosses_villes["dlat"],
grosses_villes["nom"],
grosses_villes["pop2012"]):
affiche_ville(ax, lon, lat, nom, pop)
ax;
Explanation: We draw the desired map, adding for each city a marker whose area depends on its population. Its size must be proportional to the square root of the number of inhabitants.
End of explanation
list(zip([1,2,3], ["a", "b", "c"]))
Explanation: reminder: the zip function
The zip function pairs two sequences together.
End of explanation
for a,b in zip([1,2,3], ["a", "b", "c"]):
    # do something with a and b
print(a,b)
Explanation: It is often used like this:
End of explanation
ax = carte_france()
def affiche_ville(ax, x, y, nom, pop):
ax.plot(x, y, 'ro', markersize=pop**0.5/50)
ax.text(x, y, nom)
def affiche_row(ax, row):
affiche_ville(ax, row["dlong"], row["dlat"], row["nom"], row["pop2012"])
grosses_villes.apply(lambda row: affiche_row(ax, row), axis=1)
ax;
Explanation: Without the zip function:
End of explanation
import matplotlib.pyplot as plt
ax = carte_france()
def affiche_ville(ax, x, y, nom, pop):
ax.plot(x, y, 'ro', markersize=pop**0.5/50)
ax.text(x, y, nom)
for i in range(0, grosses_villes.shape[0]):
ind = grosses_villes.index[i]
    # important: rows keep the labels of the original dataframe's index
    # since the rows were filtered to keep only the large cities,
    # either use reset_index or fetch the row label explicitly
lon, lat = grosses_villes.loc[ind, "dlong"], grosses_villes.loc[ind, "dlat"]
nom, pop = grosses_villes.loc[ind, "nom"], grosses_villes.loc[ind, "pop2012"]
affiche_ville(ax, lon, lat, nom, pop)
ax;
Explanation: Or alternatively:
End of explanation
data_elections.shape, data_elections[data_elections.rHollandeT2 > data_elections.rSarkozyT2].shape
Explanation: exercise 3: election results by département
Starting from the election results, we first build a dictionary of the form { departement: winner }.
End of explanation
hollande_gagnant = dict(zip(data_elections["Libellé du départementT1"], data_elections.rHollandeT2 > data_elections.rSarkozyT2))
list(hollande_gagnant.items())[:5]
Explanation: Hollande is the winner in 63 départements.
End of explanation
from pyensae.datasource import download_data
try:
download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z",
website="https://wxs-telechargement.ign.fr/oikr5jryiph0iwhw36053ptm/telechargement/inspire/" + \
"GEOFLA_THEME-DEPARTEMENTS_2015_2$GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/file/")
except Exception as e:
    # fallback in case the website is not reachable
download_data("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", website="xd")
from pyquickhelper.filehelper import un7zip_files
try:
un7zip_files("GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01.7z", where_to="shapefiles")
departements = 'shapefiles/GEOFLA_2-1_DEPARTEMENT_SHP_LAMB93_FXX_2015-12-01/GEOFLA/1_DONNEES_LIVRAISON_2015/' + \
'GEOFLA_2-1_SHP_LAMB93_FR-ED152/DEPARTEMENT/DEPARTEMENT.shp'
except FileNotFoundError as e:
    # This call may fail.
    # In that case, fall back to a copy of the file.
import warnings
warnings.warn("Plan B parce que " + str(e))
download_data("DEPARTEMENT.zip")
departements = "DEPARTEMENT.shp"
import os
if not os.path.exists(departements):
raise FileNotFoundError("Impossible de trouver '{0}'".format(departements))
import shapefile
shp = departements
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()
records[0]
Explanation: We retrieve the shape of each département:
End of explanation
hollande_gagnant_clean = { k.lower().replace("-", "").replace(" ", ""): v for k,v in hollande_gagnant.items()}
list(hollande_gagnant_clean.items())[:5]
Explanation: The problem is that the codes are hard to match with the election results. The Wikipedia page for Bas-Rhin gives it code 67. Bas-Rhin is spelled BAS RHIN in the results list, and the département code does not appear in the downloaded shapefiles, so we have to match on the département name: everything is lower-cased and spaces and hyphens are removed.
End of explanation
import unicodedata
def retourne_vainqueur(nom_dep):
s = nom_dep.lower().replace("-", "").replace(" ", "")
nkfd_form = unicodedata.normalize('NFKD', s)
only_ascii = nkfd_form.encode('ASCII', 'ignore')
s = only_ascii.decode("utf8")
if s in hollande_gagnant_clean:
return hollande_gagnant_clean[s]
else:
keys = list(sorted(hollande_gagnant_clean.keys()))
keys = [_ for _ in keys if _[0].lower() == s[0].lower()]
print("impossible de savoir pour ", nom_dep, "*", s, "*", " --- ", keys[:5])
return None
import math
def lambert932WGPS(lambertE, lambertN):
class constantes:
GRS80E = 0.081819191042816
LONG_0 = 3
XS = 700000
YS = 12655612.0499
n = 0.7256077650532670
C = 11754255.4261
delX = lambertE - constantes.XS
delY = lambertN - constantes.YS
gamma = math.atan(-delX / delY)
R = math.sqrt(delX * delX + delY * delY)
latiso = math.log(constantes.C / R) / constantes.n
sinPhiit0 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * math.sin(1)))
sinPhiit1 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit0))
sinPhiit2 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit1))
sinPhiit3 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit2))
sinPhiit4 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit3))
sinPhiit5 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit4))
sinPhiit6 = math.tanh(latiso + constantes.GRS80E * math.atanh(constantes.GRS80E * sinPhiit5))
longRad = math.asin(sinPhiit6)
latRad = gamma / constantes.n + constantes.LONG_0 / 180 * math.pi
longitude = latRad / math.pi * 180
latitude = longRad / math.pi * 180
return longitude, latitude
lambert932WGPS(99217.1, 6049646.300000001), lambert932WGPS(1242417.2, 7110480.100000001)
Explanation: Since accents also need to be replaced, we take inspiration from the remove_diacritic function:
End of explanation
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
ax = carte_france((8,8))
from matplotlib.collections import LineCollection
import shapefile
import geopandas
from shapely.geometry import Polygon
from shapely.ops import cascaded_union, unary_union
shp = departements
r = shapefile.Reader(shp)
shapes = r.shapes()
records = r.records()
polys = []
colors = []
for i, (record, shape) in enumerate(zip(records, shapes)):
    # Winner
dep = retourne_vainqueur(record[2])
if dep is not None:
couleur = "red" if dep else "blue"
else:
couleur = "gray"
    # coordinates are in Lambert 93
if i == 0:
print(record, shape.parts, couleur)
geo_points = [lambert932WGPS(x,y) for x, y in shape.points]
if len(shape.parts) == 1:
        # A single polygon
poly = Polygon(geo_points)
else:
        # They need to be merged.
ind = list(shape.parts) + [len(shape.points)]
pols = [Polygon(geo_points[ind[i]:ind[i+1]]) for i in range(0, len(shape.parts))]
try:
poly = unary_union(pols)
except Exception as e:
print("Cannot merge: ", record)
print([_.length for _ in pols], ind)
poly = Polygon(geo_points)
polys.append(poly)
colors.append(couleur)
data = geopandas.GeoDataFrame(dict(geometry=polys, colors=colors))
geopandas.plotting.plot_polygon_collection(ax, data['geometry'], facecolor=data['colors'],
values=None, edgecolor='black');
Explanation: Then we reuse the code from the exercise statement, changing the colour. No colour marks the départements for which the winner could not be determined.
End of explanation
# here, the data must be unzipped manually
# to be completed
Explanation: The function still fails for Corse-du-Sud... It is left here as an example.
exercise 3 with the Etalab shapefiles
The data are available from GEOFLA® Départements, but you can reuse the code above to download them.
End of explanation
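# One possible way to finish this exercise (a sketch; the path below is an
# assumption and depends on where the GEOFLA archive was unzipped manually):
import geopandas
deps_etalab = geopandas.read_file("GEOFLA/DEPARTEMENT/DEPARTEMENT.shp").to_crs(epsg=4326)
deps_etalab.plot(figsize=(7, 7), facecolor="none", edgecolor="black");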
import matplotlib.pyplot as plt
from ipywidgets import interact, Checkbox
def plot(candh, cands):
fig, axes = plt.subplots(1, 1, figsize=(14,5), sharey=True)
if candh:
data_elections.plot(x="rHollandeT1", y="rHollandeT2", kind="scatter", label="H", ax=axes)
if cands:
data_elections.plot(x="rSarkozyT1", y="rSarkozyT2", kind="scatter", label="S", ax=axes, c="red")
axes.plot([0.2,0.7], [0.2,0.7], "g--")
return axes
candh = Checkbox(description='Hollande', value=True)
cands = Checkbox(description='Sarkozy', value=True)
interact(plot, candh=candh, cands=cands);
Explanation: exercise 4: same code, different widget
Checkboxes are used to enable or disable each of the two candidates.
End of explanation |
5,659 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conditional non-linear systems of equations
Sometimes when performing modelling work in physical sciences we use different sets of equations to describe our system depending on conditions. Sometimes it is not known beforehand which of those formulations that will be applicable (only after having solved the system of equations can we reject or accept the answer). pyneqsys provides facilities to handle precisely this situation.
Step1: Let's consider precipitation/dissolution of NaCl
Step2: if the solution is saturated, then the solubility product will be constant
Step3: Our two sets of reactions are then
Step4: We have one condition (a boolean describing whether the solution is saturated or not). We provide two conditionals, one for going from non-saturated to saturated (forward) and one going from saturated to non-saturated (backward)
Step5: Solving for inital concentrations below the solubility product
Step6: no surprises there (it is of course trivial).
In order to illustrate its usefulness, let us consider addition of a more soluable sodium salt (e.g. NaOH) to a chloride rich solution (e.g. HCl) | Python Code:
from __future__ import (absolute_import, division, print_function)
from functools import reduce
from operator import mul
import sympy as sp
import numpy as np
import matplotlib.pyplot as plt
from pyneqsys.symbolic import SymbolicSys, linear_exprs
sp.init_printing()
Explanation: Conditional non-linear systems of equations
Sometimes when performing modelling work in physical sciences we use different sets of equations to describe our system depending on conditions. Sometimes it is not known beforehand which of those formulations will be applicable (only after having solved the system of equations can we reject or accept the answer). pyneqsys provides facilities to handle precisely this situation.
End of explanation
init_concs = iNa_p, iCl_m, iNaCl = [sp.Symbol('i_'+str(i), real=True, negative=False) for i in range(3)]
c = Na_p, Cl_m, NaCl = [sp.Symbol('c_'+str(i), real=True, negative=False) for i in range(3)]
prod = lambda x: reduce(mul, x)
texnames = [r'\mathrm{%s}' % k for k in 'Na^+ Cl^- NaCl'.split()]
Explanation: Let's consider precipitation/dissolution of NaCl:
$$
\rm NaCl(s) \rightleftharpoons Na^+(aq) + Cl^-(aq)
$$
End of explanation
stoichs = [[1, 1, -1]]
Na = [1, 0, 1]
Cl = [0, 1, 1]
charge = [1, -1, 0]
preserv = [Na, Cl, charge]
eq_constants = [Ksp] = [sp.Symbol('K_{sp}', real=True, positive=True)]
def get_f(x, params, saturated):
init_concs = params[:3] if saturated else params[:2]
eq_constants = params[3:]
le = linear_exprs(preserv, x, linear_exprs(preserv, init_concs), rref=True)
return le + ([Na_p*Cl_m - Ksp] if saturated else [NaCl])
Explanation: if the solution is saturated, then the solubility product will be constant:
$$
K_{\rm sp} = \mathrm{[Na^+][Cl^-]}
$$
in addition to this (conditional relation) we can write equations for the preservation of atoms and charge:
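Written out explicitly, the conservation relations encoded by preserv (sodium, chloride and charge balance against the initial amounts) are
$$[\mathrm{Na^+}] + [\mathrm{NaCl}] = [\mathrm{Na^+}]_0 + [\mathrm{NaCl}]_0$$
$$[\mathrm{Cl^-}] + [\mathrm{NaCl}] = [\mathrm{Cl^-}]_0 + [\mathrm{NaCl}]_0$$
$$[\mathrm{Na^+}] - [\mathrm{Cl^-}] = [\mathrm{Na^+}]_0 - [\mathrm{Cl^-}]_0$$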
End of explanation
get_f(c, init_concs + eq_constants, False)
f_true = get_f(c, init_concs + eq_constants, True)
f_false = get_f(c, init_concs + eq_constants, False)
f_true, f_false
Explanation: Our two sets of reactions are then:
End of explanation
from pyneqsys.core import ConditionalNeqSys
cneqsys = ConditionalNeqSys(
[
(lambda x, p: (x[0] + x[2]) * (x[1] + x[2]) > p[3], # forward condition
lambda x, p: x[2] >= 0) # backward condition
],
lambda conds: SymbolicSys(
c, f_true if conds[0] else f_false, init_concs+eq_constants
),
latex_names=['[%s]' % n for n in texnames], latex_param_names=['[%s]_0' % n for n in texnames]
)
c0, K = [0.5, 0.5, 0], [1] # Ksp for NaCl(aq) isn't 1 in reality, but used here for illustration
params = c0 + K
Explanation: We have one condition (a boolean describing whether the solution is saturated or not). We provide two conditionals, one for going from non-saturated to saturated (forward) and one going from saturated to non-saturated (backward):
End of explanation
cneqsys.solve([0.5, 0.5, 0], params)
Explanation: Solving for initial concentrations below the solubility product:
End of explanation
%matplotlib inline
ax_out = plt.subplot(1, 2, 1)
ax_err = plt.subplot(1, 2, 2)
xres, sols = cneqsys.solve_and_plot_series(
c0, params, np.linspace(0, 3), 0, 'kinsol',
{'ax': ax_out}, {'ax': ax_err}, fnormtol=1e-14)
_ = ax_out.legend()
Explanation: no surprises there (it is of course trivial).
In order to illustrate its usefulness, let us consider addition of a more soluble sodium salt (e.g. NaOH) to a chloride-rich solution (e.g. HCl):
End of explanation |
5,660 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
03-exploratory-analysis for final project
I am working on the Kaggle Grupo Bimbo competition dataset for this project.
Link to Grupo Bimbo Kaggle competition
Step1: Part 1. Identify the Problem
Problem
Step2: Part 3. Parse, Mine, and Refine the data
Perform exploratory data analysis and verify the quality of the data.
Check columns and counts to drop any non-generic or near-empty columns
Step3: Check for missing values and drop or impute
Step4: Wrangle the data to address any issues from above checks
Step5: Perform exploratory data analysis
Step6: Check and convert all data types to numerical
Step7: Part 4. Build a Model
Create a cross validation split, select and build a model, evaluate the model, and refine the model
Create cross validation sets
Step8: Build a model
Step9: Evaluate the model
Step10: Part 5
Step11: Load Kaggle test data, make predictions using model, and generate submission file
Step12: Kaggle score | Python Code:
import numpy as np
import pandas as pd
from sklearn import cross_validation
from sklearn import metrics
from sklearn import linear_model
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="whitegrid", font_scale=1)
%matplotlib inline
Explanation: 03-exploratory-analysis for final project
I am working on the Kaggle Grupo Bimbo competition dataset for this project.
Link to Grupo Bimbo Kaggle competition: Kaggle-GrupoBimbo
End of explanation
# Load train data
# Given size of training data, I chose to use only 10% for speed reasons
# QUESTION - how can i randomize with python? i used sql to create the random sample below.
df_train = pd.read_csv("train_random10percent.csv")
# Check head
df_train.head()
# Load test data
df_test = pd.read_csv("test.csv")
# Check head. I noticed that I will have to drop certain columns so that test and train sets have the same features.
df_test.head()
#given that i cannot use a significant amount of variables in train data, i created additional features using the mean
#i grouped on product id since i will ultimately be predicting demand for each product
df_train_mean = df_train.groupby('Producto_ID').mean().add_suffix('_mean').reset_index()
df_train_mean.head()
#from above, adding 2 additional features, the average sales units and the average demand
df_train2 = df_train.merge(df_train_mean[['Producto_ID','Venta_uni_hoy_mean', 'Demanda_uni_equil_mean']],how='inner',on='Producto_ID')
df_train2.sample(5)
# Adding features to the test set in order to match train set
df_test2 = df_test.merge(df_train_mean[['Producto_ID','Venta_uni_hoy_mean', 'Demanda_uni_equil_mean']],how='left',on='Producto_ID')
df_test2.head()
Explanation: Part 1. Identify the Problem
Problem: Given various sales/client/product data, we want to predict demand for each product at each store on a weekly basis. Per the train dataset, the average demand for a product at a store per week is 7.2 units. However, this does not factor in cases in which store managers under-predict demand for a product, which we can see when returns=0 for that week. There are 74,180,464 records in the train data, of which 71,636,003 records have returns=0, or approx 96%. This generally means that managers probably often under-predict product demand (unless they are exactly on the money, which seems unlikely).
Goals: The goal is to predict demand for each product at each store on a weekly basis while avoiding under-predicting demand.
Hypothesis: As stated previously, the average product demand at a store per week is 7.2 units per the train data. However, given the likelihood of managers under-predicting product demand, I hypothesize a good model should return a number higher than 7.2 units to more accurately predict demand.
Part 2. Acquire the Data
Kaggle has provided five files for this dataset:
train.csv: Use for building a model (contains target variable "Demanda_uni_equil")
test.csv: Use for submission file (fill in for target variable "Demanda_uni_equil")
cliente_tabla.csv: Contains client names (can be joined with train/test on Cliente_ID)
producto_tabla.csv: Contains product names (can be join with train/test on Producto_ID)
town_state.csv: Contains town and state (can be join with train/test on Agencia_ID)
Notes: I will further split train.csv to generate my own cross validation set. However, I will use all of train.csv to train my final model since Kaggle has already supplied a test dataset. Additionally, I am only using a random 10% of the train data given to me for EDA and model development. Using the entire train dataset proved to be too time consuming for the quick iterations needed for initial model building and EDA efforts. I plan to use 100% of the train dataset once I build a model I'm comfortable with. I may have to explore using EC2 for this effort.
End of explanation
# Check columns
print "train dataset columns:"
print df_train2.columns.values
print
print "test dataset columns:"
print df_test2.columns.values
# Check counts
print "train dataset counts:"
print df_train2.count()
print
print "test dataset counts:"
print df_test2.count()
Explanation: Part 3. Parse, Mine, and Refine the data
Perform exploratory data analysis and verify the quality of the data.
Check columns and counts to drop any non-generic or near-empty columns
End of explanation
# Check counts for missing values in each column
print "train dataset missing values:"
print df_train2.isnull().sum()
print
print "test dataset missing values:"
print df_test2.isnull().sum()
Explanation: Check for missing values and drop or impute
End of explanation
# Drop columns not included in test dataset
df_train2 = df_train2.drop(['Venta_uni_hoy', 'Venta_hoy', 'Dev_uni_proxima', 'Dev_proxima'], axis=1)
# Check data
df_train2.head()
# Drop blank values in test set and replace with mean
# Replace missing values for venta_uni_hoy_mean using mean
df_test2.loc[(df_test2['Venta_uni_hoy_mean'].isnull()), 'Venta_uni_hoy_mean'] = df_test2['Venta_uni_hoy_mean'].dropna().mean()
# Replace missing values for demand using mean
df_test2.loc[(df_test2['Demanda_uni_equil_mean'].isnull()), 'Demanda_uni_equil_mean'] = df_test2['Demanda_uni_equil_mean'].dropna().mean()
print "test dataset missing values:"
print df_test2.isnull().sum()
Explanation: Wrangle the data to address any issues from above checks
End of explanation
# Get summary statistics for data
df_train2.describe()
#RE RUN THIS LAST
# Get pair plot for data
sns.pairplot(df_train2)
#show demand by weeks
timing = pd.read_csv('train_random10percent.csv', usecols=['Semana','Demanda_uni_equil'])
print(timing['Semana'].value_counts())
plt.hist(timing['Semana'].tolist(), bins=7, color='blue')
plt.show()
#QUESTION - is this a time series problem since we are predicting demand for weeks 10 and 11? and beyond?
#Show box plot of demand by week
sns.factorplot(
x='Semana',
y='Demanda_uni_equil',
data=df_train2,
kind='box')
Explanation: Perform exploratory data analysis
End of explanation
# Check data types
df_train.dtypes
#these are all numerical but are not continuous values and therefore don't have relative significance to one another, except for week
#however, creating dummy variables for all these is too memory intensive. as such, might have to explore using a random forest model
#in addition to the linear regression model
Explanation: Check and convert all data types to numerical
End of explanation
#create cross validation sets
#set target variable name
target = 'Demanda_uni_equil'
#set X and y
X = df_train2.drop([target], axis=1)
y = df_train2[target]
# create separate training and test sets with 60/40 train/test split
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size= .4, random_state=0)
Explanation: Part 4. Build a Model
Create a cross validation split, select and build a model, evaluate the model, and refine the model
Create cross validation sets
End of explanation
#create linear regression object
lm = linear_model.LinearRegression()
#train the model using the training data
lm.fit(X_train,y_train)
Explanation: Build a model
End of explanation
# Check R^2 on test set
print "R^2: %0.3f" % lm.score(X_test,y_test)
# Check MSE on test set
#http://scikit-learn.org/stable/modules/classes.html#module-sklearn.metrics
print "MSE: %0.3f" % metrics.mean_squared_error(y_test, lm.predict(X_test))
#QUESTION - should i check this on train set?
print "MSE: %0.3f" % metrics.mean_squared_error(y_train, lm.predict(X_train))
Explanation: Evaluate the model
End of explanation
# Set target variable name
target = 'Demanda_uni_equil'
# Set X_train and y_train
X_train = df_train2.drop([target], axis=1)
y_train = df_train2[target]
# Build tuned model
#create linear regression object
lm = linear_model.LinearRegression()
#train the model using the training data
lm.fit(X_train,y_train)
# Score tuned model
print "R^2: %0.3f" % lm.score(X_train, y_train)
print "MSE: %0.3f" % metrics.mean_squared_error(y_train, lm.predict(X_train))
Explanation: Part 5: Present the Results
Generate summary of findings and kaggle submission file.
NOTE: For the purposes of generating summary narratives and kaggle submission, we can train the model on the entire training data provided in train.csv.
Load Kaggle training data and use entire data to train tuned model
End of explanation
#create data frame for submission
df_sub = df_test2[['id']]
df_test2 = df_test2.drop('id', axis=1)
#predict using tuned model
df_sub['Demanda_uni_equil'] = lm.predict(df_test2)
df_sub.describe()
d = df_sub['Demanda_uni_equil']
d[d<0] = 0
df_sub.describe()
# Write submission file
df_sub.to_csv("mysubmission3.csv", index=False)
Explanation: Load Kaggle test data, make predictions using model, and generate submission file
End of explanation
#notes
#want to try to use a classifier like random forest or logistic regression
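# Hedged sketch of the random-forest idea noted above (illustrative, untuned
# hyperparameters): tree ensembles split on raw ID values rather than treating
# them as continuous, so no dummy variables are required.
from sklearn.ensemble import RandomForestRegressor
rf = RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=0)
rf.fit(X_train, y_train)
print("Random forest R^2 (train): %0.3f" % rf.score(X_train, y_train))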
Explanation: Kaggle score : 0.75682
End of explanation |
5,661 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
load clean descriptions into memory
| Python Code::
def load_clean_descriptions(filename, dataset):
# load document
doc = load_doc(filename)
descriptions = dict()
for line in doc.split('\n'):
# split line by white space
tokens = line.split()
# split id from description
image_id, image_desc = tokens[0], tokens[1:]
# skip images not in the set
if image_id in dataset:
# create list
if image_id not in descriptions:
descriptions[image_id] = list()
# wrap description in tokens
desc = 'startseq ' + ' '.join(image_desc) + ' endseq'
# store
descriptions[image_id].append(desc)
return descriptions
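# load_doc() is used above but not defined in this snippet; a minimal helper
# consistent with that usage (assumption: it simply reads the whole file) is:
def load_doc(filename):
    # open the file read-only and return its contents as a single string
    with open(filename, 'r') as file:
        text = file.read()
    return text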
|
5,662 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a name="top"></a>
Step1: <a name="download"></a>
Downloading NARR Output
Let's investigate what specific NARR output is available to work with from NCEI.
https
Step2: Next, we set up access to request subsets of data from the model. This uses the NetCDF Subset Service (NCSS) to make requests from the GRIB collection and get results in netCDF format.
Step3: Subset Pressure Levels
Using xarray gives great functionality for selecting pieces of your dataset to use within your script/program. MetPy also includes helpers for unit- and coordinate-aware selection and getting unit arrays from xarray DataArrays.
Step4: Exercise
Write the code to access the remaining necessary pieces of data from our file to calculate the QG Omega forcing terms valid at 700 hPa.
Data variables desired
Step5: Solution
Step6: QG Omega Forcing Terms
Here is the QG Omega equation from Bluestein (1992; Eq. 5.6.11) with the two primary forcing terms on the right hand side of this equation.
$$\left(\nabla_p ^2 + \frac{f^2}{\sigma}\frac{\partial ^2}{\partial p^2}\right)\omega =
\frac{f_o}{\sigma}\frac{\partial}{\partial p}\left[\vec{V_g} \cdot \nabla_p \left(\zeta_g + f \right)\right] +
\frac{R}{\sigma p} \nabla_p ^2 \left[\vec{V_g} \cdot \nabla_p T \right]$$
We want to write code that will calculate the differential vorticity advection term (the first term on the r.h.s.) and the laplacian of the temperature advection. We will compute these terms so that they are valid at 700 hPa. Need to set constants for static stability, f0, and Rd.
Step7: Compute Term A - Differential Vorticity Advection
Need to compute
Step8: Exercise
Compute Term B - Laplacian of Temperature Advection
Need to compute
Step9: Solution
Step10: Four Panel Plot
Upper-left Panel
Step11: Start 4-panel Figure
Step12: Exercise
Plot the combined QG Omega forcing terms (term_A + term_B) in a single panel
BONUS | Python Code:
from datetime import datetime
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import numpy as np
from scipy.ndimage import gaussian_filter
from siphon.catalog import TDSCatalog
from siphon.ncss import NCSS
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
import metpy.constants as mpconstants
from metpy.units import units
import xarray as xr
Explanation: <a name="top"></a>
<div style="width:1000 px">
<div style="float:right; width:98 px; height:98px;">
<img src="https://raw.githubusercontent.com/Unidata/MetPy/master/metpy/plots/_static/unidata_150x150.png" alt="Unidata Logo" style="height: 98px;">
</div>
<h1>Advanced MetPy: Quasi-Geostrophic Analysis</h1>
<div style="clear:both"></div>
</div>
<hr style="height:2px;">
Overview:
Teaching: 30 minutes
Exercises: 45 minutes
Objectives
<a href="#download">Download NARR output from TDS</a>
<a href="#interpolation">Calculate QG-Omega Forcing Terms</a>
<a href="#ascent">Create a four-panel plot of QG Forcings</a>
This is a tutorial demonstrates common analyses for Synoptic Meteorology courses with use of Unidata tools, specifically MetPy and Siphon. In this tutorial we will cover accessing, calculating, and plotting model output.
Let's investigate The Storm of the Century, although it would easy to change which case you wanted (please feel free to do so).
Reanalysis Output: NARR 00 UTC 13 March 1993
Data from Reanalysis on pressure surfaces:
Geopotential Heights
Temperature
u-wind component
v-wind component
Calculations:
Laplacian of Temperature Advection
Differential Vorticity Advection
Wind Speed
End of explanation
# Case Study Date
year = 1993
month = 3
day = 13
hour = 0
dt = datetime(year, month, day, hour)
Explanation: <a name="download"></a>
Downloading NARR Output
Lets investigate what specific NARR output is available to work with from NCEI.
https://www.ncdc.noaa.gov/data-access/model-data/model-datasets/north-american-regional-reanalysis-narr
We specifically want to look for data that has "TDS" data access, since that is short for a THREDDS server data access point. There are a total of four different GFS datasets that we could potentially use.
Choosing our data source
Let's go ahead and use the NARR Analysis data to investigate the past case we identified (The Storm of the Century).
https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/199303/19930313/catalog.html?dataset=narr-a-files/199303/19930313/narr-a_221_19930313_0000_000.grb
And we will use a python package called Siphon to read this data through the NetCDFSubset (NetCDFServer) link.
https://www.ncei.noaa.gov/thredds/ncss/grid/narr-a-files/199303/19930313/narr-a_221_19930313_0000_000.grb/dataset.html
First we can set our date using the datetime module
End of explanation
# Read NARR Data from THREDDS server
base_url = 'https://www.ncei.noaa.gov/thredds/catalog/narr-a-files/'
# Programmatically generate the URL to the day of data we want
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/catalog.xml')
# Have Siphon find the appropriate dataset
ds = cat.datasets.filter_time_nearest(dt)
# Download data using the NetCDF Subset Service
ncss = ds.subset()
query = ncss.query().lonlat_box(north=60, south=18, east=300, west=225)
query.time(dt).variables('Geopotential_height_isobaric',
'Temperature_isobaric',
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric').add_lonlat().accept('netcdf')
data = ncss.get_data(query)
# Open data with xarray, and parse it with MetPy
ds = xr.open_dataset(xr.backends.NetCDF4DataStore(data)).metpy.parse_cf()
ds
# Back up in case of bad internet connection.
# Uncomment the following line to read local netCDF file of NARR data
# ds = xr.open_dataset('../../data/NARR_19930313_0000.nc').metpy.parse_cf()
Explanation: Next, we set up access to request subsets of data from the model. This uses the NetCDF Subset Service (NCSS) to make requests from the GRIB collection and get results in netCDF format.
End of explanation
# This is the time we're using
vtime = ds.Temperature_isobaric.metpy.time[0]
# Grab lat/lon values from file as unit arrays
lats = ds.lat.metpy.unit_array
lons = ds.lon.metpy.unit_array
# Calculate distance between grid points
# will need for computations later
dx, dy = mpcalc.lat_lon_grid_deltas(lons, lats)
# Grabbing data for specific variable contained in file (as a unit array)
# 700 hPa Geopotential Heights
hght_700 = ds.Geopotential_height_isobaric.metpy.sel(vertical=700 * units.hPa,
time=vtime)
# Equivalent form needed if there is a dash in name of variable
# (e.g., 'u-component_of_wind_isobaric')
# hght_700 = ds['Geopotential_height_isobaric'].metpy.sel(vertical=700 * units.hPa, time=vtime)
# 700 hPa Temperature
tmpk_700 = ds.Temperature_isobaric.metpy.sel(vertical=700 * units.hPa,
time=vtime)
# 700 hPa u-component_of_wind
uwnd_700 = ds['u-component_of_wind_isobaric'].metpy.sel(vertical=700 * units.hPa,
time=vtime)
# 700 hPa v-component_of_wind
vwnd_700 = ds['v-component_of_wind_isobaric'].metpy.sel(vertical=700 * units.hPa,
time=vtime)
Explanation: Subset Pressure Levels
Using xarray gives great functionality for selecting pieces of your dataset to use within your script/program. MetPy also includes helpers for unit- and coordinate-aware selection and getting unit arrays from xarray DataArrays.
End of explanation
# 500 hPa Geopotential Height
# 500 hPa u-component_of_wind
# 500 hPa v-component_of_wind
# 900 hPa u-component_of_wind
# 900 hPa v-component_of_wind
Explanation: Exercise
Write the code to access the remaining necessary pieces of data from our file to calculate the QG Omega forcing terms valid at 700 hPa.
Data variables desired:
* hght_500: 500-hPa Geopotential_height_isobaric
* uwnd_500: 500-hPa u-component_of_wind_isobaric
* vwnd_500: 500-hPa v-component_of_wind_isobaric
* uwnd_900: 900-hPa u-component_of_wind_isobaric
* vwnd_900: 900-hPa v-component_of_wind_isobaric
End of explanation
# %load solutions/QG_data.py
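# A possible implementation (sketch -- the contents of solutions/QG_data.py are
# not shown here); it simply mirrors the 700-hPa selections made above.
hght_500 = ds.Geopotential_height_isobaric.metpy.sel(vertical=500 * units.hPa, time=vtime)
uwnd_500 = ds['u-component_of_wind_isobaric'].metpy.sel(vertical=500 * units.hPa, time=vtime)
vwnd_500 = ds['v-component_of_wind_isobaric'].metpy.sel(vertical=500 * units.hPa, time=vtime)
uwnd_900 = ds['u-component_of_wind_isobaric'].metpy.sel(vertical=900 * units.hPa, time=vtime)
vwnd_900 = ds['v-component_of_wind_isobaric'].metpy.sel(vertical=900 * units.hPa, time=vtime)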
Explanation: Solution
End of explanation
# Set constant values that will be needed in computations
# Set default static stability value
sigma = 2.0e-6 * units('m^2 Pa^-2 s^-2')
# Set f-plane at typical synoptic f0 value
f0 = 1e-4 * units('s^-1')
# Use dry gas constant from MetPy constants
Rd = mpconstants.Rd
# Smooth Heights
# For calculation purposes we want to smooth our variables
# a little to get to the "synoptic values" from higher
# resolution datasets
# Number of repetitions of smoothing function
n_reps = 50
# Apply the 9-point smoother
hght_700s = mpcalc.smooth_n_point(hght_700, 9, n_reps)
hght_500s = mpcalc.smooth_n_point(hght_500, 9, n_reps)
tmpk_700s = mpcalc.smooth_n_point(tmpk_700, 9, n_reps)
tmpc_700s = tmpk_700s.to('degC')
uwnd_700s = mpcalc.smooth_n_point(uwnd_700, 9, n_reps)
vwnd_700s = mpcalc.smooth_n_point(vwnd_700, 9, n_reps)
uwnd_500s = mpcalc.smooth_n_point(uwnd_500, 9, n_reps)
vwnd_500s = mpcalc.smooth_n_point(vwnd_500, 9, n_reps)
uwnd_900s = mpcalc.smooth_n_point(uwnd_900, 9, n_reps)
vwnd_900s = mpcalc.smooth_n_point(vwnd_900, 9, n_reps)
Explanation: QG Omega Forcing Terms
Here is the QG Omega equation from Bluesetein (1992; Eq. 5.6.11) with the two primary forcing terms on the right hand side of this equation.
$$\left(\nabla_p ^2 + \frac{f^2}{\sigma}\frac{\partial ^2}{\partial p^2}\right)\omega =
\frac{f_o}{\sigma}\frac{\partial}{\partial p}\left[\vec{V_g} \cdot \nabla_p \left(\zeta_g + f \right)\right] +
\frac{R}{\sigma p} \nabla_p ^2 \left[\vec{V_g} \cdot \nabla_p T \right]$$
We want to write code that will calculate the differential vorticity advection term (the first term on the r.h.s.) and the laplacian of the temperature advection. We will compute these terms so that they are valid at 700 hPa. Need to set constants for static stability, f0, and Rd.
End of explanation
# Absolute Vorticity Calculation
avor_900 = mpcalc.absolute_vorticity(uwnd_900s, vwnd_900s, dx, dy, lats)
avor_500 = mpcalc.absolute_vorticity(uwnd_500s, vwnd_500s, dx, dy, lats)
# Advection of Absolute Vorticity
vortadv_900 = mpcalc.advection(avor_900, (uwnd_900s, vwnd_900s), (dx, dy)).to_base_units()
vortadv_500 = mpcalc.advection(avor_500, (uwnd_500s, vwnd_500s), (dx, dy)).to_base_units()
# Differential Vorticity Advection between two levels
diff_avor = ((vortadv_900 - vortadv_500)/(400 * units.hPa)).to_base_units()
# Calculation of final differential vorticity advection term
term_A = (-f0 / sigma * diff_avor).to_base_units()
print(term_A.units)
Explanation: Compute Term A - Differential Vorticity Advection
Need to compute:
1. absolute vorticity at two levels (e.g., 500 and 900 hPa)
2. absolute vorticity advection at same two levels
3. centered finite-difference between two levels (e.g., valid at 700 hPa)
4. apply constants to calculate value of full term
End of explanation
# Temperature Advection
# Laplacian of Temperature Advection
# Calculation of final Laplacian of Temperature Advection term
Explanation: Exercise
Compute Term B - Laplacian of Temperature Advection
Need to compute:
1. Temperature advection at 700 hPa (tadv_700)
2. Laplacian of Temp Adv. at 700 hPa (lap_tadv_700)
3. final term B with appropriate constants (term_B)
For information on how to calculate a Laplacian using MetPy, see the documentation on this function.
End of explanation
# %load solutions/term_B_calc.py
Explanation: Solution
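Since the solution file itself is not shown here, below is a minimal sketch of one possible Term B calculation. It assumes MetPy's advection and laplacian functions (with deltas given in (dy, dx) order) and the smoothed variables defined above; the sign convention follows Term A, since MetPy's advection returns the negative of V·grad(T). The loaded solution may differ in detail.
# Temperature Advection at 700 hPa
tadv_700 = mpcalc.advection(tmpk_700s, (uwnd_700s, vwnd_700s), (dx, dy)).to_base_units()
# Laplacian of Temperature Advection
lap_tadv_700 = mpcalc.laplacian(tadv_700, deltas=(dy, dx))
# Calculation of final Laplacian of Temperature Advection term
term_B = (-Rd / (sigma * (700 * units.hPa)) * lap_tadv_700).to_base_units()
print(term_B.units)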
End of explanation
# Set some contour intervals for various parameters
# CINT 500 hPa Heights
clev_hght_500 = np.arange(0, 7000, 60)
# CINT 700 hPa Heights
clev_hght_700 = np.arange(0, 7000, 30)
# CINT 700 hPa Temps
clev_tmpc_700 = np.arange(-40, 40, 5)
# CINT Omega terms
clev_omega = np.arange(-20, 21, 2)
# Set some projections for our data (Plate Carree)
# and output maps (Lambert Conformal)
# Data projection; NARR Data is Earth Relative
dataproj = ccrs.PlateCarree()
# Plot projection
# The look you want for the view, LambertConformal for mid-latitude view
plotproj = ccrs.LambertConformal(central_longitude=-100.,
central_latitude=40.,
standard_parallels=[30, 60])
Explanation: Four Panel Plot
Upper-left Panel: 700-hPa Geopotential Heights, Temperature, and Winds
Upper-right Panel: 500-hPa Geopotential Heights, Absolute Vorticity, and Winds
Lower-left Panel: Term B (Laplacian of Temperature Advection)
Lower-right Panel: Term A (Laplacian of differential Vorticity Advection)
End of explanation
# Set figure size
fig=plt.figure(1, figsize=(24.5,17.))
# Format the valid time
vtime_str = str(vtime.dt.strftime('%Y-%m-%d %H%MZ').values)
# Upper-Left Panel
ax=plt.subplot(221, projection=plotproj)
ax.set_extent([-125., -73, 25., 50.],ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE, linewidth=0.5)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
cs = ax.contour(lons, lats, hght_700, clev_hght_700,colors='k',
linewidths=1.5, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lons, lats, tmpc_700s, clev_tmpc_700, colors='grey',
linewidths=1.0, linestyles='dotted', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d',
rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lons, lats, tadv_700*10**4, np.arange(-10,10.1,0.5),
cmap=plt.cm.bwr, extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50, extendrect=True)
# Vector
ax.barbs(lons.m, lats.m, uwnd_700s.to('kts').m, vwnd_700s.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('700-hPa Geopotential Heights (m), Temperature (C),\n'
'Winds (kts), and Temp Adv. ($*10^4$ C/s)',loc='left')
plt.title('VALID: ' + vtime_str, loc='right')
# Upper-Right Panel
ax=plt.subplot(222, projection=plotproj)
ax.set_extent([-125., -73, 25., 50.],ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE, linewidth=0.5)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
clev500 = np.arange(0,7000,60)
cs = ax.contour(lons, lats, hght_500, clev500, colors='k',
linewidths=1.5, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lons, lats, avor_500*10**5, np.arange(-40, 50, 3),colors='grey',
linewidths=1.0, linestyles='dotted', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d',
rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lons, lats, vortadv_500*10**8, np.arange(-2, 2.2, 0.2),
cmap=plt.cm.BrBG, extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50, extendrect=True)
# Vector
ax.barbs(lons.m, lats.m, uwnd_500s.to('kts').m, vwnd_500s.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('500-hPa Geopotential Heights (m), Winds (kt), and\n'
'Absolute Vorticity Advection ($*10^{8}$ 1/s^2)',loc='left')
plt.title('VALID: ' + vtime_str, loc='right')
# Lower-Left Panel
ax=plt.subplot(223, projection=plotproj)
ax.set_extent([-125., -73, 25., 50.],ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE, linewidth=0.5)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
cs = ax.contour(lons, lats, hght_700s, clev_hght_700, colors='k',
linewidths=1.5, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lons, lats, tmpc_700s, clev_tmpc_700, colors='grey',
linewidths=1.0, transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d',
rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lons, lats, term_B*10**12, clev_omega,
cmap=plt.cm.RdYlBu_r, extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50, extendrect=True)
# Vector
ax.barbs(lons.m, lats.m, uwnd_700s.to('kts').m, vwnd_700s.to('kts').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('700-hPa Geopotential Heights (m), Winds (kt), and\n'
'Term B QG Omega ($*10^{12}$ kg m$^{-3}$ s$^{-3}$)',loc='left')
plt.title('VALID: ' + vtime_str, loc='right')
# Lower-Right Panel
ax=plt.subplot(224, projection=plotproj)
ax.set_extent([-125., -73, 25., 50.],ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE, linewidth=0.5)
ax.add_feature(cfeature.STATES, linewidth=0.5)
# Contour #1
cs = ax.contour(lons, lats, hght_500s, clev500, colors='k',
linewidths=1.5, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Contour #2
cs2 = ax.contour(lons, lats, avor_500*10**5, np.arange(-40, 50, 3), colors='grey',
linewidths=1.0, linestyles='dotted', transform=dataproj)
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=3, fmt='%d',
rightside_up=True, use_clabeltext=True)
# Colorfill
cf = ax.contourf(lons, lats, term_A*10**12, clev_omega,
cmap=plt.cm.RdYlBu_r, extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50, extendrect=True)
# Vector
ax.barbs(lons.m, lats.m, uwnd_500s.to('kt').m, vwnd_500s.to('kt').m,
regrid_shape=15, transform=dataproj)
# Titles
plt.title('500-hPa Geopotential Heights (m), Winds (kt), and\n'
'Term A QG Omega ($*10^{12}$ kg m$^{-3}$ s$^{-3}$)',loc='left')
plt.title('VALID: ' + vtime_str, loc='right')
plt.show()
Explanation: Start 4-panel Figure
End of explanation
# %load solutions/qg_omega_total_fig.py
Explanation: Exercise
Plot the combined QG Omega forcing terms (term_A + term_B) in a single panel
BONUS: Compute a difference map of Term A and Term B and plot
Solution
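Since the solution file is not shown here, a minimal sketch of one way to plot the combined forcing in a single panel (assuming term_A and term_B computed above) is:
# Combined QG Omega forcing (Term A + Term B) - sketch, not the official solution
fig = plt.figure(figsize=(12., 10.))
ax = plt.subplot(111, projection=plotproj)
ax.set_extent([-125., -73, 25., 50.], ccrs.PlateCarree())
ax.add_feature(cfeature.COASTLINE, linewidth=0.5)
ax.add_feature(cfeature.STATES, linewidth=0.5)
cs = ax.contour(lons, lats, hght_700s, clev_hght_700, colors='k',
linewidths=1.5, linestyles='solid', transform=dataproj)
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=3, fmt='%i')
cf = ax.contourf(lons, lats, (term_A + term_B)*10**12, clev_omega,
cmap=plt.cm.RdYlBu_r, extend='both', transform=dataproj)
plt.colorbar(cf, orientation='horizontal', pad=0.0, aspect=50, extendrect=True)
plt.title('700-hPa Geopotential Heights (m) and Total QG Omega Forcing\n'
'(Term A + Term B) ($*10^{12}$ kg m$^{-3}$ s$^{-3}$)', loc='left')
plt.title('VALID: ' + vtime_str, loc='right')
plt.show()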
End of explanation |
5,663 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Universal Sentence Encoder
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 有关安装 Tensorflow 的更多详细信息,请访问 https
Step3: 语义文本相似度任务示例
Universal Sentence Encoder 生成的嵌入向量会被近似归一化。两个句子的语义相似度可以作为编码的内积轻松进行计算。
Step4: 可视化相似度
下面,我们在热图中显示相似度。最终的图形是一个 9x9 矩阵,其中每个条目 [i, j] 都根据句子 i 和 j 的编码的内积进行着色。
Step5: 评估:STS(语义文本相似度)基准
STS 基准会根据从句子嵌入向量计算得出的相似度得分与人为判断的一致程度,提供内部评估。该基准要求系统为不同的句子对选择返回相似度得分。然后使用皮尔逊相关来评估机器相似度得分相对于人为判断的质量。
下载数据
Step7: 评估句子嵌入向量 | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
%%capture
!pip3 install seaborn
Explanation: Universal Sentence Encoder
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://tensorflow.google.cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View 在 TensorFlow.org 上查看</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行 </a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 中查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
<td> <a href="https://tfhub.dev/s?q=google%2Funiversal-sentence-encoder%2F4%20OR%20google%2Funiversal-sentence-encoder-large%2F5"><img src="https://tensorflow.google.cn/images/hub_logo_32px.png">查看 TF Hub 模型</a> </td>
</table>
This notebook illustrates how to access the Universal Sentence Encoder and use it for sentence similarity and sentence classification tasks.
The Universal Sentence Encoder makes getting sentence-level embeddings as easy as it has historically been to look up embeddings for individual words. The sentence embeddings can then be trivially used to compute sentence-level semantic similarity, as well as to enable better performance on downstream classification tasks using less supervised training data.
Setup
This section sets up the environment for accessing the Universal Sentence Encoder on TF Hub, and provides examples of applying the encoder to words, sentences, and paragraphs.
End of explanation
#@title Load the Universal Sentence Encoder's TF Hub module
from absl import logging
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4" #@param ["https://tfhub.dev/google/universal-sentence-encoder/4", "https://tfhub.dev/google/universal-sentence-encoder-large/5"]
model = hub.load(module_url)
print ("module %s loaded" % module_url)
def embed(input):
return model(input)
#@title Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
message_embeddings = embed(messages)
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
Explanation: For more details about installing TensorFlow, please visit https://tensorflow.google.cn/install/.
End of explanation
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(messages_):
message_embeddings_ = embed(messages_)
plot_similarity(messages_, message_embeddings_, 90)
Explanation: Semantic textual similarity task example
The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be computed simply as the inner product of their encodings.
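For example, a quick check of this inner-product similarity for a single pair of sentences (a sketch using the embed helper defined above) might look like:
emb = embed(["How old are you?", "what is your age?"])
print("inner-product similarity:", float(np.inner(emb[0], emb[1])))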
End of explanation
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
run_and_plot(messages)
Explanation: Visualize the similarity
Below, we display the similarity in a heatmap. The final graph is a 9x9 matrix in which each entry [i, j] is colored according to the inner product of the encodings of sentences i and j.
End of explanation
import pandas
import scipy
import math
import csv
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = pandas.read_table(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"),
error_bad_lines=False,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
sts_test = pandas.read_table(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"),
error_bad_lines=False,
quoting=csv.QUOTE_NONE,
skip_blank_lines=True,
usecols=[4, 5, 6],
names=["sim", "sent_1", "sent_2"])
# cleanup some NaN values in sts_dev
sts_dev = sts_dev[[isinstance(s, str) for s in sts_dev['sent_2']]]
Explanation: Evaluation: the STS (Semantic Textual Similarity) Benchmark
The STS Benchmark provides an intrinsic evaluation of how well the similarity scores computed from sentence embeddings agree with human judgments. The benchmark requires systems to return similarity scores for a varied selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against the human judgments.
Download the data
End of explanation
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
def run_sts_benchmark(batch):
sts_encode1 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_1'].tolist())), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(tf.constant(batch['sent_2'].tolist())), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
scores = 1.0 - tf.acos(clip_cosine_similarities) / math.pi
# Return the similarity scores for the batch
return scores
dev_scores = sts_data['sim'].tolist()
scores = []
for batch in np.array_split(sts_data, 10):
scores.extend(run_sts_benchmark(batch))
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
Explanation: Evaluate the sentence embeddings
End of explanation |
5,664 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 STYLE="background
Step1: <h2 STYLE="background
Step2: <h4 style="padding
Step3: <h2 STYLE="background
Step4: <h4 style="padding
Step5: <h4 style="border-bottom
Step6: <h4 style="padding
Step7: <h2 STYLE="background
Step8: <h4 style="padding | Python Code:
import numpy as np # numerical computation library
import scipy as sp # scientific computing library
from scipy import stats # statistical routines
Explanation: <h1 STYLE="background: #c2edff;padding: 0.5em;">Step 2. Statistical Hypothesis Testing</h1>
<ol>
<li><a href="#1">カイ2乗検定</a>
<li><a href="#2">t検定</a>
<li><a href="#3">分散分析</a>
</ol>
<h4 style="border-bottom: solid 1px black;">Step 2 の目標</h4>
統計的検定により、複数のグループに差があるかどうか検定する。
End of explanation
significance = 0.05
o = [17, 10, 6, 7, 15, 5] # observed counts
e = [10, 10, 10, 10, 10, 10] # expected (theoretical) counts
chi2, p = stats.chisquare(o, f_exp = e)
print('chi2 value: %(chi2)s' %locals())
print('p-value: %(p)s' %locals())
if p < significance:
print('The difference is significant at the %(significance)s level' %locals())
else:
print('The difference is not significant at the %(significance)s level' %locals())
Explanation: <h2 STYLE="background: #c2edff;padding: 0.5em;"><a name="1">2.1 Chi-squared test</a></h2>
The chi-squared test is a method for testing whether two distributions are the same.
A die was rolled 60 times and the number of times each face (サイコロの目) appeared was counted (出現回数), with the results shown below.
<table border=1>
<tr><td>サイコロの目</td>
<td>1</td><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td></tr>
<tr><td>出現回数</td>
<td>17</td><td>10</td><td>6</td><td>7</td><td>15</td><td>5</td></tr>
</table>
Let's test whether these counts follow the theoretical (uniform) distribution.
End of explanation
# Exercise 2.1
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">練習2.1</h4>
ある野菜をA方式で育てたものとB方式で育てたものの出荷時の等級が次の表のようになったとき,これらの育て方と製品の等級には関連があると見るべきかどうか
<table border="1" bgcolor="#FFFFFF" cellpadding="0" cellspacing="0" align="center">
<tr>
<th width="100" height="30" bgcolor="#CCCC99"></th>
<th width="81" height="30" bgcolor="#FFFFCC"> 優 </th>
<th width="100" height="30" bgcolor="#FFCCCC"> 良 </th>
<th width="100" height="30" bgcolor="#99FFCC"> 可 </th>
<th width="100" height="30" bgcolor="#CCCCCC">計</th>
</tr>
<tr align="center">
<td width="100" height="30" bgcolor="#CCCC99"> A方式 </td>
<td width="100" height="30" bgcolor="#FFFFFF"> 12</td>
<td width="100" height="30" bgcolor="#FFFFFF">30</td>
<td width="100" height="30" bgcolor="#FFFFFF">58</td>
<td width="100" height="30" bgcolor="#CCCCCC">100</td>
</tr>
<tr align="center">
<td width="119" height="30" bgcolor="#CCCC99"> B方式 </td>
<td width="81" height="30" bgcolor="#FFFFFF"> 14</td>
<td width="100" height="30" bgcolor="#FFFFFF">90</td>
<td width="100" height="30" bgcolor="#FFFFFF">96</td>
<td width="100" height="30" bgcolor="#CCCCCC">200</td>
</tr>
<tr align="center" bgcolor="#CCCCCC">
<td width="119" height="30">計</td>
<td width="81" height="30">26</td>
<td width="100" height="30">120</td>
<td width="100" height="30">154</td>
<td width="100" height="30">300</td>
</tr>
</table>
End of explanation
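One possible approach for this exercise (a sketch, not the notebook's own solution) is a chi-squared test of independence on the observed 2x3 table using scipy.stats.chi2_contingency:
# Exercise 2.1 (sketch): chi-squared test of independence
observed = np.array([[12, 30, 58],
[14, 90, 96]])
chi2, p, dof, expected = stats.chi2_contingency(observed)
print('chi2 value:', chi2, ' p-value:', p, ' degrees of freedom:', dof)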
# Independent (unpaired) t-test
significance = 0.05
X = [68, 75, 80, 71, 73, 79, 69, 65]
Y = [86, 83, 76, 81, 75, 82, 87, 75]
t, p = stats.ttest_ind(X, Y)
print('t value: %(t)s' %locals())
print('p-value: %(p)s' %locals())
if p < significance:
print('The difference is significant at the %(significance)s level' %locals())
else:
print('The difference is not significant at the %(significance)s level' %locals())
Explanation: <h2 STYLE="background: #c2edff;padding: 0.5em;"><a name="2">2.2 t-test</a></h2>
End of explanation
class_one = [70, 75, 70, 85, 90, 70, 80, 75]
class_two = [85, 80, 95, 70, 80, 75, 80, 90]
# Exercise 2.2
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">練習2.2</h4>
6年1組と6年2組の2つのクラスで同一の算数のテストを行い、採点結果が出ました。2つのクラスで点数に差があるかどうか検定してください。
<table border="1" bgcolor="#FFFFFF" cellpadding="0" cellspacing="0" align="center">
<tr align="center">
<th width="120" height="30" bgcolor="#ffffcc"> 6年1組</th>
<th width="120" height="30" bgcolor="#ffffcc"> 点数</th>
<th width="120" height="30" bgcolor="#ffcccc"> 6年2組</th>
<th width="120" height="30" bgcolor="#ffcccc"> 点数</th>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc"> 1</td>
<td width="120" height="30" bgcolor="#FFFFFF"> 70</td>
<td width="120" height="30" bgcolor="#cccccc"> 1</td>
<td width="120" height="30"> 85</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">2</td>
<td width="120" height="30" bgcolor="#FFFFFF">75</td>
<td width="120" height="30" bgcolor="#cccccc">2</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">3</td>
<td width="120" height="30" bgcolor="#FFFFFF">70</td>
<td width="120" height="30" bgcolor="#cccccc">3</td>
<td width="120" height="30">95</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">4</td>
<td width="120" height="30" bgcolor="#FFFFFF">85</td>
<td width="120" height="30" bgcolor="#cccccc">4</td>
<td width="120" height="30">70</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">5</td>
<td width="120" height="30" bgcolor="#FFFFFF">90</td>
<td width="120" height="30" bgcolor="#cccccc">5</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">6</td>
<td width="120" height="30" bgcolor="#FFFFFF">70</td>
<td width="120" height="30" bgcolor="#cccccc">6</td>
<td width="120" height="30">75</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">7</td>
<td width="120" height="30" bgcolor="#FFFFFF">80</td>
<td width="120" height="30" bgcolor="#cccccc">7</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">8</td>
<td width="120" height="30" bgcolor="#FFFFFF">75</td>
<td width="120" height="30" bgcolor="#cccccc">8</td>
<td width="120" height="30">90</td>
</tr>
</table>
End of explanation
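A sketch of one possible solution (not the notebook's own answer), using the independent two-sample t-test introduced above:
# Exercise 2.2 (sketch): independent t-test between the two classes
t, p = stats.ttest_ind(class_one, class_two)
print('t value:', t, ' p-value:', p)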
# Paired t-test
significance = 0.05
X = [68, 75, 80, 71, 73, 79, 69, 65]
Y = [86, 83, 76, 81, 75, 82, 87, 75]
t, p = stats.ttest_rel(X, Y)
print('t value: %(t)s' %locals())
print('p-value: %(p)s' %locals())
if p < significance:
print('The difference is significant at the %(significance)s level' %locals())
else:
print('The difference is not significant at the %(significance)s level' %locals())
Explanation: <h4 style="border-bottom: solid 1px black;">対応のある t検定</h4>
End of explanation
kokugo = [90, 75, 75, 75, 80, 65, 75, 80]
sansuu = [95, 80, 80, 80, 75, 75, 80, 85]
# Exercise 2.3
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">練習2.3</h4>
国語と算数の点数に差があるかどうか検定してください。
<table border="1" bgcolor="#FFFFFF" cellpadding="0" cellspacing="0" align="center">
<tr align="center">
<th width="120" height="30" bgcolor="#ffffcc">6年1組</th>
<th width="120" height="30" bgcolor="#ffffcc">国語</th>
<th width="120" height="30" bgcolor="#ffcccc">算数</th>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc"> 1</td>
<td width="120" height="30" bgcolor="#FFFFFF"> 90</td>
<td width="120" height="30"> 95</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">2</td>
<td width="120" height="30" bgcolor="#FFFFFF">75</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">3</td>
<td width="120" height="30" bgcolor="#FFFFFF">75</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">4</td>
<td width="120" height="30" bgcolor="#FFFFFF">75</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">5</td>
<td width="120" height="30" bgcolor="#FFFFFF">80</td>
<td width="120" height="30">75</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">6</td>
<td width="120" height="30" bgcolor="#FFFFFF">65</td>
<td width="120" height="30">75</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">7</td>
<td width="120" height="30" bgcolor="#FFFFFF">75</td>
<td width="120" height="30">80</td>
</tr>
<tr align="center">
<td width="120" height="30" bgcolor="#cccccc">8</td>
<td width="120" height="30" bgcolor="#FFFFFF">80</td>
<td width="120" height="30">85</td>
</tr>
</table>
End of explanation
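A sketch of one possible solution (not the notebook's own answer), using the paired t-test introduced above:
# Exercise 2.3 (sketch): paired t-test between the two subjects
t, p = stats.ttest_rel(kokugo, sansuu)
print('t value:', t, ' p-value:', p)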
# One-way analysis of variance (ANOVA)
significance = 0.05
a = [34, 39, 50, 72, 54, 50, 58, 64, 55, 62]
b = [63, 75, 50, 54, 66, 31, 39, 45, 48, 60]
c = [49, 36, 46, 56, 52, 46, 52, 68, 49, 62]
f, p = stats.f_oneway(a, b, c)
print('F value: %(f)s' %locals())
print('p-value: %(p)s' %locals())
if p < significance:
print('The difference is significant at the %(significance)s level' %locals())
else:
print('The difference is not significant at the %(significance)s level' %locals())
Explanation: <h2 STYLE="background: #c2edff;padding: 0.5em;"><a name="3">2.3 Analysis of Variance (ANOVA)</a></h2>
End of explanation
group1 = [80, 75, 80, 90, 95, 80, 80, 85, 85, 80, 90, 80, 75, 90, 85, 85, 90, 90, 85, 80]
group2 = [75, 70, 80, 85, 90, 75, 85, 80, 80, 75, 80, 75, 70, 85, 80, 75, 80, 80, 90, 80]
group3 = [80, 80, 80, 90, 95, 85, 95, 90, 85, 90, 95, 85, 98, 95, 85, 85, 90, 90, 85, 85]
# Exercise 2.4
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;">練習2.4</h4>
下記のデータを用いて、分散分析を行ってください。
End of explanation |
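A sketch of one possible solution (not the notebook's own answer), using the one-way ANOVA introduced above:
# Exercise 2.4 (sketch): one-way ANOVA across the three groups
f, p = stats.f_oneway(group1, group2, group3)
print('F value:', f, ' p-value:', p)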
5,665 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Detect subframe preamble by majority voting amongst the possible offsets.
Step1: Most subframes do not have valid parity, as shown below. We use a weaker heuristic, where only parity of TLM and HOW words are required to be valid. With this criterion, all the subframes are valid, except the first few and last subframes.
Step2: Analysis of the TOW in valid frames. It ranges between 298338 and 301728, corresponding to 10
Step3: The subframe ID in the HOW word cycles as usual.
Step4: Alert and anti-spoofing flags in the HOW word are not set.
Step5: For subframes 1 (WN, health and clock), 2 and 3 (ephemeris), a filler of alternating 1's and 0's is transmitted in all the words after the HOW (including the parity bits). This makes parity invalid for these subframes.
Step6: On the other hand, the parity for subframes 4 and 5 (almanacs) is correct.
Step7: Data ID field for subframes 4 and 5 has the nominal value 01.
Step8: The SVID in subframes 4 and 5 follows the nominal schedule, except that SVID 4 has been replaced with 0 to indicate dummy SV. This is normal, since PRN 4 is not currently assigned.
Step9: For subframe 5, we omit the study of pages 1 through 24, which contain almanac data and we assume to be valid. We study page 25, which is marked by SVID 51 and contains SV health.
Step10: The t_oa and WN_a for page 25 correspond to times near the beginning and end of GPS week 2059.
Step11: SV health in subframe 5 page 25 indicates that all SV except SV 4 are healthy.
Step12: The anti-spoofing and SV configurations flags in subframe 4 page 25 indicate that AS is on for all SVs and different signal capabilities for different SVs.
Step13: The health flags in subframe 4 page 25 indicate that SV 25 to 32 are all healthy.
Step14: Below we show t_oa for almanac entries in subframes 4 and 5. | Python Code:
preamble = np.array([1,0,0,0,1,0,1,1], dtype = 'uint8')
preamble_detect = np.where(np.abs(np.correlate(2*bits.astype('int')-1, 2*preamble.astype('int')-1)) == 8)[0]
preamble_offset = np.argmax(np.histogram(preamble_detect % subframe_size, bins = np.arange(0,subframe_size))[0])
subframes = bits[preamble_offset:]
subframes = subframes[:subframes.size//subframe_size*subframe_size].reshape((-1,subframe_size))
words = subframes.reshape((-1,word_size))
# Last bits from previous word, used for parity calculations
words_last = np.roll(words[:,-1], 1)
words_prelast = np.roll(words[:,-2], 1)
# Correct data using last bit from previous word
words_data = words[:, :word_data_size] ^ words_last.reshape((-1,1))
subframes_data = words_data.reshape((-1,subframe_data_size))
# Parity checks for each of the bits (0 means valid)
parity0 = np.bitwise_xor.reduce(words_data[:, np.array([1,2,3,5,6,10,11,12,13,14,17,18,20,23])-1], axis = 1) ^ words_prelast ^ words[:,word_data_size]
parity1 = np.bitwise_xor.reduce(words_data[:, np.array([2,3,4,6,7,11,12,13,14,15,18,19,21,24])-1], axis = 1) ^ words_last ^ words[:,word_data_size+1]
parity2 = np.bitwise_xor.reduce(words_data[:, np.array([1,3,4,5,7,8,12,13,14,15,16,19,20,22])-1], axis = 1) ^ words_prelast ^ words[:,word_data_size+2]
parity3 = np.bitwise_xor.reduce(words_data[:, np.array([2,4,5,6,8,9,13,14,15,16,17,20,21,23])-1], axis = 1) ^ words_last ^ words[:,word_data_size+3]
parity4 = np.bitwise_xor.reduce(words_data[:, np.array([1,3,5,6,7,9,10,14,15,16,17,18,21,22,24])-1], axis = 1) ^ words_last ^ words[:,word_data_size+4]
parity5 = np.bitwise_xor.reduce(words_data[:, np.array([3,5,6,8,9,10,11,13,15,19,22,23,24])-1], axis = 1) ^ words_prelast ^ words[:,word_data_size+5]
# Parity check for word
parity = parity0 | parity1 | parity2 | parity3 | parity4 | parity5
# Parity check for subframe
parity_subframe = np.any(parity.reshape((-1,subframe_size//word_size)), axis = 1)
Explanation: Detect subframe preamble by majority voting amongst the possible offsets.
End of explanation
parity_subframe
correct_frames = (parity[::10] == 0) & (parity[1::10] == 0)
plt.plot(correct_frames)
Explanation: Most subframes do not have valid parity, as shown below. We use a weaker heuristic, where only parity of TLM and HOW words are required to be valid. With this criterion, all the subframes are valid, except the first few and last subframes.
End of explanation
tow = np.sum(words_data[1::10,:17].astype('int') * 2**np.arange(16,-1,-1), axis = 1) * 6
plt.plot(np.arange(tow.size)[correct_frames], tow[correct_frames])
Explanation: Analysis of the TOW in valid frames. It ranges between 298338 and 301728, corresponding to 10:52:18 and 11:48:48 on Wednesday (2019-06-26).
End of explanation
subframe_id = np.packbits(words_data[1::10,19:22], axis = 1).ravel() >> 5
subframe_id[correct_frames]
Explanation: The subframe ID in the HOW word cycles as usual.
End of explanation
np.any(words_data[1::10,17:19][correct_frames])
Explanation: Alert and anti-spoofing flags in the HOW word are not set.
End of explanation
filler_subframe = subframes[correct_frames & (subframe_id == 1), 60:][0,:]
filler_subframe
np.any(subframes[correct_frames & (subframe_id <= 3), 60:] ^ filler_subframe, axis = 1)
np.all(parity_subframe[correct_frames & (subframe_id <= 3)])
Explanation: For subframes 1 (WN, health and clock), 2 and 3 (ephemeris), a filler of alternating 1's and 0's is transmitted in all the words after the HOW (including the parity bits). This makes parity invalid for these subframes.
End of explanation
np.all(parity_subframe[correct_frames & (subframe_id >= 4)])
Explanation: On the other hand, the parity for subframes 4 and 5 (almanacs) is correct.
End of explanation
np.any(subframes_data[correct_frames & (subframe_id >= 4), 2*word_data_size:2*word_data_size+2] ^ np.array([0,1]))
Explanation: Data ID field for subframes 4 and 5 has the nominal value 01.
End of explanation
svid_subframe4 = np.packbits(subframes_data[correct_frames & (subframe_id == 4), 2*word_data_size+2:2*word_data_size+8], axis = 1).ravel() >> 2
svid_subframe5 = np.packbits(subframes_data[correct_frames & (subframe_id == 5), 2*word_data_size+2:2*word_data_size+8], axis = 1).ravel() >> 2
svid_subframe4
svid_subframe5
Explanation: The SVID in subframes 4 and 5 follows the nominal schedule, except that SVID 4 has been replaced with 0 to indicate dummy SV. This is normal, since PRN 4 is not currently assigned.
End of explanation
subframe5_page25 = subframes_data[correct_frames & (subframe_id == 5), :][svid_subframe5 == 51, :]
Explanation: For subframe 5, we omit the study of pages 1 through 24, which contain almanac data and we assume to be valid. We study page 25, which is marked by SVID 51 and contains SV health.
End of explanation
toa = np.packbits(subframe5_page25[:,2*word_data_size+8:2*word_data_size+16], axis = 1).ravel().astype('int') * 2**12
wna = np.packbits(subframe5_page25[:,2*word_data_size+16:2*word_data_size+24], axis = 1).ravel() + 2048
toa
wna
Explanation: The t_oa and WN_a for page 25 correspond to times near the beginning and end of GPS week 2059.
End of explanation
subframe5_page25[:,2*word_data_size+24:][:,:6*6*4:].reshape((-1,6*4,6))
Explanation: SV health in subframe 5 page 25 indicates that all SV except SV 4 are healthy.
End of explanation
subframe4_page25 = subframes_data[correct_frames & (subframe_id == 4), :][svid_subframe4 == 63, :]
anti_spoofing = subframe4_page25[:,2*word_data_size+8:][:,:32*4].reshape((-1,32,4))
anti_spoofing
Explanation: The anti-spoofing and SV configurations flags in subframe 4 page 25 indicate that AS is on for all SVs and different signal capabilities for different SVs.
End of explanation
health = subframe4_page25[:,2*word_data_size+8+32*4+2:][:,:6*8].reshape((-1,8,6))
health
Explanation: The health flags in subframe 4 page 25 indicate that SV 25 to 32 are all healthy.
End of explanation
np.packbits(subframes_data[correct_frames & (subframe_id == 5), :][svid_subframe5 <= 24, 3*word_data_size:3*word_data_size+8], axis = 1) * 2**12
np.packbits(subframes_data[correct_frames & (subframe_id == 4), :][svid_subframe4 <= 32, 3*word_data_size:3*word_data_size+8], axis = 1) * 2**12
Explanation: Below we show t_oa for almanac entries in subframes 4 and 5.
End of explanation |
5,666 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Socks, Skeets, and Space Invaders
This notebook contains code from my blog, Probably Overthinking It
Copyright 2016 Allen Downey
MIT License
Step1: Socks
The sock drawer problem
Posed by Yuzhong Huang
Step2: Now I can make a Pmf that represents the two hypotheses
Step4: This function computes the likelihood of the data for a given hypothesis
Step5: Now we can update pmf with these likelihoods
Step6: The return value from Normalize is the total probability of the data, the denominator of Bayes's theorem, also known as the normalizing constant.
And here's the posterior distribution
Step7: The likelihood of getting a pair is higher in Drawer 1, which is 40
Step8: Before seeing the data, the mean of the distribution, which is the expected effectiveness of the blaster, is 0.25.
Step10: Here's how we compute the likelihood of the data. If each blaster takes two shots, there are three ways they can get a tie
Step11: To see what the likelihood function looks like, I'll print the likelihood of a tie for the four hypothetical values of x
Step12: If we multiply each likelihood by the corresponding prior, we get the unnormalized posteriors
Step13: Finally, we can do the update by multiplying the priors in pmf by the likelihoods
Step14: And then normalizing pmf. The result is the total probability of the data.
Step15: And here are the posteriors.
Step16: The lower values of x are more likely, so this evidence makes us downgrade our expectation about the effectiveness of the blaster. The posterior mean is 0.225, a bit lower than the prior mean, 0.25.
Step17: A tie is evidence in favor of extreme values of x.
The Skeet Shooting problem
At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. After 25 shots, they were tied, sending the match into sudden death. In each round of sudden death, each competitor shoots at two targets. In the first three rounds, Rhode and Wei hit the same number of targets. Finally in the fourth round, Rhode hit more targets, so she won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games. Based on this information, should we infer that Rhode and Wei had an unusually good or bad day?
As background information, you can assume that anyone in the Olympic final has about the same probability of hitting 13, 14, 15, or 16 out of 25 targets.
To compute the likelihood function, I'll use binom.pmf, which computes the Binomial PMF. In the following example, the probability of hitting k=10 targets in n=25 attempts, with probability p=13/15 of hitting each target, is about 8%.
Step19: The following function computes the likelihood of tie or no tie after a given number of shots, n, given the hypothetical value of p.
It loops through the possible values of k from 0 to n and uses binom.pmf to compute the probability that each shooter hits k targets. To get the probability that BOTH shooters hit k targets, we square the result.
To get the total likelihood of the outcome, we add up the probability for each value of k.
Step20: Now we can see what that looks like for n=2
Step21: As we saw in the Sock Drawer problem and the Alien Blaster problem, the probability of a tie is highest for extreme values of p, and minimized when p=0.5.
The result is similar when n=25
Step23: In the range we care about (13 through 16) this curve is pretty flat, which means that a tie after the round of 25 doesn't discriminate strongly among the hypotheses.
We could use this likelihood function to run the update, but just for purposes of demonstration, I'll do it using the Suite class from thinkbayes2
Step24: Now I'll create the prior.
Step25: The prior mean is 14.5.
Step26: The higher values are a little more likely, but the effect is pretty small.
Interestingly, the rounds of n=2 provide more evidence in favor of the higher values of p.
Step27: After three rounds of sudden death, we are more inclined to think that the shooters are having a good day.
The fourth round, which ends with no tie, provides a small amount of evidence in the other direction.
Step28: And the posterior mean, after all updates, is a little higher than 14.5, where we started. | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
from thinkbayes2 import Pmf, Hist, Beta
import thinkbayes2
import thinkplot
Explanation: Socks, Skeets, and Space Invaders
This notebook contains code from my blog, Probably Overthinking It
Copyright 2016 Allen Downey
MIT License: http://opensource.org/licenses/MIT
End of explanation
drawer1 = Hist(dict(W=40, B=10), label='Drawer 1')
drawer2 = Hist(dict(W=20, B=30), label='Drawer 2')
drawer1.Print()
Explanation: Socks
The sock drawer problem
Posed by Yuzhong Huang:
There are two drawers of socks. The first drawer has 40 white socks and 10 black socks; the second drawer has 20 white socks and 30 black socks. We randomly get 2 socks from a drawer, and it turns out to be a pair (same color) but we don't know the color of these socks. What is the chance that we picked the first drawer?
Now I'll solve the problem more generally using a Jupyter notebook.
I'll represent the sock drawers with Hist objects, defined in the thinkbayes2 library:
End of explanation
pmf = Pmf([drawer1, drawer2])
pmf.Print()
Explanation: Now I can make a Pmf that represents the two hypotheses:
End of explanation
def likelihood(data, hypo):
"""Likelihood of the data under the hypothesis.
data: string 'same' or 'different'
hypo: Hist object with the number of each color
returns: float likelihood
"""
probs = Pmf(hypo)
prob_same = probs['W']**2 + probs['B']**2
if data == 'same':
return prob_same
else:
return 1-prob_same
Explanation: This function computes the likelihood of the data for a given hypothesis:
End of explanation
data = 'same'
pmf[drawer1] *= likelihood(data, drawer1)
pmf[drawer2] *= likelihood(data, drawer2)
pmf.Normalize()
Explanation: Now we can update pmf with these likelihoods
End of explanation
pmf.Print()
Explanation: The return value from Normalize is the total probability of the data, the denominator of Bayes's theorem, also known as the normalizing constant.
And here's the posterior distribution:
End of explanation
pmf = Pmf([0.1, 0.2, 0.3, 0.4])
pmf.Print()
Explanation: The likelihood of getting a pair is higher in Drawer 1, which is 40:10, than in Drawer 2, which is 30:20.
In general, the probability of getting a pair is highest if the drawer contains only one color of sock, and lowest if the proportion is 50:50.
So getting a pair is evidence that the drawer is more likely to have a high (or low) proportion of one color, and less likely to be balanced.
The Alien Blaster problem
In preparation for an alien invasion, the Earth Defense League has been working on new missiles to shoot down space invaders. Of course, some missile designs are better than others; let's assume that each design has some probability of hitting an alien ship, x.
Based on previous tests, the distribution of x in the population of designs is roughly uniform between 10% and 40%. To approximate this distribution, we'll assume that x is either 10%, 20%, 30%, or 40% with equal probability.
Now suppose the new ultra-secret Alien Blaster 10K is being tested. In a press conference, an EDF general reports that the new design has been tested twice, taking two shots during each test. The results of the test are confidential, so the general won't say how many targets were hit, but they report: ``The same number of targets were hit in the two tests, so we have reason to think this new design is consistent.''
Is this data good or bad; that is, does it increase or decrease your estimate of x for the Alien Blaster 10K?
I'll start by creating a Pmf that represents the four hypothetical values of x:
End of explanation
pmf.Mean()
Explanation: Before seeing the data, the mean of the distribution, which is the expected effectiveness of the blaster, is 0.25.
End of explanation
def likelihood(hypo, data):
"""Likelihood of the data under hypo.
hypo: probability of a hit, x
data: 'tie' or 'no tie'
"""
x = hypo
like = x**4 + (2 * x * (1-x))**2 + (1-x)**4
if data == 'tie':
return like
else:
return 1-like
Explanation: Here's how we compute the likelihood of the data. If each blaster takes two shots, there are three ways they can get a tie: they both get 0, 1, or 2. If the probability that either blaster gets a hit is x, the probabilities of these outcomes are:
both 0: (1-x)**4
both 1: (2 * x * (1-x))**2
both 2: x**4
Here's the likelihood function that computes the total probability of the three outcomes:
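As a quick sanity check (an editorial sketch, not part of the original notebook), the closed-form expression agrees with a direct binomial enumeration of the tie probability:
from scipy.stats import binom
for x in [0.1, 0.2, 0.3, 0.4]:
    closed_form = x**4 + (2 * x * (1-x))**2 + (1-x)**4
    enumerated = sum(binom.pmf(k, 2, x)**2 for k in range(3))
    print(x, closed_form, enumerated)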
End of explanation
data = 'tie'
for hypo in sorted(pmf):
like = likelihood(hypo, data)
print(hypo, like)
Explanation: To see what the likelihood function looks like, I'll print the likelihood of a tie for the four hypothetical values of x:
End of explanation
for hypo in sorted(pmf):
unnorm_post = pmf[hypo] * likelihood(hypo, data)
print(hypo, pmf[hypo], unnorm_post)
Explanation: If we multiply each likelihood by the corresponding prior, we get the unnormalized posteriors:
End of explanation
for hypo in pmf:
pmf[hypo] *= likelihood(hypo, data)
Explanation: Finally, we can do the update by multiplying the priors in pmf by the likelihoods:
End of explanation
pmf.Normalize()
Explanation: And then normalizing pmf. The result is the total probability of the data.
End of explanation
pmf.Print()
Explanation: And here are the posteriors.
End of explanation
pmf.Mean()
Explanation: The lower values of x are more likely, so this evidence makes us downgrade our expectation about the effectiveness of the blaster. The posterior mean is 0.225, a bit lower than the prior mean, 0.25.
End of explanation
from scipy.stats import binom
k = 10
n = 25
p = 13/25
binom.pmf(k, n, p)
Explanation: A tie is evidence in favor of extreme values of x.
The Skeet Shooting problem
At the 2016 Summer Olympics in the Women's Skeet event, Kim Rhode faced Wei Meng in the bronze medal match. After 25 shots, they were tied, sending the match into sudden death. In each round of sudden death, each competitor shoots at two targets. In the first three rounds, Rhode and Wei hit the same number of targets. Finally in the fourth round, Rhode hit more targets, so she won the bronze medal, making her the first Summer Olympian to win an individual medal at six consecutive summer games. Based on this information, should we infer that Rhode and Wei had an unusually good or bad day?
As background information, you can assume that anyone in the Olympic final has about the same probability of hitting 13, 14, 15, or 16 out of 25 targets.
To compute the likelihood function, I'll use binom.pmf, which computes the Binomial PMF. In the following example, the probability of hitting k=10 targets in n=25 attempts, with probability p=13/25 of hitting each target, is about 8%.
End of explanation
def likelihood(data, hypo):
"""Likelihood of data under hypo.
data: tuple of (number of shots, 'tie' or 'no tie')
hypo: hypothetical number of hits out of 25
"""
p = hypo / 25
n, outcome = data
like = sum([binom.pmf(k, n, p)**2 for k in range(n+1)])
return like if outcome=='tie' else 1-like
Explanation: The following function computes the likelihood of tie or no tie after a given number of shots, n, given the hypothetical value of p.
It loops through the possible values of k from 0 to n and uses binom.pmf to compute the probability that each shooter hits k targets. To get the probability that BOTH shooters hit k targets, we square the result.
To get the total likelihood of the outcome, we add up the probability for each value of k.
End of explanation
data = 2, 'tie'
hypos = range(0, 26)
likes = [likelihood(data, hypo) for hypo in hypos]
thinkplot.Plot(hypos, likes)
thinkplot.Config(xlabel='Probability of a hit (out of 25)',
ylabel='Likelihood of a tie',
ylim=[0, 1])
Explanation: Now we can see what that looks like for n=2
End of explanation
data = 25, 'tie'
hypos = range(0, 26)
likes = [likelihood(data, hypo) for hypo in hypos]
thinkplot.Plot(hypos, likes)
thinkplot.Config(xlabel='Probability of a hit (out of 25)',
ylabel='Likelihood of a tie',
ylim=[0, 1])
Explanation: As we saw in the Sock Drawer problem and the Alien Blaster problem, the probability of a tie is highest for extreme values of p, and minimized when p=0.5.
The result is similar when n=25:
End of explanation
from thinkbayes2 import Suite
class Skeet(Suite):
def Likelihood(self, data, hypo):
"""Likelihood of data under hypo.
data: tuple of (number of shots, 'tie' or 'no tie')
hypo: hypothetical number of hits out of 25
"""
p = hypo / 25
n, outcome = data
like = sum([binom.pmf(k, n, p)**2 for k in range(n+1)])
return like if outcome=='tie' else 1-like
Explanation: In the range we care about (13 through 16) this curve is pretty flat, which means that a tie after the round of 25 doesn't discriminate strongly among the hypotheses.
We could use this likelihood function to run the update, but just for purposes of demonstration, I'll do it using the Suite class from thinkbayes2:
End of explanation
suite = Skeet([13, 14, 15, 16])
suite.Print()
Explanation: Now I'll create the prior.
End of explanation
suite.Mean()
suite.Update((25, 'tie'))
suite.Print()
Explanation: The prior mean is 14.5.
End of explanation
suite.Update((2, 'tie'))
suite.Print()
suite.Update((2, 'tie'))
suite.Print()
suite.Update((2, 'tie'))
suite.Print()
Explanation: The higher values are a little more likely, but the effect is pretty small.
Interestingly, the rounds of n=2 provide more evidence in favor of the higher values of p.
End of explanation
suite.Update((2, 'no tie'))
suite.Print()
Explanation: After three rounds of sudden death, we are more inclined to think that the shooters are having a good day.
The fourth round, which ends with no tie, provides a small amount of evidence in the other direction.
End of explanation
suite.Mean()
Explanation: And the posterior mean, after all updates, is a little higher than 14.5, where we started.
End of explanation |
5,667 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning in Python
Get the code
Step1: So building and training Neural Networks in Python in simple!
But it is also powerful!
Neural Style Transfer
Step2: Let's load some data
Step6: and then visualise it
Step7: Data Preprocessing
Transform "images" to "features" ...
Most machine learning algorithms expect a flat array of numbers
Step8: Split the data into a "training" and "test" set ...
Step9: Transform the labels to a "one-hot" encoding ...
Step10: For example, let's inspect the first 2 labels
Step11: Simple Multi-Layer Perceptron (MLP)
The simplest kind of Artificial Neural Network is as Multi-Layer Perceptron (MLP) with a single hidden layer.
Step12: First we define the "architecture" of the network
Step13: then we compile it. This takes the symbolic computational graph of the model and compiles it an efficient implementation which can then be used to train and evaluate the model.
Note that we have to specify what loss/objective function we want to use as well which optimisation algorithm to use. SGD stands for Stochastic Gradient Descent.
Step14: Next we train the model on our training data. Watch the loss, which is the objective function which we are minimising, and the estimated accuracy of the model.
Step15: Once the model is trained, we can evaluate its performance on the test data.
Step16: Deep Learning
Why do we want Deep Neural Networks?
Universal Approximation Theorem
The theorem thus states that simple neural networks can represent
a wide variety of interesting functions when given appropriate parameters;
however, it does not touch upon the algorithmic learnability of those parameters.
Power of combinations
On the (Small) Number of Atoms in the Universe
On the number of Go positions
While <a href="https
Step17: Did you notice anything about the accuracy? Let's train it some more.
Step18: Autoencoders
Hinton 2006 (local)
Step19: A better Autoencoder
Step20: Stacked Autoencoder
Step21: Visualising the Filters | Python Code:
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
batch_size = 128
nb_classes = 10
nb_epoch = 15
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)
# %load ..\keras\examples\mnist_cnn.py
'''Trains a simple convnet on the MNIST dataset.
Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''
from __future__ import print_function
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.utils import np_utils
from keras import backend as K
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
(images_train, labels_train), (images_test, labels_test) = (X_train, y_train), (X_test, y_test)
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
border_mode='valid',
input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
print("Compiling the model ...")
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
#model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
# verbose=1, validation_data=(X_test, Y_test))
import os
for epoch in range(1, nb_epoch+1):
weights_path = "models/mnist_cnn_{}_weights.h5".format(epoch)
if os.path.exists(weights_path):
print("Loading precomputed weights for epoch {} ...".format(epoch))
model.load_weights(weights_path)
print('Evaluating the model on the test set ...')
score = model.evaluate(X_test, Y_test, verbose=1)
print('Test score:', score[0])
print('Test accuracy:', score[1])
else:
print("Fitting the model for epoch {} ...".format(epoch))
model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=1,
validation_data=(X_test, Y_test), verbose=1)
model.save_weights(weights_path)
print('Evaluating the model on the test set ...')
score = model.evaluate(X_test, Y_test, verbose=1)
print('Test score:', score[0])
print('Test accuracy:', score[1])
Explanation: Deep Learning in Python
Get the code: github.com/snth/ctdeep
Tobias Brandt
<img src="img/argon_logo.png" align=left width=200>
<!-- <img src="http://www.argonassetmanagement.co.za/css/img/logo.png" align=left width=200> -->
About Me
ex-physicist, quant, pythonista
github.com/snth
[email protected]
Member of Cape Town Deep Learning Meetup
Tutorial Outline
Deep Learning in Python is simple and powerful!
An introduction to (Artificial) Neural Networks
An introduction to Deep Learning
Requirements
Python 3.4 (or legacy Python 2.7)
Keras >= 1.0.0
Theano or Tensorflow
git clone https://github.com/snth/ctdeep.git
Deep Learning in Python is simple
End of explanation
from __future__ import absolute_import, print_function, division
from ipywidgets import interact, interactive, widgets
import numpy as np
np.random.seed(1337) # for reproducibility
Explanation: So building and training Neural Networks in Python is simple!
But it is also powerful!
Neural Style Transfer: github.com/titu1994/Neural-Style-Transfer
<font size=20>
<table border="0"><tr>
<td><img src="img/golden_gate.jpg" width=250></td>
<td>+</td>
<td><img src="img/starry_night.jpg" width=250></td>
<td>=</td>
<td><img src="img/golden_gate_iteration_20.png" width=250></td>
</tr></table>
</font>
Neural Networks
<!--
[Yann LeCun Slides](https://drive.google.com/folderview?id=0BxKBnD5y2M8NclFWSXNxa0JlZTg&usp=drive_web) ([local](pdf/000c-yann-lecun-lecon-inaugurale-college-de-france-20160204.pdf))
 -->
<img src='img/LeCun_flight.png' align='middle' width=800>
A neuron looks something like this
Symbolically we can represent the key parts we want to model as
In order to build an artificial "brain" we need to connect together many neurons in a "neural network"
We can model the response of each neuron with various activation functions
Training a Neural Network
<!--
 -->
<img src='img/perceptron_node.png' align='middle' width=400>
Mathematically the activation of each neuron can be represented by
where $W$ and $b$ are the weights and bias respectively.
Loss Function
<!--

-->
<img src='img/loss_function.png' width=800>
Neural Networks in Python
Keras
High level library for specifying and training neural networks
Can use Theano or TensorFlow as backend
Keras makes Neural Networks awesome!
Theano
Python library that provides efficient (low-level) tools for working with Neural Networks
In particular:
Automatic Differentiation (AD)
Compiled computation graphs
GPU accelerated computation
Tensorflow
Deep Learning framework by Google
The MNIST Dataset
70,000 handwritten digits
60,000 for training
10,000 for testing
As 28x28 pixel images
End of explanation
from keras.datasets import mnist
#(images_train, labels_train), (images_test, labels_test) = mnist.load_data()
print("Data shapes:")
print('images',images_train.shape)
print('labels', labels_train.shape)
Explanation: Let's load some data
End of explanation
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
def plot_mnist_digit(image, figsize=None):
"""Plot a single MNIST image."""
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
if figsize:
ax.set_figsize(*figsize)
ax.matshow(image, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
def plot_1_by_2_images(image, reconstruction, figsize=None):
fig = plt.figure(figsize=figsize)
ax = fig.add_subplot(1, 2, 1)
ax.matshow(image, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
ax = fig.add_subplot(1, 2, 2)
ax.matshow(reconstruction, cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
def plot_10_by_10_images(images, figsize=None):
"""Plot 100 MNIST images in a 10 by 10 table. Note that we crop
the images so that they appear reasonably close together. The
image is post-processed to give the appearance of being continued.
"""
fig = plt.figure(figsize=figsize)
#images = [image[3:25, 3:25] for image in images]
#image = np.concatenate(images, axis=1)
for x in range(10):
for y in range(10):
ax = fig.add_subplot(10, 10, 10*y+x+1)
ax.matshow(images[10*y+x], cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
def plot_10_by_20_images(left, right, figsize=None):
"""Plot 100 MNIST images next to their reconstructions."""
fig = plt.figure(figsize=figsize)
for x in range(10):
for y in range(10):
ax = fig.add_subplot(10, 21, 21*y+x+1)
ax.matshow(left[10*y+x], cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
ax = fig.add_subplot(10, 21, 21*y+11+x+1)
ax.matshow(right[10*y+x], cmap = matplotlib.cm.binary)
plt.xticks(np.array([]))
plt.yticks(np.array([]))
plt.show()
plot_10_by_10_images(images_train, figsize=(8,8))
def draw_image(i):
plot_mnist_digit(images_train[i])
print('label:', labels_train[i])
interact(draw_image, i=(0, len(images_train)-1))
None
Explanation: and then visualise it
End of explanation
def to_features(X):
return X.reshape(-1, 784).astype("float32") / 255.0
def to_images(X):
return (X*255.0).astype('uint8').reshape(-1, 28, 28)
print('data shape:', images_train.shape, images_train.dtype)
print('features shape', to_features(images_train).shape, to_features(images_train).dtype)
Explanation: Data Preprocessing
Transform "images" to "features" ...
Most machine learning algorithms expect a flat array of numbers
End of explanation
#(images_train, labels_train), (images_test, labels_test) = mnist.load_data()
X_train = to_features(images_train)
X_test = to_features(images_test)
print(X_train.shape, 'training samples')
print(X_test.shape, 'test samples')
Explanation: Split the data into a "training" and "test" set ...
End of explanation
# The labels need to be transformed into class indicators
from keras.utils import np_utils
y_train = np_utils.to_categorical(labels_train, nb_classes=10)
y_test = np_utils.to_categorical(labels_test, nb_classes=10)
print('labels_train:', labels_train.shape, labels_train.dtype)
print('y_train:', y_test.shape, y_train.dtype)
Explanation: Transform the labels to a "one-hot" encoding ...
End of explanation
print('labels_train[:2]:\n', labels_train[:2][:, np.newaxis])
print('y_train[:2]\n', y_train[:2])
Explanation: For example, let's inspect the first 2 labels:
End of explanation
# Neural Network Architecture Parameters
nb_input = 784
nb_hidden = 512
nb_output = 10
# Training Parameters
nb_epoch = 1
batch_size = 128
Explanation: Simple Multi-Layer Perceptron (MLP)
The simplest kind of Artificial Neural Network is a Multi-Layer Perceptron (MLP) with a single hidden layer.
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense, Activation
mlp = Sequential()
mlp.add(Dense(output_dim=nb_hidden, input_dim=nb_input, init='uniform'))
mlp.add(Activation('sigmoid'))
mlp.add(Dense(output_dim=nb_output, input_dim=nb_hidden, init='uniform'))
mlp.add(Activation('softmax'))
Explanation: First we define the "architecture" of the network
End of explanation
mlp.compile(loss='categorical_crossentropy', optimizer='SGD',
metrics=["accuracy"])
Explanation: then we compile it. This takes the symbolic computational graph of the model and compiles it into an efficient implementation that can then be used to train and evaluate the model.
Note that we have to specify which loss/objective function we want to use, as well as which optimisation algorithm to use. SGD stands for Stochastic Gradient Descent.
End of explanation
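If you want more control than the plain 'SGD' string gives you, the optimiser can also be passed as a configured object. This is only a sketch with assumed hyperparameter values (the learning rate and momentum here are illustrative, not values used elsewhere in this notebook):
from keras.optimizers import SGD
# assumed example values for lr and momentum; tune them for your own runs
tuned_sgd = SGD(lr=0.05, momentum=0.9, nesterov=True)
mlp.compile(loss='categorical_crossentropy', optimizer=tuned_sgd,
            metrics=["accuracy"])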
mlp.fit(X_train, y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1)
Explanation: Next we train the model on our training data. Watch the loss, which is the objective function we are minimising, and the estimated accuracy of the model.
End of explanation
mlp.evaluate(X_test, y_test)
#plot_10_by_10_images(images_test, figsize=(8,8))
def draw_mlp_prediction(j):
plot_mnist_digit(to_images(X_test)[j])
prediction = mlp.predict_classes(X_test[j:j+1], verbose=False)[0]
print('predict:', prediction, '\tactual:', labels_test[j])
interact(draw_mlp_prediction, j=(0, len(X_test)-1))
None
Explanation: Once the model is trained, we can evaluate its performance on the test data.
End of explanation
from keras.models import Sequential
nb_layers = 2
mlp2 = Sequential()
# add hidden layers
for i in range(nb_layers):
mlp2.add(Dense(output_dim=nb_hidden//nb_layers, input_dim=nb_input if i==0 else nb_hidden//nb_layers, init='uniform'))
mlp2.add(Activation('sigmoid'))
# add output layer
mlp2.add(Dense(output_dim=nb_output, input_dim=nb_hidden//nb_layers, init='uniform'))
mlp2.add(Activation('softmax'))
mlp2.compile(loss='categorical_crossentropy', optimizer='SGD',
metrics=["accuracy"])
mlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1)
Explanation: Deep Learning
Why do we want Deep Neural Networks?
Universal Approximation Theorem
The theorem thus states that simple neural networks can represent
a wide variety of interesting functions when given appropriate parameters;
however, it does not touch upon the algorithmic learnability of those parameters.
Power of combinations
On the (Small) Number of Atoms in the Universe
On the number of Go positions
While <a href="https://www.theguardian.com/technology/2016/mar/09/google-deepmind-alphago-ai-defeats-human-lee-sedol-first-game-go-contest">discussing</a> the complexity of the game of Go, <a href="https://en.wikipedia.org/wiki/Demis_Hassabis">Demis Hassabis</a> said:
<blockquote>
<i>There are more possible Go positions than there are atoms in the universe.</i>
</blockquote>
A Go board has 19 × 19 points, each of which can be empty or occupied by black or white, so there are 3<sup>(19 × 19)</sup> <tt>≅</tt> 10<sup>172</sup> possible board positions, but "only" about 10<sup>170</sup> of those positions are legal.
<p>The crucial idea is that, as a number of <i>physical things</i>, 10<sup>80</sup> is a really big number. But as a number of <i>combinations</i> of things, 10<sup>80</sup> is a rather small number. It doesn't take a universe of stuff to get up to 10<sup>80</sup> combinations; we can get there with, for example, a passphrase field that is 40 characters long:
<blockquote id="passphrase">
<tt>a correct horse battery staple troubador</tt>
</blockquote>
### On the number of digital pictures ###
There is an art project to display every possible picture. Surely that would take a long time, because there must be many possible pictures. But how many?
We will assume the color model known as True Color, in which each pixel can be one of 2^24 ≅ 17 million distinct colors. The digital camera shown below left has 12 million pixels, and we'll also consider much smaller pictures: the array below middle, with 300 pixels, and the array below right with just 12 pixels; shown are some of the possible pictures:
<img src="img/norvig_atoms.png" align="center">
**Quiz: Which of these produces a number of pictures similar to the number of atoms in the universe?**
**Answer: An array of n pixels produces (17 million)^n different pictures. (17 million)^12 ≅ 10^86, so the tiny 12-pixel array produces a million times more pictures than the number of atoms in the universe!**
How about the 300 pixel array? It can produce 10^2167 pictures. You may think the number of atoms in the universe is big, but that's just peanuts to the number of pictures in a 300-pixel array. And 12M pixels? 10^86696638 pictures. Fuggedaboutit!
So the number of possible pictures is really, really, really big. And the number of atoms in the universe is looking relatively small, at least as a number of combinations.
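A quick sanity check of these estimates, as a small sketch in plain Python (it prints the integer part of each exponent, which should line up with the 10^172, 10^86 and 10^2167 quoted above):
from math import log10
n_colors = 2 ** 24                     # "True Color" palette size
print(int(log10(3) * 19 * 19))         # Go board positions -> 172
print(int(log10(n_colors) * 12))       # 12-pixel pictures  -> 86
print(int(log10(n_colors) * 300))      # 300-pixel pictures -> 2167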
### ==> The Curse of Dimensionality!
### Feature Hierarchies
<!--

-->
<img src="img/feature_hierarchy.png" width=800>
# A Deeper MLP
Next we build a two-layer MLP with the same number of hidden nodes, half in each layer.
End of explanation
mlp2.fit(X_train, y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=1)
mlp2.evaluate(X_test, y_test)
Explanation: Did you notice anything about the accuracy? Let's train it some more.
End of explanation
from IPython.display import HTML
HTML('<iframe src="pdf/Hinton2006-science.pdf" width=800 height=400></iframe>')
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
print('nb_input =', nb_input)
print('nb_hidden =', nb_hidden)
ae = Sequential()
# encoder
ae.add(Dense(nb_hidden, input_dim=nb_input, init='uniform'))
ae.add(Activation('sigmoid'))
# decoder
ae.add(Dense(nb_input, input_dim=nb_hidden, init='uniform'))
ae.add(Activation('sigmoid'))
ae.compile(loss='mse', optimizer='SGD')
nb_epoch = 1
ae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
plot_10_by_20_images(images_test, to_images(ae.predict(X_test)),
figsize=(10,5))
from keras.optimizers import SGD
sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
ae.compile(loss='mse', optimizer=sgd)
nb_epoch = 1
ae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
plot_10_by_20_images(images_test, to_images(ae.predict(X_test)),
figsize=(10,5))
def draw_ae_prediction(j):
X_plot = X_test[j:j+1]
prediction = ae.predict(X_plot, verbose=False)
plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])
interact(draw_ae_prediction, j=(0, len(X_test)-1))
None
Explanation: Autoencoders
Hinton 2006 (local)
End of explanation
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Dropout
def make_autoencoder(nb_input=nb_input, nb_hidden=nb_hidden,
activation='sigmoid', init='uniform'):
ae = Sequential()
# encoder
ae.add(Dense(nb_hidden, input_dim=nb_input, init=init))
ae.add(Activation(activation))
# decoder
ae.add(Dense(nb_input, input_dim=nb_hidden, init=init))
ae.add(Activation(activation))
return ae
nb_epoch = 1
ae2 = make_autoencoder(activation='sigmoid', init='glorot_uniform')
ae2.compile(loss='mse', optimizer='adam')
ae2.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
plot_10_by_20_images(images_test, to_images(ae2.predict(X_test)), figsize=(10,5))
def draw_ae2_prediction(j):
X_plot = X_test[j:j+1]
prediction = ae2.predict(X_plot, verbose=False)
plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])
interact(draw_ae2_prediction, j=(0, len(X_test)-1))
None
Explanation: A better Autoencoder
End of explanation
from keras.models import Sequential, model_from_yaml
from keras.layers.core import Dense, Activation, Dropout
class StackedAutoencoder(object):
def __init__(self, layers, mode='autoencoder',
activation='sigmoid', init='uniform', final_activation='softmax',
dropout=0.2, optimizer='SGD', metrics=None):
self.layers = layers
self.mode = mode
self.activation = activation
self.final_activation = final_activation
self.init = init
self.dropout = dropout
self.optimizer = optimizer
self.metrics = metrics
self._model = None
self.build()
self.compile()
def _add_layer(self, model, i, is_encoder):
if is_encoder:
input_dim, output_dim = self.layers[i], self.layers[i+1]
activation = self.final_activation if i==len(self.layers)-2 else self.activation
else:
input_dim, output_dim = self.layers[i+1], self.layers[i]
activation = self.activation
model.add(Dense(output_dim=output_dim,
input_dim=input_dim,
init=self.init))
model.add(Activation(activation))
def build(self):
self.encoder = Sequential()
self.decoder = Sequential()
self.autoencoder = Sequential()
for i in range(len(self.layers)-1):
self._add_layer(self.encoder, i, True)
self._add_layer(self.autoencoder, i, True)
#if i<len(self.layers)-2:
# self.autoencoder.add(Dropout(self.dropout))
# Note that the decoder layers are in reverse order
for i in reversed(range(len(self.layers)-1)):
self._add_layer(self.decoder, i, False)
self._add_layer(self.autoencoder, i, False)
def compile(self):
print("Compiling the encoder ...")
self.encoder.compile(loss='categorical_crossentropy', optimizer=self.optimizer, metrics=self.metrics)
print("Compiling the decoder ...")
self.decoder.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)
print("Compiling the autoencoder ...")
return self.autoencoder.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)
def fit(self, X_train, Y_train, batch_size, nb_epoch, verbose=1):
result = self.autoencoder.fit(X_train, Y_train,
batch_size=batch_size, nb_epoch=nb_epoch,
verbose=verbose)
# copy the weights to the encoder
for i, l in enumerate(self.encoder.layers):
l.set_weights(self.autoencoder.layers[i].get_weights())
for i in range(len(self.decoder.layers)):
self.decoder.layers[-1-i].set_weights(self.autoencoder.layers[-1-i].get_weights())
return result
def pretrain(self, X_train, batch_size, nb_epoch, verbose=1):
for i in range(len(self.layers)-1):
# Greedily train each layer
print("Now pretraining layer {} [{}-->{}]".format(i+1, self.layers[i], self.layers[i+1]))
ae = Sequential()
self._add_layer(ae, i, True)
#ae.add(Dropout(self.dropout))
self._add_layer(ae, i, False)
ae.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)
ae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=verbose)
# Then lift the training data up one layer
print("\nTransforming data from", X_train.shape, "to", (X_train.shape[0], self.layers[i+1]))
enc = Sequential()
self._add_layer(enc, i, True)
enc.compile(loss='mse', optimizer=self.optimizer, metrics=self.metrics)
enc.layers[0].set_weights(ae.layers[0].get_weights())
enc.layers[1].set_weights(ae.layers[1].get_weights())
X_train = enc.predict(X_train, verbose=verbose)
print("\nShape check:", X_train.shape, "\n")
# Then copy the learned weights
self.encoder.layers[2*i].set_weights(ae.layers[0].get_weights())
self.encoder.layers[2*i+1].set_weights(ae.layers[1].get_weights())
self.autoencoder.layers[2*i].set_weights(ae.layers[0].get_weights())
self.autoencoder.layers[2*i+1].set_weights(ae.layers[1].get_weights())
self.decoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights())
self.decoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights())
self.autoencoder.layers[-1-(2*i)].set_weights(ae.layers[-1].get_weights())
self.autoencoder.layers[-1-(2*i+1)].set_weights(ae.layers[-2].get_weights())
def evaluate(self, X_test, Y_test):
return self.autoencoder.evaluate(X_test, Y_test)
def predict(self, X, verbose=False):
return self.autoencoder.predict(X, verbose=verbose)
def _get_paths(self, name):
model_path = "models/{}_model.yaml".format(name)
weights_path = "models/{}_weights.hdf5".format(name)
return model_path, weights_path
def save(self, name='autoencoder'):
model_path, weights_path = self._get_paths(name)
open(model_path, 'w').write(self.autoencoder.to_yaml())
self.autoencoder.save_weights(weights_path, overwrite=True)
def load(self, name='autoencoder'):
model_path, weights_path = self._get_paths(name)
        self.autoencoder = model_from_yaml(open(model_path))
self.autoencoder.load_weights(weights_path)
nb_epoch = 3
sae = StackedAutoencoder(layers=[nb_input, 500, 150, 50, 10],
activation='sigmoid',
final_activation='softmax',
init='uniform',
dropout=0.25,
optimizer='SGD') # replace with 'adam', 'relu', 'glorot_uniform'
sae.fit(X_train, X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
plot_10_by_20_images(images_test, to_images(sae.predict(X_test)), figsize=(10,5))
def draw_sae_prediction(j):
X_plot = X_test[j:j+1]
prediction = sae.predict(X_plot, verbose=False)
plot_1_by_2_images(to_images(X_plot)[0], to_images(prediction)[0])
print(sae.encoder.predict(X_plot, verbose=False)[0])
interact(draw_sae_prediction, j=(0, len(X_test)-1))
None
sae.pretrain(X_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1)
Explanation: Stacked Autoencoder
End of explanation
def visualise_filter(model, layer_index, filter_index):
from keras import backend as K
# build a loss function that maximizes the activation
# of the nth filter on the layer considered
layer_output = model.layers[layer_index].get_output()
loss = K.mean(layer_output[:, filter_index])
# compute the gradient of the input picture wrt this loss
input_img = model.layers[0].input
grads = K.gradients(loss, input_img)[0]
# normalization trick: we normalize the gradient
grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# we start from a gray image with some noise
input_img_data = np.random.random((1,nb_input,))
    # run gradient ascent for up to 100 steps
step = 1
for i in range(100):
loss_value, grads_value = iterate([input_img_data])
input_img_data += grads_value * step
#print("Current loss value:", loss_value)
if loss_value <= 0.:
# some filters get stuck to 0, we can skip them
break
print("Current loss value:", loss_value)
# decode the resulting input image
if loss_value>0:
#return input_img_data[0]
return input_img_data
else:
raise ValueError(loss_value)
def draw_filter(i):
    flt = visualise_filter(mlp, 3, i)
#print(flt)
plot_mnist_digit(to_images(flt)[0])
interact(draw_filter, i=[0, 9])
Explanation: Visualising the Filters
End of explanation |
5,668 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 14 – Recurrent Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 14.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Then of course we will need TensorFlow
Step2: Basic RNNs
Manual RNN
Step7: Using static_rnn()
Step8: Packing sequences
Step9: Using dynamic_rnn()
Step10: Setting the sequence lengths
Step11: Training a sequence classifier
Note
Step12: Multi-layer RNN
Step13: Time series
Step14: Using an OutputProjectionWrapper
Let's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value
Step15: At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an OutputProjectionWrapper.
Step16: Without using an OutputProjectionWrapper
Step17: Generating a creative new sequence
Step18: Deep RNN
MultiRNNCell
Step19: Distributing a Deep RNN Across Multiple GPUs
Do NOT do this
Step20: Instead, you need a DeviceCellWrapper
Step21: Dropout
Step22: Unfortunately, this code is only usable for training, because the DropoutWrapper class has no training parameter, so it always applies dropout, even when the model is not being trained, so we must first train the model, then create a different model for testing, without the DropoutWrapper.
Step23: Now that the model is trained, we need to create the model again, but without the DropoutWrapper for testing
Step24: Oops, it seems that Dropout does not help at all in this particular case.
Step25: LSTM
Step27: Embeddings
This section is based on TensorFlow's Word2Vec tutorial.
Fetch the data
Step28: Build the dictionary
Step29: Generate batches
Step30: Build the model
Step31: Train the model
Step32: Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer)
Step33: Plot the embeddings
Step34: Machine Translation
The basic_rnn_seq2seq() function creates a simple Encoder/Decoder model
Step35: Exercise solutions
1. to 6.
See Appendix A.
7. Embedded Reber Grammars
First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
Step36: Let's generate a few strings based on the default Reber grammar
Step37: Looks good. Now let's generate a few strings based on the embedded Reber grammar
Step38: Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character
Step39: Let's look at a few corrupted strings
Step40: It's not possible to feed a string directly to an RNN
Step41: We can now generate the dataset, with 50% good strings, and 50% bad strings
Step42: Let's take a look at the first training instances
Step43: It's padded with a lot of zeros because the longest string in the dataset is that long. How long is this particular string?
Step44: What class is it?
Step45: Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences
Step46: Now let's generate a validation set so we can track progress during training
Step47: Now let's test our RNN on two tricky strings | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
def reset_graph(seed=42):
tf.reset_default_graph()
tf.set_random_seed(seed)
np.random.seed(seed)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "rnn"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 14 – Recurrent Neural Networks
This notebook contains all the sample code and solutions to the exercises in chapter 14.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
import tensorflow as tf
Explanation: Then of course we will need TensorFlow:
End of explanation
reset_graph()
n_inputs = 3
n_neurons = 5
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
Wx = tf.Variable(tf.random_normal(shape=[n_inputs, n_neurons],dtype=tf.float32))
Wy = tf.Variable(tf.random_normal(shape=[n_neurons,n_neurons],dtype=tf.float32))
b = tf.Variable(tf.zeros([1, n_neurons], dtype=tf.float32))
Y0 = tf.tanh(tf.matmul(X0, Wx) + b)
Y1 = tf.tanh(tf.matmul(Y0, Wy) + tf.matmul(X1, Wx) + b)
init = tf.global_variables_initializer()
import numpy as np
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]]) # t = 0
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]]) # t = 1
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
print(Y0_val)
print(Y1_val)
Explanation: Basic RNNs
Manual RNN
End of explanation
n_inputs = 3
n_neurons = 5
reset_graph()
X0 = tf.placeholder(tf.float32, [None, n_inputs])
X1 = tf.placeholder(tf.float32, [None, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, [X0, X1],
dtype=tf.float32)
Y0, Y1 = output_seqs
init = tf.global_variables_initializer()
X0_batch = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 0, 1]])
X1_batch = np.array([[9, 8, 7], [0, 0, 0], [6, 5, 4], [3, 2, 1]])
with tf.Session() as sess:
init.run()
Y0_val, Y1_val = sess.run([Y0, Y1], feed_dict={X0: X0_batch, X1: X1_batch})
Y0_val
Y1_val
from IPython.display import clear_output, Image, display, HTML
def strip_consts(graph_def, max_const_size=32):
    """Strip large constant values from graph_def."""
strip_def = tf.GraphDef()
for n0 in graph_def.node:
n = strip_def.node.add()
n.MergeFrom(n0)
if n.op == 'Const':
tensor = n.attr['value'].tensor
size = len(tensor.tensor_content)
if size > max_const_size:
                tensor.tensor_content = b"<stripped %d bytes>" % size
return strip_def
def show_graph(graph_def, max_const_size=32):
    """Visualize TensorFlow graph."""
    if hasattr(graph_def, 'as_graph_def'):
        graph_def = graph_def.as_graph_def()
    strip_def = strip_consts(graph_def, max_const_size=max_const_size)
    code = """
        <script>
          function load() {{
            document.getElementById("{id}").pbtxt = {data};
          }}
        </script>
        <link rel="import" href="https://tensorboard.appspot.com/tf-graph-basic.build.html" onload=load()>
        <div style="height:600px">
          <tf-graph-basic id="{id}"></tf-graph-basic>
        </div>
    """.format(data=repr(str(strip_def)), id='graph'+str(np.random.rand()))
    iframe = """
        <iframe seamless style="width:1200px;height:620px;border:0" srcdoc="{}"></iframe>
    """.format(code.replace('"', '&quot;'))
display(HTML(iframe))
show_graph(tf.get_default_graph())
Explanation: Using static_rnn()
End of explanation
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
X_seqs = tf.unstack(tf.transpose(X, perm=[1, 0, 2]))
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
output_seqs, states = tf.contrib.rnn.static_rnn(basic_cell, X_seqs,
dtype=tf.float32)
outputs = tf.transpose(tf.stack(output_seqs), perm=[1, 0, 2])
init = tf.global_variables_initializer()
X_batch = np.array([
# t = 0 t = 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
print(np.transpose(outputs_val, axes=[1, 0, 2])[1])
Explanation: Packing sequences
End of explanation
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.array([
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
with tf.Session() as sess:
init.run()
outputs_val = outputs.eval(feed_dict={X: X_batch})
print(outputs_val)
show_graph(tf.get_default_graph())
Explanation: Using dynamic_rnn()
End of explanation
n_steps = 2
n_inputs = 3
n_neurons = 5
reset_graph()
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
seq_length = tf.placeholder(tf.int32, [None])
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32,
sequence_length=seq_length)
init = tf.global_variables_initializer()
X_batch = np.array([
# step 0 step 1
[[0, 1, 2], [9, 8, 7]], # instance 1
[[3, 4, 5], [0, 0, 0]], # instance 2 (padded with zero vectors)
[[6, 7, 8], [6, 5, 4]], # instance 3
[[9, 0, 1], [3, 2, 1]], # instance 4
])
seq_length_batch = np.array([2, 1, 2, 2])
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run(
[outputs, states], feed_dict={X: X_batch, seq_length: seq_length_batch})
print(outputs_val)
print(states_val)
Explanation: Setting the sequence lengths
End of explanation
reset_graph()
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
basic_cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(basic_cell, X, dtype=tf.float32)
logits = tf.layers.dense(states, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/")
X_test = mnist.test.images.reshape((-1, n_steps, n_inputs))
y_test = mnist.test.labels
n_epochs = 100
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
Explanation: Training a sequence classifier
Note: the book uses tensorflow.contrib.layers.fully_connected() rather than tf.layers.dense() (which did not exist when this chapter was written). It is now preferable to use tf.layers.dense(), because anything in the contrib module may change or be deleted without notice. The dense() function is almost identical to the fully_connected() function. The main differences relevant to this chapter are:
* several parameters are renamed: scope becomes name, activation_fn becomes activation (and similarly the _fn suffix is removed from other parameters such as normalizer_fn), weights_initializer becomes kernel_initializer, etc.
* the default activation is now None rather than tf.nn.relu.
End of explanation
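To make the renaming concrete, here is a minimal sketch, not taken from the book, of the two equivalent ways of building the output layer used in the classifier above (it reuses the states tensor and n_outputs from that cell, and gives the two layers distinct scope/name values so they could coexist in one graph):
from tensorflow.contrib.layers import fully_connected
# old style: contrib's fully_connected defaults to a ReLU, so it must be switched off explicitly
logits_old = fully_connected(states, n_outputs, activation_fn=None, scope="logits_old")
# new style: tf.layers.dense applies no activation by default
logits_new = tf.layers.dense(states, n_outputs, name="logits_new")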
reset_graph()
n_steps = 28
n_inputs = 28
n_outputs = 10
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
n_neurons = 100
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons,
activation=tf.nn.relu)
for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
states_concat = tf.concat(axis=1, values=states)
logits = tf.layers.dense(states_concat, n_outputs)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((-1, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print(epoch, "Train accuracy:", acc_train, "Test accuracy:", acc_test)
Explanation: Multi-layer RNN
End of explanation
t_min, t_max = 0, 30
resolution = 0.1
def time_series(t):
return t * np.sin(t) / 3 + 2 * np.sin(t*5)
def next_batch(batch_size, n_steps):
t0 = np.random.rand(batch_size, 1) * (t_max - t_min - n_steps * resolution)
Ts = t0 + np.arange(0., n_steps + 1) * resolution
ys = time_series(Ts)
return ys[:, :-1].reshape(-1, n_steps, 1), ys[:, 1:].reshape(-1, n_steps, 1)
t = np.linspace(t_min, t_max, int((t_max - t_min) / resolution))
n_steps = 20
t_instance = np.linspace(12.2, 12.2 + resolution * (n_steps + 1), n_steps + 1)
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.title("A time series (generated)", fontsize=14)
plt.plot(t, time_series(t), label=r"$t . \sin(t) / 3 + 2 . \sin(5t)$")
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "b-", linewidth=3, label="A training instance")
plt.legend(loc="lower left", fontsize=14)
plt.axis([0, 30, -17, 13])
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.title("A training instance", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_plot")
plt.show()
X_batch, y_batch = next_batch(1, n_steps)
np.c_[X_batch[0], y_batch[0]]
Explanation: Time series
End of explanation
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
Explanation: Using an OutputProjectionWrapper
Let's create the RNN. It will contain 100 recurrent neurons and we will unroll it over 20 time steps since each training instance will be 20 inputs long. Each input will contain only one feature (the value at that time). The targets are also sequences of 20 inputs, each containing a single value:
End of explanation
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu),
output_size=n_outputs)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
learning_rate = 0.001
loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
saver.save(sess, "./my_time_series_model") # not shown in the book
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
save_fig("time_series_pred_plot")
plt.show()
Explanation: At each time step we now have an output vector of size 100. But what we actually want is a single output value at each time step. The simplest solution is to wrap the cell in an OutputProjectionWrapper.
End of explanation
reset_graph()
n_steps = 20
n_inputs = 1
n_neurons = 100
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
n_outputs = 1
learning_rate = 0.001
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
n_iterations = 1500
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
saver.save(sess, "./my_time_series_model")
y_pred
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Without using an OutputProjectionWrapper
End of explanation
with tf.Session() as sess: # not shown in the book
saver.restore(sess, "./my_time_series_model") # not shown
sequence = [0.] * n_steps
for iteration in range(300):
X_batch = np.array(sequence[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence.append(y_pred[0, -1, 0])
plt.figure(figsize=(8,4))
plt.plot(np.arange(len(sequence)), sequence, "b-")
plt.plot(t[:n_steps], sequence[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.show()
with tf.Session() as sess:
saver.restore(sess, "./my_time_series_model")
sequence1 = [0. for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence1[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence1.append(y_pred[0, -1, 0])
sequence2 = [time_series(i * resolution + t_min + (t_max-t_min/3)) for i in range(n_steps)]
for iteration in range(len(t) - n_steps):
X_batch = np.array(sequence2[-n_steps:]).reshape(1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
sequence2.append(y_pred[0, -1, 0])
plt.figure(figsize=(11,4))
plt.subplot(121)
plt.plot(t, sequence1, "b-")
plt.plot(t[:n_steps], sequence1[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
plt.ylabel("Value")
plt.subplot(122)
plt.plot(t, sequence2, "b-")
plt.plot(t[:n_steps], sequence2[:n_steps], "b-", linewidth=3)
plt.xlabel("Time")
save_fig("creative_sequence_plot")
plt.show()
Explanation: Generating a creative new sequence
End of explanation
reset_graph()
n_inputs = 2
n_steps = 5
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
n_neurons = 100
n_layers = 3
layers = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(layers)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
X_batch = np.random.rand(2, n_steps, n_inputs)
with tf.Session() as sess:
init.run()
outputs_val, states_val = sess.run([outputs, states], feed_dict={X: X_batch})
outputs_val.shape
Explanation: Deep RNN
MultiRNNCell
End of explanation
with tf.device("/gpu:0"): # BAD! This is ignored.
layer1 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
with tf.device("/gpu:1"): # BAD! Ignored again.
layer2 = tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
Explanation: Distributing a Deep RNN Across Multiple GPUs
Do NOT do this:
End of explanation
import tensorflow as tf
class DeviceCellWrapper(tf.contrib.rnn.RNNCell):
def __init__(self, device, cell):
self._cell = cell
self._device = device
@property
def state_size(self):
return self._cell.state_size
@property
def output_size(self):
return self._cell.output_size
def __call__(self, inputs, state, scope=None):
with tf.device(self._device):
return self._cell(inputs, state, scope)
reset_graph()
n_inputs = 5
n_steps = 20
n_neurons = 100
X = tf.placeholder(tf.float32, shape=[None, n_steps, n_inputs])
devices = ["/cpu:0", "/cpu:0", "/cpu:0"] # replace with ["/gpu:0", "/gpu:1", "/gpu:2"] if you have 3 GPUs
cells = [DeviceCellWrapper(dev,tf.contrib.rnn.BasicRNNCell(num_units=n_neurons))
for dev in devices]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)
outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
print(sess.run(outputs, feed_dict={X: np.random.rand(2, n_steps, n_inputs)}))
Explanation: Instead, you need a DeviceCellWrapper:
End of explanation
reset_graph()
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
keep_prob = 0.5
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
cells_drop = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
for cell in cells]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells_drop)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
learning_rate = 0.01
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Dropout
End of explanation
n_iterations = 1000
batch_size = 50
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps)
_, mse = sess.run([training_op, loss], feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
print(iteration, "Training MSE:", mse)
saver.save(sess, "./my_dropout_time_series_model")
Explanation: Unfortunately, this code is only usable for training, because the DropoutWrapper class has no training parameter, so it always applies dropout, even when the model is not being trained, so we must first train the model, then create a different model for testing, without the DropoutWrapper.
End of explanation
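As an alternative sketch (my own workaround, not what the book does next), the keep probability can be fed through tf.placeholder_with_default, so that a single graph serves both training and testing; dropout is only active when a value such as 0.5 is explicitly fed:
keep_prob = tf.placeholder_with_default(1.0, shape=(), name="keep_prob")
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
         for layer in range(n_layers)]
cells_drop = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
              for cell in cells]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells_drop)
# training step:   sess.run(training_op, feed_dict={X: X_batch, y: y_batch, keep_prob: 0.5})
# evaluation step: omit keep_prob from the feed_dict and the default of 1.0 disables dropout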
reset_graph()
n_inputs = 1
n_neurons = 100
n_layers = 3
n_steps = 20
n_outputs = 1
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
keep_prob = 0.5
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
learning_rate = 0.01
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons])
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
loss = tf.reduce_mean(tf.square(outputs - y))
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, "./my_dropout_time_series_model")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs)))
y_pred = sess.run(outputs, feed_dict={X: X_new})
plt.title("Testing the model", fontsize=14)
plt.plot(t_instance[:-1], time_series(t_instance[:-1]), "bo", markersize=10, label="instance")
plt.plot(t_instance[1:], time_series(t_instance[1:]), "w*", markersize=10, label="target")
plt.plot(t_instance[1:], y_pred[0,:,0], "r.", markersize=10, label="prediction")
plt.legend(loc="upper left")
plt.xlabel("Time")
plt.show()
Explanation: Now that the model is trained, we need to create the model again, but without the DropoutWrapper for testing:
End of explanation
reset_graph()
import sys
training = True # in a script, this would be (sys.argv[-1] == "train") instead
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
cells = [tf.contrib.rnn.BasicRNNCell(num_units=n_neurons)
for layer in range(n_layers)]
if training:
cells = [tf.contrib.rnn.DropoutWrapper(cell, input_keep_prob=keep_prob)
for cell in cells]
multi_layer_cell = tf.contrib.rnn.MultiRNNCell(cells)
rnn_outputs, states = tf.nn.dynamic_rnn(multi_layer_cell, X, dtype=tf.float32)
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_neurons]) # not shown in the book
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs) # not shown
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs]) # not shown
loss = tf.reduce_mean(tf.square(outputs - y)) # not shown
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate) # not shown
training_op = optimizer.minimize(loss) # not shown
init = tf.global_variables_initializer() # not shown
saver = tf.train.Saver() # not shown
with tf.Session() as sess:
if training:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(batch_size, n_steps) # not shown
_, mse = sess.run([training_op, loss], feed_dict={X: X_batch, y: y_batch}) # not shown
if iteration % 100 == 0: # not shown
print(iteration, "Training MSE:", mse) # not shown
save_path = saver.save(sess, "/tmp/my_model.ckpt")
else:
saver.restore(sess, "/tmp/my_model.ckpt")
X_new = time_series(np.array(t_instance[:-1].reshape(-1, n_steps, n_inputs))) # not shown
y_pred = sess.run(outputs, feed_dict={X: X_new}) # not shown
Explanation: Oops, it seems that Dropout does not help at all in this particular case. :/
Another option is to write a script with a command line argument to specify whether you want to train the model or use it for making predictions:
End of explanation
reset_graph()
lstm_cell = tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
n_steps = 28
n_inputs = 28
n_neurons = 150
n_outputs = 10
n_layers = 3
learning_rate = 0.001
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.int32, [None])
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
outputs, states = tf.nn.dynamic_rnn(multi_cell, X, dtype=tf.float32)
top_layer_h_state = states[-1][1]
logits = tf.layers.dense(top_layer_h_state, n_outputs, name="softmax")
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
init = tf.global_variables_initializer()
states
top_layer_h_state
n_epochs = 10
batch_size = 150
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
for iteration in range(mnist.train.num_examples // batch_size):
X_batch, y_batch = mnist.train.next_batch(batch_size)
X_batch = X_batch.reshape((batch_size, n_steps, n_inputs))
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, y: y_batch})
acc_test = accuracy.eval(feed_dict={X: X_test, y: y_test})
print("Epoch", epoch, "Train accuracy =", acc_train, "Test accuracy =", acc_test)
lstm_cell = tf.contrib.rnn.LSTMCell(num_units=n_neurons, use_peepholes=True)
gru_cell = tf.contrib.rnn.GRUCell(num_units=n_neurons)
Explanation: LSTM
End of explanation
from six.moves import urllib
import errno
import os
import zipfile
WORDS_PATH = "datasets/words"
WORDS_URL = 'http://mattmahoney.net/dc/text8.zip'
def mkdir_p(path):
    """Create directories, ok if they already exist.
    This is for python 2 support. In python >=3.2, simply use:
    >>> os.makedirs(path, exist_ok=True)
    """
try:
os.makedirs(path)
except OSError as exc:
if exc.errno == errno.EEXIST and os.path.isdir(path):
pass
else:
raise
def fetch_words_data(words_url=WORDS_URL, words_path=WORDS_PATH):
os.makedirs(words_path, exist_ok=True)
zip_path = os.path.join(words_path, "words.zip")
if not os.path.exists(zip_path):
urllib.request.urlretrieve(words_url, zip_path)
with zipfile.ZipFile(zip_path) as f:
data = f.read(f.namelist()[0])
return data.decode("ascii").split()
words = fetch_words_data()
words[:5]
Explanation: Embeddings
This section is based on TensorFlow's Word2Vec tutorial.
Fetch the data
End of explanation
from collections import Counter
vocabulary_size = 50000
vocabulary = [("UNK", None)] + Counter(words).most_common(vocabulary_size - 1)
vocabulary = np.array([word for word, _ in vocabulary])
dictionary = {word: code for code, word in enumerate(vocabulary)}
data = np.array([dictionary.get(word, 0) for word in words])
" ".join(words[:9]), data[:9]
" ".join([vocabulary[word_index] for word_index in [5241, 3081, 12, 6, 195, 2, 3134, 46, 59]])
words[24], data[24]
Explanation: Build the dictionary
End of explanation
import random
from collections import deque
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
data_index=0
batch, labels = generate_batch(8, 2, 1)
batch, [vocabulary[word] for word in batch]
labels, [vocabulary[word] for word in labels[:, 0]]
Explanation: Generate batches
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. Here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.random.choice(valid_window, valid_size, replace=False)
num_sampled = 64 # Number of negative examples to sample.
learning_rate = 0.01
reset_graph()
# Input data.
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
vocabulary_size = 50000
embedding_size = 150
# Look up embeddings for inputs.
init_embeds = tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0)
embeddings = tf.Variable(init_embeds)
train_inputs = tf.placeholder(tf.int32, shape=[None])
embed = tf.nn.embedding_lookup(embeddings, train_inputs)
# Construct the variables for the NCE loss
nce_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / np.sqrt(embedding_size)))
nce_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Compute the average NCE loss for the batch.
# tf.nce_loss automatically draws a new sample of the negative labels each
# time we evaluate the loss.
loss = tf.reduce_mean(
tf.nn.nce_loss(nce_weights, nce_biases, train_labels, embed,
num_sampled, vocabulary_size))
# Construct the Adam optimizer
optimizer = tf.train.AdamOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
# Compute the cosine similarity between minibatch examples and all embeddings.
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), axis=1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, normalized_embeddings, transpose_b=True)
# Add variable initializer.
init = tf.global_variables_initializer()
Explanation: Build the model
End of explanation
num_steps = 10001
with tf.Session() as session:
init.run()
average_loss = 0
for step in range(num_steps):
print("\rIteration: {}".format(step), end="\t")
batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window)
feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}
# We perform one update step by evaluating the training op (including it
# in the list of returned values for session.run()
_, loss_val = session.run([training_op, loss], feed_dict=feed_dict)
average_loss += loss_val
if step % 2000 == 0:
if step > 0:
average_loss /= 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print("Average loss at step ", step, ": ", average_loss)
average_loss = 0
# Note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = vocabulary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log_str = "Nearest to %s:" % valid_word
for k in range(top_k):
close_word = vocabulary[nearest[k]]
log_str = "%s %s," % (log_str, close_word)
print(log_str)
final_embeddings = normalized_embeddings.eval()
Explanation: Train the model
End of explanation
np.save("./my_final_embeddings.npy", final_embeddings)
Explanation: Let's save the final embeddings (of course you can use a TensorFlow Saver if you prefer):
End of explanation
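As a small follow-up sketch, the saved array can be reloaded later with NumPy and indexed through the dictionary built earlier; the word used here is just an assumed example from the frequent vocabulary:
reloaded_embeddings = np.load("./my_final_embeddings.npy")
print(reloaded_embeddings.shape)        # (vocabulary_size, embedding_size)
example_word = "one"                    # assumed example; any word present in `dictionary` works
example_vector = reloaded_embeddings[dictionary[example_word]]
print(example_word, example_vector[:5])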
def plot_with_labels(low_dim_embs, labels):
assert low_dim_embs.shape[0] >= len(labels), "More labels than embeddings"
plt.figure(figsize=(18, 18)) #in inches
for i, label in enumerate(labels):
x, y = low_dim_embs[i,:]
plt.scatter(x, y)
plt.annotate(label,
xy=(x, y),
xytext=(5, 2),
textcoords='offset points',
ha='right',
va='bottom')
from sklearn.manifold import TSNE
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
plot_only = 500
low_dim_embs = tsne.fit_transform(final_embeddings[:plot_only,:])
labels = [vocabulary[i] for i in range(plot_only)]
plot_with_labels(low_dim_embs, labels)
Explanation: Plot the embeddings
End of explanation
import tensorflow as tf
reset_graph()
n_steps = 50
n_neurons = 200
n_layers = 3
num_encoder_symbols = 20000
num_decoder_symbols = 20000
embedding_size = 150
learning_rate = 0.01
X = tf.placeholder(tf.int32, [None, n_steps]) # English sentences
Y = tf.placeholder(tf.int32, [None, n_steps]) # French translations
W = tf.placeholder(tf.float32, [None, n_steps - 1, 1])
Y_input = Y[:, :-1]
Y_target = Y[:, 1:]
encoder_inputs = tf.unstack(tf.transpose(X)) # list of 1D tensors
decoder_inputs = tf.unstack(tf.transpose(Y_input)) # list of 1D tensors
lstm_cells = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
for layer in range(n_layers)]
cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
output_seqs, states = tf.contrib.legacy_seq2seq.embedding_rnn_seq2seq(
encoder_inputs,
decoder_inputs,
cell,
num_encoder_symbols,
num_decoder_symbols,
embedding_size)
logits = tf.transpose(tf.unstack(output_seqs), perm=[1, 0, 2])
logits_flat = tf.reshape(logits, [-1, num_decoder_symbols])
Y_target_flat = tf.reshape(Y_target, [-1])
W_flat = tf.reshape(W, [-1])
xentropy = W_flat * tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y_target_flat, logits=logits_flat)
loss = tf.reduce_mean(xentropy)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
init = tf.global_variables_initializer()
Explanation: Machine Translation
The basic_rnn_seq2seq() function creates a simple Encoder/Decoder model: it first runs an RNN to encode encoder_inputs into a state vector, then runs a decoder initialized with the last encoder state on decoder_inputs. Encoder and decoder use the same RNN cell type but they don't share parameters.
End of explanation
from random import choice, seed
# to make this notebook's output stable across runs
seed(42)
np.random.seed(42)
default_reber_grammar = [
[("B", 1)], # (state 0) =B=>(state 1)
[("T", 2), ("P", 3)], # (state 1) =T=>(state 2) or =P=>(state 3)
[("S", 2), ("X", 4)], # (state 2) =S=>(state 2) or =X=>(state 4)
[("T", 3), ("V", 5)], # and so on...
[("X", 3), ("S", 6)],
[("P", 4), ("V", 6)],
[("E", None)]] # (state 6) =E=>(terminal state)
embedded_reber_grammar = [
[("B", 1)],
[("T", 2), ("P", 3)],
[(default_reber_grammar, 4)],
[(default_reber_grammar, 5)],
[("T", 6)],
[("P", 6)],
[("E", None)]]
def generate_string(grammar):
state = 0
output = []
while state is not None:
production, state = choice(grammar[state])
if isinstance(production, list):
production = generate_string(grammar=production)
output.append(production)
return "".join(output)
Explanation: Exercise solutions
1. to 6.
See Appendix A.
7. Embedded Reber Grammars
First we need to build a function that generates strings based on a grammar. The grammar will be represented as a list of possible transitions for each state. A transition specifies the string to output (or a grammar to generate it) and the next state.
End of explanation
for _ in range(25):
print(generate_string(default_reber_grammar), end=" ")
Explanation: Let's generate a few strings based on the default Reber grammar:
End of explanation
for _ in range(25):
print(generate_string(embedded_reber_grammar), end=" ")
Explanation: Looks good. Now let's generate a few strings based on the embedded Reber grammar:
End of explanation
def generate_corrupted_string(grammar, chars="BEPSTVX"):
good_string = generate_string(grammar)
index = np.random.randint(len(good_string))
good_char = good_string[index]
bad_char = choice(list(set(chars) - set(good_char)))
return good_string[:index] + bad_char + good_string[index + 1:]
Explanation: Okay, now we need a function to generate strings that do not respect the grammar. We could generate a random string, but the task would be a bit too easy, so instead we will generate a string that respects the grammar, and we will corrupt it by changing just one character:
End of explanation
for _ in range(25):
print(generate_corrupted_string(embedded_reber_grammar), end=" ")
Explanation: Let's look at a few corrupted strings:
End of explanation
def string_to_one_hot_vectors(string, n_steps, chars="BEPSTVX"):
char_to_index = {char: index for index, char in enumerate(chars)}
output = np.zeros((n_steps, len(chars)), dtype=np.int32)
for index, char in enumerate(string):
output[index, char_to_index[char]] = 1.
return output
string_to_one_hot_vectors("BTBTXSETE", 12)
Explanation: It's not possible to feed a string directly to an RNN: we need to convert it to a sequence of vectors first. Each vector will represent a single letter, using a one-hot encoding. For example, the letter "B" will be represented as the vector [1, 0, 0, 0, 0, 0, 0], the letter "E" will be represented as [0, 1, 0, 0, 0, 0, 0] and so on. Let's write a function that converts a string to a sequence of such one-hot vectors. Note that if the string is shorter than n_steps, it will be padded with zero vectors (later, we will tell TensorFlow how long each string actually is using the sequence_length parameter).
End of explanation
def generate_dataset(size):
good_strings = [generate_string(embedded_reber_grammar)
for _ in range(size // 2)]
bad_strings = [generate_corrupted_string(embedded_reber_grammar)
for _ in range(size - size // 2)]
all_strings = good_strings + bad_strings
n_steps = max([len(string) for string in all_strings])
X = np.array([string_to_one_hot_vectors(string, n_steps)
for string in all_strings])
seq_length = np.array([len(string) for string in all_strings])
y = np.array([[1] for _ in range(len(good_strings))] +
[[0] for _ in range(len(bad_strings))])
rnd_idx = np.random.permutation(size)
return X[rnd_idx], seq_length[rnd_idx], y[rnd_idx]
X_train, l_train, y_train = generate_dataset(10000)
Explanation: We can now generate the dataset, with 50% good strings, and 50% bad strings:
End of explanation
X_train[0]
Explanation: Let's take a look at the first training instance:
End of explanation
l_train[0]
Explanation: It's padded with a lot of zeros because every instance is padded up to the length of the longest string in the dataset. How long is this particular string?
End of explanation
y_train[0]
Explanation: What class is it?
End of explanation
reset_graph()
possible_chars = "BEPSTVX"
n_inputs = len(possible_chars)
n_neurons = 30
n_outputs = 1
learning_rate = 0.02
momentum = 0.95
X = tf.placeholder(tf.float32, [None, None, n_inputs], name="X")
seq_length = tf.placeholder(tf.int32, [None], name="seq_length")
y = tf.placeholder(tf.float32, [None, 1], name="y")
gru_cell = tf.contrib.rnn.GRUCell(num_units=n_neurons)
outputs, states = tf.nn.dynamic_rnn(gru_cell, X, dtype=tf.float32,
sequence_length=seq_length)
logits = tf.layers.dense(states, n_outputs, name="logits")
y_pred = tf.cast(tf.greater(logits, 0.), tf.float32, name="y_pred")
y_proba = tf.nn.sigmoid(logits, name="y_proba")
xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.MomentumOptimizer(learning_rate=learning_rate,
momentum=momentum,
use_nesterov=True)
training_op = optimizer.minimize(loss)
correct = tf.equal(y_pred, y, name="correct")
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32), name="accuracy")
init = tf.global_variables_initializer()
saver = tf.train.Saver()
Explanation: Perfect! We are ready to create the RNN to identify good strings. We build a sequence classifier very similar to the one we built earlier to classify MNIST images, with two main differences:
* First, the input strings have variable length, so we need to specify the sequence_length when calling the dynamic_rnn() function.
* Second, this is a binary classifier, so we only need one output neuron that will output, for each input string, the estimated logit (i.e., the log-odds) that it is a good string. For multiclass classification, we used sparse_softmax_cross_entropy_with_logits() but for binary classification we use sigmoid_cross_entropy_with_logits(). A short sanity check of the thresholding this implies is sketched below.
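As a quick illustration (added here, not in the original notebook), classifying an instance as "good" when its logit is positive — which is what y_pred = tf.cast(tf.greater(logits, 0.), ...) does above — is the same as thresholding the sigmoid probability at 0.5:
import numpy as np
sigmoid = lambda z: 1 / (1 + np.exp(-z))
print(sigmoid(0.0), sigmoid(2.0) > 0.5, sigmoid(-2.0) > 0.5)  # 0.5 True False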
End of explanation
X_val, l_val, y_val = generate_dataset(5000)
n_epochs = 50
batch_size = 50
with tf.Session() as sess:
init.run()
for epoch in range(n_epochs):
X_batches = np.array_split(X_train, len(X_train) // batch_size)
l_batches = np.array_split(l_train, len(l_train) // batch_size)
y_batches = np.array_split(y_train, len(y_train) // batch_size)
for X_batch, l_batch, y_batch in zip(X_batches, l_batches, y_batches):
loss_val, _ = sess.run(
[loss, training_op],
feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_train = accuracy.eval(feed_dict={X: X_batch, seq_length: l_batch, y: y_batch})
acc_val = accuracy.eval(feed_dict={X: X_val, seq_length: l_val, y: y_val})
print("{:4d} Train loss: {:.4f}, accuracy: {:.2f}% Validation accuracy: {:.2f}%".format(
epoch, loss_val, 100 * acc_train, 100 * acc_val))
saver.save(sess, "my_reber_classifier")
Explanation: Now let's generate a validation set so we can track progress during training:
End of explanation
test_strings = [
"BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVETE",
"BPBTSSSSSSSSSSSSXXTTTTTVPXTTVPXTTTTTTTVPXVPXVPXTTTVVEPE"]
l_test = np.array([len(s) for s in test_strings])
max_length = l_test.max()
X_test = [string_to_one_hot_vectors(s, n_steps=max_length)
for s in test_strings]
with tf.Session() as sess:
saver.restore(sess, "my_reber_classifier")
y_proba_val = y_proba.eval(feed_dict={X: X_test, seq_length: l_test})
print()
print("Estimated probability that these are Reber strings:")
for index, string in enumerate(test_strings):
print("{}: {:.2f}%".format(string, y_proba_val[index][0]))
Explanation: Now let's test our RNN on two tricky strings: the first one is bad while the second one is good. They only differ by the second to last character. If the RNN gets this right, it shows that it managed to notice the pattern that the second letter should always be equal to the second to last letter. That requires a fairly long short-term memory (which is the reason why we used a GRU cell).
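(As a small illustrative check, not in the original solution, you can verify that the two test strings differ only at the second-to-last position:)
s1, s2 = test_strings
print([i for i in range(len(s1)) if s1[i] != s2[i]], len(s1) - 2)  # the single differing index equals len - 2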
End of explanation |
5,669 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scripts
Calculate Mutual Info
The script "calculate_mutual_info.py" takes as an input a file containing various time-series replicas
Step1: then the argument parser is defined
Step2: Arguments
The Input file format has been already described. Other options give the possibility to
Step3: Following our exploration of the script we now enter in the actual execution.
Firstly the options are stored in more readable variables.
Step4: and finally the data is read from the file specified from the user and if the --interleave option has been selected the data is reorganized as described above and stored in a numpy array named dat
Step5: Then the dat array is used to create an instance of the TimeSer Object defined in the ts module.
Step6: In the TimeSer Object a series of procedre for the calculation of Statistical Entropy, Mutual Information and Information Transfer are available.
The most crucial and time consuming functions multiple are actually wrapper for FORTRAN90 code that have been compiled with the f2py tool. These function variants are identified by the post-fix "for".
Some of the functions have also been parallelized with OpenMP. The Module automatically identify the number of processors available on the computer, and automatically gnereates an equal number of threads and equally distributes the calculations among these threads. The parallelized versions are identified by the post-fix "omp".
In the script here presented we use the "OMP" version of the "mutual_info()" function [called "mutual_info_omp()"], wich produces as an output two numpy arrays
Step7: Then finally an image representing the Mutual Information matrix is generated using matplotlib.
Step8: If asked the image is also saved to a file in the SVG format (Scalable Vector Graphics) that can be easily opened with any vector graphic editor (e.g. Inkscape, Adobe Illustrator)
Step9: And the Mutual Information Matrix is also saved to disk in text format. | Python Code:
import ts
import matplotlib.pyplot as plt
import numpy as np
from argparse import ArgumentParser
Explanation: Scripts
Calculate Mutual Info
The script "calculate_mutual_info.py" takes as an input a file containing various time-series replicas: each column will be interpreted as different replica and each row will be a different value as a function of time.
The replicas need to have the same number of time-measures (i.e. the same number of rows).
The output will contain a symmetric matrix of size (N x N) where N = number of replicas, which contains the Mutual Information of each replica against the others (on the diagonal the values of Information Entropy of each replica).
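As a reminder (a general identity, not something specific to this module), the mutual information between two replicas can be written in terms of their entropies, $MI(X_i, X_j) = H(X_i) + H(X_j) - H(X_i, X_j)$, which is why the diagonal entries (where $i = j$) reduce to the information entropy of each replica.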
The script starts by loading the needed packages:
End of explanation
parser = ArgumentParser( description = 'Calculate Mutual Information')
#
# SEE FILE FOR DETAILS.
#
options = parser.parse_args()
Explanation: then the argument parser is defined:
End of explanation
def interleave(data,ndim):
    nfr, nrep = data.shape
    out = np.zeros(data.shape)
    for i in range(nrep//ndim):        # loop over replicas
        for j in range(ndim):          # loop over the dimensions of each replica
            out[:,ndim*i+j] = data[:,j*(nrep//ndim)+i]
    return out
Explanation: Arguments
The Input file format has been already described. Other options give the possibility to :
load and analyse the time-series using only one every n-th frame (--stride)
define the number of bins to be used to build the histograms (--nbins)
use a simple (and not so clever) optimization to calculate the optimal bin-width (--opt)
specify the dimensionality and the organization of the data in the input file (--ndim and --interleave)
For more information concerning this aspect, read the next paragraph
create an image containing a representation of the results (--plot)
Data dimensionality and reorganization
By default the program assumes that the data are 1-dimensional time series, so if the input file contains N
columns it will generate N replicas.
But the data can also be multi-dimensional: if the user specifies that the data are k-dimensional and the
input file contains N columns, it will generate N/k replicas.
If the user specifies that the data are to be represented in k ($>1$) dimensions, by default the
script assumes that the values of the various dimensions of a given replica are in consecutive columns.
EXAMPLE:
If we specify --ndim 3 and the file contains 6 columns, the program will generate 2 3-dim replicas, and it will assume that the columns in the input file are:
X1 Y1 Z1 X2 Y2 Z2
i.e. : the 1-st column is the 1-st dimension of the 1-st replica, the 2-nd column is the 2-nd dimension of the 1-st replica, and so on.
Specifying the option --interleave, the user can modify this behaviour and the script will instead assume that the input data are organized as follows:
X1 X2 Y1 Y2 Z1 Z2
i.e. : the first N/k columns are the 1-st dimension of the replicas, followed by N/k columns containing the 2-nd dimension, and so on.
Description
The reorganization of the data in the correct order is made using the following function:
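As a quick illustration (a toy example added here, not part of the original script), a 4-frame, 6-column array laid out as X1 X2 Y1 Y2 Z1 Z2 gets reordered into X1 Y1 Z1 X2 Y2 Z2:
toy = np.arange(24).reshape(4, 6)   # columns in the order X1 X2 Y1 Y2 Z1 Z2
print(interleave(toy, 3)[0])        # first row comes back reordered as X1 Y1 Z1 X2 Y2 Z2, i.e. [0 2 4 1 3 5] (as floats)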
End of explanation
f_dat = options.dat
f_out = options.out
stride = options.stride
Explanation: Following our exploration of the script we now enter in the actual execution.
Firstly the options are stored in more readable variables.
End of explanation
dat = np.loadtxt(f_dat)
dat = dat[::stride]
if (options.interleave) & (options.ndim != 1):
dat = interleave(dat,options.ndim)
Explanation: and finally the data is read from the file specified from the user and if the --interleave option has been selected the data is reorganized as described above and stored in a numpy array named dat
End of explanation
DATA= ts.TimeSer(dat,len(dat),dim=options.ndim,nbins=options.nbins)
DATA.calc_bins(opt=options.opt)
Explanation: Then the dat array is used to create an instance of the TimeSer Object defined in the ts module.
End of explanation
M, E = DATA.mutual_info_omp()
Explanation: In the TimeSer Object a series of procedures for the calculation of Statistical Entropy, Mutual Information and Information Transfer is available.
The most crucial and time-consuming functions are actually wrappers for FORTRAN90 code that has been compiled with the f2py tool. These function variants are identified by the post-fix "for".
Some of the functions have also been parallelized with OpenMP. The module automatically identifies the number of processors available on the computer, generates an equal number of threads and distributes the calculations evenly among these threads. The parallelized versions are identified by the post-fix "omp".
In the script presented here we use the "OMP" version of the "mutual_info()" function [called "mutual_info_omp()"], which produces as an output two numpy arrays:
M [ size (NxN) N = num. of replicas ] : Mutual Information.
E [ size (NxN), N = num. of replicas ] : Entropies of the joint distributions of the replicas.
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
mat = ax.matshow(M)
fig.colorbar(mat)
plt.show()
Explanation: Then finally an image representing the Mutual Information matrix is generated using matplotlib.
End of explanation
if options.plot:
fig.savefig(f_out.split('.')[0]+".svg",format='svg')
Explanation: If asked the image is also saved to a file in the SVG format (Scalable Vector Graphics) that can be easily opened with any vector graphic editor (e.g. Inkscape, Adobe Illustrator)
End of explanation
np.savetxt(f_out,M)
quit()
Explanation: And the Mutual Information Matrix is also saved to disk in text format.
End of explanation |
5,670 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TwoPortOnePath, EnhancedResponse, and FakeFlip
Intro
This example demonstrates a macgyver-ish shortcut you can take if you are measuring a device that is reciprocal and symmetric on a switch-less three-receiver system. For more information about error correction this type of architecture, see Calibration With Three Receivers.
In general, full error correction of a 2-port network on a switchless three-receiver architecture requires each DUT to measured in two orientations. However, if the DUT is known to be reciprocal ($S_{21}=S_{12}$) and symmetric ($S_{11}=S_{22}$), then measurements in both orientations produce the same response, and therefore are unnecessary.
The following worked example compares the corrected response of a 10dB attenuator at WR-12 as corrected using full error correction and pseudo-full error correction using
Step1: Example
These measurements were taken on an Agilent PNAX with a set of VDI WR-12 TXRX-RX Frequency Extender heads. The measurements of the calibration standards and DUTs were downloaded from the VNA by saving touchstone files of the raw s-parameter data to disk.
In the code that follows a TwoPortOnePath calibration is created from corresponding measured and ideal responses of the calibration standards. The measured networks are read from disk, while their corresponding ideal responses are generated using scikit-rf. More information about using scikit-rf to do offline calibrations can be found here.
Step2: Correction Options
With the calibration created above, we compare the corrected response of WR-12 10dB attenuator using Full, Pseudo-Full, and Partial Correction. Each correction algorithm is described below.
Step3: Full Correction (TwoPortOnePath)
Full correction on this type of architecture has been called TwoPortOnePath. In scikit-rf using this correction algorithm requires the device to be measured in both orientations, forward and reverse, and passing them both to the apply_cal() function as a tuple. Neglecting the connector uncertainty, this type of correction is identical to full two-port SOLT calibration.
Pseudo-full Correction ( FakeFlip)
If we assume the DUT is reciprocal and symmetric, then measuring the device in both orientations will produce the same result. Therefore, the reverse orientation measurement may be replaced by a copy of the forward orientation measurement. We refer to this technique as the Fake Flip.
<div class="alert ">
**Warning** | Python Code:
from IPython.display import *
Image('three_receiver_cal/pics/macgyver.jpg', width='50%')
Explanation: TwoPortOnePath, EnhancedResponse, and FakeFlip
Intro
This example demonstrates a macgyver-ish shortcut you can take if you are measuring a device that is reciprocal and symmetric on a switch-less three-receiver system. For more information about error correction this type of architecture, see Calibration With Three Receivers.
In general, full error correction of a 2-port network on a switchless three-receiver architecture requires each DUT to measured in two orientations. However, if the DUT is known to be reciprocal ($S_{21}=S_{12}$) and symmetric ($S_{11}=S_{22}$), then measurements in both orientations produce the same response, and therefore are unnecessary.
The following worked example compares the corrected response of a 10dB attenuator at WR-12 as corrected using full error correction and pseudo-full error correction using:
Full Correction
Pseudo-Full Correction (FakeFlip)
Partial (EnhancedResponse)
End of explanation
import skrf as rf
%matplotlib inline
from pylab import *
rf.stylely()
from skrf.calibration import TwoPortOnePath
from skrf.media import RectangularWaveguide
from skrf import two_port_reflect as tpr
from skrf import mil
raw = rf.read_all_networks('three_receiver_cal/data/')
# pull frequency information from measurements
frequency = raw['short'].frequency
# the media object
wg = RectangularWaveguide(frequency=frequency, a=120*mil, z0=50)
# list of 'ideal' responses of the calibration standards
ideals = [wg.short(nports=2),
tpr(wg.delay_short( 90,'deg'), wg.match()),
wg.match(nports=2),
wg.thru()]
# corresponding measurements to the 'ideals'
measured = [raw['short'],
raw['quarter wave delay short'],
raw['load'],
raw['thru']]
# the Calibration object
cal = TwoPortOnePath(measured = measured, ideals = ideals )
Explanation: Example
These measurements were taken on an Agilent PNAX with a set of VDI WR-12 TXRX-RX Frequency Extender heads. The measurements of the calibration standards and DUTs were downloaded from the VNA by saving touchstone files of the raw s-parameter data to disk.
In the code that follows a TwoPortOnePath calibration is created from corresponding measured and ideal responses of the calibration standards. The measured networks are read from disk, while their corresponding ideal responses are generated using scikit-rf. More information about using scikit-rf to do offline calibrations can be found here.
End of explanation
Image('three_receiver_cal/pics/symmetric DUT.jpg', width='75%')
Explanation: Correction Options
With the calibration created above, we compare the corrected response of WR-12 10dB attenuator using Full, Pseudo-Full, and Partial Correction. Each correction algorithm is described below.
End of explanation
dutf = raw['attenuator (forward)']
dutr = raw['attenuator (reverse)']
# note the correction algorithm is different depending on what is passed to
# apply_cal
corrected_full = cal.apply_cal((dutf, dutr))
corrected_fakeflip = cal.apply_cal((dutf,dutf))
corrected_partial = cal.apply_cal(dutf)
f, ax = subplots(2,2, figsize=(8,8))
for m in [0,1]:
for n in [0,1]:
ax_ = ax[m,n]
ax_.set_title('$S_{%i%i}$'%(m+1,n+1))
corrected_full.plot_s_db(m,n, label='Full Correction',ax=ax_ )
corrected_fakeflip.plot_s_db(m,n, label='Pseudo-full Correction', ax=ax_)
if n==0:
corrected_partial.plot_s_db(m,n, label='Partial Correction', ax=ax_)
tight_layout()
Explanation: Full Correction (TwoPortOnePath)
Full correction on this type of architecture has been called TwoPortOnePath. In scikit-rf using this correction algorithm requires the device to be measured in both orientations, forward and reverse, and passing them both to the apply_cal() function as a tuple. Neglecting the connector uncertainty, this type of correction is identical to full two-port SOLT calibration.
Pseudo-full Correction ( FakeFlip)
If we assume the DUT is reciprocal and symmetric, then measuring the device in both orientations will produce the same result. Therefore, the reverse orientation measurement may be replaced by a copy of the forward orientation measurement. We refer to this technique as the Fake Flip.
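Before trusting the Fake Flip, it can be worth checking how reciprocal and symmetric the raw forward measurement actually is. A minimal sketch, added here for illustration only (dutf is the raw, uncorrected forward measurement defined above, so this check is only indicative):
# rough reciprocity / symmetry check on the raw forward measurement
print(abs(dutf.s[:, 1, 0] - dutf.s[:, 0, 1]).max())   # S21 vs S12
print(abs(dutf.s[:, 0, 0] - dutf.s[:, 1, 1]).max())   # S11 vs S22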
<div class="alert ">
**Warning**:
Be sure that you understand the assumptions of reciprocity and symmetry before using this macgyver technique, incorrect usage can lead to nonsense results.
</div>
Partial Correction (EnhancedResponse)
If you pass a single measurement to the apply_cal() function, then the calibration will employ partial correction. This type of correction is known as EnhancedResponse. While the Fake Flip technique assumes the device is reciprocal and symmetric, the EnhancedResponse algorithm implicitly assumes that port 2 of the device is perfectly matched. The accuracy of the corrected result produced with either of these algorithms depends on the accuracy of the assumptions.
Comparison
End of explanation |
5,671 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
Step1: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Here, just creating some placeholders like normal.
Step6: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper
Step7: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've build before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Exercise
Step9: Model Loss
Calculating the loss like before, nothing new here.
Step11: Optimizers
Again, nothing new here.
Step12: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network.
Step15: Hyperparameters
GANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise | Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
    with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Test Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
            idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), self.scaler(y)
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
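As a quick check of the scale() helper defined above (illustrative only):
print(scale(np.array([0, 128, 255])))  # roughly [-1., ~0., 1.]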
End of explanation
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x = tf.layers.dense(z, 512*16)
        x = tf.reshape(x, (-1, 4, 4, 512)) # 4x4x512
# leaky relu
leaky_relu = lambda x: tf.maximum(x*alpha, x)
# convolutions
conv_1 = tf.layers.conv2d_transpose(x, 256, 5, strides=2, padding='same') # 8x8x256
conv_1 = tf.layers.batch_normalization(conv_1, training=training)
conv_1 = leaky_relu(conv_1)
conv_2 = tf.layers.conv2d_transpose(conv_1, 128, 5, strides=2, padding='same') # 16x16x128
conv_2 = tf.layers.batch_normalization(conv_2, training=training)
conv_2 = leaky_relu(conv_2)
# Output layer, 32x32x3
        logits = tf.layers.conv2d_transpose(conv_2, 3, 5, strides=2, padding='same')
out = tf.tanh(logits)
return out
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
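For example, one such generator block might be sketched as follows (illustrative only; h is a placeholder name, and x, training and alpha are assumed to come from the enclosing generator function — this is not the required solution):
# one transposed-conv block: conv2d_transpose > batch normalization > leaky ReLU
h = tf.layers.conv2d_transpose(x, 256, 5, strides=2, padding='same')
h = tf.layers.batch_normalization(h, training=training)
h = tf.maximum(alpha * h, h)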
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
End of explanation
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x = tf.layers.conv2d(x, 16, 4, (2, 2), padding='same')
x = tf.maximum(x * alpha, x)
# now 16x16x16
x = tf.layers.conv2d(x, 32, 4, (2, 2), padding='same')
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(x * alpha, x)
# now 8x8x32
x = tf.layers.conv2d(x, 64, 4, (2, 2), padding='same')
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(x * alpha, x)
# now 4x4x64
x = tf.reshape(x, (-1, 4*4*64))
logits = tf.layers.dense(x, 1, name='output')
out = tf.nn.sigmoid(logits)
return out, logits
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The inputs to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
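One such discriminator block could look like the sketch below (an illustrative placeholder, with h, x and alpha assumed from the enclosing function; not the required solution):
# strided convolution > batch norm > leaky ReLU (downsamples width and height by 2)
h = tf.layers.conv2d(x, 32, 5, strides=2, padding='same')
h = tf.layers.batch_normalization(h, training=True)
h = tf.maximum(alpha * h, h)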
Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 image, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
End of explanation
def model_loss(input_real, input_z, output_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
Explanation: Optimizers
Again, nothing new here.
End of explanation
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=0.2)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, 0.5)
Explanation: Building the model
Here we can use the functions we defined above to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img)
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
Explanation: Here is a function for displaying generated images.
End of explanation
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(50, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 5, 10, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
Explanation: And another function we can use to train our network.
End of explanation
real_size = (32,32,3)
z_size = 100
learning_rate = 0.0002
batch_size = 128
epochs = 1
alpha = 0.2
beta1 = 0.5
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 5, 10, figsize=(10,5))
Explanation: Hyperparameters
GANs are very senstive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.
End of explanation |
5,672 | Given the following text description, write Python code to implement the functionality described.
Description:
Last digit in a power of 2
Python3 program to find last digit in a power of 2. ; Corner case ; Find the shift in current cycle and return value accordingly ; Driver code
| Python Code:
def lastDigit2PowerN(n ) :
if n == 0 :
return 1
elif n % 4 == 1 :
return 2
elif n % 4 == 2 :
return 4
elif n % 4 == 3 :
return 8
    else :
        return 6
for n in range(20 ) :
    print(lastDigit2PowerN(n ) , end = " ")
|
5,673 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seem to need to do one run before the profiling
Every time the first run looks different than the rest, so we run one and throw it out.
Step1: Memory profile without sigma clipping
Step2: Memory profile with sigma clipping | Python Code:
_, _ = run_memory_profile(num_files, sampling_interval, size=image_size,
memory_limit=memory_limit, combine_method='average')
Explanation: Seem to need to do one run before the profiling
Every time, the first run looks different from the rest, so we run one and throw it out.
End of explanation
n_repetitions = 4
def run_them(runs, clipping=False):
for combine_method in runs.keys():
for _ in range(n_repetitions):
mem_use, img_size = run_memory_profile(num_files, sampling_interval, size=image_size,
memory_limit=memory_limit, combine_method=combine_method,
sigma_clip=clipping)
gc.collect()
runs[combine_method]['times'].append(np.arange(len(mem_use)) * sampling_interval)
runs[combine_method]['memory'].append(mem_use)
runs[combine_method]['image_size'] = img_size
runs[combine_method]['memory_limit'] = memory_limit
runs[combine_method]['clipping'] = clipping
run_them(runs)
styles = ['solid', 'dashed', 'dotted']
plt.figure(figsize=(20, 10))
for idx, method in enumerate(runs.keys()):
style = styles[idx % len(styles)]
for i, data in enumerate(zip(runs[method]['times'], runs[method]['memory'])):
time, mem_use = data
if i == 0:
label = 'Memory use in {} combine (repeated runs same style)'.format(method)
alpha = 1.0
else:
label = ''
alpha = 0.4
plt.plot(time, mem_use, linestyle=style, label=label, alpha=alpha)
plt.vlines(-40 * sampling_interval, mem_use[0], mem_use[0] + memory_limit/1e6, colors='red', label='Memory use limit')
plt.vlines(-20 * sampling_interval, mem_use[0], mem_use[0] + runs[method]['image_size']/1e6, label='size of one image')
plt.grid()
clipped = 'ON' if runs[method]['clipping'] else 'OFF'
plt.title('ccdproc commit {}; {} repetitions per method; sigma_clip {}'.format(commit, n_repetitions, clipped),
fontsize=20)
plt.xlabel('Time (sec)', fontsize=20)
plt.ylabel('Memory use (MB)', fontsize=20)
plt.legend(fontsize=20)
plt.savefig('commit_{}_reps_{}_clip_{}_memlim_{}GB.png'.format(commit, n_repetitions, clipped, memory_limit/1e9))
Explanation: Memory profile without sigma clipping
End of explanation
run_them(runs_clip, clipping=True)
plt.figure(figsize=(20, 10))
for idx, method in enumerate(runs_clip.keys()):
style = styles[idx % len(styles)]
for i, data in enumerate(zip(runs_clip[method]['times'], runs_clip[method]['memory'])):
time, mem_use = data
if i == 0:
label = 'Memory use in {} combine (repeated runs same style)'.format(method)
alpha = 1.0
else:
label = ''
alpha = 0.4
plt.plot(time, mem_use, linestyle=style, label=label, alpha=alpha)
plt.vlines(-40 * sampling_interval, mem_use[0], mem_use[0] + memory_limit/1e6, colors='red', label='Memory use limit')
plt.vlines(-20 * sampling_interval, mem_use[0], mem_use[0] + runs_clip[method]['image_size']/1e6, label='size of one image')
plt.grid()
clipped = 'ON' if runs_clip[method]['clipping'] else 'OFF'
plt.title('ccdproc commit {}; {} repetitions per method; sigma_clip {}'.format(commit, n_repetitions, clipped),
fontsize=20)
plt.xlabel('Time (sec)', fontsize=20)
plt.ylabel('Memory use (MB)', fontsize=20)
plt.legend(fontsize=20)
plt.savefig('commit_{}_reps_{}_clip_{}_memlim_{}GB.png'.format(commit, n_repetitions, clipped, memory_limit/1e9))
Explanation: Memory profile with sigma clipping
End of explanation |
5,674 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference
Step1: imports for Python, Pandas
Step2: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source
Step3: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source
Step4: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
Step5: Top 10 Countries with most projects
Step6: Top 10 major project themes
Step7: Dataframe with the missing names filled in
Top 10 themes | Python Code:
import pandas as pd
Explanation: JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference: http://pandas-docs.github.io/pandas-docs-travis/io.html#json
data source: http://jsonstudio.com/resources/
End of explanation
import json
from pandas.io.json import json_normalize
Explanation: imports for Python, Pandas
End of explanation
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
Explanation: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas-docs.github.io/pandas-docs-travis/io.html#normalization
End of explanation
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
Explanation: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/
End of explanation
# load json data frame
dataFrame = pd.read_json('data/world_bank_projects.json')
dataFrame
dataFrame.info()
dataFrame.columns
Explanation: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation
dataFrame.groupby(dataFrame.countryshortname).count().sort('_id', ascending=False).head(10)
Explanation: Top 10 Countries with most projects
End of explanation
themeNameCode = []
for codes in dataFrame.mjtheme_namecode:
themeNameCode += codes
themeNameCode = json_normalize(themeNameCode)
themeNameCode['count']=themeNameCode.groupby('code').transform('count')
themeNameCode.sort('count', ascending=False).drop_duplicates().head(10)
#Create dictionary Code:Name to replace empty names.
codeNameDict = {}
for codes in dataFrame.mjtheme_namecode:
for code in codes:
if code['name']!='':
codeNameDict[code['code']]=code['name']
index=0
for codes in dataFrame.mjtheme_namecode:
innerIndex=0
for code in codes:
if code['name']=='':
dataFrame.mjtheme_namecode[index][innerIndex]['name']=codeNameDict[code['code']]
innerIndex += 1
index += 1
themeNameCode = []
for codes in dataFrame.mjtheme_namecode:
themeNameCode += codes
Explanation: Top 10 major project themes
End of explanation
themeNameCode = json_normalize(themeNameCode)
themeNameCode['count']=themeNameCode.groupby('code').transform('count')
themeNameCode.sort('count', ascending=False).drop_duplicates().head(10)
Explanation: Dataframe with the missing names filled in
Top 10 themes
End of explanation |
5,675 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dynamic Testing
We want to measure the dynamical characteristics of a SDOF building system,
i.e., its mass, its damping coefficient and its elastic stiffness.
To this purpose, we demonstrate that is sufficient to measure the steady-state
response of the SDOF when subjected to a number of harmonic excitations with
different frequencies.
The steady-state response is characterized by its amplitude, $ρ$ and the phase
delay, $θ$, as in $x_{SS}(t) = ρ \sin(ωt − θ)$.
A SDOF structural system is excited by a vibrodyne that exerts a harmonic force
$p(t) = p_o\sin ωt$, with $p_o = 2.224\,{}$kN at different frequencies, and we can
measure the steady-state response parameters for two different input frequencies,
as detailed in the following table.
<style type="text/css">
.tg {border-collapse
Step1: Determination of $\zeta$
Using the previously established relationship for $\cos\vartheta$, we
can write $\cos\vartheta=k(1-\beta^2)\frac{\rho}{p_o}$, from the
equation of the phase angle (see above), we can write $\cos\vartheta =
\frac{1-\beta^2}{2\zeta\beta}\sin\vartheta$, and finally
$$\frac{\rho k}{p_o}=\frac{\sin\vartheta}{2\zeta\beta},
\quad\text{hence}\quad
\zeta=\frac{p_o}{\rho k}\frac{\sin\vartheta}{2\beta}$$
Lets write some code that gives us our two wstimates | Python Code:
from scipy import matrix, sqrt, pi, cos, sin, set_printoptions
p0 = 2224.0 # converted from kN to Newton
rho1 = 183E-6 ; rho2 = 368E-6 # converted from μm to m
w1 = 16.0 ; w2 = 25.0
th1 = 15.0 ; th2 = 55.0
d2r = pi/180.
cos1 = cos(d2r*th1) ; cos2 = cos(d2r*th2)
sin1 = sin(d2r*th1) ; sin2 = sin(d2r*th2)
# the unknowns are k and m
# coefficient matrix, row i is 1, omega_i^2
coeff = matrix(((1, -w1**2),(1, -w2**2)))
# kt i.e., know term, cos(theta_i)/rho_i * p_0
kt = matrix((cos1/rho1,cos2/rho2)).T*p0
print(coeff)
print(kt)
k_and_m = coeff.I*kt
k, m = k_and_m[0,0], k_and_m[1,0]
wn2, wn = k/m, sqrt(k/m)
print(' k m wn2 wn')
print(k, m, wn2, wn)
Explanation: Dynamic Testing
We want to measure the dynamical characteristics of a SDOF building system,
i.e., its mass, its damping coefficient and its elastic stiffness.
To this purpose, we demonstrate that is sufficient to measure the steady-state
response of the SDOF when subjected to a number of harmonic excitations with
different frequencies.
The steady-state response is characterized by its amplitude, $ρ$ and the phase
delay, $θ$, as in $x_{SS}(t) = ρ \sin(ωt − θ)$.
A SDOF structural system is excited by a vibrodyne that exerts a harmonic force
$p(t) = p_o\sin ωt$, with $p_o = 2.224\,{}$kN at different frequencies, and we can
measure the steady-state response parameters for two different input frequencies,
as detailed in the following table.
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;text-align:center;}
.tg td{font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;text-align:center;}
.tg th{font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;text-align:center;}
</style>
<center>
<table class="tg">
<tr>
<th class="tg-031e">$i$</th>
<th class="tg-031e">$ω_i$ (rad/s)</th>
<th class="tg-031e">$ρ_i$ (μm)</th>
<th class="tg-031e">$θ_i$ (deg) </th>
<th class="tg-031e">$\cos θ_i$</th>
<th class="tg-031e">$\sin θ_i$</th>
</tr>
<tr>
<td class="tg-031e">1</td>
<td class="tg-031e">16.0</td>
<td class="tg-031e">183.0</td>
<td class="tg-031e">15.0</td>
<td class="tg-031e">0.966</td>
<td class="tg-031e">0.259</td>
</tr>
<tr>
<td class="tg-031e">2</td>
<td class="tg-031e">25.0</td>
<td class="tg-031e">368.0</td>
<td class="tg-031e">55.0</td>
<td class="tg-031e">0.574</td>
<td class="tg-031e">0.819</td>
</tr>
</table>
</center>
Determination of $k$ and $m$
We start from the equation for steady-state response amplitude,
$$\rho=\frac{p_o}{k}\frac{1}{\sqrt{(1-\beta^2)^2+(2\zeta\beta)^2}}$$
where we collect $(1-\beta^2)^2$ in the radicand in the right member,
$$\rho=\frac{p_o}{k}\frac{1}{1-\beta^2}\frac{1}{\sqrt{1+[2\zeta\beta/(1-\beta^2)]^2}}$$
but the equation for the phase angle,
$\tan\vartheta=\frac{2\zeta\beta}{1-\beta^2}$, can be substituted in
the radicand, so that, using simple trigonometric identities, we find that
$$\rho=\frac{p_o}{k}\frac{1}{1-\beta^2}\frac{1}{\sqrt{1+\tan^2\vartheta}}=
\frac{p_o}{k}\frac{\cos\vartheta}{1-\beta^2}.$$
With $k(1-\beta^2)=k-k\frac{\omega^2}{k/m}=k-\omega^2m$ and using a
simple rearrangement, we eventually have
<center>
$\displaystyle{k-\omega^2m=\frac{p_o}{\rho}\cos\vartheta.}$
</center>
End of explanation
z1 = p0*sin1/rho1/k/2/(w1/wn)
z2 = p0*sin2/rho2/k/2/(w2/wn)
print(z1*100, z2*100)
Explanation: Determination of $\zeta$
Using the previously established relationship for $\cos\vartheta$, we
can write $\cos\vartheta=k(1-\beta^2)\frac{\rho}{p_o}$, from the
equation of the phase angle (see above), we can write $\cos\vartheta =
\frac{1-\beta^2}{2\zeta\beta}\sin\vartheta$, and finally
$$\frac{\rho k}{p_o}=\frac{\sin\vartheta}{2\zeta\beta},
\quad\text{hence}\quad
\zeta=\frac{p_o}{\rho k}\frac{\sin\vartheta}{2\beta}$$
Let's write some code that gives us our two estimates:
End of explanation |
5,676 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A nonlinear perspective on climate change
<p class="gap2"></p>
The following lab will have you explore the concepts developed in Palmer (1999}
1. Exploring the Lorenz System
By now you are well acquainted with the Lorenz system
Step2: Let's write a quick Lorenz solver
Step3: Let's plot this solution
Step5: Very pretty. If you are curious, you can even change the plot angle within the function, and examine this strange attractor under all angles. There are several key properties to this attractor
Step6: Let's start by plotting the solutions for X, Y and Z when $f_0 = 0$ (no pertubation)
Step7: What happened to X? Well in this case X and Y are so close to each other that they plot on top of each other.
Baiscally, the system orbits around some fixed points near $X, Y = \pm 10$. The transitions between these "regimes" are quite random. Sometimes the system hangs out there a while, sometimes not.
Isolating climate fluctuations
Furthermore, you may be overwhelmed by the short term variability in the system. In the climate system, we call this shprt-term variability "weather", and often what is of interest is the long-term behavior of the system (the "climate"). To isolate that, we need to filter the solutions. More precisely, we will apply a butterworth lowpass filter to $X, Y ,Z$ to highlight their slow evolution. (if you ever wonder what it is, take GEOL425L
Step8: (Once again, Y is on top of X. )
Let us now plot the probability of occurence of states in the $(X,Y)$ plane. If all motions were equally likely, this probability would be uniform. Is that what we observe?
Step9: Question 1 ###
How would you describe the probability of visiting states? What do "dark" regions correspond to?
Answer 1
Step10: Now let's have some fun | Python Code:
%matplotlib inline
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
import seaborn as sns
import butter_lowpass_filter as blf
Explanation: A nonlinear perspective on climate change
<p class="gap2"></p>
The following lab will have you explore the concepts developed in Palmer (1999).
1. Exploring the Lorenz System
By now you are well acquainted with the Lorenz system:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \
\dot{y} & = \rho x - y - xz \
\dot{z} & = -\beta z + xy
\end{aligned}
$$
It exhibits a range of different behaviors as the parameters ($\sigma$, $\beta$, $\rho$) are varied.
Wouldn't it be nice if you could tinker with this yourself? That is exactly what you will be doing in this lab.
Everything is based on the programming language Python, which every geoscientist should learn. But all you need to know is that Shit+Enter will execute the current cell and move on to the next one. For all the rest, read this
Note that if you're viewing this notebook statically (e.g. on nbviewer) the examples below will not work. They require connection to a running Python kernel
Let us start by defining a few useful modules
End of explanation
def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
def lorenz_deriv((x, y, z), t0, sigma=sigma, beta=beta, rho=rho):
Compute the time-derivative of a Lorentz system.
return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
# choose a different color for each trajectory
colors = plt.cm.jet(np.linspace(0, 1, N))
# plot the results
sns.set_style("white")
fig = plt.figure(figsize = (8,8))
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
#ax.axis('off')
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c=colors[i])
plt.setp(lines, linewidth=1)
ax.view_init(30, angle)
ax.set_xlabel('X axis')
ax.set_ylabel('Y axis')
ax.set_zlabel('Z axis')
plt.show()
return t, x_t
Explanation: Let's write a quick Lorenz solver
End of explanation
t, x_t = solve_lorenz(max_time=10.0)
Explanation: Let's plot this solution
End of explanation
def forced_lorenz(N=3, fnot=2.5, theta=0, max_time=100.0, sigma=10.0, beta=8./3, rho=28.0):
def lorenz_deriv((x, y, z), t0, sigma=sigma, beta=beta, rho=rho):
        """Compute the time-derivative of the forced Lorenz system."""
c = 2*np.pi/360
return [sigma * (y - x) + fnot*np.cos(theta*c), x * (rho - z) - y + fnot*np.sin(theta*c), x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(25*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
return t, x_t
Explanation: Very pretty. If you are curious, you can even change the plot angle within the function, and examine this strange attractor under all angles. There are several key properties to this attractor:
A forced Lorenz system
As a metaphor for anthropogenic climate change, we now consider the case of a forced Lorenz system. We wish to see what happens if a force is applied in the directions $X$ and $Y$. The magnitude of this force is $f_0$ and we may apply it at some angle $\theta$. Hence:
$f = f_0 \left( \cos(\theta), \sin(\theta) \right)$
The new system is thus:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) + f_0 \cos(\theta) \\
\dot{y} & = \rho x - y - xz + f_0 \sin(\theta) \\
\dot{z} & = -\beta z + xy
\end{aligned}
$$
Does the attractor change? Do the solutions change? If so, how?
Solving the system
Let us define a function to solve this new system:
End of explanation
sns.set_style("darkgrid")
sns.set_palette("Dark2")
t, x_t = forced_lorenz(fnot = 0, theta = 50,max_time = 100.00)
xv = x_t[0,:,:]
# time filter
lab = 'X', 'Y', 'Z'
col = sns.color_palette("Paired")
fig = plt.figure(figsize = (8,8))
xl = np.empty(xv.shape)
for k in range(3):
xl[:,k] = blf.filter(xv[:,k],0.5,fs=25)
plt.plot(t,xv[:,k],color=col[k*2])
#plt.plot(t,xl[:,k],color=col[k*2+1],lw=3.0)
plt.legend(lab)
plt.show()
Explanation: Let's start by plotting the solutions for X, Y and Z when $f_0 = 0$ (no perturbation):
End of explanation
# Timeseries PLOT
sns.set_palette("Dark2")
t, x_t = forced_lorenz(fnot = 0, theta = 50,max_time = 1000.00)
xv = x_t[0,:,:]
# time filter
lab = 'X', 'lowpass-filtered X', 'Y', 'lowpass-filtered Y', 'Z','lowpass-filtered Z'
col = sns.color_palette("Paired")
fig = plt.figure(figsize = (8,8))
xl = np.empty(xv.shape)
for k in range(3):
xl[:,k] = blf.filter(xv[:,k],0.5,fs=25)
plt.plot(t,xv[:,k],color=col[k*2])
plt.plot(t,xl[:,k],color=col[k*2+1],lw=3.0)
plt.legend(lab)
plt.xlim(0,100)
# Be patient... this could take a few seconds to complete.
Explanation: What happened to X? Well in this case X and Y are so close to each other that they plot on top of each other.
Basically, the system orbits around some fixed points near $X, Y = \pm 10$. The transitions between these "regimes" are quite random. Sometimes the system hangs out there a while, sometimes not.
Isolating climate fluctuations
Furthermore, you may be overwhelmed by the short-term variability in the system. In the climate system, we call this short-term variability "weather", and often what is of interest is the long-term behavior of the system (the "climate"). To isolate that, we need to filter the solutions. More precisely, we will apply a Butterworth lowpass filter to $X, Y, Z$ to highlight their slow evolution. (If you ever wonder what that is, take GEOL425L: Data Analysis in the Earth and Environmental Sciences.)
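The filtering itself is done through the local butter_lowpass_filter module imported above, which is not shown in this notebook. For reference, a minimal sketch of what such a helper might look like, built on scipy.signal, is given below; the function name, the filter order, and the exact cutoff convention are assumptions rather than the contents of the actual module.
from scipy import signal
def butter_lowpass(data, cutoff, fs, order=4):
    # Design a Butterworth lowpass filter with the cutoff expressed as a
    # fraction of the Nyquist frequency, then apply it forward and backward
    # (filtfilt) so the smoothed series is not phase-shifted.
    nyquist = 0.5 * fs
    b, a = signal.butter(order, cutoff / nyquist, btype='low')
    return signal.filtfilt(b, a, data)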
End of explanation
cmap = sns.cubehelix_palette(light=1, as_cmap=True)
skip = 10
sns.jointplot(xl[0::skip,0],xl[0::skip,1], kind="kde", color="#4CB391")
Explanation: (Once again, Y is on top of X. )
Let us now plot the probability of occurrence of states in the $(X,Y)$ plane. If all motions were equally likely, this probability would be uniform. Is that what we observe?
End of explanation
def plot_lorenzPDF(fnot, theta, max_time = 1000.00, skip = 10):
t, x_t = forced_lorenz(fnot = fnot, theta = theta,max_time = max_time)
xv = x_t[0,:,:]; xl = np.empty(xv.shape)
for k in range(3):
xl[:,k] = blf.filter(xv[:,k],0.5,fs=25)
g = sns.jointplot(xl[0::skip,0],xl[0::skip,1], kind="kde", color="#4CB391")
return g
Explanation: Question 1
How would you describe the probability of visiting states? What do "dark" regions correspond to?
Answer 1:
Write your answer here
2. Visualizing climate change in the Lorenz system
We now wish to see if this changes once we apply a non-zero forcing. Specifically:
1. Does the attractor change with the applied forcing?
2. If not, can we say something about how frequently some states are visited?
In all the following we set $f_0 = 2.5$ and we tweak $\theta$ to see how the system responds
To ease experimentation, let us first define a function to compute and plot the results:
End of explanation
theta = 50; f0 = 2.5 # assign values of f0 and theta
g = plot_lorenzPDF(fnot = f0, theta = theta)
g.ax_joint.arrow(0, 0, 2*f0*np.cos(theta*np.pi/180), f0*np.sin(theta*np.pi/180), head_width=0.5, head_length=0.5, lw=3.0, fc='r', ec='r')
## (BE PATIENT THIS COULD TAKE UP TO A MINUTE)
Explanation: Now let's have some fun
End of explanation |
5,677 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train A Smartcab to Drive
Christopher Phillippi
This project forks Udacity's Machine Learning Nanodegree Smartcab project with my solution, modifying/adding smartcab/agent.py and smartcab/notebookhelpers.py as well as this README.
Overall summary of the final agent learning algorithm
Step1: Obviously, states are not drawn uniformly, but rather based on the simulated distribution with 3 dummy cars. Thus we’re more likely to have sampled the most likely states, and the missing states are less likely to be encountered later than if we had drawn states uniformly. In a production environment, I would make sure I run this until every possible state has been seen a sufficient number of times (potentially through stratification). For this project, I think seeing around 500 states is sufficient, and thus 100 trials should train a fairly reasonable agent.
Changes in agent behavior after implementing Q-Learning
Initially after training the agent, it would consistently approach the destination, but would take very odd paths. For example, on a red light, it would commonly take a right turn, perhaps optimistic the next intersection would allow for a left turn on green, despite the penalty for disobeying the waypoint. I found this was due to my gamma being extremely high (0.95). Overall I was ultimately weighting the future rewards much more than the current penalties and rewards for taking each correct turn. The resulting agent somewhat ignored the waypoint and took it’s own optimal course, likely based on the fact that right turns on red just tend to be optimal. I think it’s reasonable that over time, the agent would have learned to follow the waypoint, assuming it’s the most efficient way to the destination, and perhaps the high gamma was causing slow convergence. It’s also possible the agent, weighting the final outcomes higher, found a more optimal waypoint to the end goal (ignoring illegal and waypoint penalties), but I think this is unlikely.
During training, the agent would occasionally pick a suboptimal action (based on Q). Usually this was akin to taking a legal right turn on red, when the waypoint wanted to wait and go forward. This was done to ensure the agent sufficiently explored the state space. If I simply picked the action corresponding to the maximum $Q$ value, the agent would likely get stuck in a local optima. Instead the randomness allows it to eventually converge to a global optima.
To visualize, the success rate while the initial $Q$-Learning model is shown below, followed by that same agent (now learned) using the optimal policy only
Step2: What I noticed here was that the train performance was very similiar to the test performance in terms of the success rate. My intuition is that this is mainly due to the high gamma, which results in Q values that are slow to converge. Finally my $\alpha$ were decaying fairly slowly due to a span of 100, this caused my temperatures to stay high and randomly sample many suboptimal actions. Combined, this exacerbated bad estimates of Q values, which caused the test run to fail to significantly improve the overall success rate.
However, I did find that the test run was much safer after taking a look at the cumulative trips with crimes, thus it was learning
Step3: Updates to the final agent and final performance
Step4: I made two major changes (as shown in the code above), based on my observations of the initial agent. First, I reduced the $\gamma$ all the way to 0.05 from 0.95. This caused my agent to pay much more attention to the current correct turn, and less on the final goal. This also means I can set a much larger initial $\alpha$ value since a majority of the new value is now deterministic (the reward).
Another key observation I made was that optimal moves were deterministic based on the state. In order to exploit this in the learner, I considered the following cases
Step5: Optimality of the final policy
The agent effectively either took the waypoint or sat if it was illegal. That, to me, is optimal. Something I also looked into was learning my own waypoint by giving relative headings to the destination [up-left, up, up-right, left, right, down-left, down, down-right]. Obviously the environment is rewarding the wrong rewards for this scenario (tuned to the given waypoint), and I did not want to tamper with the environment so I wasn’t able to test this sufficiently.
To get a formal measure of optimality, for each trial, I counted the number of steps, $t$, as well as the number of suboptimal steps (legal but not following waypoint) $t_s$ and crime (illegal) steps $t_c$. Optimality, $\theta$, on each trial is then 1 minus the ratio of non-optimal steps | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import pylab
%matplotlib inline
def expected_trials(total_states):
n_drawn = np.arange(1, total_states)
return pd.Series(
total_states * np.cumsum(1. / n_drawn[::-1]),
n_drawn
)
expected_trials(96).plot(
title='Expected number of trials until $k$ distinct states are seen',
figsize=(15, 10))
_ = pylab.xlabel('$k$ (# of states seen)')
Explanation: Train A Smartcab to Drive
Christopher Phillippi
This project forks Udacity's Machine Learning Nanodegree Smartcab project with my solution, modifying/adding smartcab/agent.py and smartcab/notebookhelpers.py as well as this README.
Overall summary of the final agent learning algorithm:
In order to build a reinforcement learning agent to solve this problem, I ended up implementing $Q$ learning from the transitions. In class, we covered $\epsilon$-greedy exploration, where we selected the optimal action based on $Q$ with some probability 1 - $\epsilon$ and randomly otherwise. This obviously puts more weight on the current optimal strategy, but I wanted to put more or less weight on more or less suboptimal strategies as well. I did this by sampling actions in a simualted annealing fashion, assigning actions softmax probabilities of being sampled using the current $Q$ value with a decaying temperature. Further, each $Q(s, a_i)$ value is updated based on it's own exponentially decaying learning rate: $\alpha(s, a_i)$. The current temperature, $T(s)$, is defined as the mean of the decaying $\alpha(s, a)$ over all actions such that:
$$T(s) = \frac{1}{n}\sum_{i=0}^{n}{\alpha(s', a_j')}$$
$$P(a_i|Q,s) = \frac{e^{Q(s, a_i) / T(s)}}{\sum_{i=0}^{n}{e^{Q(s, a_i) / T(s)}}}$$
Once the action for exploration, $a_i$, is sampled, the algorithm realizes a reward, $R(s, a_i)$, and new state, $s'$. I then update $Q$ using the action that maximizes Q for the new state. The update equations for $Q$ and $\alpha(s, a_i)$ are below:
$$Q_{t+1}(s, a_i) = (1 - \alpha_t(s, a_i))Q_t(s, a_i) + \alpha_t(s, a_i)[R(s, a_i) + 0.05 \max_{a'}{Q_t(s', a')}]$$
$$\alpha_t(s, a_i) = 0.5(\alpha(s, a_i) - 0.05) + 0.05$$
and initially:
$$Q_{0}(s, a_i) = 0$$
$$\alpha_{0}(s, a_i) = 1.0$$
Note that while $\alpha(s, a_i)$ is decaying at each update, it hits a minimum of 0.05 (thus it never quits learning fully). Also, I chose a very low $\gamma=0.05$ here to discount the next maximum $Q$.
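To make the exploration and update rules above concrete, here is a small illustrative sketch of the softmax (Boltzmann) action sampling and the per-(state, action) Q/α update they describe; the dictionary-based layout and the function names are assumptions, not the actual smartcab/agent.py code.
import numpy as np
def sample_action(Q, alpha, state, actions):
    # Temperature T(s) is the mean learning rate over this state's actions.
    T = np.mean([alpha[(state, a)] for a in actions])
    q = np.array([Q[(state, a)] for a in actions])
    p = np.exp((q - q.max()) / T)  # softmax over Q values (shifted for numerical stability)
    return np.random.choice(actions, p=p / p.sum())
def q_update(Q, alpha, state, action, reward, next_state, actions,
             gamma=0.05, min_alpha=0.05, decay=0.5):
    a = alpha[(state, action)]
    target = reward + gamma * max(Q[(next_state, ap)] for ap in actions)
    Q[(state, action)] = (1 - a) * Q[(state, action)] + a * target
    # Exponential decay of the learning rate toward its 0.05 floor.
    alpha[(state, action)] = decay * (a - min_alpha) + min_alpha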
In terms of my state space, I use the following:
- waypoint: {left, right, forward}
- light: {green, red}
- oncoming: {None, left, right, forward}
- left: {True, False}
- right: {True, False}
Before implementing Q-Learning, did the smartcab eventually make it to the target location?
When randomly selecting actions, it's very literally acting out a random walk. It's worthing noting that on a 2D lattice, it's been proven that a random-walking agent will almost surely reach any point as the number of steps approaches infinity (McCrea Whipple, 1940). In other words, it will almost surely make it to the target location, especially because this 2D grid also has a finite number of points.
Justification behind the state space, and how it models the agent and environment.
I picked the state space mentioned above based on features I believed mattered to the optimal solution. The waypoint effectively proxies the shortest path, and the light generally signals whether None is the right action. These two features alone should be sufficient to get a fairly good accuracy, though I did not test it. Further, I added traffic because this information can help optimize certain actions. For example, you can turn right on red conditional on no traffic from the left. You can turn left on green conditional on no oncoming traffic.
I did not include the deadline here because we are incentivized to either follow the waypoint or stop to avoid something illegal. If we were learning our own waypoint based on the header, the deadline may be useful as a boolean feature once we’re close. Perhaps this would signal whether or not it would be efficient to take a right turn on red. Again, the deadline doesn’t help much given the game rewards.
I also compressed left/right which previously could be {None, left, right, forward} based on the other agents signals. Now they are True/False based on whether or not cars existed left/right. You could also likely compress the state space conditional on a red light, where only traffic on the left matters. I strayed from this approach as it involved too much hard coding for rules the Reinforcement Learner could learn with sufficient exploration.
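Concretely, a state as described above can be packed into a small tuple; the encoding below is a hypothetical illustration keyed off the planner waypoint and the intersection inputs, not the exact code in agent.py.
def build_state(waypoint, inputs):
    # waypoint in {'left', 'right', 'forward'}; inputs is the intersection sensor dict.
    return (waypoint,
            inputs['light'],                 # 'green' or 'red'
            inputs['oncoming'],              # None, 'left', 'right' or 'forward'
            inputs['left'] is not None,      # any traffic approaching from the left?
            inputs['right'] is not None)     # any traffic approaching from the right?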
There are only 96 unique states. Assuming each trial runs at least 5 steps, 100 trials views at least 500 states. Estimating the probability that each state will be seen here is tough since each state has a different probability of being picked based on the unknown true state distribution. Assuming the chance a state is picked is uniform, this becomes the Coupon Collector’s problem, where the expected number of trials, $T$, until $k$ coupons are collected out of a total of $n$ is:
$$E[T_{n,k}] = n \sum_{i=n-k}^{n}\frac{1}{i}$$
We can see below that assuming states are drawn uniformly, we’d expect to see all of the states after about 500 runs, and about 90% after only 250 runs:
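As a quick cross-check of the closed-form expectation, one could also estimate it by simulation under the same uniform-draw assumption; the snippet below is only an illustration and was not part of the original analysis.
def simulated_trials(total_states, k, n_runs=200):
    # Average number of uniform draws needed before k distinct states are seen.
    counts = []
    for _ in range(n_runs):
        seen, draws = set(), 0
        while len(seen) < k:
            seen.add(np.random.randint(total_states))
            draws += 1
        counts.append(draws)
    return np.mean(counts)
# e.g. simulated_trials(96, 96) should land close to expected_trials(96).iloc[-1]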
End of explanation
from smartcab.notebookhelpers import generated_sim_stats
def plot_cumulative_success_rate(stats, sim_type, ax=None):
columns = [
'always_reached_destination',
'reached_destination',
'missed_destination',
]
stats[columns].cumsum().plot(
ax=ax, kind='area', stacked=False,
title='%s Success Rate Over Trials: %.2f%%' % (
sim_type, stats.reached_destination.mean()*100
)
)
pylab.xlabel('Trial#')
def train_test_plots(train_stats, test_stats, plot):
_, (top, bottom) = pylab.subplots(
2, sharex=True, sharey=True, figsize=(15, 12))
plot(train_stats, 'Train', ax=top)
plot(test_stats, 'Test', ax=bottom)
# Generate training, and test simulations
learned_agent_env, train_stats = generated_sim_stats(
n_trials=100,
gamma=0.95,
alpha_span=100,
min_alpha=0.05,
initial_alpha=0.2,
)
_, test_stats = generated_sim_stats(
agent_env=learned_agent_env, n_trials=100)
train_test_plots(train_stats, test_stats,
plot_cumulative_success_rate)
Explanation: Obviously, states are not drawn uniformly, but rather based on the simulated distribution with 3 dummy cars. Thus we’re more likely to have sampled the most likely states, and the missing states are less likely to be encountered later than if we had drawn states uniformly. In a production environment, I would make sure I run this until every possible state has been seen a sufficient number of times (potentially through stratification). For this project, I think seeing around 500 states is sufficient, and thus 100 trials should train a fairly reasonable agent.
Changes in agent behavior after implementing Q-Learning
Initially after training the agent, it would consistently approach the destination, but would take very odd paths. For example, on a red light, it would commonly take a right turn, perhaps optimistic the next intersection would allow for a left turn on green, despite the penalty for disobeying the waypoint. I found this was due to my gamma being extremely high (0.95). Overall I was ultimately weighting the future rewards much more than the current penalties and rewards for taking each correct turn. The resulting agent somewhat ignored the waypoint and took it’s own optimal course, likely based on the fact that right turns on red just tend to be optimal. I think it’s reasonable that over time, the agent would have learned to follow the waypoint, assuming it’s the most efficient way to the destination, and perhaps the high gamma was causing slow convergence. It’s also possible the agent, weighting the final outcomes higher, found a more optimal waypoint to the end goal (ignoring illegal and waypoint penalties), but I think this is unlikely.
During training, the agent would occasionally pick a suboptimal action (based on Q). Usually this was akin to taking a legal right turn on red, when the waypoint wanted to wait and go forward. This was done to ensure the agent sufficiently explored the state space. If I simply picked the action corresponding to the maximum $Q$ value, the agent would likely get stuck in a local optima. Instead the randomness allows it to eventually converge to a global optima.
To visualize, the success rate while the initial $Q$-Learning model is shown below, followed by that same agent (now learned) using the optimal policy only:
End of explanation
def plot_cumulative_crimes(stats, sim_type, ax=None):
(stats['crimes'.split()] > 0).cumsum().plot(
ax=ax, kind='area', stacked=False, figsize=(15, 8),
title='Cumulative %s Trials With Any Crimes: %.0f%%' % (
sim_type, (stats.crimes > 0).mean()*100
)
)
pylab.ylabel('# of Crimes')
pylab.xlabel('Trial#')
train_test_plots(train_stats, test_stats,
plot_cumulative_crimes)
Explanation: What I noticed here was that the train performance was very similiar to the test performance in terms of the success rate. My intuition is that this is mainly due to the high gamma, which results in Q values that are slow to converge. Finally my $\alpha$ were decaying fairly slowly due to a span of 100, this caused my temperatures to stay high and randomly sample many suboptimal actions. Combined, this exacerbated bad estimates of Q values, which caused the test run to fail to significantly improve the overall success rate.
However, I did find that the test run was much safer after taking a look at the cumulative trips with crimes, thus it was learning:
End of explanation
# Generate training, and test simulations
learned_agent_env, train_stats = generated_sim_stats(
n_trials=100,
gamma=0.05,
initial_alpha=1.0,
min_alpha=0.05,
alpha_span=2.0,
)
_, test_stats = generated_sim_stats(
agent_env=learned_agent_env, n_trials=100)
Explanation: Updates to the final agent and final performance
End of explanation
train_test_plots(train_stats, test_stats,
plot_cumulative_success_rate)
train_test_plots(train_stats, test_stats,
plot_cumulative_crimes)
Explanation: I made two major changes (as shown in the code above), based on my observations of the initial agent. First, I reduced the $\gamma$ all the way to 0.05 from 0.95. This caused my agent to pay much more attention to the current correct turn, and less on the final goal. This also means I can set a much larger initial $\alpha$ value since a majority of the new value is now deterministic (the reward).
Another key observation I made was that optimal moves were deterministic based on the state. In order to exploit this in the learner, I considered the following cases:
Reward of 12:
This is assigned when the car makes the correct move to the destination (not illegal or suboptimal).
Reward of 9.5:
This is assigned when the car makes an incorrect move to the destination (perhaps teleporting from one side to the other)
I map this to -0.5
Reward of 9:
This is assigned when the car makes an illegal move to the destination
I map this to -1
Reward of 2:
This is assigned when the car legally follows the waypoint
Reward of 0:
This is assigned when the car stops
Reward of -0.5:
This is assigned when the car makes a suboptimal but legal move (doesn't follow waypoint)
Reward of -1:
This is assigned when the car makes an illegal move
Now, any action with a positive reward is now an optimal action, and any action with a negative reward is suboptimal. Therefore, if I can get a positive reward, a good learner should not bother looking at any other actions, pruning the rest. If I encounter a negative reward, a good learner should never try that action again. The only uncertainty comes into play when the reward is 0 (stopping). In this case, we must try each action until we either find a positive rewarding action or rule them all out (as < 0). An optimal explorer then, will assign a zero probability to negative rewards, 1 probability to positive rewards, and a non-zero probability to 0 rewards. It follows that the initial value of Q should be 0 here. Naturally then, the explorer will do best as my temperature, $T$, for the softmax (action sampling) probabilities approaches 0. Since the temperature is modeled as the average $\alpha$, I greatly reduced the span of $\alpha$, from 200 to 2, promoting quick convergence for $\alpha \to 0.05$ and thus $T \to 0.05$. I then increased the initial value of $\alpha$ to 1.0 in order to learn $Q$ values much quicker (with higher magnitudes), knowing the $\alpha$ values themselves, will still decay to their minimum value of 0.05 quickly.
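The remapping described in the list above is small enough to write down directly; the helper below is an illustrative sketch of it, and the real agent.py may implement it differently.
def remap_reward(reward):
    # Strip the destination bonus from moves that were suboptimal or illegal,
    # so that the sign of the reward alone says whether the move was optimal.
    if reward == 9.5:   # reached the destination with a legal but suboptimal move
        return -0.5
    if reward == 9.0:   # reached the destination with an illegal move
        return -1.0
    return reward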
The final performance can be seen below:
End of explanation
def plot_cumulative_optimality(stats, sim_type, ax=None):
(1. - (stats['suboptimals'] + stats['crimes']) /
stats['n_turns']).plot(
ax=ax, kind='area', stacked=False, figsize=(15, 8),
title='%s Optimality in Each Trial' % (
sim_type
)
)
pylab.ylabel('% of Optimal Moves')
pylab.xlabel('Trial#')
train_test_plots(train_stats, test_stats,
plot_cumulative_optimality)
Explanation: Optimality of the final policy
The agent effectively either took the waypoint or sat if it was illegal. That, to me, is optimal. Something I also looked into was learning my own waypoint by giving relative headings to the destination [up-left, up, up-right, left, right, down-left, down, down-right]. Obviously the environment is rewarding the wrong rewards for this scenario (tuned to the given waypoint), and I did not want to tamper with the environment so I wasn’t able to test this sufficiently.
To get a formal measure of optimality, for each trial, I counted the number of steps, $t$, as well as the number of suboptimal steps (legal but not following waypoint) $t_s$ and crime (illegal) steps $t_c$. Optimality, $\theta$, on each trial is then 1 minus the ratio of non-optimal steps:
$$\theta = 1 - \frac{t_n + t_c}{t}$$
This is shown below for each trial:
End of explanation |
5,678 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HydroTrend, Ganges Basin, Q0 Climate Scenario
Created By
Step1: Import the pymt package. Create a new instance. With each new run, it is wise to rename the instance
Step2: Note that the following "cat" commands will allow you to view your input files. They are not required to run HydroTrend, but it is good measure to make sure the model you are running is using the correct basin information.
Step3: In pymt one can always find out what output a model generates by using the .output_var_names method.
Step4: Now we initialize the model with the configure file and in the configure folder.
Step5: This line of code lists time parameters
Step6: This code declares numpy arrays for several important parameters we want to save.
Empty is declaring space in memory for info to go in.
Step7: Here we have coded up the time loop using i as the index.
We update the model with one timestep at the time, untill we reach the end time.
For each time step, we also get the values for the output parameters we wish to.
Step8: Plot water discharge (q)
Step9: Plot suspended sediment discharge (qs)
Step10: Explore mean annual water discharge trends through time
Step11: Plot mean daily discharge over 125 year period
Step12: Explore mean annual sediment discharge trends through time
Step13: Get important mass balance information about your model run
Step14: Create discharge-sedimentload relationship for this simulation
Step15: Now let's answer some questions about the above Ganges River simulation in HydroTrend
Exercise 1
Step16: Q1c | Python Code:
import matplotlib.pyplot as plt
import numpy as np
Explanation: HydroTrend, Ganges Basin, Q0 Climate Scenario
Created By: Abby Eckland and Irina Overeem, March 2020
About this notebook
This notebook replicates and improves upon simulations originally run by Frances Dunn and Stephen Darby, reported in Darby et al. 2015.
This simulation is driven by climate predictions (daily temperature and precipitation) obtained from the Hadley Centre (HadRM3P) Regional Climate Model. The Q0 realization is utilized in this notebook.
At the end of this notebook, questions are available to help analyze the model outputs.
See the following links for more informaton and resources regarding the model HydroTrend: <br>
https://csdms.colorado.edu/wiki/Model_help:HydroTrend <br>
https://csdms.colorado.edu/wiki/Model:HydroTrend <br>
125 year simulation: 1975-2100
End of explanation
import pymt.models
hydrotrend_GQ0 = pymt.models.Hydrotrend()
# Add directory for output files
config_file, config_folder = "hydro_config.txt", "Ganges"
Explanation: Import the pymt package. Create a new instance. With each new run, it is wise to rename the instance
End of explanation
cat Ganges/HYDRO.IN
cat Ganges/HYDRO0.HYPS
cat Ganges/HYDRO.CLIMATE
Explanation: Note that the following "cat" commands will allow you to view your input files. They are not required to run HydroTrend, but it is good measure to make sure the model you are running is using the correct basin information.
End of explanation
hydrotrend_GQ0.output_var_names
Explanation: In pymt one can always find out what output a model generates by using the .output_var_names method.
End of explanation
hydrotrend_GQ0.initialize(config_file, config_folder)
Explanation: Now we initialize the model with the configure file and in the configure folder.
End of explanation
(
hydrotrend_GQ0.start_time,
hydrotrend_GQ0.time,
hydrotrend_GQ0.end_time,
hydrotrend_GQ0.time_step,
hydrotrend_GQ0.time_units,
)
Explanation: This line of code lists time parameters: when, how long, and at what timestep the model simulation will work.
End of explanation
n_days_GQ0 = int(hydrotrend_GQ0.end_time)
q_GQ0 = np.empty(n_days_GQ0) # river discharge at the outlet
qs_GQ0 = np.empty(n_days_GQ0) # sediment load at the outlet
cs_GQ0 = np.empty(
n_days_GQ0
) # suspended sediment concentration for different grainsize classes at the outlet
qb_GQ0 = np.empty(n_days_GQ0) # bedload at the outlet
Explanation: This code declares numpy arrays for several important parameters we want to save.
Empty is declaring space in memory for info to go in.
End of explanation
for i in range(n_days_GQ0):
hydrotrend_GQ0.update()
q_GQ0[i] = hydrotrend_GQ0.get_value("channel_exit_water__volume_flow_rate")
qs_GQ0[i] = hydrotrend_GQ0.get_value(
"channel_exit_water_sediment~suspended__mass_flow_rate"
)
cs_GQ0[i] = hydrotrend_GQ0.get_value(
"channel_exit_water_sediment~suspended__mass_concentration"
)
qb_GQ0[i] = hydrotrend_GQ0.get_value(
"channel_exit_water_sediment~bedload__mass_flow_rate"
)
Explanation: Here we have coded up the time loop using i as the index.
We update the model one timestep at a time, until we reach the end time.
For each time step, we also get the values for the output parameters we wish to.
End of explanation
plt.plot(q_GQ0, color="blue")
plt.title("Simulated water discharge, Ganges River (1975-2100)", y=1.05)
plt.xlabel("Day in simulation")
plt.ylabel("Water discharge (m^3/s)")
plt.show()
Explanation: Plot water discharge (q)
End of explanation
plt.plot(qs_GQ0, color="tab:brown")
plt.title("Simulated suspended sediment flux, Ganges River (1975-2100)", y=1.05)
plt.xlabel("Day in simulation")
plt.ylabel("Sediment discharge (kg/s)")
plt.show()
Explanation: Plot suspended sediment discharge (qs)
End of explanation
# Reshape data array to find mean yearly water discharge
q_reshape_GQ0 = q_GQ0.reshape(124, 365)
q_mean_rows_GQ0 = np.mean(q_reshape_GQ0, axis=1)
q_y_vals = np.arange(124)
# Plot data, add trendline
plt.plot(q_y_vals, q_mean_rows_GQ0, color="blue")
plt.xlabel("Year in simulation")
plt.ylabel("Discharge (m^3/s)")
plt.title("Simulated Mean Annual Water Discharge, Ganges River (1975-2100)", y=1.05)
z = np.polyfit(q_y_vals.flatten(), q_mean_rows_GQ0.flatten(), 1)
p = np.poly1d(z)
plt.plot(q_y_vals, p(q_y_vals), "r--")
plt.suptitle("y={:.6f}x+{:.6f}".format(z[0], z[1]), y=0.8)
plt.show()
Explanation: Explore mean annual water discharge trends through time
End of explanation
q_GQ0_daily = np.mean(q_reshape_GQ0, axis=0)
plt.plot(q_GQ0_daily, color="blue")
plt.xlabel("Day of Year")
plt.ylabel("Discharge (m^3/s)")
plt.title("Simulated Mean Daily Water Discharge, Ganges River, 1975-2100")
plt.show()
Explanation: Plot mean daily discharge over 125 year period
End of explanation
# Reshape data array to find mean yearly sediment discharge
qs_reshape_GQ0 = qs_GQ0.reshape(124, 365)
qs_mean_rows_GQ0 = np.mean(qs_reshape_GQ0, axis=1)
qs_y_vals = np.arange(124)
# Plot data, add trendline
plt.plot(qs_y_vals, qs_mean_rows_GQ0, color="tab:brown")
plt.xlabel("Year in simulation")
plt.ylabel("Discharge (kg/s)")
plt.title("Simulated Mean Annual Sediment Discharge, Ganges River (1975-2100)", y=1.05)
z = np.polyfit(qs_y_vals.flatten(), qs_mean_rows_GQ0.flatten(), 1)
p = np.poly1d(z)
plt.plot(qs_y_vals, p(qs_y_vals), "r--")
plt.suptitle("y={:.6f}x+{:.6f}".format(z[0], z[1]), y=0.85)
plt.show()
# Plot mean daily sediment discharge over 125 year period
qs_GQ0_daily = np.mean(qs_reshape_GQ0, axis=0)
plt.plot(qs_GQ0_daily, color="tab:brown")
plt.xlabel("Day of Year")
plt.ylabel("Discharge (kg/s)")
plt.title("Simulated Mean Daily Sediment Discharge, Ganges River, 1975-2100")
plt.show()
Explanation: Explore mean annual sediment discharge trends through time
End of explanation
print(
"Mean Water Discharge = {} {}".format(
q_GQ0.mean(),
hydrotrend_GQ0.get_var_units("channel_exit_water__volume_flow_rate"),
)
)
print(
"Mean Bedload Discharge = {} {}".format(
qb_GQ0.mean(),
hydrotrend_GQ0.get_var_units(
"channel_exit_water_sediment~bedload__mass_flow_rate"
),
)
)
print(
"Mean Suspended Sediment Discharge = {} {}".format(
qs_GQ0.mean(),
hydrotrend_GQ0.get_var_units(
"channel_exit_water_sediment~suspended__mass_flow_rate"
),
)
)
print(
"Mean Suspended Sediment Concentration = {} {}".format(
cs_GQ0.mean(),
hydrotrend_GQ0.get_var_units(
"channel_exit_water_sediment~suspended__mass_concentration"
),
)
)
# Convert qs to MT/year
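# qs is in kg/s: multiplying by 1e-9 gives MT/s, and dividing by 3.17098e-8 (years per second) gives MT/yr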
AnnualQs_GQ0 = (qs_GQ0.mean() * 1e-9) / 3.17098e-8
print(f"Mean Annual Suspended Sediment Discharge = {AnnualQs_GQ0} MT / year")
Explanation: Get important mass balance information about your model run
End of explanation
# Typically presented as a log-log plot
plt.scatter(np.log(q_GQ0), np.log(cs_GQ0), s=5, color="0.7")
plt.title("HydroTrend simulation of 25 year water discharge, Ganges River (1975-2100)")
plt.xlabel("Log River Discharge in m3/s")
plt.ylabel("Log Sediment concentration in kg/m3")
plt.show()
Explanation: Create discharge-sedimentload relationship for this simulation
End of explanation
# work for Q1b:
Explanation: Now let's answer some questions about the above Ganges River simulation in HydroTrend
Exercise 1: How water and sediment discharge is predicted to change over the next century in the Ganges Basin.
Q1a: How does water discharge and suspended sediment discharge in the Ganges River change over the 125 year time period? Describe the general trend.
A1a:
Q1b: What is the percent change in average water discharge from the beginning to the end of the simulation? You will need to do some calculations in the cell below.
A1b:
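One possible way to set up this calculation, reusing the yearly means computed earlier, is sketched below; comparing the first and last simulated decades is an assumption about what "beginning" and "end" mean, so adjust the windows as you see fit.
q_start = q_mean_rows_GQ0[:10].mean()   # mean discharge over the first 10 simulated years
q_end = q_mean_rows_GQ0[-10:].mean()    # mean discharge over the last 10 simulated years
percent_change_q = 100 * (q_end - q_start) / q_start
print(f"Percent change in mean water discharge: {percent_change_q:.1f} %")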
End of explanation
# work for Q1c:
Explanation: Q1c: What about the percent change in sediment discharge over the simulation time period?
A1c:
End of explanation |
5,679 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
x=np.linspace(-1.0,1.0,size)
if sigma==0:
y = m*x + b
else:
        y = m*x + b + np.random.normal(0, sigma, size)  # scale is the std dev sigma, not the variance
return x,y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
x,y=random_line(m,b,sigma,size)
plt.scatter(x,y,color=color)#makes the scatter
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
interact (plot_random_line,m=(-10.0,10.0,0.1),b=(-5.0,5.0,0.1),sigma=(0.0,5.0,.01),size=(10,100,10),color=('red','blue','green'));
#makes the whole thing interactive
assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
5,680 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Translation of Numeric Phrases with Seq2Seq
In the following we will try to build a translation model from french phrases describing numbers to the corresponding numeric representation (base 10).
This is a toy machine translation task with a restricted vocabulary and a single valid translation for each source phrase which makes it more tractable to train on a laptop computer and easier to evaluate. Despite those limitations we expect that this task will highlight interesting properties of Seq2Seq models including
Step1: Generating a Training Set
The following will generate phrases 20000 example phrases for numbers between 1 and 1,000,000 (excluded). We chose to over-represent small numbers by generating all the possible short sequences between 1 and exhaustive.
We then split the generated set into non-overlapping train, validation and test splits.
Step2: Vocabularies
Build the vocabularies from the training set only to get a chance to have some out-of-vocabulary words in the validation and test sets.
First we need to introduce specific symbols that will be used to
Step3: To build the vocabulary we need to tokenize the sequences of symbols. For the digital number representation we use character level tokenization while whitespace-based word level tokenization will do for the French phrases
Step4: Let's now use this tokenization strategy to assign a unique integer token id to each possible token string found the traing set in each language ('French' and 'numeric')
Step5: The two languages do not have the same vocabulary sizes
Step6: We also built the reverse mappings from token ids to token string representations
Step7: Seq2Seq with a single GRU architecture
<img src="images/basic_seq2seq.png" width="80%" />
From
Step8: Vectorization of the parallel corpus
Let's apply the previous transformation to each pair of (source, target) sequene and use a shared vocabulary to store the results in numpy arrays of integer token ids, with padding on the left so that all input / output sequences have the same length
Step9: This looks good. In particular we can note
Step10: A simple homogeneous Seq2Seq architecture
To keep the architecture simple we will use the same RNN model and weights for both the encoder part (before the _GO token) and the decoder part (after the _GO token).
We may GRU recurrent cell instead of LSTM because it is slightly faster to compute and should give comparable results.
Exercise
Step11: Let's use a callback mechanism to automatically snapshot the best model found so far on the validation set
Step12: We need to use np.expand_dims trick on Y
Step13: Let's load the best model found on the validation set at the end of training
Step14: If you don't have access to a GPU and cannot wait 10 minutes to for the model to converge to a reasonably good state, feel to use the pretrained model. This model has been obtained by training the above model for ~150 epochs. The validation loss is significantly lower than 1e-5. In practice it should hardly ever make any prediction error on this easy translation problem.
Alternatively we will load this imperfect model (trained only to 50 epochs) with a validation loss of ~7e-4. This model makes funny translation errors so I would suggest to try it first
Step15: Let's have a look at a raw prediction on the first sample of the test set
Step16: In numeric array this is provided (along with the expected target sequence) as the following padded input sequence
Step17: Remember that the _GO (symbol indexed at 1) separates the reversed source from the expected target sequence
Step18: Interpreting the model prediction
Exercise
Step20: In the previous exercise we cheated a bit because we gave the complete sequence along with the solution in the input sequence. To correctly predict we need to predict one token at a time and reinject the predicted token in the input sequence to predict the next token
Step21: Why does the partially trained network is able to correctly give the output for
"sept mille huit cent cinquante neuf"
but not for
Step22: Model evaluation
Because we expect only one correct translation for a given source sequence, we can use phrase-level accuracy as a metric to quantify our model quality.
Note that this is not the case for real translation models (e.g. from French to English on arbitrary sentences). Evaluation of a machine translation model is tricky in general. Automated evaluation can somehow be done at the corpus level with the BLEU score (bilingual evaluation understudy) given a large enough sample of correct translations provided by certified translators but its only a noisy proxy.
The only good evaluation is to give a large enough sample of the model predictions on some test sentences to certified translators and ask them to give an evaluation (e.g. a score between 0 and 6, 0 for non-sensical and 6 for the hypothetical perfect translation). However in practice this is very costly to do.
Fortunately we can just use phrase-level accuracy on a our very domain specific toy problem
Step25: Bonus
Step26: Model Accuracy with Beam Search Decoding | Python Code:
from french_numbers import to_french_phrase
for x in [21, 80, 81, 300, 213, 1100, 1201, 301000, 80080]:
print(str(x).rjust(6), to_french_phrase(x))
Explanation: Translation of Numeric Phrases with Seq2Seq
In the following we will try to build a translation model from french phrases describing numbers to the corresponding numeric representation (base 10).
This is a toy machine translation task with a restricted vocabulary and a single valid translation for each source phrase which makes it more tractable to train on a laptop computer and easier to evaluate. Despite those limitations we expect that this task will highlight interesting properties of Seq2Seq models including:
the ability to deal with different length of the source and target sequences,
handling token with a meaning that changes depending on the context (e.g "quatre" vs "quatre vingts" in "quatre cents"),
basic counting and "reasoning" capabilities of LSTM and GRU models.
The parallel text data is generated from a "ground-truth" Python function named to_french_phrase that captures common rules. Hyphenation was intentionally omitted to make the phrases more ambiguous and therefore make the translation problem slightly harder to solve (and also because Olivier had no particular interest hyphenation in properly implementing rules :).
End of explanation
from french_numbers import generate_translations
from sklearn.model_selection import train_test_split
numbers, french_numbers = generate_translations(
low=1, high=int(1e6) - 1, exhaustive=5000, random_seed=0)
num_train, num_dev, fr_train, fr_dev = train_test_split(
numbers, french_numbers, test_size=0.5, random_state=0)
num_val, num_test, fr_val, fr_test = train_test_split(
num_dev, fr_dev, test_size=0.5, random_state=0)
len(fr_train), len(fr_val), len(fr_test)
for i, fr_phrase, num_phrase in zip(range(5), fr_train, num_train):
print(num_phrase.rjust(6), fr_phrase)
for i, fr_phrase, num_phrase in zip(range(5), fr_val, num_val):
print(num_phrase.rjust(6), fr_phrase)
Explanation: Generating a Training Set
The following will generate phrases 20000 example phrases for numbers between 1 and 1,000,000 (excluded). We chose to over-represent small numbers by generating all the possible short sequences between 1 and exhaustive.
We then split the generated set into non-overlapping train, validation and test splits.
End of explanation
PAD, GO, EOS, UNK = START_VOCAB = ['_PAD', '_GO', '_EOS', '_UNK']
Explanation: Vocabularies
Build the vocabularies from the training set only to get a chance to have some out-of-vocabulary words in the validation and test sets.
First we need to introduce specific symbols that will be used to:
- pad sequences
- mark the beginning of translation
- mark the end of translation
- be used as a placehold for out-of-vocabulary symbols (not seen in the training set).
Here we use the same convention as the tensorflow seq2seq tutorial:
End of explanation
def tokenize(sentence, word_level=True):
if word_level:
return sentence.split()
else:
return [sentence[i:i + 1] for i in range(len(sentence))]
tokenize('1234', word_level=False)
tokenize('mille deux cent trente quatre', word_level=True)
Explanation: To build the vocabulary we need to tokenize the sequences of symbols. For the digital number representation we use character level tokenization while whitespace-based word level tokenization will do for the French phrases:
End of explanation
def build_vocabulary(tokenized_sequences):
rev_vocabulary = START_VOCAB[:]
unique_tokens = set()
for tokens in tokenized_sequences:
unique_tokens.update(tokens)
rev_vocabulary += sorted(unique_tokens)
vocabulary = {}
for i, token in enumerate(rev_vocabulary):
vocabulary[token] = i
return vocabulary, rev_vocabulary
tokenized_fr_train = [tokenize(s, word_level=True) for s in fr_train]
tokenized_num_train = [tokenize(s, word_level=False) for s in num_train]
fr_vocab, rev_fr_vocab = build_vocabulary(tokenized_fr_train)
num_vocab, rev_num_vocab = build_vocabulary(tokenized_num_train)
Explanation: Let's now use this tokenization strategy to assign a unique integer token id to each possible token string found in the training set in each language ('French' and 'numeric')
End of explanation
len(fr_vocab)
len(num_vocab)
for k, v in sorted(fr_vocab.items())[:10]:
print(k.rjust(10), v)
print('...')
for k, v in sorted(num_vocab.items()):
print(k.rjust(10), v)
Explanation: The two languages do not have the same vocabulary sizes:
End of explanation
print(rev_fr_vocab)
print(rev_num_vocab)
Explanation: We also built the reverse mappings from token ids to token string representations:
End of explanation
def make_input_output(source_tokens, target_tokens, reverse_source=True):
    # TODO
return input_tokens, output_tokens
# %load solutions/make_input_output.py
def make_input_output(source_tokens, target_tokens, reverse_source=True):
if reverse_source:
source_tokens = source_tokens[::-1]
input_tokens = source_tokens + [GO] + target_tokens
output_tokens = target_tokens + [EOS]
return input_tokens, output_tokens
input_tokens, output_tokens = make_input_output(
['cent', 'vingt', 'et', 'un'],
['1', '2', '1'],
)
input_tokens
output_tokens
Explanation: Seq2Seq with a single GRU architecture
<img src="images/basic_seq2seq.png" width="80%" />
From: Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." NIPS 2014
For a given source sequence - target sequence pair, we will:
- tokenize the source and target sequences;
- reverse the order of the source sequence;
- build the input sequence by concatenating the reversed source sequence and the target sequence in original order using the _GO token as a delimiter,
- build the output sequence by appending the _EOS token to the source sequence.
Let's do this as a function using the original string representations for the tokens so as to make it easier to debug:
Exercise
- Build a function which adapts a pair of tokenized sequences to the framework above.
- The function should have a reverse_source as an option.
Note:
- The function should output two sequences of string tokens: one to be fed as the input and the other as expected output for the seq2seq network. We will handle the padding later;
- Don't forget to insert the _GO and _EOS special symbols at the right locations.
End of explanation
all_tokenized_sequences = tokenized_fr_train + tokenized_num_train
shared_vocab, rev_shared_vocab = build_vocabulary(all_tokenized_sequences)
import numpy as np
max_length = 20 # found by introspection of our training set
def vectorize_corpus(source_sequences, target_sequences, shared_vocab,
word_level_source=True, word_level_target=True,
max_length=max_length):
assert len(source_sequences) == len(target_sequences)
n_sequences = len(source_sequences)
source_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
source_ids.fill(shared_vocab[PAD])
target_ids = np.empty(shape=(n_sequences, max_length), dtype=np.int32)
target_ids.fill(shared_vocab[PAD])
numbered_pairs = zip(range(n_sequences), source_sequences, target_sequences)
for i, source_seq, target_seq in numbered_pairs:
source_tokens = tokenize(source_seq, word_level=word_level_source)
target_tokens = tokenize(target_seq, word_level=word_level_target)
in_tokens, out_tokens = make_input_output(source_tokens, target_tokens)
in_token_ids = [shared_vocab.get(t, UNK) for t in in_tokens]
source_ids[i, -len(in_token_ids):] = in_token_ids
out_token_ids = [shared_vocab.get(t, UNK) for t in out_tokens]
target_ids[i, -len(out_token_ids):] = out_token_ids
return source_ids, target_ids
X_train, Y_train = vectorize_corpus(fr_train, num_train, shared_vocab,
word_level_target=False)
X_train.shape
Y_train.shape
fr_train[0]
num_train[0]
X_train[0]
Y_train[0]
Explanation: Vectorization of the parallel corpus
Let's apply the previous transformation to each pair of (source, target) sequences and use a shared vocabulary to store the results in numpy arrays of integer token ids, with padding on the left so that all input / output sequences have the same length:
End of explanation
X_val, Y_val = vectorize_corpus(fr_val, num_val, shared_vocab,
word_level_target=False)
X_test, Y_test = vectorize_corpus(fr_test, num_test, shared_vocab,
word_level_target=False)
X_val.shape, Y_val.shape
X_test.shape, Y_test.shape
Explanation: This looks good. In particular we can note:
the PAD=0 symbol at the beginning of the two sequences,
the input sequence has the GO=1 symbol to separate the source from the target,
the output sequence is a shifted version of the target and ends with EOS=2.
Let's vectorize the validation and test set to be able to evaluate our models:
End of explanation
# %load solutions/simple_seq2seq.py
from keras.models import Sequential
from keras.layers import Embedding, Dropout, GRU, Dense
vocab_size = len(shared_vocab)
simple_seq2seq = Sequential()
simple_seq2seq.add(Embedding(vocab_size, 32, input_length=max_length))
simple_seq2seq.add(Dropout(0.2))
simple_seq2seq.add(GRU(256, return_sequences=True))
simple_seq2seq.add(Dense(vocab_size, activation='softmax'))
# Here we use the sparse_categorical_crossentropy loss to be able to pass
# integer-coded output for the token ids without having to convert to one-hot
# codes
simple_seq2seq.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
Explanation: A simple homogeneous Seq2Seq architecture
To keep the architecture simple we will use the same RNN model and weights for both the encoder part (before the _GO token) and the decoder part (after the _GO token).
We may use a GRU recurrent cell instead of an LSTM because it is slightly faster to compute and should give comparable results.
Exercise:
- Build a Seq2Seq model:
- Start with an Embedding layer;
- Add a single GRU layer: the GRU layer should yield a sequence of output vectors, one at each timestep;
- Add a Dense layer to adapt the ouput dimension of the GRU layer to the dimension of the output vocabulary;
- Don't forget to insert some Dropout layer(s), especially after the Embedding layer.
Note:
- The output dimension of the Embedding layer should be smaller than usual be cause we have small vocabulary size;
- The dimension of the GRU should be larger to give the Seq2Seq model enough "working memory" to memorize the full input sequence before decoding it;
- Your model should output a shape [batch, sequence_length, vocab_size].
End of explanation
from keras.callbacks import ModelCheckpoint
from keras.models import load_model
best_model_fname = "simple_seq2seq_checkpoint.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_loss',
save_best_only=True, verbose=1)
Explanation: Let's use a callback mechanism to automatically snapshot the best model found so far on the validation set:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
history = simple_seq2seq.fit(X_train, np.expand_dims(Y_train, -1),
validation_data=(X_val, np.expand_dims(Y_val, -1)),
nb_epoch=15, verbose=2, batch_size=32,
callbacks=[best_model_cb])
plt.figure(figsize=(12, 6))
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], '--', label='validation')
plt.ylabel('negative log likelihood')
plt.xlabel('epoch')
plt.title('Convergence plot for Simple Seq2Seq')
Explanation: We need to use the np.expand_dims trick on Y: this is required by Keras because we use a sparse (integer-based) representation for the output:
End of explanation
simple_seq2seq = load_model(best_model_fname)
Explanation: Let's load the best model found on the validation set at the end of training:
End of explanation
from keras.utils.data_utils import get_file
import os
get_file("simple_seq2seq_partially_pretrained.h5",
"https://github.com/m2dsupsdlclass/lectures-labs/releases/"
"download/0.4/simple_seq2seq_partially_pretrained.h5")
filename = os.path.expanduser(os.path.join('~',
'.keras/datasets/simple_seq2seq_partially_pretrained.h5'))
### Uncomment the following to replace for the fully trained network
#get_file("simple_seq2seq_pretrained.h5",
# "https://github.com/m2dsupsdlclass/lectures-labs/releases/"
# "download/0.4/simple_seq2seq_pretrained.h5")
#filename = os.path.expanduser(os.path.join('~',
# '.keras/datasets/simple_seq2seq_pretrained.h5'))
simple_seq2seq = load_model(filename)
Explanation: If you don't have access to a GPU and cannot wait 10 minutes for the model to converge to a reasonably good state, feel free to use the pretrained model. This model has been obtained by training the above model for ~150 epochs. The validation loss is significantly lower than 1e-5. In practice it should hardly ever make any prediction error on this easy translation problem.
Alternatively we will load this imperfect model (trained only to 50 epochs) with a validation loss of ~7e-4. This model makes funny translation errors so I would suggest to try it first:
End of explanation
fr_test[0]
Explanation: Let's have a look at a raw prediction on the first sample of the test set:
End of explanation
first_test_sequence = X_test[0:1]
first_test_sequence
Explanation: In numeric array this is provided (along with the expected target sequence) as the following padded input sequence:
End of explanation
rev_shared_vocab[1]
Explanation: Remember that the _GO (symbol indexed at 1) separates the reversed source from the expected target sequence:
End of explanation
# %load solutions/interpret_output.py
prediction = simple_seq2seq.predict(first_test_sequence)
print("prediction shape:", prediction.shape)
# Let's use `argmax` to extract the predicted token ids at each step:
predicted_token_ids = prediction[0].argmax(-1)
print("prediction token ids:", predicted_token_ids)
# We can use the shared reverse vocabulary to map
# this back to the string representation of the tokens,
# as well as removing Padding and EOS symbols
predicted_numbers = [rev_shared_vocab[token_id] for token_id in predicted_token_ids
if token_id not in (shared_vocab[PAD], shared_vocab[EOS])]
print("predicted number:", "".join(predicted_numbers))
print("test number:", num_test[0])
# The model successfully predicted the test sequence.
# However, we provided the full sequence as input, including all the solution
# (except for the last number). In a real testing condition, one wouldn't
# have the full input sequence, but only what is provided before the "GO"
# symbol
Explanation: Interpreting the model prediction
Exercise :
- Feed this test sequence into the model. What is the shape of the output?
- Get the argmax of each output prediction to get the most likely symbols
- Dismiss the padding / end of sentence
- Convert to readable vocabulary using rev_shared_vocab
Interpretation
- Compare the output with the first example in numerical format num_test[0]
- What do you think of this way of decoding? Is it correct to use it at inference time?
End of explanation
def greedy_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True):
    """Greedy decoder recursively predicting one token at a time"""
# Initialize the list of input token ids with the source sequence
source_tokens = tokenize(source_sequence, word_level=word_level_source)
input_ids = [shared_vocab.get(t, UNK) for t in source_tokens[::-1]]
input_ids += [shared_vocab[GO]]
# Prepare a fixed size numpy array that matches the expected input
# shape for the model
input_array = np.empty(shape=(1, model.input_shape[1]),
dtype=np.int32)
decoded_tokens = []
while len(input_ids) <= max_length:
        # Vectorize the list of input tokens and use zero padding.
input_array.fill(shared_vocab[PAD])
input_array[0, -len(input_ids):] = input_ids
# Predict the next output: greedy decoding with argmax
next_token_id = model.predict(input_array)[0, -1].argmax()
# Stop decoding if the network predicts end of sentence:
if next_token_id == shared_vocab[EOS]:
break
# Otherwise use the reverse vocabulary to map the prediction
# back to the string space
decoded_tokens.append(rev_shared_vocab[next_token_id])
# Append prediction to input sequence to predict the next
input_ids.append(next_token_id)
separator = " " if word_level_target else ""
return separator.join(decoded_tokens)
phrases = [
"un",
"deux",
"trois",
"onze",
"quinze",
"cent trente deux",
"cent mille douze",
"sept mille huit cent cinquante neuf",
"vingt et un",
"vingt quatre",
"quatre vingts",
"quatre vingt onze mille",
"quatre vingt onze mille deux cent deux",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
Explanation: In the previous exercise we cheated a bit because we gave the complete sequence along with the solution in the input sequence. To correctly predict we need to predict one token at a time and reinject the predicted token in the input sequence to predict the next token:
End of explanation
phrases = [
"quatre vingt et un",
"quarante douze",
"onze cent soixante vingt quatorze",
]
for phrase in phrases:
translation = greedy_translate(simple_seq2seq, phrase,
shared_vocab, rev_shared_vocab,
word_level_target=False)
print(phrase.ljust(40), translation)
Explanation: Why is the partially trained network able to correctly give the output for
"sept mille huit cent cinquante neuf"
but not for:
"cent mille douze" ?
The answer is the following:
- it is rather easy for the network to learn a correspondence between symbols (first case), by dismissing "cent" and "mille"
- outputting the right number of symbols, especially 0s for "cent mille douze", requires more reasoning and the ability to count.
End of explanation
def phrase_accuracy(model, num_sequences, fr_sequences, n_samples=300,
decoder_func=greedy_translate):
correct = []
n_samples = len(num_sequences) if n_samples is None else n_samples
for i, num_seq, fr_seq in zip(range(n_samples), num_sequences, fr_sequences):
if i % 100 == 0:
print("Decoding %d/%d" % (i, n_samples))
        predicted_seq = decoder_func(model, fr_seq,
shared_vocab, rev_shared_vocab,
word_level_target=False)
correct.append(num_seq == predicted_seq)
return np.mean(correct)
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train))
Explanation: Model evaluation
Because we expect only one correct translation for a given source sequence, we can use phrase-level accuracy as a metric to quantify our model quality.
Note that this is not the case for real translation models (e.g. from French to English on arbitrary sentences). Evaluation of a machine translation model is tricky in general. Automated evaluation can somehow be done at the corpus level with the BLEU score (bilingual evaluation understudy) given a large enough sample of correct translations provided by certified translators, but it's only a noisy proxy.
The only good evaluation is to give a large enough sample of the model predictions on some test sentences to certified translators and ask them to give an evaluation (e.g. a score between 0 and 6, 0 for nonsensical and 6 for the hypothetical perfect translation). However in practice this is very costly to do.
Fortunately we can just use phrase-level accuracy on our very domain-specific toy problem:
End of explanation
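# Illustrative sketch (not part of the original notebook): a corpus-level BLEU
# score as mentioned above, computed with NLTK (an extra dependency) on a small
# sample, using characters as tokens and bigram weights because the target digit
# strings are short. As discussed above, this is only a noisy proxy here.
from nltk.translate.bleu_score import corpus_bleu
sample_refs = [[list(num)] for num in num_test[:100]]
sample_hyps = [list(greedy_translate(simple_seq2seq, fr, shared_vocab,
                                     rev_shared_vocab, word_level_target=False))
               for fr in fr_test[:100]]
print("corpus BLEU (bigram):", corpus_bleu(sample_refs, sample_hyps, weights=(0.5, 0.5)))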
def beam_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True,
beam_size=10, return_ll=False):
    """Decode candidate translations with a beam search strategy

    If return_ll is False, only the best candidate string is returned.
    If return_ll is True, all the candidate strings and their loglikelihoods
    are returned.
    """
# %load solutions/beam_search.py
def beam_translate(model, source_sequence, shared_vocab, rev_shared_vocab,
word_level_source=True, word_level_target=True,
beam_size=10, return_ll=False):
    """Decode candidate translations with a beam search strategy

    If return_ll is False, only the best candidate string is returned.
    If return_ll is True, all the candidate strings and their loglikelihoods
    are returned.
    """
# Initialize the list of input token ids with the source sequence
source_tokens = tokenize(source_sequence, word_level=word_level_source)
input_ids = [shared_vocab.get(t, UNK) for t in source_tokens[::-1]]
input_ids += [shared_vocab[GO]]
# initialize loglikelihood, input token ids, decoded tokens for
# each candidate in the beam
candidates = [(0, input_ids[:], [], False)]
# Prepare a fixed size numpy array that matches the expected input
# shape for the model
input_array = np.empty(shape=(beam_size, model.input_shape[1]),
dtype=np.int32)
while any([not done and (len(input_ids) < max_length)
for _, input_ids, _, done in candidates]):
        # Vectorize the list of input tokens and use zero padding.
input_array.fill(shared_vocab[PAD])
for i, (_, input_ids, _, done) in enumerate(candidates):
if not done:
input_array[i, -len(input_ids):] = input_ids
# Predict the next output in a single call to the model to amortize
# the overhead and benefit from vector data parallelism on GPU.
next_likelihood_batch = model.predict(input_array)
        # Build the new candidates list by summing the log-likelihood of the
# next token with their parents for each new possible expansion.
new_candidates = []
for i, (ll, input_ids, decoded, done) in enumerate(candidates):
if done:
new_candidates.append((ll, input_ids, decoded, done))
else:
next_loglikelihoods = np.log(next_likelihood_batch[i, -1])
for next_token_id, next_ll in enumerate(next_loglikelihoods):
new_ll = ll + next_ll
new_input_ids = input_ids[:]
new_input_ids.append(next_token_id)
new_decoded = decoded[:]
new_done = done
if next_token_id == shared_vocab[EOS]:
new_done = True
if not new_done:
new_decoded.append(rev_shared_vocab[next_token_id])
new_candidates.append(
(new_ll, new_input_ids, new_decoded, new_done))
# Only keep a beam of the most promising candidates
new_candidates.sort(reverse=True)
candidates = new_candidates[:beam_size]
separator = " " if word_level_target else ""
if return_ll:
return [(separator.join(decoded), ll) for ll, _, decoded, _ in candidates]
else:
_, _, decoded, done = candidates[0]
return separator.join(decoded)
candidates = beam_translate(simple_seq2seq, "cent mille un",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
candidates = beam_translate(simple_seq2seq, "quatre vingts",
shared_vocab, rev_shared_vocab,
word_level_target=False,
return_ll=True, beam_size=10)
candidates
Explanation: Bonus: Decoding with a Beam Search
Instead of decoding with a greedy strategy that only considers the most-likely next token at each prediction, we can hold a priority queue of the most promising top-n sequences ordered by log-likelihoods.
This could potentially improve the final accuracy of an imperfect model: indeed it can be the case that the most likely sequence (based on the conditional probability estimated by the model) starts with a character that is not the most likely alone.
Bonus Exercise:
- build a beam_translate function which decodes candidate translations with a beam search strategy
- use a list of candidates, tracking beam_size candidates and their corresponding likelihood
- compute predictions for the next outputs by using predict with a batch of the size of the beam
- be careful to stop appending results if EOS symbols have been found for each candidate!
End of explanation
print("Phrase-level test accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_test, fr_test,
decoder_func=beam_translate))
print("Phrase-level train accuracy: %0.3f"
% phrase_accuracy(simple_seq2seq, num_train, fr_train,
decoder_func=beam_translate))
Explanation: Model Accuracy with Beam Search Decoding
End of explanation |
5,681 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First the data is loaded into Pandas data frames
Step1: Next select a subset of our train_data to use for training the model
Step2: Now train the SVM classifier and get validation accuracy using K-Folds cross validation
Step3: Make predictions on the test data and output the results | Python Code:
import numpy as np
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
# Read the input datasets
train_data = pd.read_csv('../input/train.csv')
test_data = pd.read_csv('../input/test.csv')
# Fill missing numeric values with the mean for that column
train_data['Age'].fillna(train_data['Age'].mean(), inplace=True)
test_data['Age'].fillna(test_data['Age'].mean(), inplace=True)
test_data['Fare'].fillna(test_data['Fare'].mean(), inplace=True)
print(train_data.info())
print(test_data.info())
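# Illustrative sketch (not part of the original kernel): a quick check of how
# many values are still missing per column after the fills above.
print(train_data.isnull().sum())
print(test_data.isnull().sum())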
Explanation: First the data is loaded into Pandas data frames
End of explanation
# Encode sex as int 0=female, 1=male
train_data['Sex'] = train_data['Sex'].apply(lambda x: int(x == 'male'))
# Extract the features we want to use
X = train_data[['Pclass', 'Sex', 'Age', 'Fare', 'SibSp', 'Parch']].to_numpy()
print(np.shape(X))
# Extract survival target
y = train_data[['Survived']].values.ravel()
print(np.shape(y))
Explanation: Next select a subset of our train_data to use for training the model
End of explanation
from sklearn.svm import SVC
from sklearn.model_selection import KFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler
# Build the classifier
kf = KFold(n_splits=3)
model = SVC(kernel='rbf', C=300)
scores = []
for train, test in kf.split(X):
# Normalize training and test data using train data norm parameters
normalizer = MinMaxScaler().fit(X[train])
X_train = normalizer.transform(X[train])
X_test = normalizer.transform(X[test])
scores.append(model.fit(X_train, y[train]).score(X_test, y[test]))
print("Mean 3-fold cross validation accuracy: %s" % np.mean(scores))
Explanation: Now train the SVM classifier and get validation accuracy using K-Folds cross validation
End of explanation
# Create model with all training data
normalizer = MinMaxScaler().fit(X)
X = normalizer.transform(X)
classifier = model.fit(X, y)
# Encode sex as int 0=female, 1=male
test_data['Sex'] = test_data['Sex'].apply(lambda x: int(x == 'male'))
# Extract desired features
X_ = test_data[['Pclass', 'Sex', 'Age', 'Fare', 'SibSp', 'Parch']].to_numpy()
X_ = normalizer.transform(X_)
# Predict if passengers survived using model
y_ = classifier.predict(X_)
# Append the survived attribute to the test data
test_data['Survived'] = y_
predictions = test_data[['PassengerId', 'Survived']]
print(predictions)
# Save the output for submission
predictions.to_csv('submission.csv', index=False)
Explanation: Make predictions on the test data and output the results
End of explanation |
5,682 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
Step1: Compile a model for the Edge TPU
This notebook offers a convenient way to compile a TensorFlow Lite model for the Edge TPU, in case you don't have a system that's compatible with the Edge TPU Compiler (Debian Linux only).
Simply upload a compatible .tflite file to this Colab session, run the code below, and then download the compiled model.
For more details about how to create a model that's compatible with the Edge TPU, see the documentation at coral.ai.
<a href="https
Step2: Now click Runtime > Run all in the Colab toolbar.
Get the Edge TPU Compiler
Step3: Compile the model
Step4: The compiled model uses the same filename but with "_edgetpu" appended at the end.
If the compilation failed, check the Files panel on the left for the .log file that contains more details. (You might need to click the Refresh button to see the new files.)
Download the model
You can download the converted model from Colab with this | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC
Licensed under the Apache License, Version 2.0 (the "License")
End of explanation
%env TFLITE_FILE=example.tflite
Explanation: Compile a model for the Edge TPU
This notebook offers a convenient way to compile a TensorFlow Lite model for the Edge TPU, in case you don't have a system that's compatible with the Edge TPU Compiler (Debian Linux only).
Simply upload a compatible .tflite file to this Colab session, run the code below, and then download the compiled model.
For more details about how to create a model that's compatible with the Edge TPU, see the documentation at coral.ai.
<a href="https://colab.research.google.com/github/google-coral/tutorials/blob/master/compile_for_edgetpu.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab"></a>
<a href="https://github.com/google-coral/tutorials/blob/master/compile_for_edgetpu.ipynb" target="_parent"><img src="https://img.shields.io/static/v1?logo=GitHub&label=&color=333333&style=flat&message=View%20on%20GitHub" alt="View in GitHub"></a>
Upload a compatible TF Lite model
To use this script, you need to upload a TensorFlow Lite model that's fully quantized and meets all the Edge TPU model requirements.
With a compatible model in-hand, you can upload it as follows:
1. Click the Files tab (the folder icon) in the left panel. (Do not change directories.)
2. Click Upload to session storage (the file icon).
3. Follow your system UI to select and open your .tflite file.
When it's uploaded, you should see the file appear in the left panel.
4. Replace example.tflite with your uploaded model's filename:
End of explanation
! curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
! echo "deb https://packages.cloud.google.com/apt coral-edgetpu-stable main" | sudo tee /etc/apt/sources.list.d/coral-edgetpu.list
! sudo apt-get update
! sudo apt-get install edgetpu-compiler
Explanation: Now click Runtime > Run all in the Colab toolbar.
Get the Edge TPU Compiler
End of explanation
! edgetpu_compiler $TFLITE_FILE
Explanation: Compile the model
End of explanation
import os
from google.colab import files
name = os.path.splitext(os.environ['TFLITE_FILE'])[0]
files.download(str(name + '_edgetpu.tflite'))
Explanation: The compiled model uses the same filename but with "_edgetpu" appended at the end.
If the compilation failed, check the Files panel on the left for the .log file that contains more details. (You might need to click the Refresh button to see the new files.)
Download the model
You can download the converted model from Colab with this:
End of explanation |
5,683 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gaussian Processes in Shogun
By Heiko Strathmann - <a href="mailto
Step1: Some Formal Background (Skip if you just want code examples)
This notebook is about Bayesian regression models with Gaussian Process priors. A Gaussian Process (GP) over real valued functions on some domain $\mathcal{X}$, $f(\mathbf{x})
Step2: Apart from its apealling form, this curve has the nice property of given rise to analytical solutions to the required integrals. Recall these are given by
$p(y^|\mathbf{y}, \boldsymbol{\theta})=\int p(\mathbf{y}^|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
Since all involved elements, the likelihood $p(\mathbf{y}|\mathbf{f})$, the GP prior $p(\mathbf{f}|\boldsymbol{\theta})$ are Gaussian, the same follows for the GP posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, and the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$. Therefore, we just need to sit down with pen and paper to derive the resulting forms of the Gaussian distributions of these objects (see references). Luckily, everything is already implemented in Shogun.
In order to get some intuition about Gaussian Processes in general, let us first have a look at these latent Gaussian variables, which define a probability distribution over real values functions $f(\mathbf{x})
Step3: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http
Step4: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x})
Step5: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it more clear), this reveils that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
In the above plots all functions are equally possible. That is, the prior of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ does not favour any particular function setups. Computing the posterior given our training data, the distribution ober $\mathbf{f}|\mathbf{y},\boldsymbol{\theta}$ then corresponds to restricting the above distribution over functions to those that explain the training data (up to observation noise). We will now use the Shogun class <a href="http
Step6: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^$ for any covariate $\mathbf{x}^$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http
Step7: The question now is
Step8: Now we can output the best parameters and plot the predictive distribution for those.
Step9: Now the predictive distribution is very close to the true data generating process.
Non-Linear, Binary Bayesian Classification
In binary classification, the observed data comes from a space of discrete, binary labels, i.e. $\mathbf{y}\in\mathcal{Y}^n={-1,+1}^n$, which are represented via the Shogun class <a href="http
Step10: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood is implemented in Shogun is the Gaussian CDF function
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \Phi(y_i f_i),$
where $\Phi
Step11: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http
Step12: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http
Step13: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http
Step14: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for that the marginal likelihood is maximised. If one was interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, as for example its gradient
Step15: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition
Step16: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, this is the s <a href="http
Step17: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different to the one we saw in the above plot. This is caused by the different kernel scalling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maximum (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive however.
Step18: Our found maximum nicely matches the result of the "grid-search". The take home message for this is | Python Code:
%matplotlib inline
# import all shogun classes
from shogun import *
import random
import numpy as np
import matplotlib.pyplot as plt
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
from math import exp
Explanation: Gaussian Processes in Shogun
By Heiko Strathmann - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/karlnapf">github.com/karlnapf</a> - <a href="http://herrstrathmann.de">herrstrathmann.de</a>. Based on the GP framework of the <a href="http://www.google-melange.com/gsoc/project/google/gsoc2013/votjak/8001">Google summer of code 2013 project</a> of Roman Votyakov - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/votjakovr">github.com/votjakovr</a>, and the <a href="http://www.google-melange.com/gsoc/project/google/gsoc2012/walke434/39001">Google summer of code 2012 project</a> of Jacob Walker - <a href="mailto:[email protected]">[email protected]</a> - <a href="https://github.com/puffin444">github.com/puffin444</a>
This notebook is about <a href="http://en.wikipedia.org/wiki/Bayesian_linear_regression">Bayesian regression</a> and <a href="http://en.wikipedia.org/wiki/Statistical_classification">classification</a> models with <a href="http://en.wikipedia.org/wiki/Gaussian_process">Gaussian Process (GP)</a> priors in Shogun. After providing a semi-formal introduction, we illustrate how to efficiently train them, use them for predictions, and automatically learn parameters.
End of explanation
# plot likelihood for three different noise lebels $\sigma$ (which is not yet squared)
sigmas=np.array([0.5,1,2])
# likelihood instance
lik=GaussianLikelihood()
# A set of labels to consider
lab=RegressionLabels(np.linspace(-4.0,4.0, 200))
# A single 1D Gaussian response function, repeated once for each label
# this avoids doing a loop in python which would be slow
F=np.zeros(lab.get_num_labels())
# plot likelihood for all observations noise levels
plt.figure(figsize=(12, 4))
for sigma in sigmas:
# set observation noise, this is squared internally
lik.set_sigma(sigma)
# compute log-likelihood for all labels
log_liks=lik.get_log_probability_f(lab, F)
# plot likelihood functions, exponentiate since they were computed in log-domain
plt.plot(lab.get_labels(), list(map(exp,log_liks)))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$y_i$")
plt.title("Regression Likelihoods for different observation noise levels")
_=plt.legend(["sigma=$%.1f$" % sigma for sigma in sigmas])
Explanation: Some Formal Background (Skip if you just want code examples)
This notebook is about Bayesian regression models with Gaussian Process priors. A Gaussian Process (GP) over real valued functions on some domain $\mathcal{X}$, $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, written as
$\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}')),$
defines a distribution over real valued functions with mean value $m(\mathbf{x})=\mathbb{E}[f(\mathbf{x})]$ and inter-function covariance $k(\mathbf{x},\mathbf{x}')=\mathbb{E}[(f(\mathbf{x})-m(\mathbf{x}))(f(\mathbf{x}')-m(\mathbf{x}'))]$. This intuitively means that the function value at any point $\mathbf{x}$, i.e., $f(\mathbf{x})$ is a random variable with mean $m(\mathbf{x})$; if you take the average of infinitely many functions from the Gaussian Process, and evaluate them at $\mathbf{x}$, you will get this value. Similarly, the function values at two different points $\mathbf{x}, \mathbf{x}'$ have covariance $k(\mathbf{x}, \mathbf{x}')$. The formal definition is that a Gaussian Process is a collection of random variables (possibly infinite) of which any finite subset has a joint Gaussian distribution.
One can model data with Gaussian Processes via defining a joint distribution over
$n$ data (labels in Shogun) $\mathbf{y}\in \mathcal{Y}^n$, from an $n$-dimensional continuous (regression) or discrete (classification) space. These data correspond to $n$ covariates $\mathbf{x}_i\in\mathcal{X}$ (features in Shogun) from the input space $\mathcal{X}$.
Hyper-parameters $\boldsymbol{\theta}$ which depend on the used model (details follow).
Latent Gaussian variables $\mathbf{f}\in\mathbb{R}^n$, coming from a GP, i.e., they have a joint Gaussian distribution. Every entry $f_i$ corresponds to the GP function $f(\mathbf{x_i})$ evaluated at covariate $\mathbf{x}_i$ for $1\leq i \leq n$.
The joint distribution takes the form
$p(\mathbf{f},\mathbf{y},\boldsymbol{\theta})=p(\boldsymbol{\theta})p(\mathbf{f}|\boldsymbol{\theta})p(\mathbf{y}|\mathbf{f}),$
where $\mathbf{f}|\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{m}_{\boldsymbol{\theta}}, \mathbf{C}_{\boldsymbol{\theta}})$ is the joint Gaussian distribution for the GP variables, with mean $\mathbf{m}_{\boldsymbol{\theta}}$ and covariance $\mathbf{C}_{\boldsymbol{\theta}}$. The $(i,j)$-th entry of $\mathbf{C}_{\boldsymbol{\theta}}$ is given by the covariance or kernel between the $(i,j)$-th covariates $k(\mathbf{x}_i, \mathbf{x}_j)$. Examples for kernel and mean functions are given later in the notebook.
Mean and covariance both depend on hyper-parameters coming from a prior distribution $\boldsymbol{\theta}\sim p(\boldsymbol{\theta})$. The data itself $\mathbf{y}\in \mathcal{Y}^n$ (no assumptions on $\mathcal{Y}$ for now) is modelled by a likelihood function $p(\mathbf{y}|\mathbf{f})$, which gives the probability of the data $\mathbf{y}$ given a state of the latent Gaussian variables $\mathbf{f}$, i.e. $p(\mathbf{y}|\mathbf{f}):\mathcal{Y}^n\rightarrow [0,1]$.
In order to do inference for a new, unseen covariate $\mathbf{x}^*\in\mathcal{X}$, i.e., predicting its label $y^*\in\mathcal{Y}$ or in particular computing the predictive distribution for that label, we have to integrate over the posterior over the latent Gaussian variables (assume fixed $\boldsymbol{\theta}$ for now, which means you can just ignore the symbol in the following if you want),
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(y^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}.$
This posterior, $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, can be obtained using standard <a href="http://en.wikipedia.org/wiki/Bayes'_theorem">Bayes-Rule</a> as
$p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})=\frac{p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})}{p(\mathbf{y}|\boldsymbol{\theta})},$
with the so called evidence or marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$ given as another integral over the prior over the latent Gaussian variables
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
In order to solve the above integrals, Shogun offers a variety of approximations. Don't worry, you will not have to deal with these nasty integrals on your own, but everything is hidden within Shogun. Though, if you like to play with these objects, you will be able to compute only parts.
Note that in the above description, we did not make any assumptions on the input space $\mathcal{X}$. As long as you define mean and covariance functions, and a likelihood, your data can have any form you like. Shogun in fact is able to deal with standard <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDenseFeatures.html">dense numerical data</a>, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CSparseFeatures.html"> sparse data</a>, and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStringFeatures.html">strings of any type</a>, and many more out of the box. We will provide some examples below.
To gain some intuition how these latent Gaussian variables behave, and how to model data with them, see the regression part of this notebook.
Non-Linear Bayesian Regression
Bayesian regression with Gaussian Processes is among the most fundamental applications of latent Gaussian models. As usual, the observed data come from a continuous space, i.e. $\mathbf{y}\in\mathbb{R}^n$, which is represented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CRegressionLabels.html">CRegressionLabels</a>. We assume that these observations come from some distribution $p(\mathbf{y}|\mathbf{f})$ that is based on a fixed state of latent Gaussian response variables $\mathbf{f}\in\mathbb{R}^n$. In fact, we assume that the true model is the latent Gaussian response variable (which defines a distribution over functions), plus some Gaussian observation noise, which is modelled by the likelihood as
$p(\mathbf{y}|\mathbf{f})=\mathcal{N}(\mathbf{f},\sigma^2\mathbf{I})$
This simple likelihood is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianLikelihood.html">CGaussianLikelihood</a>. It is the well known bell curve. Below, we plot the likelihood as a function of $\mathbf{y}$, for $n=1$.
End of explanation
def generate_regression_toy_data(n=50, n_test=100, x_range=15, x_range_test=20, noise_var=0.4):
# training and test sine wave, test one has more points
X_train = np.random.rand(n)*x_range
X_test = np.linspace(0,x_range_test, 500)
# add noise to training observations
y_test = np.sin(X_test)
y_train = np.sin(X_train)+np.random.randn(n)*noise_var
return X_train, y_train, X_test, y_test
X_train, y_train, X_test, y_test = generate_regression_toy_data()
plt.figure(figsize=(16,4))
plt.plot(X_train, y_train, 'ro')
plt.plot(X_test, y_test)
plt.legend(["Noisy observations", "True model"])
plt.title("One-Dimensional Toy Regression Data")
plt.xlabel("$\mathbf{x}$")
_=plt.ylabel("$\mathbf{y}$")
Explanation: Apart from its appealing form, this curve has the nice property of giving rise to analytical solutions to the required integrals. Recall these are given by
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(y^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
Since all involved elements, the likelihood $p(\mathbf{y}|\mathbf{f})$, the GP prior $p(\mathbf{f}|\boldsymbol{\theta})$ are Gaussian, the same follows for the GP posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, and the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$. Therefore, we just need to sit down with pen and paper to derive the resulting forms of the Gaussian distributions of these objects (see references). Luckily, everything is already implemented in Shogun.
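For reference, the result of that derivation (the standard textbook form for the zero-mean prior and Gaussian likelihood used here; it is not spelled out in the original text) is a Gaussian predictive distribution with
$\boldsymbol{\mu}_* = \mathbf{K}_*^\top(\mathbf{K}+\sigma^2\mathbf{I})^{-1}\mathbf{y}, \qquad \boldsymbol{\Sigma}_* = \mathbf{K}_{**} - \mathbf{K}_*^\top(\mathbf{K}+\sigma^2\mathbf{I})^{-1}\mathbf{K}_*,$
where $\mathbf{K}$, $\mathbf{K}_*$ and $\mathbf{K}_{**}$ contain the kernel evaluated on train/train, train/test and test/test covariates, respectively.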
In order to get some intuition about Gaussian Processes in general, let us first have a look at these latent Gaussian variables, which define a probability distribution over real valued functions $f(\mathbf{x}):\mathcal{X} \rightarrow \mathbb{R}$, where in the regression case, $\mathcal{X}=\mathbb{R}$.
As mentioned above, the joint distribution of a finite number (say $n$) of variables $\mathbf{f}\in\mathbb{R}^n$ from a Gaussian Process $\mathcal{GP}(m(\mathbf{x}), k(\mathbf{x},\mathbf{x}'))$, takes the form
$\mathbf{f}|\boldsymbol{\theta}\sim\mathcal{N}(\mathbf{m}_{\boldsymbol{\theta}}, \mathbf{C}_{\boldsymbol{\theta}}),$
where $\mathbf{m}_{\boldsymbol{\theta}}$ is the mean function's mean and $\mathbf{C}_{\boldsymbol{\theta}}$ is the pairwise covariance or kernel matrix of the input covariates $\mathbf{x}_i$. This means we can easily sample function realisations $\mathbf{f}^{(j)}$ from the Gaussian Process, and more importantly, visualise them.
To this end, let us consider the well-known and often used Gaussian Kernel or squared exponential covariance, which is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> in the parametric form (note that there are other forms in the literature)
$ k(\mathbf{x}, \mathbf{x}')=\exp\left( -\frac{||\mathbf{x}-\mathbf{x}'||_2^2}{\tau}\right),$
where $\tau$ is a hyper-parameter of the kernel. We will also use the constant <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CZeroMean.html">CZeroMean</a> mean function, which is suitable if the data's mean is zero (which can be achieved via removing it).
Let us consider some toy regression data in the form of a sine wave, which is observed at random points with some observation noise.
End of explanation
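# Illustrative sketch (not part of the original notebook): the squared exponential
# covariance from the text, k(x, x') = exp(-||x - x'||^2 / tau), written out in
# plain NumPy so individual entries can be compared with Shogun's GaussianKernel.
def squared_exponential(x, x_prime, tau):
    return np.exp(-np.sum((x - x_prime)**2) / tau)
print(squared_exponential(np.array([1.0]), np.array([2.5]), tau=4.0))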
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=features(X_train.reshape(1,len(X_train)))
feats_test=features(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# compute covariances for different kernel parameters
taus=np.asarray([.1,4.,32.])
Cs=np.zeros(((len(X_train), len(X_train), len(taus))))
for i in range(len(taus)):
# compute unscalled kernel matrix (first parameter is maximum size in memory and not very important)
kernel=GaussianKernel(10, taus[i])
kernel.init(feats_train, feats_train)
Cs[:,:,i]=kernel.get_kernel_matrix()
# plot
plt.figure(figsize=(16,5))
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
plt.imshow(Cs[:,:,i], interpolation="nearest")
plt.xlabel("Covariate index")
plt.ylabel("Covariate index")
_=plt.title("tau=%.1f" % taus[i])
Explanation: First, we compute the kernel matrix $\mathbf{C}_\boldsymbol{\theta}$ using the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> with hyperparameter $\boldsymbol{\theta}=\{\tau\}$ for a few different values. Note that in Gaussian Processes, kernels usually have a scaling parameter. We skip this one for now and cover it later.
End of explanation
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP prior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(np.zeros(len(X_train)), Cs[:,:,i], 3)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=X_train.argsort()
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
Explanation: This matrix, as any kernel or covariance matrix, is positive semi-definite and symmetric. It can be viewed as a similarity matrix. Here, elements on the diagonal (corresponding to $\mathbf{x}=\mathbf{x}'$) have largest similarity. For increasing kernel bandwidth $\tau$, more and more elements are similar. This matrix fully specifies a distribution over functions $f(\mathbf{x}):\mathcal{X}\rightarrow\mathbb{R}$ over a finite set of latent Gaussian variables $\mathbf{f}$, which we can sample from and plot. To this end, we use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CStatistics.html">CStatistics</a>, which offers a method to sample from multivariate Gaussians.
End of explanation
plt.figure(figsize=(16,5))
plt.suptitle("Random Samples from GP posterior")
for i in range(len(taus)):
plt.subplot(1,len(taus),i+1)
# create inference method instance with very small observation noise to make
inf=ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
C_post=inf.get_posterior_covariance()
m_post=inf.get_posterior_mean()
# sample a bunch of latent functions from the Gaussian Process
# note these vectors are stored row-wise
F=Statistics.sample_from_gaussian(m_post, C_post, 5)
for j in range(len(F)):
# sort points to connect the dots with lines
sorted_idx=sorted(range(len(X_train)),key=lambda x:X_train[x])
plt.plot(X_train[sorted_idx], F[j,sorted_idx], '-', markersize=6)
plt.plot(X_train, y_train, 'r*')
plt.xlabel("$\mathbf{x}_i$")
plt.ylabel("$f(\mathbf{x}_i)$")
_=plt.title("tau=%.1f" % taus[i])
Explanation: Note how the functions are exactly evaluated at the training covariates $\mathbf{x}_i$ which are randomly distributed on the x-axis. Even though these points do not visualise the full functions (we can only evaluate them at a finite number of points, but we connected the points with lines to make it more clear), this reveals that larger values of the kernel bandwidth $\tau$ lead to smoother latent Gaussian functions.
In the above plots all functions are equally possible. That is, the prior of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ does not favour any particular function setups. Computing the posterior given our training data, the distribution over $\mathbf{f}|\mathbf{y},\boldsymbol{\theta}$ then corresponds to restricting the above distribution over functions to those that explain the training data (up to observation noise). We will now use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CExactInferenceMethod.html">CExactInferenceMethod</a> to do exactly this. The class is the general basis of exact GP regression in Shogun. We have to define all parts of the Gaussian Process for the inference method.
End of explanation
# helper function that plots predictive distribution and data
def plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances):
# evaluate predictive distribution in this range of y-values and preallocate predictive distribution
y_values=np.linspace(-3,3)
D=np.zeros((len(y_values), len(X_test)))
# evaluate normal distribution at every prediction point (column)
for j in range(np.shape(D)[1]):
# create gaussian distributio instance, expects mean vector and covariance matrix, reshape
gauss=GaussianDistribution(np.array(means[j]).reshape(1,), np.array(variances[j]).reshape(1,1))
# evaluate predictive distribution for test point, method expects matrix
D[:,j]=np.exp(gauss.log_pdf_multiple(y_values.reshape(1,len(y_values))))
plt.pcolor(X_test,y_values,D)
plt.colorbar()
plt.contour(X_test,y_values,D)
plt.plot(X_test,y_test, 'b', linewidth=3)
plt.plot(X_test,means, 'm--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.legend(["Truth", "Prediction", "Data"])
plt.figure(figsize=(18,10))
plt.suptitle("GP inference for different kernel widths")
for i in range(len(taus)):
plt.subplot(len(taus),1,i+1)
# create GP instance using inference method and train
# use Shogun objects from above
inf.put('kernel', GaussianKernel(10,taus[i]))
gp=GaussianProcessRegression(inf)
gp.train()
# predict labels for all test data (note that this produces the same as the below mean vector)
means = gp.apply(feats_test)
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# note: y_predicted == means
# plot predictive distribution and training data
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("tau=%.1f" % taus[i])
Explanation: Note how the above function samples are constrained to go through our training data labels (up to observation noise), as much as their smoothness allows them. In fact, these are already samples from the predictive distribution, which gives a probability for a label $\mathbf{y}^*$ for any covariate $\mathbf{x}^*$. These distributions are Gaussian (!), nice to look at and extremely useful to understand the GP's underlying model. Let's plot them. We finally use the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianProcessRegression.html">CGaussianProcessRegression</a> to represent the whole GP under an interface to perform inference with. In addition, we use the helper class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianDistribution.html">CGaussianDistribution</a> to evaluate the log-likelihood for every test point's $\mathbf{x}^*_j$ value $\mathbf{y}_j^*$.
End of explanation
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = ExactInferenceMethod(GaussianKernel(10, taus[i]), feats_train, ZeroMean(), labels_train, GaussianLikelihood())
gp = GaussianProcessRegression(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, feats_train, labels_train, GradientCriterion(), False)
grad.put('differentiable_function', inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP, train
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
best_sigma=GaussianLikelihood.obtain_from_generic(inf.get_model()).get_sigma()
print("Selected tau (kernel bandwidth):", best_width)
print("Selected gamma (kernel scaling):", best_scale)
print("Selected sigma (observation noise):", best_sigma)
Explanation: The question now is: Which set of hyper-parameters $\boldsymbol{\theta}=\{\tau, \gamma, \sigma\}$ to take, where $\gamma$ is the kernel scaling (which we omitted so far), and $\sigma$ is the observation noise (which we left at its default value of one)? The question of model-selection will be handled in a bit more depth in the binary classification case. For now we just show the code to do it as a black box. See below for explanations.
End of explanation
# train gp
gp.train()
# extract means and variance of predictive distribution for all test points
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
# plot predictive distribution
plt.figure(figsize=(18,5))
plot_predictive_regression(X_train, y_train, X_test, y_test, means, variances)
_=plt.title("Maximum Likelihood II based inference")
Explanation: Now we can output the best parameters and plot the predictive distribution for those.
End of explanation
# two classification likelihoods in Shogun
logit=LogitLikelihood()
probit=ProbitLikelihood()
# A couple of Gaussian response functions, 1-dimensional here
F=np.linspace(-5.0,5.0)
# Single observation label with +1
lab=BinaryLabels(np.array([1.0]))
# compute log-likelihood for all values in F
log_liks_logit=np.zeros(len(F))
log_liks_probit=np.zeros(len(F))
for i in range(len(F)):
# Shogun expects a 1D array for f, not a single number
f=np.array(F[i]).reshape(1,)
log_liks_logit[i]=logit.get_log_probability_f(lab, f)
log_liks_probit[i]=probit.get_log_probability_f(lab, f)
# in fact, loops are slow and Shogun offers a method to compute the likelihood for many f. Much faster!
log_liks_logit=logit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
log_liks_probit=probit.get_log_probability_fmatrix(lab, F.reshape(1,len(F)))
# plot the sigmoid functions, note that Shogun computes it in log-domain, so we have to exponentiate
plt.figure(figsize=(12, 4))
plt.plot(F, np.exp(log_liks_logit))
plt.plot(F, np.exp(log_liks_probit))
plt.ylabel("$p(y_i|f_i)$")
plt.xlabel("$f_i$")
plt.title("Classification Likelihoods")
_=plt.legend(["Logit", "Probit"])
Explanation: Now the predictive distribution is very close to the true data generating process.
Non-Linear, Binary Bayesian Classification
In binary classification, the observed data comes from a space of discrete, binary labels, i.e. $\mathbf{y}\in\mathcal{Y}^n=\{-1,+1\}^n$, which are represented via the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CBinaryLabels.html">CBinaryLabels</a>. To model these observations with a GP, we need a likelihood function $p(\mathbf{y}|\mathbf{f})$ that maps a set of such discrete observations to a probability, given a fixed response $\mathbf{f}$ of the Gaussian Process.
In regression, this was straightforward, as we could simply use the response variable $\mathbf{f}$ itself, plus some Gaussian noise, which gave rise to a probability distribution. However, now that the $\mathbf{y}$ are discrete, we cannot do the same thing. We rather need a function that squashes the Gaussian response variable itself to a probability, given some data. This is a common problem in Machine Learning and Statistics and is usually done with some sort of Sigmoid function of the form $\sigma:\mathbb{R}\rightarrow[0,1]$. One popular choice for such a function is the Logit likelihood, given by
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \frac{1}{1+\exp(-y_i f_i)}.$
This likelihood is implemented in Shogun under <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLogitLikelihood.html">CLogitLikelihood</a> and using it is sometimes referred to as logistic regression. Using it with GPs results in non-linear Bayesian logistic regression. We can easily use the class to illustrate the sigmoid function for a 1D example and a fixed data point with label $+1$
End of explanation
def generate_classification_toy_data(n_train=100, mean_a=np.asarray([0, 0]), std_dev_a=1.0, mean_b=3, std_dev_b=0.5):
# positive examples are distributed normally
X1 = (np.random.randn(n_train, 2)*std_dev_a+mean_a).T
# negative examples have a "ring"-like form
r = np.random.randn(n_train)*std_dev_b+mean_b
angle = np.random.randn(n_train)*2*np.pi
X2 = np.array([r*np.cos(angle)+mean_a[0], r*np.sin(angle)+mean_a[1]])
# stack positive and negative examples in a single array
X_train = np.hstack((X1,X2))
# label positive examples with +1, negative with -1
y_train = np.zeros(n_train*2)
y_train[:n_train] = 1
y_train[n_train:] = -1
return X_train, y_train
def plot_binary_data(X_train, y_train):
plt.plot(X_train[0, np.argwhere(y_train == 1)], X_train[1, np.argwhere(y_train == 1)], 'ro')
plt.plot(X_train[0, np.argwhere(y_train == -1)], X_train[1, np.argwhere(y_train == -1)], 'bo')
X_train, y_train=generate_classification_toy_data()
plot_binary_data(X_train, y_train)
_=plt.title("2D Toy classification problem")
Explanation: Note how the logit function maps any input value to $[0,1]$ in a continuous way. The other plot above is for another classification likelihood implemented in Shogun, the Gaussian CDF function
$p(\mathbf{y}|\mathbf{f})=\prod_{i=1}^n p(y_i|f_i)=\prod_{i=1}^n \Phi(y_i f_i),$
where $\Phi:\mathbb{R}\rightarrow [0,1]$ is the <a href="http://en.wikipedia.org/wiki/Cumulative_distribution_function">cumulative distribution function</a> (CDF) of the standard Gaussian distribution $\mathcal{N}(0,1)$. It is implemented in the Shogun class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CProbitLikelihood.html">CProbitLikelihood</a> and using it is referred to as probit regression. While the Gaussian CDF has some convenient properties for integrating over it (and thus allowing some different modelling decisions), it does not really matter what you use in Shogun in most cases. However, for the sake of completeness, it is also plotted above, being very similar to the logit likelihood.
TODO: Show a function squashed through the logit likelihood
Recall that in order to do inference, we need to solve two integrals (in addition to the Bayes rule, see above)
$p(y^*|\mathbf{y}, \boldsymbol{\theta})=\int p(y^*|\mathbf{f})p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta},$
and
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}|\boldsymbol{\theta}$.
In classification, the second integral is not available in closed form since it is the convolution of a Gaussian, $p(\mathbf{f}|\boldsymbol{\theta})$, and a non-Gaussian, $p(\mathbf{y}|\mathbf{f})$, distribution. Therefore, we have to rely on approximations in order to compute and integrate over the posterior $p(\mathbf{f}|\mathbf{y},\boldsymbol{\theta})$. Shogun offers various standard methods from the literature to deal with this problem, including the Laplace approximation (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLaplacianInferenceMethod.html">CLaplacianInferenceMethod</a>) and Expectation Propagation (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CEPInferenceMethod.html">CEPInferenceMethod</a>) for inference and evaluating the marginal likelihood. These two approximations give rise to a Gaussian posterior $p(\mathbf{f}|\mathbf{y}, \boldsymbol{\theta})$, which can then be easily computed and integrated over (all this is done by Shogun for you).
While the Laplace approximation is quite fast, EP usually has better accuracy, in particular if one is not just interested in binary decisions but also in certainty values for these predictions. Go for Laplace if interested in binary decisions, and for EP otherwise.
TODO, add references to inference methods.
We will now give an example of how to do GP inference for binary classification in Shogun on some toy data. For that, we will first define a function to generate a classical non-linear classification problem.
End of explanation
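# Illustrative sketch (not part of the original notebook, addressing the TODO
# above): squash an arbitrary latent function f(x) through the logit sigmoid,
# p(y=+1|f) = 1 / (1 + exp(-f)), to turn real-valued responses into class
# probabilities.
x_demo = np.linspace(-5, 5, 200)
f_latent = 3*np.sin(x_demo)                   # some latent, GP-like function values
p_plus = 1.0 / (1.0 + np.exp(-f_latent))      # probability of the +1 class
plt.figure(figsize=(12, 4))
plt.plot(x_demo, f_latent, label="latent $f(x)$")
plt.plot(x_demo, p_plus, label="$p(y=+1|f(x))$")
_=plt.legend()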
# for building combinations of arrays
from itertools import product
# convert training data into Shogun representation
train_features = features(X_train)
train_labels = BinaryLabels(y_train)
# generate all pairs in 2d range of testing data (full space), discretisation resultion is n_test
n_test=50
x1 = np.linspace(X_train[0,:].min()-1, X_train[0,:].max()+1, n_test)
x2 = np.linspace(X_train[1,:].min()-1, X_train[1,:].max()+1, n_test)
X_test = np.asarray(list(product(x1, x2))).T
# convert testing features into Shogun representation
test_features = features(X_test)
# create Gaussian kernel with width = 2.0
kernel = GaussianKernel(10, 2)
# create zero mean function
zero_mean = ZeroMean()
# you can easily switch between probit and logit likelihood models
# by uncommenting/commenting the following lines:
# create probit likelihood model
# lik = ProbitLikelihood()
# create logit likelihood model
lik = LogitLikelihood()
# you can easily switch between Laplace and EP approximation by
# uncommenting/commenting the following lines:
# specify Laplace approximation inference method
#inf = LaplacianInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# specify EP approximation inference method
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
# create and train GP classifier with the chosen approximation (EP here)
gp = GaussianProcessClassification(inf)
gp.train()
test_labels=gp.apply(test_features)
# plot data and decision boundary
plot_binary_data(X_train, y_train)
plt.pcolor(x1, x2, test_labels.get_labels().reshape(n_test, n_test))
_=plt.title('Decision boundary')
Explanation: We will now pass this data into Shogun representation, and use the standard Gaussian kernel (or squared exponential covariance function (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a>)) and the Laplace approximation to obtain a decision boundary for the two classes. You can easily exchange different likelihood models and inference methods.
End of explanation
# obtain probabilities for
p_test = gp.get_probabilities(test_features)
# create figure
plt.title('Training data, predictive probability and decision boundary')
# plot training data
plot_binary_data(X_train, y_train)
# plot decision boundary
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
# plot probabilities
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This is already quite nice. The nice thing about Gaussian Processes now is that they are Bayesian, which means that we have a full predictive distribution, i.e., we can plot the probability for a point belonging to a class. These can be obtained via the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianProcessClassification.html">CGaussianProcessClassification</a>
End of explanation
# generate some non-negative kernel widths
widths=2**np.linspace(-5,6,20)
# compute marginal likelihood under the chosen approximation (EP here) for every width
# use Shogun objects from above
marginal_likelihoods=np.zeros(len(widths))
for i in range(len(widths)):
# note that GP training is automatically done/updated if a parameter is changed. No need to call train again
kernel.set_width(widths[i])
marginal_likelihoods[i]=-inf.get_negative_log_marginal_likelihood()
# plot marginal likelihoods as a function of kernel width
plt.plot(np.log2(widths), marginal_likelihoods)
plt.title("Log Marginal likelihood for different kernels")
plt.xlabel("Kernel Width in log-scale")
_=plt.ylabel("Log-Marginal Likelihood")
print("Width with largest marginal likelihood:", widths[marginal_likelihoods.argmax()])
Explanation: If you are interested in the marginal likelihood $p(\mathbf{y}|\boldsymbol{\theta})$, for example for the sake of comparing different model parameters $\boldsymbol{\theta}$ (more in model-selection later), it is very easy to compute it via the interface of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CInferenceMethod.html">CInferenceMethod</a>, i.e., every inference method in Shogun can do that. It is even possible to obtain the mean and covariance of the Gaussian approximation to the posterior $p(\mathbf{f}|\mathbf{y})$ using Shogun. In the following, we plot the marginal likelihood under the EP inference method (more accurate approximation) as a one dimensional function of the kernel width.
End of explanation
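# Illustrative sketch (not part of the original notebook): the mean and covariance
# of the Gaussian approximation to the posterior p(f|y) mentioned above can be read
# off the inference method, using the same calls as in the exact regression case
# earlier (assumed to be available for the EP approximation as well).
post_mean = inf.get_posterior_mean()
post_cov = inf.get_posterior_covariance()
print(post_mean.shape, post_cov.shape)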
# again, use Shogun objects from above, but a few extremal widths
widths_subset=np.array([widths[0], widths[marginal_likelihoods.argmax()], widths[len(widths)-1]])
plt.figure(figsize=(18, 5))
for i in range(len(widths_subset)):
plt.subplot(1,len(widths_subset),i+1)
kernel.set_width(widths_subset[i])
# obtain and plot predictive distribution
p_test = gp.get_probabilities(test_features)
title_str="Width=%.2f, " % widths_subset[i]
    if i == 0:
title_str+="too complex, overfitting"
    elif i == 1:
title_str+="just right"
else:
title_str+="too smooth, underfitting"
plt.title(title_str)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This plot clearly shows that there is one kernel width (aka hyper-parameter element $\theta$) for which the marginal likelihood is maximised. If one is interested in the single best parameter, the above concept can be used to learn the best hyper-parameters of the GP. In fact, this is possible in a very efficient way since we have a lot of information about the geometry of the marginal likelihood function, for example its gradient: it turns out that the above function is smooth, so we can use the usual optimisation techniques to find extrema. This is called maximum likelihood II. Let's have a closer look.
Excurs: Model-Selection with Gaussian Processes
First, let us have a look at the predictive distributions of some of the above kernel widths
End of explanation
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
# evaluate our inference method for its derivatives
grad = GradientEvaluation(gp, train_features, train_labels, GradientCriterion(), False)
grad.put('differentiable_function', inf)
# handles all of the above structures in memory
grad_search = GradientModelSelection(grad)
# search for best parameters and store them
best_combination = grad_search.select_model()
# apply best parameters to GP
best_combination.apply_to_machine(gp)
# we have to "cast" objects to the specific kernel interface we used (soon to be easier)
best_width=GaussianKernel.obtain_from_generic(inf.get_kernel()).get_width()
best_scale=inf.get_scale()
print("Selected kernel bandwidth:", best_width)
print("Selected kernel scale:", best_scale)
Explanation: In the above plots, it is quite clear that the maximum of the marginal likelihood corresponds to the best single setting of the parameters. To give some more intuition: The interpretation of the marginal likelihood
$p(\mathbf{y}|\boldsymbol{\theta})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f}|\boldsymbol{\theta})d\mathbf{f}$
is the probability of the data given the model parameters $\boldsymbol{\theta}$. Note that this is averaged over all possible configurations of the latent Gaussian variables $\mathbf{f}|\boldsymbol{\theta}$ given a fixed configuration of parameters. However, since this is a probability distribution, it has to integrate to $1$. This means that models that are too complex (and thus able to explain too many different data configurations) and models that are too simple (and thus not able to explain the current data) both give rise to a small marginal likelihood. Only when the model is just complex enough to explain the data well (but not more complex) is the marginal likelihood maximised. This is an implementation of a concept called <a href="http://en.wikipedia.org/wiki/Occam's_razor#Probability_theory_and_statistics">Occam's razor</a>, and is a nice motivation why you should be Bayesian if you can -- overfitting doesn't happen that quickly.
As mentioned before, Shogun is able to automagically learn all of the hyper-parameters $\boldsymbol{\theta}$ using gradient based optimisation on the marginal likelihood (whose derivatives are computed internally). To do this, we use the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientModelSelection.html">CGradientModelSelection</a>. Note that we could also use <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGridSearchModelSelection.html">CGridSearchModelSelection</a> to do a standard grid-search, such as is done for Support Vector Machines. However, this is highly inefficient, in particular when the number of parameters grows. In addition, in order to evaluate parameter states, we have to use the classes <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientEvaluation.html">CGradientEvaluation</a>, and <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1GradientCriterion.html">GradientCriterion</a>, which is also much cheaper than the usual <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CCrossValidation.html">CCrossValidation</a>, since it just evaluates the gradient of the marginal likelihood rather than performing many training and testing runs. This is another very nice motivation for using Gaussian Processes: optimising parameters is much easier. In the following, we demonstrate how to select all parameters of the used model. In Shogun, parameter configurations (corresponding to $\boldsymbol{\theta}$) are stored in instances of <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CParameterCombination.html">CParameterCombination</a>, which can be applied to machines.
This approach is known as maximum likelihood II (the 2 is for the second level, averaging over all possible $\mathbf{f}|\boldsymbol{\theta}$), or evidence maximisation.
End of explanation
# train gp
gp.train()
# visualise predictive distribution
p_test = gp.get_probabilities(test_features)
plot_binary_data(X_train, y_train)
plt.contour(x1, x2, np.reshape(p_test, (n_test, n_test)), levels=[0.5], colors=('black'))
plt.pcolor(x1, x2, p_test.reshape(n_test, n_test))
_=plt.colorbar()
Explanation: This now gives us a trained Gaussian Process with the best hyper-parameters. In the above setting, these are the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianKernel.html">CGaussianKernel</a> bandwidth and its scale (which is stored in the GP itself since Shogun kernels do not support scaling). We can now again visualise the predictive distribution, and also output the best parameters.
End of explanation
# parameter space, increase resolution if you want finer plots, takes long though
resolution=5
widths=2**np.linspace(-4,10,resolution)
scales=2**np.linspace(-5,10,resolution)
# re-create inference method and GP instance to start from scratch, use other Shogun structures from above
inf = EPInferenceMethod(kernel, train_features, zero_mean, train_labels, lik)
# EP might not converge, we here allow that without errors
inf.set_fail_on_non_convergence(False)
gp = GaussianProcessClassification(inf)
inf.set_tolerance(1e-3)
# compute marginal likelihood for every parameter combination
# use Shogun objects from above
marginal_likelihoods=np.zeros((len(widths), len(scales)))
for i in range(len(widths)):
for j in range(len(scales)):
kernel.set_width(widths[i])
inf.set_scale(scales[j])
marginal_likelihoods[i,j]=-inf.get_negative_log_marginal_likelihood()
# contour plot of marginal likelihood as a function of kernel width and scale
plt.contour(np.log2(widths), np.log2(scales), marginal_likelihoods)
plt.colorbar()
plt.xlabel("Kernel width (log-scale)")
plt.ylabel("Kernel scale (log-scale)")
_=plt.title("Log Marginal Likelihood")
# plot our found best parameters
_=plt.plot([np.log2(best_width)], [np.log2(best_scale)], 'r*', markersize=20)
Explanation: Note how nicely this predictive distribution matches the data generating distribution. Also note that the best kernel bandwidth is different to the one we saw in the above plot. This is caused by the different kernel scalling that was also learned automatically. The kernel scaling, roughly speaking, corresponds to the sharpness of the changes in the surface of the predictive likelihood. Since we have two hyper-parameters, we can plot the surface of the marginal likelihood as a function of both of them. This is sometimes interesting, for example when this surface has multiple maximum (corresponding to multiple "best" parameter settings), and thus might be useful for analysis. It is expensive however.
End of explanation
# for measuring runtime
import time
# simple regression data
X_train, y_train, X_test, y_test = generate_regression_toy_data(n=1000)
# bring data into shogun representation (features are 2d-arrays, organised as column vectors)
feats_train=features(X_train.reshape(1,len(X_train)))
feats_test=features(X_test.reshape(1,len(X_test)))
labels_train=RegressionLabels(y_train)
# inducing features (here: a random grid over the input space, try out others)
n_inducing=10
#X_inducing=linspace(X_train.min(), X_train.max(), n_inducing)
X_inducing=np.random.rand(int(X_train.min())+n_inducing)*X_train.max()
feats_inducing=features(X_inducing.reshape(1,len(X_inducing)))
# create FITC inference method and GP instance
inf = FITCInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma), feats_inducing)
gp = GaussianProcessRegression(inf)
start=time.time()
gp.train()
means = gp.get_mean_vector(feats_test)
variances = gp.get_variance_vector(feats_test)
print("FITC inference took %.2f seconds" % (time.time()-start))
# exact GP
start=time.time()
inf_exact = ExactInferenceMethod(GaussianKernel(10, best_width), feats_train, ZeroMean(), labels_train, \
GaussianLikelihood(best_sigma))
inf_exact.set_scale(best_scale)
gp_exact = GaussianProcessRegression(inf_exact)
gp_exact.train()
means_exact = gp_exact.get_mean_vector(feats_test)
variances_exact = gp_exact.get_variance_vector(feats_test)
print "Exact inference took %.2f seconds" % (time.time()-start)
# comparison plot FITC and exact inference, plot 95% confidence of both predictive distributions
plt.figure(figsize=(18,5))
plt.plot(X_test, y_test, color="black", linewidth=3)
plt.plot(X_test, means, 'r--', linewidth=3)
plt.plot(X_test, means_exact, 'b--', linewidth=3)
plt.plot(X_train, y_train, 'ro')
plt.plot(X_inducing, np.zeros(len(X_inducing)), 'g*', markersize=15)
# tube plot of 95% confidence
error=1.96*np.sqrt(variances)
plt.plot(X_test,means-error, color='red', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means-error,means+error,color='red', alpha=0.3)
error_exact=1.96*np.sqrt(variances_exact)
plt.plot(X_test,means_exact-error_exact, color='blue', alpha=0.3, linewidth=3)
plt.fill_between(X_test,means_exact-error_exact,means_exact+error_exact,color='blue', alpha=0.3)
# plot upper confidence lines later due to legend
plt.plot(X_test,means+error, color='red', alpha=0.3, linewidth=3)
plt.plot(X_test,means_exact+error_exact, color='blue', alpha=0.3, linewidth=3)
plt.legend(["True", "FITC prediction", "Exact prediction", "Data", "Inducing points", "95% FITC", "95% Exact"])
_=plt.title("Comparison FITC and Exact Regression")
Explanation: The maximum we found nicely matches the result of the "grid-search". The take-home message is: with Gaussian Processes, you neither need expensive brute-force approaches to find the best parameters (you can use gradient descent instead), nor do you need expensive cross-validation to evaluate your model (you can use the Bayesian concept of maximum likelihood II).
Excurs: Large-Scale Regression
One "problem" with the classical method of Gaussian Process based inference is the computational complexity of $\mathcal{O}(n^3)$, where $n$ is the number of training examples. This is caused by matrix inversion, Cholesky factorization, etc. Up to a few thousand points, this is feasible. You will quickly run into memory and runtime problems for very large problems.
One way of approaching very large problems is called Fully Independent Training Conditional (FITC), which is a low-rank plus diagonal approximation to the exact covariance. The rough idea is to specify a set of $m\ll n$ inducing points and to base all computations on the covariance between training/test and inducing points only, which intuitively corresponds to combining various training points around an inducing point. This reduces the computational complexity to $\mathcal{O}(nm^2)$, where again $n$ is the number of training points, and $m$ is the number of inducing points. This is quite a significant decrease, in particular if the number of inducing points is much smaller than the number of examples.
The optimal way to specify inducing points is to densely and uniformly place them in the input space. However, this might quickly become infeasible in high dimensions. In this case, a random subset of the training data might be a good idea.
In Shogun, the class <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CFITCInferenceMethod.html">CFITCInferenceMethod</a> handles inference for regression with the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGaussianLikelihood.html">CGaussianLikelihood</a>. Below, we demonstrate its usage on a toy example and compare to exact regression. Note that <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGradientModelSelection.html">CGradientModelSelection</a> still works as before. We compare the runtime for inference with both GPs.
First, note that changing the inference method only requires the change of a single line of code
End of explanation |
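To make the "low-rank plus diagonal" idea more tangible, here is a minimal NumPy sketch of the FITC covariance approximation. This is an added illustration, not part of the original notebook: it assumes Shogun's exp(-||x-x'||^2/width) Gaussian kernel convention, reuses X_train, X_inducing and best_width from above, and forms the full K_ff only to compare against the approximation (FITC itself never needs the full matrix).
def rbf_kernel(a, b, width):
    # Pairwise squared distances between two 1-d point sets, Shogun-style width parameterisation
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return np.exp(-sq_dists / width)
K_ff = rbf_kernel(X_train, X_train, best_width)        # n x n, only formed here for comparison
K_fu = rbf_kernel(X_train, X_inducing, best_width)     # n x m
K_uu = rbf_kernel(X_inducing, X_inducing, best_width)  # m x m
jitter = 1e-8 * np.eye(len(X_inducing))
Q_ff = K_fu.dot(np.linalg.solve(K_uu + jitter, K_fu.T))  # low-rank part, rank <= m
K_fitc = Q_ff + np.diag(np.diag(K_ff - Q_ff))            # low-rank plus diagonal correction
print("Largest absolute deviation from the exact covariance:", np.abs(K_ff - K_fitc).max())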
5,684 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Practical PyTorch
Step1: The Grid World, Agent and Environment
First we'll build the training environment, which is a simple square grid world with various rewards and a goal. If you're just interested in the training code, skip down to building the actor-critic network
The Grid
The Grid class keeps track of the grid world
Step2: The Agent
The Agent has a current position and a health. All this class does is update the position based on an action (up, right, down or left) and decrement a small STEP_VALUE at every time step, so that it eventually starves if it doesn't reach the goal.
The world based effects on the agent's health are handled by the Environment below.
Step7: The Environment
The Environment encapsulates the Grid and Agent, and handles the bulk of the logic of assigning rewards when the agent acts. If an agent lands on a plant or goal or edge, its health is updated accordingly. Plants are removed from the grid (set to 0) when "eaten" by the agent. Every time step there is also a slight negative health penalty so that the agent must keep finding plants or reach the goal to survive.
The Environment's main function is step(action) → (state, reward, done), which updates the world state with a chosen action and returns the resulting state, and also returns a reward and whether the episode is done. The state it returns is what the agent will use to make its action predictions, which in this case is the visible grid area (flattened into one dimension) and the current agent health (to give it some "self awareness").
The episode is considered done if won or lost - won if the agent reaches the goal (agent.health >= GOAL_VALUE) and lost if the agent dies from falling off the edge, eating too many poisonous plants, or getting too hungry (agent.health <= 0).
In this experiment the environment only returns a single reward at the end of the episode (to make it more challenging). Values from plants and the step penalty are implicit - they might cause the agent to live longer or die sooner, but they aren't included in the final reward.
The Environment also keeps track of the grid and agent states for each step of an episode, for visualization.
Step8: Visualizing History
To visualize an episode the animate(history) function uses Matplotlib to plot the grid state and agent health over time, and turn the resulting frames into a GIF.
Step9: Testing the Environment
Let's test what we have so far with a quick simulation
Step10: Actor-Critic network
Value-based reinforcement learning methods like Q-Learning try to predict the expected reward of the next state(s) given an action. In contrast, a policy method tries to directly choose the best action given a state. Policy methods are conceptually simpler but training can be tricky - due to the high variance of rewards, it can easily become unstable or just plateau at a local minimum.
Combining a value estimation with the policy helps regularize training by establishing a "baseline" reward that learns alongside the actor. Subtracting a baseline value from the rewards essentially trains the actor to perform "better than expected".
In this case, both actor and critic (baseline) are combined into a single neural network with 5 outputs
Step11: Selecting actions
To select actions we treat the output of the policy as a multinomial distribution over actions, and sample from that to choose a single action. Thanks to the REINFORCE algorithm we can calculate gradients for discrete action samples by calling action.reinforce(reward) at the end of the episode.
To encourage exploration in early episodes, here's one weird trick
Step12: Playing through an episode
A single episode is the agent moving through the environment from start to finish. We keep track of the chosen action and value outputs from the model, and resulting rewards to reinforce at the end of the episode.
Step13: Using REINFORCE with a value baseline
The policy gradient method is similar to regular supervised learning, except we don't know the "correct" action for any given state. Plus we are only getting a single reward at the end of the episode. To give rewards to past actions we fake history by copying the final reward (and possibly intermediate rewards) back in time with a discount factor
Step14: With everything in place we can define the training parameters and create the actual Environment and Policy instances. We'll also use a SlidingAverage helper to keep track of average rewards over time.
Step15: Finally, we run a bunch of episodes and wait for some results. The average final reward will help us track whether it's learning. This took about an hour on a 2.8GHz CPU to get some reasonable results. | Python Code:
import numpy as np
from itertools import count
import math
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.autograd as autograd
from torch.autograd import Variable
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
import matplotlib.animation
from IPython.display import HTML
%pylab inline
from helpers import *
Explanation: Practical PyTorch: Playing GridWorld with Reinforcement Learning (Policy Gradients with REINFORCE)
In this project we'll teach a neural network to navigate through a dangerous grid world.
Training uses policy gradients via the REINFORCE algorithm and a simplified Actor-Critic method. A single network calculates both a policy to choose the next action (the actor) and an estimated value of the current state (the critic). Rewards are propagated through the graph with PyTorch's reinforce method.
Resources
The Reinforcement learning book from Sutton & Barto
The REINFORCE paper from Ronald J. Williams (1992)
Scholarpedia article on policy gradient methods
A Lecture from David Silver (of UCL, DeepMind) on policy gradients
The REINFORCE PyTorch example this tutorial is based on
Requirements
The main requirements are PyTorch (of course), and numpy, matplotlib, and iPython for animating the states.
End of explanation
MIN_PLANT_VALUE = -1
MAX_PLANT_VALUE = 0.5
GOAL_VALUE = 10
EDGE_VALUE = -10
VISIBLE_RADIUS = 1
class Grid():
def __init__(self, grid_size=8, n_plants=15):
self.grid_size = grid_size
self.n_plants = n_plants
def reset(self):
padded_size = self.grid_size + 2 * VISIBLE_RADIUS
self.grid = np.zeros((padded_size, padded_size)) # Padding for edges
# Edges
self.grid[0:VISIBLE_RADIUS, :] = EDGE_VALUE
self.grid[-1*VISIBLE_RADIUS:, :] = EDGE_VALUE
self.grid[:, 0:VISIBLE_RADIUS] = EDGE_VALUE
self.grid[:, -1*VISIBLE_RADIUS:] = EDGE_VALUE
# Randomly placed plants
for i in range(self.n_plants):
plant_value = random.random() * (MAX_PLANT_VALUE - MIN_PLANT_VALUE) + MIN_PLANT_VALUE
ry = random.randint(0, self.grid_size-1) + VISIBLE_RADIUS
rx = random.randint(0, self.grid_size-1) + VISIBLE_RADIUS
self.grid[ry, rx] = plant_value
# Goal in one of the corners
S = VISIBLE_RADIUS
E = self.grid_size + VISIBLE_RADIUS - 1
gps = [(E, E), (S, E), (E, S), (S, S)]
gp = gps[random.randint(0, len(gps)-1)]
self.grid[gp] = GOAL_VALUE
def visible(self, pos):
y, x = pos
return self.grid[y-VISIBLE_RADIUS:y+VISIBLE_RADIUS+1, x-VISIBLE_RADIUS:x+VISIBLE_RADIUS+1]
Explanation: The Grid World, Agent and Environment
First we'll build the training environment, which is a simple square grid world with various rewards and a goal. If you're just interested in the training code, skip down to building the actor-critic network
The Grid
The Grid class keeps track of the grid world: a 2d array of empty squares, plants, and the goal.
Plants are randomly placed values from -1 to 0.5 (mostly poisonous) and if the agent lands on one, that value is added to the agent's health. The agent's goal is to reach the goal square, placed in one of the corners. As the agent moves around it gradually loses health so it has to move with purpose.
The agent can see a surrounding area VISIBLE_RADIUS squares out from its position, so the edges of the grid are padded by that much with negative values. If the agent "falls off the edge" it dies instantly.
End of explanation
START_HEALTH = 1
STEP_VALUE = -0.02
class Agent:
def reset(self):
self.health = START_HEALTH
def act(self, action):
# Move according to action: 0=UP, 1=RIGHT, 2=DOWN, 3=LEFT
y, x = self.pos
if action == 0: y -= 1
elif action == 1: x += 1
elif action == 2: y += 1
elif action == 3: x -= 1
self.pos = (y, x)
self.health += STEP_VALUE # Gradually getting hungrier
Explanation: The Agent
The Agent has a current position and a health value. All this class does is update the position based on an action (up, right, down or left) and decrement the health by a small STEP_VALUE at every time step, so that the agent eventually starves if it doesn't reach the goal.
The world based effects on the agent's health are handled by the Environment below.
End of explanation
class Environment:
def __init__(self):
self.grid = Grid()
self.agent = Agent()
def reset(self):
        """Start a new episode by resetting grid and agent"""
self.grid.reset()
self.agent.reset()
c = math.floor(self.grid.grid_size / 2)
self.agent.pos = (c, c)
self.t = 0
self.history = []
self.record_step()
return self.visible_state
def record_step(self):
        """Add the current state to history for display later"""
grid = np.array(self.grid.grid)
grid[self.agent.pos] = self.agent.health * 0.5 # Agent marker faded by health
visible = np.array(self.grid.visible(self.agent.pos))
self.history.append((grid, visible, self.agent.health))
@property
def visible_state(self):
        """Return the visible area surrounding the agent, and current agent health"""
visible = self.grid.visible(self.agent.pos)
y, x = self.agent.pos
yp = (y - VISIBLE_RADIUS) / self.grid.grid_size
xp = (x - VISIBLE_RADIUS) / self.grid.grid_size
extras = [self.agent.health, yp, xp]
return np.concatenate((visible.flatten(), extras), 0)
def step(self, action):
        """Update state (grid and agent) based on an action"""
self.agent.act(action)
# Get reward from where agent landed, add to agent health
value = self.grid.grid[self.agent.pos]
self.grid.grid[self.agent.pos] = 0
self.agent.health += value
# Check if agent won (reached the goal) or lost (health reached 0)
won = value == GOAL_VALUE
lost = self.agent.health <= 0
done = won or lost
# Rewards at end of episode
if won:
reward = 1
elif lost:
reward = -1
else:
reward = 0 # Reward will only come at the end
# Save in history
self.record_step()
return self.visible_state, reward, done
Explanation: The Environment
The Environment encapsulates the Grid and Agent, and handles the bulk of the logic of assigning rewards when the agent acts. If an agent lands on a plant or goal or edge, its health is updated accordingly. Plants are removed from the grid (set to 0) when "eaten" by the agent. Every time step there is also a slight negative health penalty so that the agent must keep finding plants or reach the goal to survive.
The Environment's main function is step(action) → (state, reward, done), which updates the world state with a chosen action and returns the resulting state, and also returns a reward and whether the episode is done. The state it returns is what the agent will use to make its action predictions, which in this case is the visible grid area (flattened into one dimension) and the current agent health (to give it some "self awareness").
The episode is considered done if won or lost - won if the agent reaches the goal (agent.health >= GOAL_VALUE) and lost if the agent dies from falling off the edge, eating too many poisonous plants, or getting too hungry (agent.health <= 0).
In this experiment the environment only returns a single reward at the end of the episode (to make it more challenging). Values from plants and the step penalty are implicit - they might cause the agent to live longer or die sooner, but they aren't included in the final reward.
The Environment also keeps track of the grid and agent states for each step of an episode, for visualization.
End of explanation
def animate(history):
frames = len(history)
print("Rendering %d frames..." % frames)
fig = plt.figure(figsize=(6, 2))
fig_grid = fig.add_subplot(121)
fig_health = fig.add_subplot(243)
fig_visible = fig.add_subplot(244)
fig_health.set_autoscale_on(False)
health_plot = np.zeros((frames, 1))
def render_frame(i):
grid, visible, health = history[i]
# Render grid
fig_grid.matshow(grid, vmin=-1, vmax=1, cmap='jet')
fig_visible.matshow(visible, vmin=-1, vmax=1, cmap='jet')
# Render health chart
health_plot[i] = health
fig_health.clear()
fig_health.axis([0, frames, 0, 2])
fig_health.plot(health_plot[:i + 1])
anim = matplotlib.animation.FuncAnimation(
fig, render_frame, frames=frames, interval=100
)
plt.close()
display(HTML(anim.to_html5_video()))
Explanation: Visualizing History
To visualize an episode the animate(history) function uses Matplotlib to plot the grid state and agent health over time, and turn the resulting frames into a GIF.
End of explanation
env = Environment()
env.reset()
print(env.visible_state)
done = False
while not done:
_, _, done = env.step(2) # Down
animate(env.history)
Explanation: Testing the Environment
Let's test what we have so far with a quick simulation:
End of explanation
class Policy(nn.Module):
def __init__(self, hidden_size):
super(Policy, self).__init__()
visible_squares = (VISIBLE_RADIUS * 2 + 1) ** 2
input_size = visible_squares + 1 + 2 # Plus agent health, y, x
self.inp = nn.Linear(input_size, hidden_size)
self.out = nn.Linear(hidden_size, 4 + 1, bias=False) # For both action and expected value
def forward(self, x):
x = x.view(1, -1)
x = F.tanh(x) # Squash inputs
x = F.relu(self.inp(x))
x = self.out(x)
# Split last five outputs into scores and value
scores = x[:,:4]
value = x[:,4]
return scores, value
Explanation: Actor-Critic network
Value-based reinforcement learning methods like Q-Learning try to predict the expected reward of the next state(s) given an action. In contrast, a policy method tries to directly choose the best action given a state. Policy methods are conceptually simpler but training can be tricky - due to the high variance of rewards, it can easily become unstable or just plateau at a local minimum.
Combining a value estimation with the policy helps regularize training by establishing a "baseline" reward that learns alongside the actor. Subtracting a baseline value from the rewards essentially trains the actor to perform "better than expected".
In this case, both actor and critic (baseline) are combined into a single neural network with 5 outputs: the probabilities of the 4 possible actions, and an estimated value.
The input layer inp transforms the environment state, $(radius*2+1)^2$ squares plus the agent's health and position, into an internal state. The output layer out transforms that internal state to probabilities of possible actions plus the estimated value.
End of explanation
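A quick sanity check of the output shapes (an added snippet for illustration, not part of the original notebook): with VISIBLE_RADIUS = 1 the flattened input has (2*1+1)**2 + 3 = 12 values, and a forward pass should yield four action scores plus a single value estimate.
policy_check = Policy(hidden_size=50)
dummy_state = Variable(torch.zeros(12))  # 9 visible squares + health + y + x
scores, value = policy_check(dummy_state)
print(scores.size(), value.size())  # expect a (1, 4) score tensor and a 1-element value tensor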
DROP_MAX = 0.3
DROP_MIN = 0.05
DROP_OVER = 200000
def select_action(e, state):
drop = interpolate(e, DROP_MAX, DROP_MIN, DROP_OVER)
state = Variable(torch.from_numpy(state).float())
scores, value = policy(state) # Forward state through network
scores = F.dropout(scores, drop, True) # Dropout for exploration
scores = F.softmax(scores)
action = scores.multinomial() # Sample an action
return action, value
Explanation: Selecting actions
To select actions we treat the output of the policy as a multinomial distribution over actions, and sample from that to choose a single action. Thanks to the REINFORCE algorithm we can calculate gradients for discrete action samples by calling action.reinforce(reward) at the end of the episode.
To encourage exploration in early episodes, here's one weird trick: apply dropout to the action scores, before softmax. Randomly masking some scores will cause less likely scores to be chosen. The dropout percent gradually decreases from 30% to 5% over the first 200k episodes.
End of explanation
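The interpolate function is imported from the accompanying helpers module, which is not shown in this notebook. A plausible minimal version of such a linear decay schedule (an assumption about the helper, not its actual code) could look like this:
def interpolate_sketch(step, start, end, over):
    # Linearly move from `start` to `end` over `over` steps, then stay at `end`
    if step >= over:
        return end
    return start + (end - start) * float(step) / over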
def run_episode(e):
state = env.reset()
actions = []
values = []
rewards = []
done = False
while not done:
action, value = select_action(e, state)
state, reward, done = env.step(action.data[0, 0])
actions.append(action)
values.append(value)
rewards.append(reward)
return actions, values, rewards
Explanation: Playing through an episode
A single episode is the agent moving through the environment from start to finish. We keep track of the chosen action and value outputs from the model, and resulting rewards to reinforce at the end of the episode.
End of explanation
gamma = 0.9 # Discounted reward factor
mse = nn.MSELoss()
def finish_episode(e, actions, values, rewards):
# Calculate discounted rewards, going backwards from end
discounted_rewards = []
R = 0
for r in rewards[::-1]:
R = r + gamma * R
discounted_rewards.insert(0, R)
discounted_rewards = torch.Tensor(discounted_rewards)
# Use REINFORCE on chosen actions and associated discounted rewards
value_loss = 0
for action, value, reward in zip(actions, values, discounted_rewards):
reward_diff = reward - value.data[0] # Treat critic value as baseline
action.reinforce(reward_diff) # Try to perform better than baseline
value_loss += mse(value, Variable(torch.Tensor([reward]))) # Compare with actual reward
# Backpropagate
optimizer.zero_grad()
nodes = [value_loss] + actions
gradients = [torch.ones(1)] + [None for _ in actions] # No gradients for reinforced values
autograd.backward(nodes, gradients)
optimizer.step()
return discounted_rewards, value_loss
Explanation: Using REINFORCE with a value baseline
The policy gradient method is similar to regular supervised learning, except we don't know the "correct" action for any given state. Plus we are only getting a single reward at the end of the episode. To give rewards to past actions we fake history by copying the final reward (and possibly intermediate rewards) back in time with a discount factor:
Then for every time step, we use action.reinforce(reward) to encourage or discourage those actions.
We will use the value output of the network as a baseline, and use the difference of the reward and the baseline with reinforce. The value estimate itself is trained to be close to the actual reward with a MSE loss.
End of explanation
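As a small worked example of the backwards discounting (added for illustration, not from the original notebook): with gamma = 0.9 and only a terminal reward, the discounted reward assigned to earlier steps shrinks geometrically the further they are from the end.
example_rewards = [0, 0, 0, 1]  # only a final reward, as in this environment
R, discounted = 0, []
for r in example_rewards[::-1]:
    R = r + 0.9 * R
    discounted.insert(0, R)
print(discounted)  # approximately [0.729, 0.81, 0.9, 1.0]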
hidden_size = 50
learning_rate = 1e-4
weight_decay = 1e-5
log_every = 1000
render_every = 20000
env = Environment()
policy = Policy(hidden_size=hidden_size)
optimizer = optim.Adam(policy.parameters(), lr=learning_rate)#, weight_decay=weight_decay)
reward_avg = SlidingAverage('reward avg', steps=log_every)
value_avg = SlidingAverage('value avg', steps=log_every)
Explanation: With everything in place we can define the training parameters and create the actual Environment and Policy instances. We'll also use a SlidingAverage helper to keep track of average rewards over time.
End of explanation
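SlidingAverage is another utility from the helpers module that is not listed here. Judging from how it is used below (add(), comparison with a float, printing, and the .avgs attribute), a rough sketch of such a helper might look as follows; this is an assumption about its behaviour, not the actual implementation.
from collections import deque
class SlidingAverageSketch:
    def __init__(self, name, steps=100):
        self.name = name
        self.values = deque(maxlen=steps)  # keep only the most recent `steps` values
        self.avgs = []                     # history of averages, handy for plotting
    def add(self, value):
        self.values.append(value)
        self.avgs.append(self.value)
    @property
    def value(self):
        return sum(self.values) / float(max(len(self.values), 1))
    def __lt__(self, other):               # allows `while reward_avg < 0.75`
        return self.value < other
    def __str__(self):
        return '%s: %.3f' % (self.name, self.value)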
e = 0
while reward_avg < 0.75:
actions, values, rewards = run_episode(e)
final_reward = rewards[-1]
discounted_rewards, value_loss = finish_episode(e, actions, values, rewards)
reward_avg.add(final_reward)
value_avg.add(value_loss.data[0])
if e % log_every == 0:
print('[epoch=%d]' % e, reward_avg, value_avg)
if e > 0 and e % render_every == 0:
animate(env.history)
e += 1
# Plot average reward and value loss
plt.plot(np.array(reward_avg.avgs))
plt.show()
plt.plot(np.array(value_avg.avgs))
plt.show()
Explanation: Finally, we run a bunch of episodes and wait for some results. The average final reward will help us track whether it's learning. This took about an hour on a 2.8GHz CPU to get some reasonable results.
End of explanation |
5,685 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Representations and metrics
Question 1
<img src="images/Screen Shot 2016-07-02 at 9.34.11 AM.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Answer
- https
Step1: Question 2
<img src="images/Screen Shot 2016-07-02 at 9.39.55 AM.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Answer
- Word counts
- Sentence 1
Step2: Question 3
<img src="images/Screen Shot 2016-07-02 at 9.39.59 AM.png">
Screenshot taken from Coursera
<!--TEASER_END--> | Python Code:
import numpy as np

def calculate_weight(feature):
weight = (1/(max(feature) - min(feature))) ** 2
return weight
price = calculate_weight(np.array([500000, 350000, 600000, 400000], dtype=float))
room = calculate_weight(np.array([3, 2, 4, 2], dtype=float))
lot = calculate_weight(np.array([1840, 1600, 2000, 1900], dtype=float))
print price
print room
print lot
Explanation: Representations and metrics
Question 1
<img src="images/Screen Shot 2016-07-02 at 9.34.11 AM.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Answer
- https://www.coursera.org/learn/ml-clustering-and-retrieval/discussions/weeks/2/threads/yoIkdz9XEeaQYwrcKTQWAQ
- The coordinate differences get squared before scaling, so the amount of variation gets squared too
End of explanation
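To see these weights in action, here is a short follow-up computation (an added illustration, not part of the quiz) of the scaled Euclidean distance between the first two houses from the table above, using the weights just computed:
# Scaled Euclidean distance: sqrt( sum_j a_j * (x_j - y_j)^2 ) with a_j = 1 / (max_j - min_j)^2
house_1 = np.array([500000, 3, 1840], dtype=float)
house_2 = np.array([350000, 2, 1600], dtype=float)
weights = np.array([price, room, lot])
scaled_distance = np.sqrt(np.sum(weights * (house_1 - house_2) ** 2))
print(scaled_distance)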
import numpy as np
s1 = np.array([2, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=float)
s2 = np.array([0, 2, 1, 1, 0, 0, 0, 1, 2, 1], dtype=float)
print s1
print s2
euclidean_distance = np.sqrt(np.sum((s1 - s2)**2))
euclidean_distance
Explanation: Question 2
<img src="images/Screen Shot 2016-07-02 at 9.39.55 AM.png">
Screenshot taken from Coursera
<!--TEASER_END-->
Answer
- Word counts
- Sentence 1: [2, 1, 1, 1, 1, 1, 1, 1, 0, 0]
- Sentence 2: [0, 2, 1, 1, 0, 0, 0, 1, 2, 1]
- Euclidean distance:
End of explanation
import numpy as np
s1 = np.array([2, 1, 1, 1, 1, 1, 1, 1, 0, 0], dtype=float)
s2 = np.array([0, 2, 1, 1, 0, 0, 0, 1, 2, 1], dtype=float)
print s1
print s2
cosine_similarity = np.dot(s1, s2)/(np.sqrt(np.sum(s1**2)) * np.sqrt(np.sum(s2**2)))
cosine_distance = 1 - cosine_similarity
cosine_distance
Explanation: Question 3
<img src="images/Screen Shot 2016-07-02 at 9.39.59 AM.png">
Screenshot taken from Coursera
<!--TEASER_END-->
End of explanation |
5,686 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Build a recommender system with retail data on Vertex AI using PySpark
Table of contents
Overview
Dataset
Objective
Costs
Create a Dataproc cluster with component gateway enabled and JupyterLab extension
Connect to the cluster from the notebook
Explore the data
Define the ALS Model
Evaluate the model
Save the ALS model to Cloud Storage
Write the recommendations to BigQuery
Clean up
Overview
<a name="section-1"></a>
Recommender systems are powerful tools that model existing customer behavior to generate recommendations. These models generally build complex matrices and map out existing customer preferences in order to find intersecting interests and offer recommendations. These matrices can be very large and will benefit from distributed computing and large memory pools. In a Vertex AI Workbench managed notebooks instance, you can use distributed computing by processing your data in PySpark on a Dataproc cluster.
Note
Step1: Otherwise, set your project ID here.
Step2: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
Step3: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
Step4: Only if your bucket doesn't already exist
Step5: Finally, validate access to your Cloud Storage bucket by examining its contents
Step6: Before you begin
The ALS model approach is compute-intensive and could take a lot of time to train on a regular notebook environment, so this tutorial uses a Dataproc cluster with PySpark environment.
Create a Dataproc cluster with component gateway enabled and JupyterLab extension
<a name="section-5"></a>
Create the cluster using the following gcloud command.
Step8: Alternatively, the cluster can be created through the Dataproc console as well. Additional settings like network configurations and service-accounts can be configured there if required. While configuring the cluster, make sure you complete the following
Step9: Preprocess the Data
To run PySpark's ALS method on the existing data, there must be some fields to quantify the relationship between a USER_ID and a PRODUCT_ID, such as ratings given by the user. If such fields already exist in the data, they can be treated as an explicit feedback for the ALS model. Otherwise, the fields indicative of a relationship can be given as an implicit feedback. Learn more about feedback for PySpark's ALS method.
In the current dataset, as there are no such numerical fields, the STATUS field is further used to quantify the association between a USER_ID and a PRODUCT_ID. Based on when they occur during an order lifecycle and how likely the user is going to like the order, the STATUS field is assigned one of the following ratings
Step10: Check the distribution of the newly generated RATING field.
Step11: Load the required methods and classes from PySpark MLlib.
Step12: Generate a Spark session with the BigQuery-Spark connector configured.
Note
Step13: Convert the pandas dataframe to a spark dataframe for further processing.
Step14: Split the data into train and test
Step15: Define the ALS Model
<a name="section-8"></a>
The PySpark ALS recommender, Alternating Least Squares, is a matrix factorization algorithm. The idea is to build a matrix that maps users to actions. The actions can be reviews, purchases, various options taken, and more. Due to the complexity and size of the matrix, PySpark can run the algorithm in parallel.
ALS will attempt to estimate the rating matrix R as the product of two lower-rank matrices, X and Y. Typically these approximations are called "factor" matrices. During each iteration, one of the factor matrices is held constant, while the other is solved for using least squares. The newly-solved factor matrix is then held constant while solving for the other factor matrix.
PySpark uses a blocked implementation of the ALS factorization algorithm that groups the two sets of factors (referred to as “users” and “products”) into blocks and reduces communication by only sending one copy of each user vector to each product block on each iteration, and only for the product blocks that need that user’s feature vector.
Essentially instead of finding the low-rank approximations to the rating matrix R, this finds the approximations for a preference matrix P where the elements of P are 1 if r > 0 and 0 if r <= 0. The ratings then act as confidence values related to the strength of indicated user preferences rather than explicit ratings given to items. Learn more about PySpark's ALS algorithm.
Step16: The ALS model tries to predict the ratings between users and items and so RMSE can be used for evaluating the model.
Step17: Define a hyperparameter grid for cross-validation.
Step18: Perform cross-validation and save the best model.
Step19: Evaluate the model
<a name="section-9"></a>
Evaluate the model by computing the RMSE on the train and test data.
Step20: Generate recommendations for all users
The required number of recommendations for the users can be generated using the ALS model's recommendForAllUsers() method.
Step21: Generate recommendations for a specific user
The earlier step already generated and stored the specified number of product recommendations for all users in the nrecommendations dataframe object. To obtain recommendations for a single user, this dataframe object can be queried.
Step22: Save the ALS model to Cloud Storage (optional)
<a name="section-10"></a>
PySpark's ALS.save() method creates a folder at the specified path where it saves the trained model. A Cloud Storage file browser is available in the managed notebooks instance's environment, which you can use to save the model to a Cloud Storage bucket.
Use the ALS object's .save() function to write the model to the Cloud Storage bucket.
Step23: Write the recommendations to BigQuery (optional)
<a name="section-11"></a>
In order to serve the recommendations to the end-users or any applications, the output from the recommendForAllUsers() method can be saved to a BigQuery table using Spark's BigQuery connector.
Create a Dataset in BigQuery
The following cell creates a new dataset in BigQuery.
@bigquery
-- create a dataset in BigQuery
CREATE SCHEMA recommender_sys
OPTIONS(
location="us"
)
Write the Recommendations to BigQuery
PySpark's BigQuery connector requires two necessary fields
Step24: Clean up
<a name="section-12"></a>
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | Python Code:
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
Explanation: Build a recommender system with retail data on Vertex AI using PySpark
Table of contents
Overview
Dataset
Objective
Costs
Create a Dataproc cluster with component gateway enabled and JupyterLab extension
Connect to the cluster from the notebook
Explore the data
Define the ALS Model
Evaluate the model
Save the ALS model to Cloud Storage
Write the recommendations to BigQuery
Clean up
Overview
<a name="section-1"></a>
Recommender systems are powerful tools that model existing customer behavior to generate recommendations. These models generally build complex matrices and map out existing customer preferences in order to find intersecting interests and offer recommendations. These matrices can be very large and will benefit from distributed computing and large memory pools. In a Vertex AI Workbench managed notebooks instance, you can use distributed computing by processing your data in PySpark on a Dataproc cluster.
Note: This notebook file was designed to run in a Vertex AI Workbench managed notebooks instance using a Python 3 kernel generated by a Dataproc runtime. Some components of this notebook may not work in other notebook environments.
Dataset
<a name="section-2"></a>
This notebook uses the looker-private-demo.retail dataset in BigQuery. The dataset can be accessed by pinning the looker-private-demo project in BigQuery. Instead of going to the BigQuery user interface, this process can be performed from the JupyterLab user interface on a Vertex AI Workbench managed notebooks instance. Vertex AI Workbench managed notebooks instances support browsing through the datasets and tables from BigQuery through its BigQuery integration.
<img src="images/Bigquery_UI_new.PNG"></img>
In this dataset, the retail.order_items table will be used to train the recommendation system using PySpark. This table contains information on various orders related to the users and items (products) in the dataset.
Objective
<a name="section-3"></a>
This tutorial builds a recommendation model with a collaborative filtering approach using the interactive PySpark features offered by the Vertex AI Workbench's managed notebooks instances. You'll set up a remotely-connected Dataproc cluster and use the <a href="http://dl.acm.org/citation.cfm?id=1608614">Alternating Least Squares (ALS)</a> method implemented in PySpark's MLlib library.
The steps performed in this notebook are:
Connect your managed notebooks instance to a Dataproc cluster with PySpark.
Explore the dataset in BigQuery from within the notebook.
Preprocess the data.
Train a PySpark ALS model on the data.
Evaluate the ALS model.
Generate recommendations.
Save the recommendations to a BigQuery table using the PySpark-BigQuery connector.
Save the ALS model to a Cloud Storage bucket.
Clean up the resources.
Costs
<a name="section-4"></a>
This tutorial uses the following billable components of Google Cloud:
Vertex AI
Dataproc
BigQuery
Cloud Storage
Learn about Vertex AI
pricing, Dataproc pricing, BigQuery
pricing and Cloud Storage
pricing and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set your project ID
If you don't know your project ID, you may be able to get your project ID using gcloud.
End of explanation
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
Explanation: Otherwise, set your project ID here.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. Vertex AI runs
the code from this package. In this tutorial, Vertex AI also saves the
trained model that results from your job in the same bucket. Using this model artifact, you can then
create Vertex AI model and endpoint resources in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are
available. You may
not use a Multi-Regional Storage bucket for training with Vertex AI.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
CLUSTER_NAME = "[your-cluster-name]"
CLUSTER_REGION = "[your-cluster-region]"
CLUSTER_ZONE = "[your-cluster-zone]"
MACHINE_TYPE = "[your=machine-type]"
! gcloud dataproc clusters create $CLUSTER_NAME \
--enable-component-gateway \
--region $CLUSTER_REGION \
--zone $CLUSTER_ZONE \
--single-node \
--master-machine-type $MACHINE_TYPE \
--master-boot-disk-size 100 \
--image-version 2.0-debian10 \
--optional-components JUPYTER \
--project $PROJECT_ID
Explanation: Before you begin
The ALS model approach is compute-intensive and could take a lot of time to train on a regular notebook environment, so this tutorial uses a Dataproc cluster with PySpark environment.
Create a Dataproc cluster with component gateway enabled and JupyterLab extension
<a name="section-5"></a>
Create the cluster using the following gcloud command.
End of explanation
# The following two lines are only necessary to run once.
# Comment out otherwise for speed-up.
from google.cloud.bigquery import Client
client = Client()
query = """WITH user_prod_table AS (
SELECT USER_ID, PRODUCT_ID, STATUS FROM `looker-private-demo.retail.order_items` AS a
join
(SELECT ID, PRODUCT_ID FROM `looker-private-demo.retail.inventory_items`) AS b
on a.inventory_item_id = b.ID )
SELECT USER_ID, PRODUCT_ID, STATUS from user_prod_table"""
job = client.query(query)
df = job.to_dataframe()
Explanation: Alternatively, the cluster can be created through the Dataproc console as well. Additional settings like network configurations and service-accounts can be configured there if required. While configuring the cluster, make sure you complete the following:
Provide a name for the cluster.
Select a region and zone for the cluster.
Select the cluster type as single-node. For small and proof-of-concept use-cases, a single-node cluster is recommended.
Enable the component gateway.
In the optional components, select Jupyter Notebook.
(Optional) Select the machine-type (preferably a high-mem machine type).
Create the cluster.
Connect to the cluster from the notebook
<a name="section-6"></a>
When the new Dataproc cluster is running, the corresponding runtime appears as a kernel in the notebook. The created cluster's name will appear in the list of kernels that can be selected for this notebook. In the top right corner of this notebook file, click the current kernel name, Python (local), and then select the Python 3 kernel that is running on your Dataproc cluster.
<img src="images/cluster_kernel_selection.png"></img>
Note the following:
Your Dataproc kernel might take a few minutes to show up in the list of kernels.
PySpark code in this tutorial can be run on either a PySpark or Python 3 kernel on the Dataproc cluster, but to run the optional code that saves recommendations to a BigQuery table, the Python 3 kernel is recommended.
Tutorial
Explore the data
<a name="section-7"></a>
Vertex AI Workbench managed notebooks instances let you explore the BigQuery content from within the managed notebooks instance using a BigQuery integration. This feature lets you look at the metadata and preview of table content, query tables, and get a description of the data in the tables.
<img src="images/BQ_view_table_new.PNG"></img>
Check the distribution of the STATUS field.
@bigquery
SELECT STATUS, COUNT(*) order_count FROM looker-private-demo.retail.order_items GROUP BY 1
Join the order_items table with the inventory_items table from the same dataset to retrieve the product IDs for the orders.
@bigquery
WITH user_prod_table AS (
SELECT USER_ID, PRODUCT_ID, STATUS FROM looker-private-demo.retail.order_items AS a
join
(SELECT ID, PRODUCT_ID FROM looker-private-demo.retail.inventory_items) AS b
on a.inventory_item_id = b.ID )
SELECT USER_ID, PRODUCT_ID, STATUS from user_prod_table
Once the results from BigQuery are displayed in the above cell, click the Query and load as DataFrame button and execute the generated code stub to fetch the data into the current notebook as a dataframe.
Note: By default the data is loaded into a df variable, though this can be changed before executing the cell if required.
End of explanation
score_mapping = {
"Cancelled": 1,
"Returned": 2,
"Processing": 3,
"Shipped": 4,
"Complete": 5,
}
df["RATING"] = df["STATUS"].map(score_mapping)
Explanation: Preprocess the Data
To run PySpark's ALS method on the existing data, there must be some fields to quantify the relationship between a USER_ID and a PRODUCT_ID, such as ratings given by the user. If such fields already exist in the data, they can be treated as an explicit feedback for the ALS model. Otherwise, the fields indicative of a relationship can be given as an implicit feedback. Learn more about feedback for PySpark's ALS method.
In the current dataset, as there are no such numerical fields, the STATUS field is further used to quantify the association between a USER_ID and a PRODUCT_ID. Based on when they occur during an order lifecycle and how likely the user is going to like the order, the STATUS field is assigned one of the following ratings:
Cancelled - 1
Returned - 2
Processing - 3
Shipped - 4
Complete - 5
The ratings given are subjective and can be modified according to the use case.
End of explanation
df["RATING"].plot(kind="hist")
Explanation: Check the distribution of the newly generated RATING field.
End of explanation
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.recommendation import ALS
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.sql import SparkSession
Explanation: Load the required methods and classes from PySpark MLlib.
End of explanation
spark = (
SparkSession.builder.appName("Recommendations")
.config("spark.jars", "gs://spark-lib/bigquery/spark-bigquery-latest_2.12.jar")
.getOrCreate()
)
spark
Explanation: Generate a Spark session with the BigQuery-Spark connector configured.
Note: If the notebook is connected to a Dataproc cluster, the session object would show yarn as the Master.
End of explanation
spark_df = spark.createDataFrame(df[["USER_ID", "PRODUCT_ID", "RATING"]])
spark_df.printSchema()
spark_df.show()
Explanation: Convert the pandas dataframe to a spark dataframe for further processing.
End of explanation
(train, test) = spark_df.randomSplit([0.8, 0.2], seed=36)
train.count(), test.count()
Explanation: Split the data into train and test
End of explanation
als = ALS(
userCol="USER_ID",
itemCol="PRODUCT_ID",
ratingCol="RATING",
nonnegative=True,
implicitPrefs=False,
coldStartStrategy="drop",
)
Explanation: Define the ALS Model
<a name="section-8"></a>
The PySpark ALS recommender, Alternating Least Squares, is a matrix factorization algorithm. The idea is to build a matrix that maps users to actions. The actions can be reviews, purchases, various options taken, and more. Due to the complexity and size of the matrix, PySpark can run the algorithm in parallel.
ALS will attempt to estimate the rating matrix R as the product of two lower-rank matrices, X and Y. Typically these approximations are called "factor" matrices. During each iteration, one of the factor matrices is held constant, while the other is solved for using least squares. The newly-solved factor matrix is then held constant while solving for the other factor matrix.
PySpark uses a blocked implementation of the ALS factorization algorithm that groups the two sets of factors (referred to as “users” and “products”) into blocks and reduces communication by only sending one copy of each user vector to each product block on each iteration, and only for the product blocks that need that user’s feature vector.
Essentially instead of finding the low-rank approximations to the rating matrix R, this finds the approximations for a preference matrix P where the elements of P are 1 if r > 0 and 0 if r <= 0. The ratings then act as confidence values related to the strength of indicated user preferences rather than explicit ratings given to items. Learn more about PySpark's ALS algorithm.
End of explanation
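For intuition, the factorization that ALS searches for can be illustrated with a tiny NumPy example (purely illustrative, independent of the Spark code): the rating matrix R is approximated by the product of a user-factor matrix and a product-factor matrix of much lower rank.
import numpy as np

rng = np.random.RandomState(0)
n_users, n_products, rank = 4, 6, 2
user_factors = rng.rand(n_users, rank)        # X: one latent vector per user
product_factors = rng.rand(n_products, rank)  # Y: one latent vector per product
R_approx = user_factors.dot(product_factors.T)
# The predicted rating for user u and product i is the dot product of their latent vectors
print(R_approx.shape)  # (4, 6)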
evaluator = RegressionEvaluator(
metricName="rmse", labelCol="RATING", predictionCol="prediction"
)
Explanation: The ALS model tries to predict the ratings between users and items and so RMSE can be used for evaluating the model.
End of explanation
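For clarity, the metric computed by this evaluator is simply the root of the mean squared difference between RATING and prediction; an equivalent hand-written version (added here for illustration) would be:
from pyspark.sql import functions as sf

def rmse_by_hand(predictions_df):
    # Equivalent to evaluator.evaluate(predictions_df) for metricName="rmse"
    err = sf.col("RATING") - sf.col("prediction")
    return predictions_df.select(sf.sqrt(sf.avg(err * err))).first()[0]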
param_grid = (
ParamGridBuilder()
.addGrid(als.rank, [10, 50])
.addGrid(als.regParam, [0.01, 0.1, 0.2])
.build()
)
print("No. of settings to be tested: ", len(param_grid))
Explanation: Define a hyperparameter grid for cross-validation.
End of explanation
cv = CrossValidator(
estimator=als, estimatorParamMaps=param_grid, evaluator=evaluator, numFolds=3
)
model = cv.fit(train)
best_model = model.bestModel
print("##Parameters for the Best Model##")
print("Rank:", best_model._java_obj.parent().getRank())
print("MaxIter:", best_model._java_obj.parent().getMaxIter())
print("RegParam:", best_model._java_obj.parent().getRegParam())
Explanation: Perform cross-validation and save the best model.
End of explanation
# View the rating predictions by the model on train and test sets
train_predictions = best_model.transform(train)
train_RMSE = evaluator.evaluate(train_predictions)
test_predictions = best_model.transform(test)
test_RMSE = evaluator.evaluate(test_predictions)
print("Train RMSE ", train_RMSE)
print("Test RMSE ", test_RMSE)
Explanation: Evaluate the model
<a name="section-9"></a>
Evaluate the model by computing the RMSE on the train and test data.
End of explanation
# Generate 10 product recommendations for all users
nrecommendations = best_model.recommendForAllUsers(10)
nrecommendations.limit(10).show()
Explanation: Generate recommendations for all users
The required number of recommendations for the users can be generated using the ALS model's recommendForAllUsers() method.
End of explanation
# get product recommendations for the selected user (USER_ID = 1)
nrecommendations.where(nrecommendations.USER_ID == 1).select(
"recommendations.PRODUCT_ID", "recommendations.rating"
).collect()
Explanation: Generate recommendations for a specific user
The earlier step already generated and stored the specified number of product recommendations for all users in the nrecommendations dataframe object. To obtain recommendations for a single user, this dataframe object can be queried.
End of explanation
# Save the trained model
GCS_MODEL_PATH = "gs://" + BUCKET_NAME + "/recommender_systems/"
best_model.save(GCS_MODEL_PATH + "rcmd_model")
Explanation: Save the ALS model to Cloud Storage (optional)
<a name="section-10"></a>
PySpark's ALS.save() method creates a folder at the specified path where it saves the trained model. A Cloud Storage file browser is available in the managed notebooks instance's environment, which you can use to save the model to a Cloud Storage bucket.
Use the ALS object's .save() function to write the model to the Cloud Storage bucket.
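For completeness, a short sketch of loading the persisted model back later (ALSModel.load is the standard PySpark counterpart to save; the path reuses GCS_MODEL_PATH defined above):
from pyspark.ml.recommendation import ALSModel

# reload the ALS model that was written to Cloud Storage
restored_model = ALSModel.load(GCS_MODEL_PATH + "rcmd_model")
restored_model.recommendForAllUsers(10).show(5)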
End of explanation
DATASET = "[your-dataset-name]"
TABLE = "[your-bigquery-table-name]"
GCS_TEMP_PATH = "[your-cloud-storage-path]"
nrecommendations.write.format("bigquery").option(
"table", "{}.{}".format(DATASET, TABLE)
).option("temporaryGcsBucket", GCS_TEMP_PATH).mode("overwrite").save()
Explanation: Write the recommendations to BigQuery (optional)
<a name="section-11"></a>
In order to serve the recommendations to the end-users or any applications, the output from the recommendForAllUsers() method can be saved to a BigQuery table using Spark's BigQuery connector.
Create a Dataset in BigQuery
The following cell creates a new dataset in BigQuery.
@bigquery
-- create a dataset in BigQuery
CREATE SCHEMA recommender_sys
OPTIONS(
location="us"
)
Write the Recommendations to BigQuery
PySpark's BigQuery connector requires two fields: a BigQuery table name and a Cloud Storage path where temporary files are written while the data is being saved. These two fields are provided when writing the recommendations to BigQuery.
End of explanation
# remove the BigQuery dataset created for storing the recommendations and all of its tables
! bq rm -r -f -d $PROJECT:$DATASET
# remove the Cloud Storage bucket created and all of its contents
! gsutil rm -r gs://$BUCKET_NAME
# delete the created Dataproc cluster
! gcloud dataproc clusters delete $CLUSTER_NAME --region=$CLUSTER_REGION
Explanation: Clean up
<a name="section-12"></a>
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
5,687 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_viz_epochs
Step1: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
matplotlib figure instance.
All plotting functions start with plot. Let's start with the most
obvious.
Step2: The numbers at the top refer to the event id of the epoch. We only have
events with id numbers of 1 and 2 since we included only those when
constructing the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 9 (see
numbering at the bottom) seems to be contaminated by a blink (scroll to the
bottom to view the EOG channel). This epoch can be marked for rejection by
clicking on top of the browser window. The epoch should turn red when you
click it. This means that it will be dropped as the browser window is closed.
You should check out help at the lower left corner of the window for more
information about the interactive features.
To plot individual channels as an image, where you see all the epochs at one
glance, you can use function | Python Code:
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'))
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
picks = mne.pick_types(raw.info, meg='grad')
epochs = mne.Epochs(raw, events, [1, 2], picks=picks)
Explanation: .. _tut_viz_epochs:
Visualize Epochs data
End of explanation
epochs.plot(block=True)
Explanation: This tutorial focuses on visualization of epoched data. All of the functions
introduced here are basically high level matplotlib functions with built in
intelligence to work with epoched data. All the methods return a handle to
a matplotlib figure instance.
All plotting functions start with plot. Let's start with the most
obvious. :func:mne.Epochs.plot offers an interactive browser that allows
rejection by hand when called in combination with a keyword block=True.
This blocks the execution of the script until the browser window is closed.
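Epochs can also be rejected programmatically instead of by hand; a minimal sketch, where the gradiometer peak-to-peak threshold is an arbitrary assumption rather than a recommended value:
# drop epochs whose gradiometer signal exceeds the peak-to-peak rejection threshold
epochs.drop_bad(reject=dict(grad=4000e-13))
print(epochs.drop_log)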
End of explanation
epochs.plot_image(97)
# You also have functions for plotting channelwise information arranged into a
# shape of the channel array. The image plotting uses automatic scaling by
# default, but noisy channels and different channel types can cause the scaling
# to be a bit off. Here we define the limits by hand.
epochs.plot_topo_image(vmin=-200, vmax=200, title='ERF images')
Explanation: The numbers at the top refer to the event id of the epoch. We only have
events with id numbers of 1 and 2 since we included only those when
constructing the epochs.
Since we did no artifact correction or rejection, there are epochs
contaminated with blinks and saccades. For instance, epoch number 9 (see
numbering at the bottom) seems to be contaminated by a blink (scroll to the
bottom to view the EOG channel). This epoch can be marked for rejection by
clicking on top of the browser window. The epoch should turn red when you
click it. This means that it will be dropped as the browser window is closed.
You should check out help at the lower left corner of the window for more
information about the interactive features.
To plot individual channels as an image, where you see all the epochs at one
glance, you can use function :func:mne.Epochs.plot_image. It shows the
amplitude of the signal over all the epochs plus an average of the
activation.
End of explanation |
5,688 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nodes and Edges
Step1: Basic Network Statistics
Let's first understand how many students and friendships are represented in the network.
Step2: Exercise
Can you write a single line of code that returns the number of nodes in the graph? (1 min.)
Let's now figure out who is connected to who in the network
Step3: Exercise
Can you write a single line of code that returns the number of relationships represented? (1 min.)
Concept
A network, more technically known as a graph, is comprised of
Step4: Exercise
Can you count how many males and females are represented in the graph? (3 min.)
Hint
Step5: Edges can also store attributes in their attribute dictionary.
Step6: In this synthetic social network, the number of times the left student indicated that the right student was their favourite is stored in the "count" variable.
Exercise
Can you figure out the maximum times any student rated another student as their favourite? (3 min.)
Step7: Exercise
We found out that there are two individuals that we left out of the network, individual no. 30 and 31. They are one male (30) and one female (31), and they are a pair that just love hanging out with one another and with individual 7 (count=3), in both directions per pair. Add this information to the graph. (5 min.)
If you need more help, check out https
Step8: Verify that you have added in the edges and nodes correctly by running the following cell.
Step9: Exercise (break-time)
If you would like a challenge during the break, try figuring out which students have "unrequited" friendships, that is, they have rated another student as their favourite at least once, but that other student has not rated them as their favourite at least once.
Specifically, get a list of edges for which the reverse edge is not present.
Hint
Step10: In a previous session at ODSC East 2018, a few other class participants attempted this problem. You can find their solutions in the Instructor version of this notebook.
Tests
A note about the tests
Step11: If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument.
Step12: However, note that if the number of nodes in the graph gets really large, node-link diagrams can begin to look like massive hairballs. This is undesirable for graph visualization.
Matrix Plot
Instead, we can use a matrix to represent them. The nodes are on the x- and y- axes, and a filled square represent an edge between the nodes. This is done by using the MatrixPlot object from nxviz.
Step13: Arc Plot
The Arc Plot is the basis of the next set of rational network visualizations.
Step14: Circos Plot
Let's try another visualization, the Circos plot. We can order the nodes in the Circos plot according to the node ID, but any other ordering is possible as well. Edges are drawn between two nodes.
Credit goes to Justin Zabilansky (MIT) for the implementation, Jon Charest for subsequent improvements, and nxviz contributors for further development.
Step15: This visualization helps us highlight nodes that there are poorly connected, and others that are strongly connected.
Hive Plot
Next up, let's try Hive Plots. HivePlots are not yet implemented in nxviz just yet, so we're going to be using the old hiveplot API for this. When HivePlots have been migrated over to nxviz, its API will resemble that of the CircosPlot's. | Python Code:
G = cf.load_seventh_grader_network()
Explanation: Nodes and Edges: How do we represent relationships between individuals using NetworkX?
As mentioned earlier, networks, also known as graphs, are comprised of individual entities and their representatives. The technical term for these are nodes and edges, and when we draw them we typically use circles (nodes) and lines (edges).
In this notebook, we will work with a social network of seventh graders, in which nodes are individual students, and edges represent their relationships. Edges between individuals show how often the seventh graders indicated other seventh graders as their favourite.
Data credit: http://konect.cc/networks/moreno_seventh
Data Representation
In the networkx implementation, graph objects store their data in dictionaries.
Nodes are part of the attribute Graph.node, which is a dictionary where the key is the node ID and the values are a dictionary of attributes.
Edges are part of the attribute Graph.edge, which is a nested dictionary. Data are accessed as such: G.edge[node1][node2]['attr_name'].
Because of the dictionary implementation of the graph, any hashable object can be a node. This means strings and tuples, but not lists and sets.
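As a quick illustration of that dictionary-based storage, here is a tiny throwaway graph (separate from the student network used below):
import networkx as nx

toy = nx.Graph()
toy.add_node('alice', gender='female')
toy.add_node('bob', gender='male')
toy.add_edge('alice', 'bob', count=2)

print(toy.nodes['alice'])            # {'gender': 'female'}
print(toy['alice']['bob']['count'])  # 2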
Load Data
Let's load some real network data to get a feel for the NetworkX API. This dataset comes from a study of 7th grade students.
This directed network contains proximity ratings among 29 seventh grade students from a school in Victoria. Among other questions, the students were asked to nominate their preferred classmates for three different activities. A node represents a student. An edge between two nodes shows that the left student picked the right student as his answer. The edge weights are between 1 and 3 and show how often the left student chose the right student as his favourite.
End of explanation
# Who are represented in the network?
G.nodes()
Explanation: Basic Network Statistics
Let's first understand how many students and friendships are represented in the network.
End of explanation
# Who is connected to who in the network?
G.edges()
Explanation: Exercise
Can you write a single line of code that returns the number of nodes in the graph? (1 min.)
Let's now figure out who is connected to who in the network
End of explanation
# Let's get a list of nodes with their attributes.
list(G.nodes(data=True))
# NetworkX will return a list of tuples in the form (node_id, attribute_dictionary)
Explanation: Exercise
Can you write a single line of code that returns the number of relationships represented? (1 min.)
Concept
A network, more technically known as a graph, is comprised of:
a set of nodes
joined by a set of edges
They can be represented as two lists:
A node list: a list of 2-tuples where the first element of each tuple is the representation of the node, and the second element is a dictionary of metadata associated with the node.
An edge list: a list of 3-tuples where the first two elements are the nodes that are connected together, and the third element is a dictionary of metadata associated with the edge.
Since this is a social network of people, there'll be attributes for each individual, such as a student's gender. We can grab that data off from the attributes that are stored with each node.
End of explanation
from collections import Counter
mf_counts = Counter([d['gender'] for _, d in G.nodes(data=True)])
def test_answer(mf_counts):
assert mf_counts['female'] == 17
assert mf_counts['male'] == 12
test_answer(mf_counts)
Explanation: Exercise
Can you count how many males and females are represented in the graph? (3 min.)
Hint: You may want to use the Counter object from the collections module.
End of explanation
G.edges(data=True)
Explanation: Edges can also store attributes in their attribute dictionary.
End of explanation
# Answer
counts = [d['count'] for _, _, d in G.edges(data=True)]
maxcount = max(counts)
def test_maxcount(maxcount):
assert maxcount == 3
test_maxcount(maxcount)
Explanation: In this synthetic social network, the number of times the left student indicated that the right student was their favourite is stored in the "count" variable.
Exercise
Can you figure out the maximum times any student rated another student as their favourite? (3 min.)
End of explanation
# Answer: Follow the coding pattern.
G.add_node(30, gender='male')
G.add_node(31, gender='female')
G.add_edges_from([(30, 31), (31, 30), (30, 7), (7, 30), (31, 7), (7, 31)], count=3)
Explanation: Exercise
We found out that there are two individuals that we left out of the network, individual no. 30 and 31. They are one male (30) and one female (31), and they are a pair that just love hanging out with one another and with individual 7 (count=3), in both directions per pair. Add this information to the graph. (5 min.)
If you need more help, check out https://networkx.github.io/documentation/stable/tutorial.html
End of explanation
def test_graph_integrity(G):
assert 30 in G.nodes()
assert 31 in G.nodes()
assert G.node[30]['gender'] == 'male'
assert G.node[31]['gender'] == 'female'
assert G.has_edge(30, 31)
assert G.has_edge(30, 7)
assert G.has_edge(31, 7)
assert G.edges[30, 7]['count'] == 3
assert G.edges[7, 30]['count'] == 3
assert G.edges[31, 7]['count'] == 3
assert G.edges[7, 31]['count'] == 3
assert G.edges[30, 31]['count'] == 3
assert G.edges[31, 30]['count'] == 3
print('All tests passed.')
test_graph_integrity(G)
Explanation: Verify that you have added in the edges and nodes correctly by running the following cell.
End of explanation
# edges whose reverse edge is not present in the graph
unrequitted_friendships = [(n1, n2) for n1, n2 in G.edges() if not G.has_edge(n2, n1)]
assert len(unrequitted_friendships) == 124
Explanation: Exercise (break-time)
If you would like a challenge during the break, try figuring out which students have "unrequited" friendships, that is, they have rated another student as their favourite at least once, but that other student has not rated them as their favourite at least once.
Specifically, get a list of edges for which the reverse edge is not present.
Hint: You may need the class method G.has_edge(n1, n2). This returns whether a graph has an edge between the nodes n1 and n2.
End of explanation
nx.draw(G)
Explanation: In a previous session at ODSC East 2018, a few other class participants attempted this problem. You can find their solutions in the Instructor version of this notebook.
Tests
A note about the tests: Testing is good practice when writing code. Well-crafted assertion statements help you program defensivel, by forcing you to explicitly state your assumptions about the code or data.
For more references on defensive programming, check out Software Carpentry's website: http://swcarpentry.github.io/python-novice-inflammation/08-defensive/
For more information on writing tests for your data, check out these slides from a lightning talk I gave at Boston Python and SciPy 2015: http://j.mp/data-test
Coding Patterns
These are some recommended coding patterns when doing network analysis using NetworkX, which stem from my roughly two years of experience with the package.
Iterating using List Comprehensions
I would recommend that you use the following for compactness:
[d['attr'] for n, d in G.nodes(data=True)]
And if the node is unimportant, you can do:
[d['attr'] for _, d in G.nodes(data=True)]
Iterating over Edges using List Comprehensions
A similar pattern can be used for edges:
[n2 for n1, n2, d in G.edges(data=True)]
or
[n2 for _, n2, d in G.edges(data=True)]
If the graph you are constructing is a directed graph, with a "source" and "sink" available, then I would recommend the following pattern:
[(sc, sk) for sc, sk, d in G.edges(data=True)]
or
[d['attr'] for sc, sk, d in G.edges(data=True)]
Drawing Graphs
As illustrated above, we can draw graphs using the nx.draw() function. The most popular format for drawing graphs is the node-link diagram.
Hairballs
Nodes are circles and lines are edges. Nodes more tightly connected with one another are clustered together. Large graphs end up looking like hairballs.
End of explanation
nx.draw(G, with_labels=True)
Explanation: If the network is small enough to visualize, and the node labels are small enough to fit in a circle, then you can use the with_labels=True argument.
End of explanation
from nxviz import MatrixPlot
m = MatrixPlot(G)
m.draw()
plt.show()
Explanation: However, note that if the number of nodes in the graph gets really large, node-link diagrams can begin to look like massive hairballs. This is undesirable for graph visualization.
Matrix Plot
Instead, we can use a matrix to represent them. The nodes are on the x- and y-axes, and a filled square represents an edge between the nodes. This is done by using the MatrixPlot object from nxviz.
End of explanation
from nxviz import ArcPlot
a = ArcPlot(G, node_color='gender', node_grouping='gender')
a.draw()
Explanation: Arc Plot
The Arc Plot is the basis of the next set of rational network visualizations.
End of explanation
from nxviz import CircosPlot
c = CircosPlot(G, node_color='gender', node_grouping='gender')
c.draw()
plt.savefig('images/seventh.png', dpi=300)
Explanation: Circos Plot
Let's try another visualization, the Circos plot. We can order the nodes in the Circos plot according to the node ID, but any other ordering is possible as well. Edges are drawn between two nodes.
Credit goes to Justin Zabilansky (MIT) for the implementation, Jon Charest for subsequent improvements, and nxviz contributors for further development.
End of explanation
from hiveplot import HivePlot
nodes = dict()
nodes['male'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'male']
nodes['female'] = [n for n,d in G.nodes(data=True) if d['gender'] == 'female']
edges = dict()
edges['group1'] = G.edges(data=True)
nodes_cmap = dict()
nodes_cmap['male'] = 'blue'
nodes_cmap['female'] = 'red'
edges_cmap = dict()
edges_cmap['group1'] = 'black'
h = HivePlot(nodes, edges, nodes_cmap, edges_cmap)
h.draw()
Explanation: This visualization helps us highlight nodes that there are poorly connected, and others that are strongly connected.
Hive Plot
Next up, let's try Hive Plots. HivePlots are not yet implemented in nxviz just yet, so we're going to be using the old hiveplot API for this. When HivePlots have been migrated over to nxviz, its API will resemble that of the CircosPlot's.
End of explanation |
5,689 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring the Lorenz System of Differential Equations
In this Notebook we explore the Lorenz system of differential equations
Step2: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation ($\sigma$, $\beta$, $\rho$), the numerical integration (N, max_time) and the visualization (angle).
Step3: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors. | Python Code:
%matplotlib inline
from IPython.html.widgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
Explanation: Exploring the Lorenz System of Differential Equations
In this Notebook we explore the Lorenz system of differential equations:
$$
\begin{aligned}
\dot{x} & = \sigma(y-x) \
\dot{y} & = \rho x - y - xz \
\dot{z} & = -\beta z + xy
\end{aligned}
$$
This is one of the classic systems in non-linear differential equations. It exhibits a range of different behaviors as the parameters ($\sigma$, $\beta$, $\rho$) are varied.
Imports
First, we import the needed things from IPython, NumPy, Matplotlib and SciPy.
End of explanation
def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('off')
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
        """Compute the time-derivative of a Lorenz system."""
x, y, z = x_y_z
return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
# choose a different color for each trajectory
colors = plt.cm.jet(np.linspace(0, 1, N))
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c=colors[i])
plt.setp(lines, linewidth=2)
ax.view_init(30, angle)
plt.show()
return t, x_t
Explanation: Computing the trajectories and plotting the result
We define a function that can integrate the differential equations numerically and then plot the solutions. This function has arguments that control the parameters of the differential equation ($\sigma$, $\beta$, $\rho$), the numerical integration (N, max_time) and the visualization (angle).
End of explanation
t, x_t = solve_lorenz(angle=0, N=30)
Explanation: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
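Since interactive and display were imported above, the parameters can also be explored with sliders; a brief sketch (the slider ranges are arbitrary choices):
w = interactive(solve_lorenz, angle=(0.0, 360.0), N=(0, 50),
                sigma=(0.0, 50.0), rho=(0.0, 50.0))
display(w)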
End of explanation |
5,690 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import the functions (assumes that QTLight_functions.py is in your current working directory or in your python path)
Step1: Fetch relevant files from stacks populations run
Step2: create 10 Bayenv input files with 5000 randomly selected loci in each
Step3: create 10 covariance matrizes with 100000 iterations each
Step4: extract covariance matrizes from final iteration into txt file
Step5: construct average covariance matrix from 10 random sets
Step6: Prepare environmental data - average and normalize
raw data is provided in a csv file with the first column containing the population id. See example in test-data.
Step7: convert vcf to bayenv - generate full SNP files
Step8: split up SNPfiles into single files
Step9: Run Bayenv2 for 10 replications serially
for this run I used bayenv2 version
Step10: ALTERNATIVE
Bayenv can be run on a HPC cluster in parallel. I provide a script submit_Bayenv_array_multi.sh that I used to run 10 replicates as arrayjob on a cluster that was running a PBS scheduling system. Total runtime for 10 replicates with 1M Bayenv iterations/SNP was ~ 24h. The results from the individual runs were then concatenated with the script concat_sorted.sh and moved to the directory running_Bayenv on the local machine.
ANALYSE RANK STATISTICS
please make sure you load all functions below first
Calculating RANK STATISTICS
Step11: CREATE POPE PLOTS and extract the SNP ids in the top 5 percent (assumes that the script pope_plot.sh is in your working directory)
Step12: CREATE POPE PLOTS and extract the SNP ids in the top 1 percent
Step13: find genes up and downstream of correlated SNPs
Step14: parse a gff file
Step15: identify genes within a defined distance (in kb) up and down-stream of the SNPs
Step16: annotated relevant genes based on blast2go annotation table
Step17: write summary table for SNPs and relevant genes in the vicinity
Step18: A strategy for removing noise could be to remove the most extreme Bayenv results and recalculate rank stats
Step19: find genes up and downstream of correlated SNPs | Python Code:
import QTLight_functions as QTL
Explanation: Import the functions (assumes that QTLight_functions.py is in your current working directory or in your python path)
End of explanation
%%bash
ln -s test-data/batch_1.vcf.gz .
ln -s test-data/populationmap .
mkdir matrix
Explanation: Fetch relevant files from stacks populations run
End of explanation
%%bash
#pip install pyvcf
for a in {1..10}
do
echo -e "\nrepetition $a:\n"
python /home/chrishah/Dropbox/Github/genomisc/popogeno/vcf_2_bayenv.py batch_1.vcf.gz --min_number 6 -r 5000 -o matrix/random_5000_rep_$a -m populationmap
done
Explanation: create 10 Bayenv input files with 5000 randomly selected loci in each
End of explanation
%%bash
cd matrix/
for a in {1..10}
do
rand=$RANDOM
echo -e "repetition $a (random seed: -$rand)\n"
/home/chrishah/src/Bayenv/bayenv2 0 -p 4 -r -$rand -k 100000 -i random_5000_rep_$a.bayenv.SNPfile > random_5000_rep_$a.log
done
cd ../
Explanation: create 10 covariance matrices with 100000 iterations each
End of explanation
%%bash
dimensions=4
dimensions=$((dimensions+1))
for a in {1..10}
do
tail -n $dimensions matrix/random_5000_rep_$a.log | grep "^$" -v > matrix/random_5000_rep_$a\_it-10e5.matrix
done
Explanation: extract covariance matrices from the final iteration into a txt file
End of explanation
import numpy as np
main_list = []
for a in range(10):
current = "matrix/random_5000_rep_"+str(a+1)+"_it-10e5.matrix"
# print current
IN = open(current,"r")
temp_list = []
for line in IN:
temp_list.extend(line.rstrip().split("\t"))
for i in range(len(temp_list)):
if a == 0:
main_list.append([float(temp_list[i])])
else:
main_list[i].append(float(temp_list[i]))
#print main_list
av_out_list = []
std_out_list = []
for j in range(len(main_list)):
av_out_list.append(np.mean(main_list[j]))
#print av_out_list
outstring = ""
for z in range(len(av_out_list)):
av_out_list[z] = "%s\t" %av_out_list[z]
if not outstring:
outstring = av_out_list[z]
else:
outstring = outstring+av_out_list[z]
if ((z+1) % 4 == 0):
outstring = "%s\n" %(outstring)
OUT = open("matrix/av_matrix.matrix","w")
OUT.write(outstring)
OUT.close()
Explanation: construct average covariance matrix from 10 random sets
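The same averaging can be written more compactly with numpy (an equivalent sketch, assuming the ten matrix files produced above exist; the numeric formatting of the output file may differ slightly from the loop version):
import numpy as np

mats = [np.loadtxt("matrix/random_5000_rep_%i_it-10e5.matrix" % (rep + 1))
        for rep in range(10)]
np.savetxt("matrix/av_matrix.matrix", np.mean(mats, axis=0), delimiter="\t")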
End of explanation
populations, IDs = QTL.normalize(csv='../Diplotaxodon_Morphometric_Data_raw.csv', normalize=True, norm_prefix='Diplotaxodon_Morphometric_Data_normalized', boxplot=False)
print populations
print IDs
Explanation: Prepare environmental data - average and normalize
raw data is provided in a csv file with the first column containing the population id. See example in test-data.
End of explanation
%%bash
mkdir SNPfiles
python /home/chrishah/Dropbox/Github/genomisc/popogeno/vcf_2_div.py ../batch_1.vcf.gz --min_number 6 -o SNPfiles/full_set -m ../populationmap
Explanation: convert vcf to bayenv - generate full SNP files
End of explanation
QTL.split_for_Bayenv(infile='SNPfiles/full_set.bayenv.SNPfile', out_prefix='SNPfiles/Diplo_SNP')
Explanation: split up SNPfiles into single files
End of explanation
#find the number of SNP files to add to specify in loop below
!ls -1 SNPfiles/SNP-* |wc -l
!mkdir running_Bayenv
%%bash
#adjust bayenv command to your requirements
iterations=1000000
cd running_Bayenv/
for rep in {1..10}; do ran=$RANDOM; for a in {0000001..0021968}; do /home/chrishah/src/Bayenv/bayenv2 -i ../SNPfiles/SNP-$a.txt -e ../Nyassochromis_normalized.bayenv -m ../matrix/av_matrix.matrix -k $iterations -r -$ran -p 3 -n 14 -t -X -o bayenv_out_k100000_env_rep_$rep-rand_$ran; done > log_rep_$rep; done
Explanation: Run Bayenv2 for 10 replications serially
for this run I used bayenv2 version: tguenther-bayenv2_public-48f0b51ced16
End of explanation
mkdir RANK_STATISTIC/
#create the list of Bayenv results files to be processed
import os
bayenv_res_dir = './running_Bayenv/'
bayenv_files = []
for fil in os.listdir(bayenv_res_dir):
if fil.endswith(".bf"):
print(bayenv_res_dir+"/"+fil)
bayenv_files.append(bayenv_res_dir+"/"+fil)
print bayenv_files
print "\n%i" %len(bayenv_files)
print IDs
rank_results = QTL.calculate_rank_stats(SNP_map="SNPfiles/full_set.bayenv.SNPmap", infiles = bayenv_files, ids = IDs, prefix = 'RANK_STATISTIC/Diplo_k_1M')
Explanation: ALTERNATIVE
Bayenv can be run on a HPC cluster in parallel. I provide a script submit_Bayenv_array_multi.sh that I used to run 10 replicates as arrayjob on a cluster that was running a PBS scheduling system. Total runtime for 10 replicates with 1M Bayenv iterations/SNP was ~ 24h. The results from the individual runs were then concatenated with the script concat_sorted.sh and moved to the directory running_Bayenv on the local machine.
ANALYSE RANK STATISTICS
please make sure you load all functions below first
Calculating RANK STATISTICS
End of explanation
print IDs
full_rank_files = []
file_dir = 'RANK_STATISTIC/'
for id in IDs:
# print id
for file in os.listdir(file_dir):
if file.endswith('_'+id+'.txt'):
# print [id,file_dir+'/'+file]
full_rank_files.append([id,file_dir+'/'+file])
break
print full_rank_files
QTL.plot_pope(files_list=full_rank_files, cutoff=0.95, num_replicates=10)
Explanation: CREATE POPE PLOTS and extract the SNP ids in the top 5 percent (assumes that the script pope_plot.sh is in your working directory)
End of explanation
QTL.plot_pope(files_list=full_rank_files, cutoff=0.99, num_replicates=10)
Explanation: CREATE POPE PLOTS and extract the SNP ids in the top 1 percent
End of explanation
#make list desired rank statistic tsv files
import os
file_dir = 'RANK_STATISTIC/'
rank_stats_files = []
for file in os.listdir(file_dir):
if file.endswith('.tsv'):
print file_dir+'/'+file
rank_stats_files.append(file_dir+'/'+file)
Explanation: find genes up and downstream of correlated SNPs
End of explanation
gff_per_scaffold = QTL.parse_gff(gff='Metriaclima_zebra.BROADMZ2.gtf')
Explanation: parse a gff file
End of explanation
genes_per_analysis = QTL.find_genes(rank_stats = rank_stats_files, gff = gff_per_scaffold, distance = 15)
Explanation: identify genes within a defined distance (in kb) up and down-stream of the SNPs
End of explanation
QTL.annotate_genes(SNPs_to_genes=genes_per_analysis, annotations='blast2go_table_20150630_0957.txt')
mkdir find_genes
Explanation: annotated relevant genes based on blast2go annotation table
End of explanation
QTL.write_candidates(SNPs_to_genes=genes_per_analysis, whitelist=genes_per_analysis.keys(), out_dir='./find_genes/')
Explanation: write summary table for SNPs and relevant genes in the vicinity
End of explanation
mkdir RANK_STATISTIC_reduced
QTL.exclude_extreme_rep(dictionary = rank_results, ids = IDs, prefix = 'RANK_STATISTIC_reduced/Diplotaxodon_reduced')
reduced_rank_files = []
file_dir = 'RANK_STATISTIC_reduced/'
for id in IDs:
# print id
for file in os.listdir(file_dir):
if '_'+id+'_ex_rep' in file and file.endswith('.txt'):
# print [id,file_dir+'/'+file]
reduced_rank_files.append([id,file_dir+'/'+file])
break
print reduced_rank_files
QTL.plot_pope(files_list=reduced_rank_files, cutoff=0.95, num_replicates=9)
Explanation: A strategy for removing noise could be to remove the most extreme Bayenv results and recalculate rank stats
End of explanation
#make list desired rank statistic tsv files
import os
file_dir = 'RANK_STATISTIC_reduced/'
rank_stats_files = []
for file in os.listdir(file_dir):
if file.endswith('.tsv'):
print file_dir+'/'+file
rank_stats_files.append(file_dir+'/'+file)
genes_per_analysis = QTL.find_genes(rank_stats = rank_stats_files, gff = gff_per_scaffold, distance = 15)
QTL.annotate_genes(SNPs_to_genes=genes_per_analysis, annotations='blast2go_table_20150630_0957.txt')
mkdir find_genes_reduced/
QTL.write_candidates(SNPs_to_genes=genes_per_analysis, whitelist=genes_per_analysis.keys(), out_dir='./find_genes_reduced/')
Explanation: find genes up and downstream of correlated SNPs
End of explanation |
5,691 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize VGG16 Filters
Visualization of the filters of VGG16, via gradient ascent in input space.
Step1: Create a function to extract and display the generated input
Step2: Build the VGG16 network with ImageNet weights
Note that we only go up to the last convolutional layer --we don't include fully-connected layers. The reason is that adding the fully connected layers forces you to use a fixed input size for the model (224x224, the original ImageNet format). By only keeping the convolutional modules, our model can be adapted to arbitrary input sizes.
Step3: Define a loss function
This loss function will seek to maximize the activation of a specific filter (filter_index) in a specific layer (layer_name) | Python Code:
from __future__ import print_function
from scipy.misc import imsave
import numpy as np
import time
from keras.applications import vgg16
from keras import backend as K
# dimensions of the generated pictures for each filter.
img_width = 128
img_height = 128
# the name of the layer we want to visualize
# (see model definition at keras/applications/vgg16.py)
layer_name = 'block5_conv1'
Explanation: Visualize VGG16 Filters
Visualization of the filters of VGG16, via gradient ascent in input space.
End of explanation
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
if K.image_data_format() == 'channels_first':
x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
Explanation: Create a function to extract and display the generated input
End of explanation
model = vgg16.VGG16(weights='imagenet', include_top=False)
print('Model loaded.')
model.summary()
Explanation: Build the VGG16 network with ImageNet weights
Note that we only go up to the last convolutional layer --we don't include fully-connected layers. The reason is that adding the fully connected layers forces you to use a fixed input size for the model (224x224, the original ImageNet format). By only keeping the convolutional modules, our model can be adapted to arbitrary input sizes.
End of explanation
# this is the placeholder for the input images
input_img = model.input
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
def normalize(x):
# utility function to normalize a tensor by its L2 norm
return x / (K.sqrt(K.mean(K.square(x))) + 1e-5)
kept_filters = []
for filter_index in range(0, 200):
# we only scan through the first 200 filters,
# but there are actually 512 of them
print('Processing filter %d' % filter_index)
start_time = time.time()
# we build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
if K.image_data_format() == 'channels_first':
loss = K.mean(layer_output[:, filter_index, :, :])
else:
loss = K.mean(layer_output[:, :, :, filter_index])
# we compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, input_img)[0]
# normalization trick: we normalize the gradient
grads = normalize(grads)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# step size for gradient ascent
step = 1.
# we start from a gray image with some random noise
if K.image_data_format() == 'channels_first':
input_img_data = np.random.random((1, 3, img_width, img_height))
else:
input_img_data = np.random.random((1, img_width, img_height, 3))
input_img_data = (input_img_data - 0.5) * 20 + 128
# we run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([input_img_data])
input_img_data += grads_value * step
print('Current loss value:', loss_value)
if loss_value <= 0.:
# some filters get stuck to 0, we can skip them
break
# decode the resulting input image
if loss_value > 0:
img = deprocess_image(input_img_data[0])
kept_filters.append((img, loss_value))
end_time = time.time()
print('Filter %d processed in %ds' % (filter_index, end_time - start_time))
# we will stitch the best 64 filters on an 8 x 8 grid.
n = 8
# the filters that have the highest loss are assumed to be better-looking.
# we will only keep the top 64 filters.
kept_filters.sort(key=lambda x: x[1], reverse=True)
kept_filters = kept_filters[:n * n]
# build a black picture with enough space for
# our 8 x 8 filters of size 128 x 128, with a 5px margin in between
margin = 5
width = n * img_width + (n - 1) * margin
height = n * img_height + (n - 1) * margin
stitched_filters = np.zeros((width, height, 3))
# fill the picture with our saved filters
for i in range(n):
for j in range(n):
img, loss = kept_filters[i * n + j]
stitched_filters[(img_width + margin) * i: (img_width + margin) * i + img_width,
(img_height + margin) * j: (img_height + margin) * j + img_height, :] = img
# save the result to disk
imsave('img/stitched_filters.png' , stitched_filters)
Explanation: Define a loss function
This loss function will seek to maximize the activation of a specific filter (filter_index) in a specific layer (layer_name)
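To view the saved mosaic inline afterwards, a small sketch (assuming the img/ directory exists and the file was written):
from IPython.display import Image

# show the stitched grid of filter visualizations
Image(filename='img/stitched_filters.png')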
End of explanation |
5,692 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SLTimer Example Analysis of TDC2 Data
This notebook shows you how to find the estimation of a lens time delay from TDC2 light curve data using the PyCS code. For a detailed tutorial through the PyCS code, please visit this address.
First, we'll import SLTimer, as well as a few other important commands.
Step1: Data Munging
Now, let's start a timer object, and download some data to use. The output should show 1006 imported points if we are using the correct tdc2-gateway-1.txt file.
Step2: Initialization
The TDC2 light curve data files have headers that contain some lens model information
Step3: Spline Modeling
We're now ready to analyze this data. We'll start with it as-is, and then later try "whitening" it.
The following lines will run an entire free-knot spline technique on your data with a complete error analysis using the TDC2 method. Below, you can specify how the time delays will be analyzed. The default is listed below according to the PyCS tutorial. See the bottom of the page for alternate methods.
Step4: While the time delays have been estimated, we can see that the different images' light curves are not shifted and microlensing-corrected terribly well.
Step5: Whitening the Light Curves
In the above analysis we ignored the fact that the magnitudes were measured in 6 different filters, and just used them all as if they were from the same filter. By offsetting the light curves to a common mean, we should get a set of points that look more like they were taken in one filter. This process is known as "whitening."
Step6: The change brought about by whitening is pretty subtle
Step7: How does the plot compare? Note that the y axis scale is different
Step8: Unwhitened
Step9: Truth | Python Code:
from __future__ import print_function
import os, urllib, numpy as np
%matplotlib inline
import sys
sys.path.append('../python')
import desc.sltimer
%load_ext autoreload
%autoreload 2
Explanation: SLTimer Example Analysis of TDC2 Data
This notebook shows you how to find the estimation of a lens time delay from TDC2 light curve data using the PyCS code. For a detailed tutorial through the PyCS code, please visit this address.
First, we'll import SLTimer, as well as a few other important commands.
End of explanation
timer = desc.sltimer.SLTimer()
url = "http://www.slac.stanford.edu/~pjm/LSST/DESC/SLTimeDelayChallenge/release/tdc2/gateway/tdc2-gateway-1.txt"
timer.download(url, and_read=True, format='tdc2')
timer.display_light_curves(jdrange=(59500,63100))
Explanation: Data Munging
Now, let's start a timer object, and download some data to use. The output should show 1006 imported points if we are using the correct tdc2-gateway-1.txt file.
End of explanation
# Time delays all set to zero:
# timer.initialize_time_delays(method=None)
# Draw time delays from the prior, using knowledge of H0 and the lens model:
# timer.initialize_time_delays(method='H0_prior', pars=[70.0, 7.0])
# "Guess" the time delays - for testing, let's try something close to the true value:
timer.initialize_time_delays(method='guess', pars={'AB':55.0})
Explanation: Initialization
The TDC2 light curve data files have headers that contain some lens model information: the Fermat potential differences between the image pairs. These are related to the time delays, by a cosmological distance factor $Q$ and the Hubble constant $H_0$. A broad Gaussian prior on $H_0$ will translate to an approximately Gaussian prior on the time delays. Let's draw from this prior to help initialize the time delays in our model.
End of explanation
timer.estimate_time_delays(method='pycs', microlensing='spline', agn='spline', error=None, quietly=True)
timer.report_time_delays()
timer.display_light_curves(jdrange=(59500,63100))
Explanation: Spline Modeling
We're now ready to analyze this data. We'll start with it as-is, and then later try "whitening" it.
The following lines will run an entire free-knot spline technique on your data with a complete error analysis using the TDC2 method. Below, you can specify how the time delays will be analyzed. The default is listed below according to the PyCS tutorial. See the bottom of the page for alternate methods.
End of explanation
# timer.estimate_uncertainties(n=3,npkl=5)
Explanation: While the time delays have been estimated, we can see that the different images' light curves are not shifted and microlensing-corrected terribly well.
End of explanation
wtimer = desc.sltimer.SLTimer()
wtimer.download(url, and_read=True, format='tdc2')
wtimer.whiten()
Explanation: Whitening the Light Curves
In the above analysis we ignored the fact that the magnitudes were measured in 6 different filters, and just used them all as if they were from the same filter. By offsetting the light curves to a common mean, we should get a set of points that look more like they were taken in one filter. This process is known as "whitening."
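Conceptually, whitening just shifts each filter's magnitudes onto a common mean. A rough pandas illustration of the idea on a made-up table (this is only an illustration, not SLTimer's actual implementation):
import pandas as pd

toy = pd.DataFrame({'filter': ['u', 'u', 'g', 'g', 'r'],
                    'mag':    [22.1, 22.3, 21.0, 21.2, 20.5]})
overall_mean = toy['mag'].mean()
per_filter_mean = toy.groupby('filter')['mag'].transform('mean')
toy['mag_whitened'] = toy['mag'] - per_filter_mean + overall_mean
print(toy)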
End of explanation
wtimer.display_light_curves(jdrange=(59500,63100))
wtimer.initialize_time_delays(method='guess', pars={'AB':55.0})
wtimer.estimate_time_delays(method='pycs', microlensing='spline', agn='spline', error=None, quietly=True)
wtimer.display_light_curves(jdrange=(59500,63100))
Explanation: The change brought about by whitening is pretty subtle: the means of each image's light curve stay the same (by design), but the scatter in each image's light curve is somewhat reduced.
End of explanation
wtimer.report_time_delays()
Explanation: How does the plot compare? Note that the y axis scale is different: the shifted light curves seem to match up better now.
Now let's look at the estimated time delays - by how much do they differ, between the unwhitened and whitened data?
Whitened:
End of explanation
timer.report_time_delays()
Explanation: Unwhitened:
End of explanation
truthurl = "http://www.slac.stanford.edu/~pjm/LSST/DESC/SLTimeDelayChallenge/release/tdc2/gateway/gatewaytruth.txt"
truthfile = truthurl.split('/')[-1]
if not os.path.isfile(truthfile):
urllib.urlretrieve(truthurl, truthfile)
d = np.loadtxt(truthfile).transpose()
truth = d[0]
print("True Time Delays:", truth[0])
Explanation: Truth:
End of explanation |
5,693 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Rules
Step1: 2) (From jakevdp)
Step2: 3) Write a function called sum_digits that returns the sum of the digits of an integer argument; that is, sum_digits(123) should return 6. Use this function in an other function that prints out the sum of the digits of every integer multiple of the first argument, up to either a second optional argument (if included) or the first argument's square. That is | Python Code:
#The following line will only work if you create the make_sentence.py in the current directory
import make_sentence.py
Explanation: Exercises
Rules:
Every variable/function/class name should be meaningful
Variable/function names should be lowercase, class names uppercase
Write a documentation string (even if minimal) for every function.
1) (From jakevdp): Create a program (a .py file) which repeatedly asks the user for a word. The program should append all the words together. When the user types a "!", "?", or a ".", the program should print the resulting sentence and exit.
For example, a session might look like this::
$ ./make_sentence.py
Enter a word (. ! or ? to end): My
Enter a word (. ! or ? to end): name
Enter a word (. ! or ? to end): is
Enter a word (. ! or ? to end): Walter
Enter a word (. ! or ? to end): White
Enter a word (. ! or ? to end): !
My name is Walter White!
End of explanation
#Your move
Explanation: 2) (From jakevdp): Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”. If you finish quickly... see how few characters you can write this program in (this is known as "code golf": going for the fewest key strokes).
End of explanation
def sum_digits(number):
#Fill it
return
#Don't forget the second function
#And test it
Explanation: 3) Write a function called sum_digits that returns the sum of the digits of an integer argument; that is, sum_digits(123) should return 6. Use this function in an other function that prints out the sum of the digits of every integer multiple of the first argument, up to either a second optional argument (if included) or the first argument's square. That is::
list_multiple(4) #with one argument
4
8
3
7
And I'll let you figure out what it looks like with a second optional argument
End of explanation |
5,694 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Computing source space SNR
This example shows how to compute and plot source space SNR as in [1]_.
Step1: EEG
Next we do the same for EEG and plot the result on the cortex | Python Code:
# Author: Padma Sundaram <[email protected]>
# Kaisu Lankinen <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse
import numpy as np
import matplotlib.pyplot as plt
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
# Read data
fname_evoked = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname_evoked, condition='Left Auditory',
baseline=(None, 0))
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
fname_cov = data_path + '/MEG/sample/sample_audvis-cov.fif'
fwd = mne.read_forward_solution(fname_fwd)
cov = mne.read_cov(fname_cov)
# Read inverse operator:
inv_op = make_inverse_operator(evoked.info, fwd, cov, fixed=True, verbose=True)
# Calculate MNE:
snr = 3.0
lambda2 = 1.0 / snr ** 2
stc = apply_inverse(evoked, inv_op, lambda2, 'MNE', verbose=True)
# Calculate SNR in source space:
snr_stc = stc.estimate_snr(evoked.info, fwd, cov)
# Plot an average SNR across source points over time:
ave = np.mean(snr_stc.data, axis=0)
fig, ax = plt.subplots()
ax.plot(evoked.times, ave)
ax.set(xlabel='Time (sec)', ylabel='SNR MEG-EEG')
fig.tight_layout()
# Find time point of maximum SNR:
maxidx = np.argmax(ave)
# Plot SNR on source space at the time point of maximum SNR:
kwargs = dict(initial_time=evoked.times[maxidx], hemi='split',
views=['lat', 'med'], subjects_dir=subjects_dir, size=(600, 600),
clim=dict(kind='value', lims=(-100, -70, -40)),
transparent=True, colormap='viridis')
brain = snr_stc.plot(**kwargs)
Explanation: Computing source space SNR
This example shows how to compute and plot source space SNR as in [1]_.
End of explanation
evoked_eeg = evoked.copy().pick_types(eeg=True, meg=False)
inv_op_eeg = make_inverse_operator(evoked_eeg.info, fwd, cov, fixed=True,
verbose=True)
stc_eeg = apply_inverse(evoked_eeg, inv_op_eeg, lambda2, 'MNE', verbose=True)
snr_stc_eeg = stc_eeg.estimate_snr(evoked_eeg.info, fwd, cov)
brain = snr_stc_eeg.plot(**kwargs)
Explanation: EEG
Next we do the same for EEG and plot the result on the cortex:
End of explanation |
5,695 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
seaborn.violinplot
Violinplots summarize numeric data over a set of categories. They are essentially a box plot with a kernel density estimate (KDE) overlaid along the range of the box and reflected to make it look nice. They provide more information than a boxplot because they also include information about how the data is distributed within the inner quartiles.
dataset
Step1: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
Step2: Basic plot
Step3: The outliers here are making things a bit squished, so I'll remove them since I am just interested in demonstrating the visualization tool.
Step4: Change the order of categories
Step5: Change the order that the colors are chosen
Change orientation to horizontal
Step6: Desaturate
Step7: Adjust width of violins
Step8: Change the size of outlier markers
Step9: Adjust the bandwidth of the KDE filtering parameter. Smaller values will use a thinner kernel and thus will contain higher feature resolution but potentially noise. Here are examples of low and high settings to demonstrate the difference.
Step10: Finalize | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
plt.rcParams['figure.figsize'] = (20.0, 10.0)
plt.rcParams['font.family'] = "serif"
df = pd.read_csv('../../../datasets/movie_metadata.csv')
df.head()
Explanation: seaborn.violinplot
Violinplots summarize numeric data over a set of categories. They are essentially a box plot with a kernel density estimate (KDE) overlaid along the range of the box and reflected to make it look nice. They provide more information than a boxplot because they also include information about how the data is distributed within the inner quartiles.
dataset: IMDB 5000 Movie Dataset
End of explanation
# split each movie's genre list, then form a set from the unwrapped list of all genres
categories = set([s for genre_list in df.genres.unique() for s in genre_list.split("|")])
# one-hot encode each movie's classification
for cat in categories:
df[cat] = df.genres.transform(lambda s: int(cat in s))
# drop other columns
df = df[['director_name','genres','duration'] + list(categories)]
df.head()
# convert from wide to long format and remove null classificaitons
df = pd.melt(df,
id_vars=['duration'],
value_vars = list(categories),
var_name = 'Category',
value_name = 'Count')
df = df.loc[df.Count>0]
top_categories = df.groupby('Category').aggregate(sum).sort_values('Count', ascending=False).index
howmany=10
df = df.loc[df.Category.isin(top_categories[:howmany])]
df.rename(columns={"duration":"Duration"},inplace=True)
df.head()
Explanation: For the bar plot, let's look at the number of movies in each category, allowing each movie to be counted more than once.
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration')
Explanation: Basic plot
End of explanation
df = df.loc[df.Duration < 250]
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration')
Explanation: The outliers here are making things a bit squished, so I'll remove them since I am just interested in demonstrating the visualization tool.
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()))
Explanation: Change the order of categories
End of explanation
p = sns.violinplot(data=df,
y = 'Category',
x = 'Duration',
order = sorted(df.Category.unique()),
orient="h")
Explanation: Change the order that the colors are chosen
Change orientation to horizontal
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
saturation=.25)
Explanation: Desaturate
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
width=.25)
Explanation: Adjust width of violins
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
fliersize=20)
Explanation: Change the size of outlier markers
End of explanation
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
bw=.05)
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
order = sorted(df.Category.unique()),
bw=5)
Explanation: Adjust the bandwidth of the KDE filtering parameter. Smaller values will use a thinner kernel and thus will contain higher feature resolution but potentially noise. Here are examples of low and high settings to demonstrate the difference.
End of explanation
sns.set(rc={"axes.facecolor":"#e6e6e6",
"axes.grid":False,
'axes.labelsize':30,
'figure.figsize':(20.0, 10.0),
'xtick.labelsize':25,
'ytick.labelsize':20})
p = sns.violinplot(data=df,
x = 'Category',
y = 'Duration',
palette = 'spectral',
order = sorted(df.Category.unique()),
notch=True)
plt.xticks(rotation=45)
l = plt.xlabel('')
plt.ylabel('Duration (min)')
plt.text(4.85,200, "Violin Plot", fontsize = 95, color="black", fontstyle='italic')
p.get_figure().savefig('../../figures/violinplot.png')
Explanation: Finalize
End of explanation |
5,696 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dimensionality Reduction with Eigenvector / Eigenvalues and Correlation Matrix (PCA)
inspired by http
Step1: First we need the correlation matrix
Step2: Eigenvalues
Step3: Eigenvector as Principal component
Step4: Create the projection matrix for a new two dimensional space | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from numpy import linalg as LA
from sklearn import datasets
iris = datasets.load_iris()
Explanation: Dimensionality Reduction with Eigenvector / Eigenvalues and Correlation Matrix (PCA)
inspired by http://sebastianraschka.com/Articles/2015_pca_in_3_steps.html#eigendecomposition---computing-eigenvectors-and-eigenvalues
End of explanation
df = pd.DataFrame(iris.data, columns=iris.feature_names)
corr = df.corr()
df.corr()
_ = sns.heatmap(corr)
eig_vals, eig_vecs = LA.eig(corr)
eig_pairs = [(np.abs(eig_vals[i]), eig_vecs[:,i]) for i in range(len(eig_vals))]
eig_pairs.sort(key=lambda x: x[0], reverse=True)
Explanation: First we need the correlation matrix
End of explanation
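As a quick side check (a sketch), using the correlation matrix here is equivalent to standardizing the features first and then taking their covariance matrix:
#sketch: standardize the features and confirm their covariance equals the correlation matrix
standardized = (df - df.mean()) / df.std()
print(np.allclose(standardized.cov(), corr))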
pd.DataFrame([eig_vals])
Explanation: Eigenvalues
End of explanation
pd.DataFrame(eig_vecs)
Explanation: Eigenvector as Principal component
End of explanation
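The eigenvalues also tell us how much of the total variance each principal component captures; a quick sketch:
#sketch: share of total variance captured by each component
explained_variance_ratio = eig_vals / eig_vals.sum()
pd.DataFrame([explained_variance_ratio])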
matrix_w = np.hstack((eig_pairs[0][1].reshape(len(corr),1),
eig_pairs[1][1].reshape(len(corr),1)))
pd.DataFrame(matrix_w, columns=['PC1', 'PC2'])
new_dim = np.dot(np.array(iris.data), matrix_w)
df = pd.DataFrame(new_dim, columns=['X', 'Y'])
df['label'] = iris.target
df.head()
fig = plt.figure()
fig.suptitle('PCA with Eigenvector', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
plt.scatter(df[df.label == 0].X, df[df.label == 0].Y, color='red', label=iris.target_names[0])
plt.scatter(df[df.label == 1].X, df[df.label == 1].Y, color='blue', label=iris.target_names[1])
plt.scatter(df[df.label == 2].X, df[df.label == 2].Y, color='green', label=iris.target_names[2])
_ = plt.legend(bbox_to_anchor=(1.25, 1))
Explanation: Create the projection matrix for a new two dimensional space
End of explanation |
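As an optional cross-check (a sketch; per-component sign flips relative to the manual eigenvectors are expected and harmless), scikit-learn's PCA on the standardized data should give the same two-dimensional projection:
#sketch: cross-check the manual projection against sklearn's PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
scaled = StandardScaler().fit_transform(iris.data)
pca = PCA(n_components=2)
sklearn_projection = pca.fit_transform(scaled)
print(pca.explained_variance_ratio_)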
5,697 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Titanic Project
For this project, I want to investigate the unfortunate tragedy of the sinking of the Titanic. The movie "Titanic", which I watched when I was still a child, left a strong memory for me. The event occurred in the early morning of 15 April 1912, when the ship collided with an iceberg, and out of 2,224 passengers, more than 1,500 died.
The dataset I am working with contains the demographic information, and other information including ticket class, cabin number, fare price of 891 passengers. The main question I am curious about
Step1: Data Dictionary
Variables Definitions
survival (Survival 0 = No, 1 = Yes)
pclass (Ticket class 1 = 1st, 2 = 2nd, 3 = 3rd)
sex (Sex)
Age (Age in years)
sibsp (# of siblings / spouses aboard the Titanic)
parch (# of parents / children aboard the Titanic)
ticket (Ticket number)
fare (Passenger fare price)
cabin (Cabin number)
embarked(Port of Embarkation C = Cherbourg, Q = Queenstown, S = Southampton)
Note
Step2: Looks like there are no duplicated entries based on passengers ID. We have in total 891 passengers in the dataset. However I have noticed there is a lot of missing values in 'Cabin' feature, and the 'Ticket' feature does not provide useful information for my analysis. I decided to remove them from the dataset by using drop() function
There are also some missing values in the 'Age', I can either removed them or replace them with the mean. Considering there is still a good sample size (>700 entries) after removal, I decide to remove the missing values with dropNa()
Step3: Take a look at Survived and Pclass columns. They are not very descriptive, so I decided to add two additional columns called Survival and Class with more descriptive values.
Step4: Data overview
Now with a clean dataset, I am ready to formulate my hypothesis. I want to get a general overview of statistics for the dataset first. I use the describe() function on the data set. The useful statistic to look at is the mean, which gives us a general idea what the average value is for each feature. The standard deviation provides information on the spread of the data. The min and max give me information regarding whether there are outliers in the dataset. We should be careful and take these outliers into account when analyzing our data. I also calculate the median for each column in case there are outliers.
Step5: Looking at the means and medians, we see that the biggest difference is between mean and median of fare price. The mean is 34.57 while the median is only 15.65. It is likely due to the presence of outliers, the wealthy individuals who could afford the best suits. For example, the highest price fare is well over 500 dollars. I also see that the lowest fare price is 0, I suspect that those are the ship crews.
Now let's study the distribution of variables of interest. The countplot() from seaborn library plots a barplot that shows the counts of the variables. Let's take a look at our dependent variable - "Survived"
Step6: We see that there were 342 passengers survived the disaster or around 38% of the sample.
Now, we also want to look at the distribution of some of other data including gender, socioeconomic class, age, and fare price. Gender, socioeconomic class, age are all categorical data, and barplot is best suited to show their count distribution. Fare price is a continuous variable, and a frequency distribution plot is used to study it.
Step7: It is now a good idea to combine the two graph to see how is gender and socioeconomic class intertwined. We see that among men, there is a much higher number of lower socioeconomic class individuals compared to women. For middle and upper class, the number of men and women are very similar. It is likely that families made up of the majority middle and upper-class passengers, while the lower class passengers are mostly single men.
Step8: Fare price is a continuous variable, and for this type of variable, we use seaborn.distplot() to study its frequency distribution.
In comparison, age is a discrete variable and can be plotted by seaborn.countplot() which plots a bar plot that shows the counts.
We align the two plots horizontal using add_subplot to better demonstrate this difference.
Step9: We can see that the shape of two plots is quite different.
* The fare price distribution plot shows a positively skewed curve, as most of the prices are concentrated below 30 dollars, and highest prices are well over 500 dollars
* The age distribution plot demonstrates more of a bell-shaped curve (Gaussian distribution) with a slight mode for infants and young children. I suspect the slight spike for infants and young children is due to the presence of young families.
Observations on the dataset
342 passengers or roughly 38% of total survived.
There were significantly more men than women on board.
There are significantly higher numbers of lower class passengers compared to the mid and upper class.
The majority of fares sold are below 30 dollars, however, the upper price range of fare is very high, the most expensive ones are over 500 dollars, which should be considered outliers.
Hypothesis
Based on the overview of the data, I formulated 3 potential features that may have influenced the survival.
1. Fare price
Step10: This is not surprising that the outliers existed exclusively in the high socioeconomic class group, as only the wealthy individuals can afford the higher fare price.
This is clear that the upper class were able to afford more expensive fares, with highest fares above 500 dollars.
To look at the survival rate, I break down the fare data into two groups
Step11: The bar plot using matplotlib.pyplot does a reasonable job of showing the difference in survival rate between the two groups.
However with seaborn.barplot(), confidence intervals are directly calculated and displayed. This is an advantage of seaborn library.
Step12: As seen from the graph, taking into account of confidence intervals, higher fare group is associated with significantly higher survival rate (~0.62) compared to lower fare group (~0.31).
How about if we just look at fare price as the continuous variable in relation to survival outcome?
When the Y variable is binary like survival outcome in this case, the statistical analysis suitable is "logistic Regression", where x variable is used as an estimator for the binary outcome of Y variable.
Fortunately, Seaborn.lmplot() allows us to graph the logistic regression function using fare price as an estimator for survival, the function displays a sigmoid shape and higher fare price is indeed associated with the better chance of survival.
Note
Step13: The fare distribution between survivors and non-survivors shows that there is peak in mortality for low fare price.
Gender and Survival
For this section, I am interested in investigation gender and survival rate. I will first calculate the survival rate for both female and male. Then plot a few graphs to visualize the relationship between gender and survival, and combine with other factors such as fare price and socioeconomic class.
Step14: Therefore, being a female is associated with significantly higher survival rate compared to male.
In addition, being in the higher socioeconomic group and higher fare group are associated with a higher survival rate in both male and female.
The difference is that in the male the survival rates are similar for class 2 and 3 with class 1 being much higher, while in the female the survival rates are similar for class 1 and 2 with class 3 being much lower.
Age and Survival
To study the relationship between age and survival rate. First, I seperate age into 6 groups number from 1 to 6
Step15: Now, we want to plot a bar graph showing the relationship between age group and survival rate. Age group is used here instead of age because visually age group is easier to observe than using age variable when dealing with survival rate.
Step16: Age Group 1
Step17: Age Group 1
Step18: From the graph, we can see there is a negative linear relationship between age and survival outcome.
Step19: The age distribution comparison between survivors and non-survivors confirmed the survival spike in young children.
Limitations
There are limitations on our analysis | Python Code:
#load the libraries that I might need to use
%matplotlib inline
import pandas as pd
import numpy as np
import csv
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
#read the csv file into a pandas dataframe
titanic_original = pd.DataFrame.from_csv('titanic-data.csv', index_col=None)
titanic_original
Explanation: The Titanic Project
For this project, I want to investigate the unfortunate tragedy of the sinking of the Titanic. The movie "Titanic", which I watched when I was still a child, left a strong memory for me. The event occurred in the early morning of 15 April 1912, when the ship collided with an iceberg, and out of 2,224 passengers, more than 1,500 died.
The dataset I am working with contains the demographic information, and other information including ticket class, cabin number, fare price of 891 passengers. The main question I am curious about: What are the factors that correlate with the survival outcome of passengers?
Load the dataset
First of all, I want to get an overview of the data and identify whether there is additional data cleaning/wrangling to be done before diving deeper. I start off by reading the CSV file into a Pandas Dataframe.
End of explanation
#check if there is duplicated data by checking passenger ID.
len(titanic_original['PassengerId'].unique())
Explanation: Data Dictionary
Variables Definitions
survival (Survival 0 = No, 1 = Yes)
pclass (Ticket class 1 = 1st, 2 = 2nd, 3 = 3rd)
sex (Sex)
Age (Age in years)
sibsp (# of siblings / spouses aboard the Titanic)
parch (# of parents / children aboard the Titanic)
ticket (Ticket number)
fare (Passenger fare price)
cabin (Cabin number)
embarked(Port of Embarkation C = Cherbourg, Q = Queenstown, S = Southampton)
Note:
pclass: A proxy for socio-economic status (SES)
-1nd = Upper
-2nd = Middle
-3rd = Lower
age: Age is fractional if less than 1.
sibsp: number of siblings and spouse
-Sibling = brother, sister, stepbrother, stepsister
-Spouse = husband, wife (mistresses and fiancés were ignored)
parch: number of parents and children
-Parent = mother, father
-Child = daughter, son, stepdaughter, stepson
-Some children travelled only with a nanny, therefore parch=0 for them.
Data Cleaning
I want to check whether there is any duplicated data. Using unique(), I check the passenger IDs to see whether there are any duplicated entries.
End of explanation
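An equivalent check (a sketch) is to count duplicated passenger IDs directly with duplicated():
#sketch: count duplicated passenger IDs directly
titanic_original['PassengerId'].duplicated().sum()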
#make a copy of dataset
titanic_cleaned=titanic_original.copy()
#remove ticket and cabin feature from dataset
titanic_cleaned=titanic_cleaned.drop(['Ticket','Cabin'], axis=1)
#Remove missing values.
titanic_cleaned=titanic_cleaned.dropna()
#Check to see if the cleaning is successful
titanic_cleaned.head()
Explanation: It looks like there are no duplicated entries based on passenger ID. We have 891 passengers in total in the dataset. However, I have noticed there are a lot of missing values in the 'Cabin' feature, and the 'Ticket' feature does not provide useful information for my analysis, so I decided to remove both columns from the dataset using the drop() function.
There are also some missing values in 'Age'; I can either remove them or replace them with the mean. Considering there is still a good sample size (>700 entries) after removal, I decide to drop the rows with missing values using dropna().
End of explanation
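If we preferred to keep all 891 rows, a sketch of the imputation alternative would fill the missing ages with the median (or mean) instead of dropping them; it is not used in the rest of this analysis.
#sketch: keep all rows and impute the median age instead of dropping missing values
titanic_imputed = titanic_original.drop(['Ticket', 'Cabin'], axis=1)
titanic_imputed['Age'] = titanic_imputed['Age'].fillna(titanic_imputed['Age'].median())
titanic_imputed['Age'].isnull().sum()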
# Create Survival Label Column
titanic_cleaned['Survival'] = titanic_cleaned.Survived.map({0 : 'Died', 1 : 'Survived'})
titanic_cleaned.head()
# Create Class Label Column
titanic_cleaned['Class'] = titanic_cleaned.Pclass.map({1 : 'Upper Class', 2 : 'Middle Class', 3 : 'Lower Class'})
titanic_cleaned.head()
Explanation: Take a look at Survived and Pclass columns. They are not very descriptive, so I decided to add two additional columns called Survival and Class with more descriptive values.
End of explanation
#describe() provides a statistical overview of the dataset
titanic_cleaned.describe()
#calculate the median for each column
titanic_cleaned.median()
Explanation: Data overview
Now with a clean dataset, I am ready to formulate my hypothesis. I want to get a general overview of statistics for the dataset first. I use the describe() function on the data set. The useful statistic to look at is the mean, which gives us a general idea what the average value is for each feature. The standard deviation provides information on the spread of the data. The min and max give me information regarding whether there are outliers in the dataset. We should be careful and take these outliers into account when analyzing our data. I also calculate the median for each column in case there are outliers.
End of explanation
#I am using seaborn.countplot() to count and show the distribution of a single variable
sns.set(style="darkgrid")
ax = sns.countplot(x="Survival", data=titanic_cleaned)
plt.title("Distribution of Survival")
Explanation: Looking at the means and medians, we see that the biggest difference is between the mean and median of the fare price. The mean is 34.57 while the median is only 15.65. This is likely due to the presence of outliers: the wealthy individuals who could afford the best suites. For example, the highest fare is well over 500 dollars. I also see that the lowest fare price is 0; I suspect those belong to the ship's crew.
Now let's study the distribution of variables of interest. The countplot() from seaborn library plots a barplot that shows the counts of the variables. Let's take a look at our dependent variable - "Survived"
End of explanation
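A quick sketch to check the zero-fare suspicion: count how many passengers paid a fare of 0.
#sketch: how many passengers have a recorded fare of 0?
(titanic_cleaned['Fare'] == 0).sum()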
#plt.figure() allows me to specify the size of the graph.
#using fig.add_subplot allows me to display two subplots side by side
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax1=sns.countplot(x="Sex", data=titanic_cleaned)
plt.title("Distribution of Gender")
fig.add_subplot(122)
ax2 = sns.countplot(x="Class", data=titanic_cleaned)
plt.title('Distributrion of Class')
Explanation: We see that 342 passengers, or around 38% of the sample, survived the disaster.
Now, we also want to look at the distribution of some of the other variables, including gender, socioeconomic class, age, and fare price. Gender, socioeconomic class, and age are all categorical data, and a bar plot is best suited to show their count distribution. Fare price is a continuous variable, and a frequency distribution plot is used to study it.
End of explanation
#By using hue argument, we can study the another variable, combine with our original variable
sns.countplot(x='Sex', hue='Class', data=titanic_cleaned)
plt.title('Gender and Socioeconomic class')
Explanation: It is now a good idea to combine the two graphs to see how gender and socioeconomic class are intertwined. We see that among men, there is a much higher number of lower socioeconomic class individuals compared to women. For the middle and upper classes, the numbers of men and women are very similar. It is likely that families made up the majority of middle- and upper-class passengers, while the lower-class passengers were mostly single men.
End of explanation
#Use fig to store plot dimension
#use add_subplot to display two plots side by side
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
sns.distplot(titanic_cleaned.Fare)
plt.title('Distribution of fare price')
plt.ylabel('Density')
#for this plot, kde must be explicitly turn off for the y axis to counts instead of frequency density
axe2=fig.add_subplot(122)
sns.distplot(titanic_cleaned.Age,bins=40,hist=True, kde=False)
plt.title('Distribution of age')
plt.ylabel('Count')
Explanation: Fare price is a continuous variable, and for this type of variable, we use seaborn.distplot() to study its frequency distribution.
In comparison, age is a discrete variable and can be plotted by seaborn.countplot() which plots a bar plot that shows the counts.
We align the two plots horizontally using add_subplot to better demonstrate this difference.
End of explanation
#multiple plots can be overlayed. Boxplot() and striplot() turned out to be a good combination
sns.boxplot(x="Class", y="Fare", data=titanic_cleaned)
sns.stripplot(x="Class", y="Fare", data=titanic_cleaned, color=".25")
plt.title('Class and fare price')
Explanation: We can see that the shapes of the two plots are quite different.
* The fare price distribution plot shows a positively skewed curve, as most of the prices are concentrated below 30 dollars, and highest prices are well over 500 dollars
* The age distribution plot demonstrates more of a bell-shaped curve (Gaussian distribution) with a slight mode for infants and young children. I suspect the slight spike for infants and young children is due to the presence of young families.
Observations on the dataset
342 passengers or roughly 38% of total survived.
There were significantly more men than women on board.
There are significantly higher numbers of lower class passengers compared to the mid and upper class.
The majority of fares sold are below 30 dollars, however, the upper price range of fare is very high, the most expensive ones are over 500 dollars, which should be considered outliers.
Hypothesis
Based on the overview of the data, I formulated 3 potential features that may have influenced survival.
1. Fare price: What is the effect of fare price on survival rate? Are passengers who could afford more expensive tickets more likely to survive?
2. Gender: Does gender play a role in survival? Are women more likely to survive than men?
3. Age: What age groups of the passengers are more likely to survive?
Fare Price and survival
Let's investigate fare price a bit deeper. First I am interested in looking at its relationship with socioeconomic class. Considering the large range of fare price, we use boxplot to better demonstrate the spread and confidence intervals of the data. The strip plot is used to show the density of data points, and more importantly the outliers.
End of explanation
#make a copy of the dataset and name it titanic_fare
#copy() is used instead of plain assignment to preserve the original dataset in case anything goes wrong
#add a new column stating whether the fare >35 (value=1) or <=35 dollars (value=0)
titanic_fare = titanic_cleaned.copy()
titanic_fare['Fare>35'] = np.where(titanic_cleaned['Fare']>35,'Yes','No')
#check to see if the column creation is successful
titanic_fare.head()
#Calculate the survival rate for passenger who holds fare > $35.
#float() was used to force a decimal result due to the integer division limitation of python 2
high_fare_survival=titanic_fare.loc[(titanic_fare['Survived'] == 1)&(titanic_fare['Fare>35']=='Yes')]
high_fare_holder=titanic_fare.loc[(titanic_fare['Fare>35']=='Yes')]
high_fare_survival_rate=len(high_fare_survival)/float(len(high_fare_holder))
print high_fare_survival_rate
#Calculate the survival rate for passenger who holds fare <= $35.
low_fare_survival=titanic_fare.loc[(titanic_fare['Survived'] == 1)&(titanic_fare['Fare>35']=='No')]
low_fare_holder=titanic_fare.loc[(titanic_fare['Fare>35']=='No')]
low_fare_survival_rate=len(low_fare_survival)/float(len(low_fare_holder))
print low_fare_survival_rate
#plot a barplot for survival rate for fare price > $35 and <= $35
fare_survival_table=pd.DataFrame({'Fare Price':pd.Categorical(['No','Yes']),
'Survival Rate':pd.Series([0.32,0.62], dtype='float64')
})
bar=fare_survival_table.plot(kind='bar', x='Fare Price', rot=0)
plt.ylabel('Survival Rate')
plt.xlabel('Fare>35')
plt.title('Fare price and survival rate')
Explanation: It is not surprising that the outliers exist exclusively in the high socioeconomic class group, as only the wealthy individuals could afford the higher fare prices.
It is clear that the upper class were able to afford more expensive fares, with the highest fares above 500 dollars.
To look at the survival rate, I break down the fare data into two groups:
1. Passengers with fare <=35 dollars
2. passengers with fare >35 dollars
End of explanation
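For reference, a sketch of the same two survival rates computed in one line with groupby (using the Fare>35 column created above):
#sketch: survival rate for each fare group in one line
titanic_fare.groupby('Fare>35')['Survived'].mean()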
#seaborn.barplot() can directly calculate/display the survival rate and confidence interval from the dataset
sns.barplot(x='Fare>35',y='Survived',data=titanic_fare, palette="Blues_d")
plt.title('Fare price and survival rate')
plt.ylabel('Survival Rate')
Explanation: The bar plot using matplotlib.pyplot does a reasonable job of showing the difference in survival rate between the two groups.
However with seaborn.barplot(), confidence intervals are directly calculated and displayed. This is an advantage of seaborn library.
End of explanation
#use seaborn.lmplot to graph the logistic regression function
sns.lmplot(x="Fare", y="Survived", data=titanic_fare,
logistic=True, y_jitter=.03)
plt.title('Logistic regression using fare price as estimator for survival outcome')
plt.yticks([0, 1], ['Died', 'Survived'])
fare_bins = np.arange(0,500,10)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==0) & (titanic_cleaned['Fare']),'Fare'], bins=fare_bins)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==1) & (titanic_cleaned['Fare']),'Fare'], bins=fare_bins)
plt.title('fare distribution among survival classes')
plt.ylabel('frequency')
plt.legend(['did not survive', 'survived']);
Explanation: As seen from the graph, taking the confidence intervals into account, the higher fare group is associated with a significantly higher survival rate (~0.62) compared to the lower fare group (~0.31).
What if we just look at fare price as a continuous variable in relation to the survival outcome?
When the Y variable is binary, like the survival outcome in this case, the suitable statistical analysis is logistic regression, where the X variable is used as an estimator for the binary outcome of the Y variable.
Fortunately, seaborn.lmplot() allows us to graph the logistic regression function using fare price as an estimator for survival; the fitted curve displays a sigmoid shape, and a higher fare price is indeed associated with a better chance of survival.
Note: the area around the line shows the confidence interval of the function.
End of explanation
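To inspect the fitted relationship numerically, a sketch (assuming the statsmodels package is available) fits the same logistic regression explicitly:
#sketch: fit the logistic regression of survival on fare price with statsmodels
import statsmodels.formula.api as smf
fare_logit = smf.logit('Survived ~ Fare', data=titanic_fare).fit()
print fare_logit.params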
#Calculate the survival rate for female
female_survived=titanic_fare.loc[(titanic_cleaned['Survived'] == 1)&(titanic_cleaned['Sex']=='female')]
female_total=titanic_fare.loc[(titanic_cleaned['Sex']=='female')]
female_survival_rate=len(female_survived)/(len(female_total)*1.00)
print female_survival_rate
#Calculate the survival rate for male
male_survived=titanic_fare.loc[(titanic_cleaned['Survived'] == 1)&(titanic_cleaned['Sex']=='male')]
male_total=titanic_fare.loc[(titanic_cleaned['Sex']=='male')]
male_survival_rate=len(male_survived)/(len(male_total)*1.00)
print male_survival_rate
#plot a barplot for survival rate for female and male
#we can see that seaborn.barplot
sns.barplot(x='Sex',y='Survived',data=titanic_fare)
plt.title('Gender and survival rate')
plt.ylabel('Survival Rate')
##plot a barplot for survival rate for female and male, combine with fare price group
sns.barplot(x='Sex',y='Survived', hue='Fare>35',data=titanic_fare)
plt.title('Gender and survival rate')
plt.ylabel('Survival Rate')
#plot a barplot for survival rate for female and male, combine with socioeconomic class
sns.barplot(x='Sex',y='Survived', hue='Class',data=titanic_fare)
plt.title('Socioeconomic class and survival rate')
plt.ylabel('Survival Rate')
Explanation: The fare distribution between survivors and non-survivors shows that there is a peak in mortality at low fare prices.
Gender and Survival
In this section, I am interested in investigating gender and survival rate. I will first calculate the survival rate for both females and males, then plot a few graphs to visualize the relationship between gender and survival, combined with other factors such as fare price and socioeconomic class.
End of explanation
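For reference, a sketch of the same gender survival rates in one line:
#sketch: survival rate by gender in one line
titanic_fare.groupby('Sex')['Survived'].mean()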
#create a age_group function
def age_group(age):
age_group=0
if age<10:
age_group=1
elif age <20:
age_group=2
elif age <30:
age_group=3
elif age <40:
age_group=4
elif age <50:
age_group=5
else:
age_group=6
return age_group
#create a series of age group number by applying the age_group function to age column
ageGroup_column = titanic_fare['Age'].apply(age_group)
#make a copy of titanic_fare and name it titanic_age
titanic_age=titanic_fare.copy()
#add age group column
titanic_age['Age Group'] = ageGroup_column
#check to see if age group column was added properly
titanic_age.head()
Explanation: Therefore, being female is associated with a significantly higher survival rate than being male.
In addition, being in the higher socioeconomic group and the higher fare group is associated with a higher survival rate for both males and females.
The difference is that for males the survival rates are similar for classes 2 and 3, with class 1 being much higher, while for females the survival rates are similar for classes 1 and 2, with class 3 being much lower.
Age and Survival
To study the relationship between age and survival rate, I first separate age into 6 groups, numbered from 1 to 6:
1. newborn to 10 years old
2. 10 to 20 years old
3. 20 to 30 years old
4. 30 to 40 years old
5. 40 to 50 years old
6. over 50 years old
Then, I added the age group number as a new column to the dataset.
End of explanation
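A pandas-native sketch of the same binning uses pd.cut with right-exclusive bins, matching the <10, <20, ... logic of the age_group function:
#sketch: equivalent age grouping with pd.cut (right=False makes the bins right-exclusive)
age_bins = [0, 10, 20, 30, 40, 50, 200]
pd.cut(titanic_fare['Age'], bins=age_bins, labels=[1, 2, 3, 4, 5, 6], right=False).head()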
#Seaborn.barplot is used to plot a bargraph and confidence intervals for survival rate
sns.barplot(x='Age Group', y='Survived',data=titanic_age)
plt.title('Age group and survival rate')
plt.ylabel('Survival Rate')
Explanation: Now, we want to plot a bar graph showing the relationship between age group and survival rate. Age group is used here instead of age because visually age group is easier to observe than using age variable when dealing with survival rate.
End of explanation
#draw bargram and bring additional factors including gender and class
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
sns.barplot(x='Age Group', y='Survived', hue='Sex',data=titanic_age)
plt.title('Age group, gender and survival rate')
plt.ylabel('Survival Rate')
ax1 = fig.add_subplot(122)
sns.barplot(x='Age Group', y='Survived',hue='Pclass',data=titanic_age)
plt.title('Age group, class and survival rate')
plt.ylabel('Survival Rate')
Explanation: Age Group 1: < 10
Age Group 2: >= 10 and < 20
Age Group 3: >= 20 and < 30
Age Group 4: >= 30 and < 40
Age Group 5: >= 40 and < 50
Age Group 6: >= 50
End of explanation
#use seaborn.lmplot to graph the logistic regression function
sns.lmplot(x="Age", y="Survived", data=titanic_age,
logistic=True, y_jitter=.03)
plt.title('Logistic regression using age as the estimator for survival outcome')
plt.yticks([0, 1], ['Died', 'Survived'])
Explanation: Age Group 1: < 10
Age Group 2: >= 10 and < 20
Age Group 3: >= 20 and < 30
Age Group 4: >= 30 and < 40
Age Group 5: >= 40 and < 50
Age Group 6: >= 50
The bar graphs demonstrate that only age group 1 (infants/young children) is associated with a significantly higher survival rate. There are no clear distinctions in survival rate among the rest of the age groups.
What about using age instead of age group? Is there a linear relationship between age and survival outcome? By using seaborn.lmplot(), we can perform a logistic regression on the survival outcome using age as an estimator. Let's take a look.
End of explanation
fare_bins = np.arange(0,100,2)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==0) & (titanic_cleaned['Age']),'Age'], bins=fare_bins)
sns.distplot(titanic_cleaned.loc[(titanic_cleaned['Survived']==1) & (titanic_cleaned['Age']),'Age'], bins=fare_bins)
plt.title('age distribution among survival classes')
plt.ylabel('frequency')
plt.legend(['did not survive', 'survived']);
Explanation: From the graph, we can see there is a negative linear relationship between age and survival outcome.
End of explanation
# using the apply function and lambda to count missing values for each column
print titanic_original.apply(lambda x: sum(x.isnull().values), axis = 0)
Explanation: The age distribution comparison between survivors and non-survivors confirmed the survival spike in young children.
Limitations
There are limitations on our analysis:
1. Missing values: due to too many missing values (688) in the Cabin column, I decided to remove that column from my analysis. However, the 178 missing values in the Age data posed a problem. In my analysis, I decided to drop the rows with missing values because I felt we still had a reasonable sample size of >700, but selection bias definitely increased as the sample size decreased. Another option would be to use the mean of the existing age data to fill in the missing values; this could be a good option if we had a lot of missing values and still wanted to incorporate the age variable into our analysis. In that case, bias also increases because we are making assumptions about the passengers with missing ages.
Survival bias: the data was partially collected from survivors of the disaster, and a lot of data could be missing for the people who did not survive. This makes the dataset more representative of the survivors. This limitation is difficult to overcome, as the data we have today is the best we could gather, given that the disaster happened over 100 years ago.
Outliers: for the fare price analysis, we saw a large difference between the mean (34.57) and the median (15.65) of the fare prices. The highest fares were well over 500 dollars. As a result, our fare price distribution is very positively skewed. This can affect the validity and accuracy of our analysis. However, because I really wanted to see the survival outcome for the wealthier individuals, I decided to keep those outliers in my analysis. An alternative approach is to drop the outliers (e.g. fare prices >500) from the analysis, especially if we are only interested in studying the majority of the sample.
End of explanation |
5,698 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extinction (ebv, Av, & Rv)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new Bundle.
Step2: Relevant Parameters
Extinction is parameterized by 3 parameters | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Extinction (ebv, Av, & Rv)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new Bundle.
End of explanation
print(b.filter(qualifier='ebv'))
print(b.get_parameter(qualifier='ebv', context='system'))
print(b.get_parameter(qualifier='ebv', context='constraint'))
print(b.get_parameter(qualifier='Av'))
print(b.get_parameter(qualifier='Rv'))
Explanation: Relevant Parameters
Extinction is parameterized by 3 parameters: ebv (E(B-V)), Av, and Rv. Of these three, two can be provided and the other must be constrained. By default, ebv is the constrained parameter. To change this, see the tutorial on constraints and the b.flip_constraint API docs.
End of explanation |
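For example (a sketch; the call follows the flip_constraint docs referenced above, and the numerical values are arbitrary), we could flip the constraint so that ebv and Rv are provided and Av is derived:
# sketch: provide ebv and Rv directly and let Av be constrained instead
b.flip_constraint('ebv', solve_for='Av')
b.set_value('ebv', 0.12)
b.set_value('Rv', 3.1)
print(b.get_parameter(qualifier='Av', context='system'))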
5,699 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grid Generation with Interactive Widgets
This notebook demonstrates how to use the interactive widgets.
See a version of it in action
Step1: Main Tutorial
Step2: Loading and plotting the boundary data
Step3: Generating a grid with pygridgen, plotting with pygridtools
Step4: Interactively manipulate the Focus
Step5: Interactively change the number of nodes in the grid
(Notice how the focus stays where we want)
Step6: Save, load, and recreate the altered grid without widgets | Python Code:
from IPython.display import Audio,Image, YouTubeVideo
YouTubeVideo('S5SG9km2f_A', height=450, width=900)
Explanation: Grid Generation with Interactive Widgets
This notebook demonstrates how to use the interactive widgets.
See a version of it in action:
End of explanation
%matplotlib inline
import warnings
warnings.simplefilter('ignore')
import numpy as np
import matplotlib.pyplot as plt
import pandas
import geopandas
from pygridgen import Gridgen
from pygridtools import viz, iotools
def plotter(x, y, **kwargs):
figsize = kwargs.pop('figsize', (9, 9))
fig, ax = plt.subplots(figsize=figsize)
ax.set_aspect('equal')
viz.plot_domain(domain, betacol='beta', ax=ax)
ax.set_xlim([0, 25])
ax.set_ylim([0, 25])
return viz.plot_cells(x, y, ax=ax, **kwargs)
Explanation: Main Tutorial
End of explanation
domain = geopandas.read_file('basic_data/domain.geojson')
fig, ax = plt.subplots(figsize=(9, 9), subplot_kw={'aspect':'equal'})
fig = viz.plot_domain(domain, betacol='beta', ax=ax)
Explanation: Loading and plotting the boundary data
End of explanation
grid = Gridgen(domain.geometry.x, domain.geometry.y,
domain.beta, shape=(50, 50), ul_idx=2)
fig_orig, artists = plotter(grid.x, grid.y)
Explanation: Generating a grid with pygridgen, plotting with pygridtools
End of explanation
focus, focuser_widget = iotools.interactive_grid_focus(grid, n_points=3, plotfxn=plotter)
focuser_widget
Explanation: Interactively manipulate the Focus
End of explanation
reshaped, shaper_widget = iotools.interactive_grid_shape(grid, max_n=100, plotfxn=plotter)
shaper_widget
fig_orig
Explanation: Interactively change the number of nodes in the grid
(Notice how the focus stays where we want)
End of explanation
import json
from pathlib import Path
from tempfile import TemporaryDirectory
with TemporaryDirectory() as td:
f = Path(td, 'widget_grid.json')
with f.open('w') as grid_write:
json.dump(grid.to_spec(), grid_write)
with f.open('r') as grid_read:
spec = json.load(grid_read)
new_grid = Gridgen.from_spec(spec)
plotter(new_grid.x, new_grid.y)
Explanation: Save, load, and recreate the altered grid without widgets
End of explanation |
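As a quick sanity check (a sketch), the recreated grid's nodes should match the original grid:
# sketch: verify the round-tripped grid reproduces the same node coordinates
print(np.allclose(grid.x, new_grid.x) and np.allclose(grid.y, new_grid.y))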